Re: [rfc][3/3] Remote core file generation: memory map


I wrote:
> Jan Kratochvil wrote:
> > On Fri, 21 Oct 2011 20:57:04 +0200, Ulrich Weigand wrote:
> > > Note that there already is a qXfer:memory-map:read packet, but this
> > > is not usable as-is to implement target_find_memory_regions, since
> > > it is really intended for a *system* memory map for some naked
> > > embedded targets instead of a per-process virtual address space map.
> > > 
> > > For example:
> > > 
> > > - the memory map is read into a single global mem_region list; it is not
> > >   switched for multiple inferiors
> > 
> > Without extended-remote there is only a single address space.  Is the memory
> > map already useful with extended-remote using separate address spaces?
> > 
> > I do not have experience with the embedded memory map, but it seems to me
> > the memory map should be specified per address space, and therefore per
> > inferior (at worst, duplicate maps are sent if the address spaces happen
> > to be the same).  If GDB uses the memory map, it already uses it for some
> > particular inferior and therefore for its address space.
> 
> The problem is that the way GDB uses the memory map is completely
> incompatible with the presence of multiple address spaces.
> 
> There is a single instance of the map (kept in a global variable
> mem_region_list in memattr.c), which is used for any access in
> any address space.  lookup_mem_region takes only a CORE_ADDR;
> the "info mem" commands only operate on addresses with no notion
> of address spaces.  The remote protocol also does not specify
> which address space a map is requested for.
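To illustrate the interface mismatch, here is roughly the shape of the
current interface in memattr.h/memattr.c, together with a purely
hypothetical per-address-space variant (the latter does not exist; it
is only a sketch of what a multi-address-space lookup would need, using
the GDB-internal types CORE_ADDR, struct mem_region and
struct address_space):

  /* Simplified sketch of the existing interface.  memattr.c keeps a
     single global list of regions, shared by every inferior and
     every address space.  */
  extern struct mem_region *lookup_mem_region (CORE_ADDR addr);

  /* Hypothetical: a per-process map would need the lookup to be
     keyed on the address space as well.  No such function exists
     today.  */
  extern struct mem_region *lookup_mem_region_in_aspace
    (struct address_space *aspace, CORE_ADDR addr);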

Another problem just occurred to me: the memory region list is
cached for the entire lifetime of the inferior.  This caching is
really necessary, since the map is consulted on every single memory
access.  And caching seems quite valid as long as the map describes
fixed features of the architecture (i.e. the RAM/ROM/Flash layout).
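
To make the cost concrete: every target memory transfer goes through
a check along the following lines (a simplified sketch of the logic,
not the literal GDB code), so the lookup has to be served from the
cached list to stay cheap:

  /* Simplified sketch, not the actual GDB code.  */
  static int
  memory_xfer_sketch (CORE_ADDR memaddr, gdb_byte *buf, int len,
                      int writing)
  {
    struct mem_region *region = lookup_mem_region (memaddr);

    if (region->attrib.mode == MEM_NONE)
      return -1;    /* The map forbids any access here.  */
    if (writing && region->attrib.mode == MEM_RO)
      return -1;    /* The map says this region is read-only.  */

    /* ... clip LEN to the region boundary, transfer BUF ...  */
    return len;
  }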

However, once the map describes VMA mappings in a process context,
it becomes highly dynamic as mappings come and go.  It is then no
longer really feasible to cache the map contents.

This seems to me to be an argument *for* splitting the contents into
two maps: the system map, which is static and cached (and consulted
on every memory access), and the per-process map, which is dynamic
and uncached (and only needed rarely, in response to infrequently
used user commands).
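
Concretely, the split could look something like the following; the
names are purely hypothetical and only meant to illustrate the idea:

  /* Hypothetical sketch -- neither of these exists in GDB today.  */

  /* System map: fixed RAM/ROM/Flash layout.  Read once per
     connection, cached, consulted on every memory access.  */
  extern struct mem_region *lookup_system_mem_region (CORE_ADDR addr);

  /* Per-process map: the current VMA mappings of one inferior.
     Re-read on demand (e.g. for target_find_memory_regions / gcore),
     never cached, since mappings come and go while the process
     runs.  */
  extern void read_process_mem_map
    (struct inferior *inf,
     void (*callback) (CORE_ADDR start, CORE_ADDR size,
                       int read, int write, int exec, void *data),
     void *data);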

Thoughts?

Bye,
Ulrich

-- 
  Dr. Ulrich Weigand
  GNU Toolchain for Linux on System z and Cell BE
  Ulrich.Weigand@de.ibm.com

