This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.
Re: [rfc][3/3] Remote core file generation: memory map
- From: Pedro Alves <pedro at codesourcery dot com>
- To: gdb-patches at sourceware dot org
- Cc: "Ulrich Weigand" <uweigand at de dot ibm dot com>, jan dot kratochvil at redhat dot com
- Date: Wed, 9 Nov 2011 16:37:22 +0000
- Subject: Re: [rfc][3/3] Remote core file generation: memory map
- References: <201111081725.pA8HPaFc003696@d06av02.portsmouth.uk.ibm.com>
On Tuesday 08 November 2011 17:25:36, Ulrich Weigand wrote:
> I wrote:
> > Jan Kratochvil wrote:
> > > On Fri, 21 Oct 2011 20:57:04 +0200, Ulrich Weigand wrote:
> > > > Note that there already is a qXfer:memory-map:read packet, but this
> > > > is not usable as-is to implement target_find_memory_regions, since
> > > > it is really intended for a *system* memory map for some naked
> > > > embedded targets instead of a per-process virtual address space map.
> > > >
> > > > For example:
> > > >
> > > > - the memory map is read into a single global mem_region list; it is not
> > > > switched for multiple inferiors
> > >
> > > Without extended-remote there is a single address map only. Is the memory map
> > > already useful with extended-remote using separate address spaces?
> > >
> > > I do not have the embedded memory map experience but it seems to me the memory
> > > map should be specified for each address map, therefore for each inferior it
> > > is OK (maybe only possibly more duplicates are sent if the address spaces are
> > > the same). If GDB uses the memory map it uses it already for some inferior
> > > and therefore its address space.
> > The problem is that the way GDB uses the memory map is completely
> > incompatible with the presence of multiple address spaces.
> > There is a single instance of the map (kept in a global variable
> > mem_region_list in memattr.c), which is used for any access in
> > any address space. lookup_mem_region takes only a CORE_ADDR;
> > the "info mem" commands only operate on addresses with no notion
> > of address spaces.
That's mostly because we never really needed to consider making it
per-process/per-inferior/per-exec before, and managed to just look the
other way. Targets that do multi-process don't use the map at present.
I'm sure there are other things that live in globals but should be
per-inferior or per-address-space, waiting for someone to trip on
them and eventually get them fixed. :-)
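(For reference, a qXfer:memory-map:read reply uses the XML memory-map
format from the GDB manual; the addresses below are made up for
illustration:

```xml
<?xml version="1.0"?>
<memory-map>
  <!-- Plain RAM: readable and writable.  -->
  <memory type="ram" start="0x10000000" length="0x10000"/>
  <!-- ROM: read-only; writes are rejected.  -->
  <memory type="rom" start="0x0" length="0x8000"/>
  <!-- Flash: writable only via the flash-programming packets;
       blocksize gives the erase-block size.  -->
  <memory type="flash" start="0x2000000" length="0x200000">
    <property name="blocksize">0x10000</property>
  </memory>
</memory-map>
```

Note how this really describes fixed system properties, not the VMAs of
any particular process.)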
> Another problem just occurred to me: the memory region list is
> cached during the whole duration of existence of the inferior.
> This caching is really necessary, since the map is consulted
> during each single memory access. And it seems quite valid to
> cache the map as long as it describes fixed features of the
> architecture (i.e. RAM/ROM/Flash layout).
> However, once the map describes VMA mappings in a process context,
> it becomes highly dynamic as memory maps come and go ... It is
> no longer really feasible to cache the map contents then.
> This seems to me to be an argument *for* splitting the contents into
> two maps; the system map which is static and cached (and used for
> each memory access), and the per-process map which is dynamic
> and uncached (and only used rarely, in response to infrequently
> used user commands) ...
On e.g. uClinux / no-MMU targets, you could have both: the
system memory map returning the properties of the memory of the
whole system, which GDB could use for all memory accesses;
but, when generating a core of a single process, we're only
interested in the memory "mapped" to that process. So I tend
to agree.
We could also make the existing memory map per-process/per-aspace,
and define it to describe only the process's map (a process is, after
all, a means of virtualizing the system's resources). The dynamic
nature of a process's memory map then becomes a cache-management
policy decision. E.g., at times we know the map can't change (everything
is stopped, or via a user knob), and that would automatically enable the
dcache for all read-only regions (mostly .text). We could still do this
while having a two-map mechanism, though.
It doesn't seem there's a single true answer to this, but I'm leaning
towards a new target object.