This is the mail archive of the mailing list for the GDB project.

Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: [rfc][3/3] Remote core file generation: memory map

Hi Ulrich,

I am not sure there is anything to act on from this mail, but here are at least some comments:

On Fri, 21 Oct 2011 20:57:04 +0200, Ulrich Weigand wrote:
> Note that there already is a qXfer:memory-map:read packet, but this
> is not usable as-is to implement target_find_memory_regions, since
> it is really intended for a *system* memory map for some naked
> embedded targets instead of a per-process virtual address space map.
> For example:
> - the memory map is read into a single global mem_region list; it is not
>   switched for multiple inferiors

Without extended-remote there is only a single address space.  Is the memory map
already useful with extended-remote targets that use separate address spaces?

I do not have experience with embedded memory maps, but it seems to me the
memory map should be specified per address space, and therefore per inferior it
is OK (at worst, duplicates get sent when the address spaces are the same).
Whenever GDB uses the memory map, it already does so on behalf of some
inferior, and therefore on behalf of its address space.
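For reference, a qXfer:memory-map:read payload (following the Memory Map Format
documented in the GDB manual; the addresses here are invented) looks roughly
like this, which also illustrates Ulrich's third point: there is no place in it
for per-mapping permissions or a mapped filename:

```xml
<memory-map>
  <!-- A plain RAM region: readable, writable, cacheable.  -->
  <memory type="ram" start="0x10000000" length="0x10000"/>
  <!-- A flash region; writes go through the flash-programming packets.  -->
  <memory type="flash" start="0x0" length="0x2000">
    <property name="blocksize">0x400</property>
  </memory>
</memory-map>
```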

> - native or gdbserver Linux targets do not have a memory map today,
>   and just enabling it changes memory access behaviour in unexpected
>   ways, e.g. accesses outside of memory regions in /proc//maps are
>   now no longer possible; also caching behaviour is different
> - the memory attribute format is insufficient to express properties
>   of a virtual memory mapping (e.g. permissions; mapped filename ...)
> I guess longer term it might be nicer to always have a memory map,
> and also use it for native targets, and then use the same map also
> for core file generation ...
> I'd appreciate suggestions how to move forward on this; is having a
> new qXfer type just for core file generation OK, or should we rather
> attempt to move towards an always-active memory map -- if the latter,
> how can we get there?

I will need to implement core file reading support in gdbserver in the
foreseeable future, for performance reasons.  In the core file case everything
can be cached indefinitely (and caching matters even more there than in the
local core file case).  The caching can and should be improved even in the
normal live process case (by setting default_mem_attrib->cache = 1), but there
it needs to be temporary (flushed by prepare_execute_command).  For embedded
targets the caching should stay disabled for memory-I/O regions even if it
would otherwise get enabled.

The caching should probably stay in the memory map and not be moved into the
process map.  All this suggests to me that the separation in the submitted
patch may complicate things a bit.

> +const struct gdb_xml_attribute vma_attributes[] = {
> +const struct gdb_xml_element process_map_children[] = {
> +const struct gdb_xml_element process_map_elements[] = {

These should be static; it is already a bug in memory-map.c, but there are too
many such bugs; someone could spend some time fixing them, for example using
my:;a=blob_plain;hb=HEAD;f=bin/checkstatic

> +static int
> +read_mapping (FILE *mapfile,
> +	      long long *addr,
> +	      long long *endaddr,
> +	      char *permissions,
> +	      long long *offset,
> +	      char *device, long long *inode, char *filename)
> +{
> +  int ret = fscanf (mapfile, "%llx-%llx %s %llx %s %llx",
> +                    addr, endaddr, permissions, offset, device, inode);

There will be /proc/PID/maps reader for gdbserver also from:
	[patch 3/3] Implement qXfer:libraries for Linux/gdbserver #3

But maybe the code is simple enough, and it has different output anyway, so it
does not matter much to unify the two.

