This is the mail archive of the gdb-patches@sourceware.org
mailing list for the GDB project.
Re: [rfc] Options for "info mappings" etc. (Re: [PATCH] Implement new `info core mappings' command)
- From: Pedro Alves <pedro at codesourcery dot com>
- To: "Ulrich Weigand" <uweigand at de dot ibm dot com>
- Cc: gdb-patches at sourceware dot org, jan dot kratochvil at redhat dot com, sergiodj at redhat dot com
- Date: Tue, 6 Dec 2011 15:57:23 +0000
- Subject: Re: [rfc] Options for "info mappings" etc. (Re: [PATCH] Implement new `info core mappings' command)
- References: <201112051452.pB5Eq47S009157@d06av02.portsmouth.uk.ibm.com>
On Monday 05 December 2011 14:52:04, Ulrich Weigand wrote:
> Pedro Alves wrote:
> > Hi Ulrich, sorry for the delay. I haven't managed to read your patches yet.
> > I'll reply here first.
> Thanks for your comments!
> > First, the reason Sergio added "info mappings" instead of making
> > "info proc mappings" work for core files, was that using "info proc"
> > for cores was objected to. I'm personally okay with it, but I think
> > that should be understood as not being a limitation of the design.
> Actually, Sergio's latest patches added "info core" with a number
> of subcommands, including "info core mappings".
> I didn't really like this. IMO the underlying problem with "info proc"
> is that it is a target-specific command in the first place. If you want
> to look at a process' memory map, you don't really care whether the
> process is running natively, or running on a remote machine accessed
> via gdbserver, or even if you're just looking at a core file. As a
> user, I don't really see why looking at the memory map should require
> use of different commands depending on the particular mode of operation,
> or even why it shouldn't work at all in some of those modes.
> "info proc" doesn't work that way; it is completely tied to native
> operation. Adding an "info core" does make some information available
> for core files, but it doesn't really solve the underlying problem:
> you still need to remember to use a different command, and even so
> it doesn't work for remote/gdbserver targets at all.
IIRC, the rationale given for the objections was that "info proc" was
originally intended as just a frontend for /proc (hence its accepting
PIDs not being debugged), and that there are other core-specific info
bits that we could attach as "info core" subcommands.
Playing devil's advocate, from this perspective, another way to look
at it is to consider that "info proc PID" should still read /proc
info from the running system, even when debugging a core file or just an executable.
> That's why my suggestion was to instead move "info proc" to be
> target-independent. That is to say, it would still show Linux-specific
> information about a process, but it would no longer depend on whether
> you look at that Linux process natively, remotely, or post-mortem.
One could argue that generic info like that should be under
the "inferior" moniker, not "process".
> (Of course in the core file case, that goal can only be reached approximately,
> since some information is simply not available -- but that situation also
> applies in some other aspects to core files ...)
> Now, instead of this change to "info proc", we could add a new
> "info mappings" command. (Note that this would correspond to the
> "option 1" in my email.) This would have the same advantage of
> providing the same command for all modes of operation, and could
> actually be even more general, beyond just Linux targets, to cover
> any target that has a notion of memory-mapping files.
> However, at least in my opinion, this option still has drawbacks:
> Users today are used to "info proc mappings". Should we remove that
> command? Or else forward it to the new "info mappings"? But in either
> case there is now a disconnect to the other "info proc" commands; why
> should "info proc mappings" work on remote/gdbserver, but not the other
> commands? In the end, this still doesn't solve the fundamental problem
> of having a native-target-implemented command "info proc".
I definitely agree that "info proc FOO" should be forwarded. Debugging
remotely or natively should ideally provide the same experience;
you're just connected to a different host (localhost or a remote machine).
> > I have to say that I have reservations about this new TARGET_OBJECT_PROC
> > option. It adds yet another generic abstraction (it almost looks like
> > TARGET_OBJECT_OSDATA reinvented), yet the data it sends over is not
> > really structured. This maps nicely onto Linux, but, e.g., on procfs targets,
> > where you have a bunch of different /proc implementations, it can actually be
> > awkward, in that you may need to translate whatever the real /proc/pid/foo
> > gives you into a format GDB expects. The answer to that seems to be that GDB
> > will install a gdbarch handler for each OS that understands the
> > /proc/pid/foo of the target OS, but that means that either or both of core
> > GDB and the target (which can result in weird wrong turns, if we add coping
> > logic to both ends) will have to keep up with whatever format changes happen
> > between OS revisions on the target side, instead of being given a
> > structured, abstracted, and simplified view (that could in turn be easily
> > marshalled in structured form to MI or Python, if so desired). And then
> > there's no default GDB-side gdbarch implementation, so GDB needs to be
> > aware of the random embedded OSs to be able to support "info proc foo".
> > A structured format means that it's definitely the target that gets to keep up.
> Well, IMO there are two "flavours" of target objects, if you will. Some objects
> are completely generic, fully defined by GDB (usually via XML), and implemented
> (or implementable) across all target machines and operating systems.
> But there is another class of target objects whose contents are not generic,
> but specific to a particular architecture and/or operating system; they are
> still implemented as target objects in order to give architecture code a way
> to get at those contents in a homogeneous way no matter whether GDB operates
> in native, remote, or core file mode. This class includes things like
> TARGET_OBJECT_SPU, TARGET_OBJECT_WCOOKIE, TARGET_OBJECT_DARWIN_DYLD_INFO,
Right. Generally, those are blobs, binary objects. Descriptive, meta-objects
tend to be XML.
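As a concrete illustration of the generic, XML-described flavour, here is a sketch (in Python for brevity; this is not GDB source) of consuming a library-list-svr4 document of the shape described in the remote protocol docs. The sample addresses are made up:

```python
# Sketch, not GDB code: an XML-described target object is fully defined
# by GDB, so any consumer can parse it generically.  Element/attribute
# names follow the library-list-svr4 format from the remote protocol docs.
import xml.etree.ElementTree as ET

def parse_svr4_library_list(xml_text):
    """Return [(name, load_map_address)] from a library-list-svr4 doc."""
    root = ET.fromstring(xml_text)
    assert root.tag == "library-list-svr4"
    return [(lib.get("name"), int(lib.get("lm"), 16))
            for lib in root.iter("library")]

# Made-up sample data for illustration only.
sample = """<library-list-svr4 version="1.0">
  <library name="/lib64/ld-linux-x86-64.so.2" lm="0x7ffff7ffd990"
           l_addr="0x7ffff7dda000" l_ld="0x7ffff7ffce68"/>
</library-list-svr4>"""

print(parse_svr4_library_list(sample))
```

A blob-flavoured object, by contrast, has no such GDB-defined schema; only OS/arch-specific consumer code can interpret it.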
> and probably something like the new TARGET_OBJECT_LIBRARIES_SVR4 ...
(Nope, that's XML.)
> In my mind, the proposed TARGET_OBJECT_PROC would fall into the second
> category, that is, it provides access to pre-existing, operating-system
> defined contents, while simply abstracting the means of delivery. In
> particular, I would not expect the "provider side" (Linux native target
> or gdbserver code) to ever implement any sort of "conversion" of the
> contents. If there ever should be changes to the contents of /proc
> files, the task of adapting to those changes should lie solely on
> the gdbarch code that consumes the TARGET_OBJECT_PROC objects.
How are we making "info proc map" work with core files
with this? I'd imagine the core target falling back to the gdbarch
method, but are we then making the core target synthesize TARGET_OBJECT_PROC
objects for the gdbarch method to consume? That's where the bit
about "I would not expect the 'provider side' (...) to ever implement any
sort of 'conversion' of the contents" seems to fall short.
> Of course, as you say, this means that TARGET_OBJECT_PROC really only
> can ever be consumed by OS-specific, usually gdbarch code. (But that's
> still better than having *native-target-only* code IMO.)
If GDB already needs to know what it is reading, then couldn't this also be
implemented by having the gdbarch hook open and read remote:/proc/PID/maps?
No new target object or packets necessary? Because I'm not seeing what
TARGET_OBJECT_PROC brings over that (though I'm still confused about how
"info proc map" on cores is meant to be implemented with this).
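Either way, the consuming side ends up interpreting raw /proc/PID/maps text itself. A rough sketch of that parsing step (Python for brevity; not GDB source, field layout per proc(5)):

```python
# Sketch, not GDB code: whether the text arrives via TARGET_OBJECT_PROC
# or via the gdbarch hook reading remote:/proc/PID/maps, the consumer
# has to split each line into its fields itself.
def parse_maps_line(line):
    """Split one /proc/PID/maps line into (start, end, perms, pathname)."""
    # Fields: address-range perms offset dev inode [pathname]
    fields = line.split(None, 5)
    start, end = (int(addr, 16) for addr in fields[0].split("-"))
    pathname = fields[5].strip() if len(fields) > 5 else ""
    return start, end, fields[1], pathname

print(parse_maps_line("00400000-0040b000 r-xp 00000000 08:01 1234567 /bin/cat"))
```

Anonymous mappings (no pathname field) come back with an empty pathname, which is exactly the sort of OS-revision detail the consumer, not the provider, would have to keep up with.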
> I wouldn't mind renaming the object to TARGET_OBJECT_LINUX_PROC to make
> the intention about the object's contents clearer. (I thought that maybe
> other procfs targets could also use TARGET_OBJECT_PROC, but since of
> course the contents would be different, it might be better to use a
> new object type if and when we ever do that ...)
I think the name is fine. There's something bugging
me that may affect the decision, though. Up until very recently, the
FOO in "info proc FOO" was more freeform than it is now.
But IIUC, while the TARGET_OBJECT_PROC object still takes the
FOO to return as its annex, the set of possible FOOs (map, exec, etc.)
will now be hardcoded, instead of being left to the
backend/target as well.
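A toy sketch of that difference (Python, purely illustrative; the particular annex names below are my assumption, not taken from any patch): with a hardcoded set, GDB core rejects unknown FOOs itself rather than forwarding any free-form string to the target:

```python
# Sketch, purely illustrative: a hardcoded annex set means the command
# layer validates FOO before it ever reaches the backend/target.
# The set of names here is an assumption for the example.
KNOWN_ANNEXES = {"mappings", "stat", "status", "cmdline", "cwd", "exe"}

def info_proc_annex(foo):
    """Validate FOO and return the annex to request from the target."""
    if foo not in KNOWN_ANNEXES:
        raise ValueError("Undefined info proc option: %s" % foo)
    return foo  # would be passed as the TARGET_OBJECT_PROC annex

print(info_proc_annex("mappings"))
```

With the older freeform behaviour, the string would instead be passed through unchecked and the backend would decide what it understands.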
> > This is not a NAK, but I'm just saying I'm not fully convinced yet.
> Did this help convince you? :-)
Not yet. :-)
> B.t.w. for the purpose I'm immediately interested in myself right now,
> which is to provide for remote/gdbserver core file generation, either
> my "option 1" via TARGET_OBJECT_ADDRESS_MAP or my "option 2" via
> TARGET_OBJECT_PROC would work just as well, and I've already implemented
> code for either ... So I'm not really very tied to TARGET_OBJECT_PROC;
> I've just come to thinking this would be the more general solution by
> getting rid of the native-target-implemented "info proc".