[RFC] Is skip_prologue_using_sal actually usable?
Daniel Jacobowitz
drow@false.org
Tue Nov 9 10:16:00 GMT 2004
On Sun, Nov 07, 2004 at 02:55:41PM +0100, Mark Kettenis wrote:
> The drawback of implementation 1 is mainly that the prologue isn't
> completely finished when GDB stops. This means that GDB might not
> print function arguments correctly because the stack frame hasn't been
> fully set up yet.
>
> The problem we're facing here is that our testsuite has many tests
> that break down if we skip too little, but almost none that break
> down if we skip too much. However, in real life, it's better to skip
> too little than too much, as I argued above.
>
> Things are complicated further by trying to skip the prologue by using
> line number information. Some compilers generate line number info for
> the first instruction of a function, others don't. In addition to
> that, compilers generate buggy line number info. It seems that
> every time GCC gains a new optimization that moves more stuff into the
> prologue, the line number info generated for these instructions isn't
> quite right. As such, I think we should be very conservative when
> skipping prologues solely using line number info, stopping at the line
> containing the lowest address instead of the lowest line number as
> skip_prologue_using_sal does.
>
> Thoughts?
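(Before I respond, to make the contrast concrete: here is a rough sketch
of the two policies over a toy line table. This is not the actual
skip_prologue_using_sal code, and the table layout is invented; it only
shows "lowest address" versus "lowest line number".)

#include <stdio.h>
#include <stddef.h>

struct line_entry
{
  int line;                     /* Source line number.  */
  unsigned long address;        /* Lowest address generating that line.  */
};

/* Conservative policy: among entries past the function's entry point,
   stop at the one with the lowest address.  */
static unsigned long
skip_to_lowest_address (const struct line_entry *lt, size_t n,
                        unsigned long func_start)
{
  unsigned long best = func_start;
  size_t i;

  for (i = 0; i < n; i++)
    if (lt[i].address > func_start
        && (best == func_start || lt[i].address < best))
      best = lt[i].address;
  return best;
}

/* The behaviour described above for skip_prologue_using_sal: stop at
   the entry with the lowest line number past the function's first
   line, even if that entry's address is higher.  */
static unsigned long
skip_to_lowest_line (const struct line_entry *lt, size_t n,
                     int func_first_line)
{
  unsigned long addr = 0;
  int best_line = 0;
  size_t i;

  for (i = 0; i < n; i++)
    if (lt[i].line > func_first_line
        && (best_line == 0 || lt[i].line < best_line))
      {
        best_line = lt[i].line;
        addr = lt[i].address;
      }
  return addr;
}

int
main (void)
{
  /* Toy table: the compiler moved an instruction for line 7 ahead of
     the code for line 5, so the lowest line past the declaration line
     is not at the lowest address.  */
  const struct line_entry table[] = {
    { 4, 0x1000 },              /* The function's declaration line.  */
    { 7, 0x1004 },
    { 5, 0x1010 },
  };

  printf ("lowest address policy: 0x%lx\n",
          skip_to_lowest_address (table, 3, 0x1000));
  printf ("lowest line policy:    0x%lx\n",
          skip_to_lowest_line (table, 3, 4));
  return 0;
}

With a table like the one in main, the conservative policy stops
earlier (0x1004) than the lowest-line policy (0x1010).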
If all you want is the first instruction that is not part of the
prologue, then there is no reason to skip prologues at all. My
understanding is that prologue skipping accomplishes two things:
  (A) Get the arguments into their save slots so that we can find
      and display them.
  (B) Get the frame pointer into a sane state so we can backtrace.
Well, we've taken care of (B) already - the new frame code requires
being able to backtrace from the first instruction of a function,
and we do it. (I think we fall down more often in the epilogue than we
do in the prologue now.)
What we need is a coherent approach to (A). Future versions of GCC
will make this much easier, by emitting location lists. But for
existing code, and non-dwarf2 targets, I think we could do better than
we do now. Here's a possible approach.
We'd need a gdbarch method describing where incoming arguments were
placed. This could be unified with the function calling code - the
cleanest approach might be to implement a proper "location" data type
and have the code return a list of locations, used either to store the
parameters or to fetch them depending on context. Then, we'd need a
modified sort of
prologue analyzer that told us whether the incoming location for a
particular parameter was likely still valid, or whether the
debug-info-provided location had been initialized. With enough
architecture-independent support code, instead of cramming it all into
the backend, I think this would not be terribly complicated either.
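To sketch the shape of what I mean (every name below is made up; none
of this is an existing gdbarch interface, it's only an illustration of
how the pieces could fit together):

#include <stdio.h>

/* A "location": either a register or an offset from the frame base.
   (Plain unsigned long stands in for CORE_ADDR here.)  */
enum loc_kind { LOC_REGISTER, LOC_FRAME_OFFSET };

struct param_location
{
  enum loc_kind kind;
  int regnum;                   /* Meaningful for LOC_REGISTER.  */
  long offset;                  /* Meaningful for LOC_FRAME_OFFSET.  */
};

/* Hypothetical gdbarch method: where does incoming argument ARGNUM of
   ARGBYTES bytes arrive on function entry?  The call-function code
   could use the same answer for storing outgoing arguments.  */
static struct param_location
arch_incoming_arg_location (int argnum, int argbytes)
{
  struct param_location loc;

  (void) argbytes;              /* A real ABI would care about the size.  */
  loc.kind = LOC_REGISTER;      /* Toy ABI: argument N arrives in register N.  */
  loc.regnum = argnum;
  loc.offset = 0;
  return loc;
}

/* Hypothetical prologue analyzer: is the incoming location of ARGNUM
   still valid at PC, or has the prologue already moved the value to
   its debug-info-provided location?  A real version would scan the
   instructions between FUNC_START and PC.  */
static int
incoming_location_still_valid (unsigned long func_start, unsigned long pc,
                               int argnum)
{
  (void) argnum;
  return pc == func_start;      /* Toy rule: valid only at the entry point.  */
}

/* Decide where to fetch argument ARGNUM from when stopped at PC.  */
static void
describe_arg_fetch (unsigned long func_start, unsigned long pc, int argnum)
{
  if (incoming_location_still_valid (func_start, pc, argnum))
    {
      struct param_location loc = arch_incoming_arg_location (argnum, 4);

      printf ("arg %d: read incoming register %d\n", argnum, loc.regnum);
    }
  else
    printf ("arg %d: read the debug-info-provided location\n", argnum);
}

int
main (void)
{
  describe_arg_fetch (0x1000, 0x1000, 0);   /* Stopped at the first insn.  */
  describe_arg_fetch (0x1000, 0x1008, 0);   /* Stopped past the prologue.  */
  return 0;
}

The toy rules are only placeholders for where the real instruction
scanning and ABI knowledge would go; the point is that the location
description is shared between calling functions and reading arguments.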
Then, when we arrive at a function - from the very first instruction -
we can display arguments correctly. The user doesn't have to worry
about whether the prologue has copied them into place yet.
How does that sound? Pipe dream?
--
Daniel Jacobowitz