This is the mail archive of the mailing list for the GDB project.
Re: Is stub support for the 's' packet optional or required?
On Tue, Feb 18, 2003 at 09:29:58AM -0700, Kevin Buettner wrote:
On Feb 17, 9:04pm, Andrew Cagney wrote:
> If GDB implements software single step, then the `s' packet is never
> used. Consequently, requiring the unconditional implementation of "s"
> makes little sense.
It should, but the interaction is weird. remote.c doesn't see the ""
reply until target_wait() is called. This means that the target_wait()
method would need to be modified to handle this. I guess it could
record this and then return immediately with a TARGET_WAITKIND_SPURIOUS,
vaguely like how some of the other packets are handled.
What about the situation where GDB implements software single step AND
the stub implements the 's' packet? Shouldn't GDB at least attempt to
see if the stub supports the 's' packet before deciding to never send it?
But note, I'm guessing. Just having commands to disable it would be a
good first draft.
Oh, and yes. I really have seen targets that neither had h/w single
step nor had the space to implement s/w single step locally.
No. The relevant comments read:
In my humble opinion, SOFTWARE_SINGLE_STEP should affect native code
and not remote;
# FIXME/cagney/2001-01-18: This should be split in two. A target method
that indicates if the target needs software single step. An ISA method
to implement it.
# FIXME/cagney/2001-01-18: This should be replaced with something that
inserts breakpoints using the breakpoint system instead of blatting
memory directly (as with rs6000).
# FIXME/cagney/2001-01-18: The logic is backwards. It should be asking
if the target can single step. If not, then implement single step using
breakpoints.
(All taken with a grain of salt.)
So, from the point of view of GDB's architecture, there is no difference.
> I'm much too intimidated by the stop and resume logic
> to actually change it myself, though. If there were less global state
> around infrun this might be easier.
[For remote MIPS/Linux targets, I've found some cases where GDB's
implementation of software single step causes undesirable behavior
when doing a 'stepi' operation through code that is hit by a number
of threads. Yet, when software single step is implemented in the debug
agent (and disabled in GDB), the debugging behavior is much more useful.]
Is it just slow, or do different things actually happen?
It is just very slow.