breakpoint extension for remote protocol, take II

J.T. Conklin jtc@redback.com
Mon Jun 14 16:14:00 GMT 1999


>>>>> "Andrew" == Andrew Cagney <ac131313@cygnus.com> writes:
Andrew> Have you considered what to do when there is a choice of
Andrew> software breakpoints?  Jim Ingham's pointed out to me that in
Andrew> the case of the MIPS/MIPS16 and ARM/THUMB the breakpoint might
Andrew> be 2 bytes or 4 bytes in size (and the value different in each
Andrew> case).

I was unaware that processors with multiple software breakpoint
instructions existed.  I assume that the 2 byte breakpoint
instructions have to be inserted in "high-density" code segments and
the 4 byte breakpoint insns in "low-density" segments.

We can almost get away without specifying breakpoint types.  In most
cases the stub is bound into the executable, and thus can determine
whether the breakpoint falls within a high or low density code
segment.  However, this would not work for applications like monitors
with a remote protocol front end, as those have no prior knowledge of
an arbitrary program's memory map.
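To make the lookup concrete, here is a rough Python sketch of the
decision such a stub would make.  The segment table, addresses, and
opcode bytes are all invented for illustration; a real stub would get
its map from the link step and use the real breakpoint encodings:

```python
# Hypothetical memory map: (start, end, density) entries.  A stub
# bound into the executable would know these at build time.
SEGMENTS = [
    (0x00000000, 0x0000FFFF, "high"),   # e.g. MIPS16 / THUMB code
    (0x00010000, 0x0001FFFF, "low"),    # e.g. MIPS / ARM code
]

BKPT_2BYTE = b"\xbe\x00"            # placeholder 2-byte breakpoint insn
BKPT_4BYTE = b"\x00\x00\x00\x0d"    # placeholder 4-byte breakpoint insn

def breakpoint_insn(addr):
    """Pick the breakpoint instruction appropriate for the code
    segment containing addr; raise if the address is unmapped."""
    for start, end, density in SEGMENTS:
        if start <= addr <= end:
            return BKPT_2BYTE if density == "high" else BKPT_4BYTE
    raise ValueError("address 0x%x not in any known code segment" % addr)
```

A monitor-style stub has no such table, which is exactly why it would
need the breakpoint type (or length) spelled out in the packet.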

I was hoping to avoid exposing implementation details of breakpoints
managed by this extension.  This was intended to provide flexibility
so that breakpoints could be implemented using mechanisms beyond what
might be present in the CPU alone.  For example, the stub could use a
trap insn of some type instead of the traditional breakpoint insn, or
use some hardware assistance (perhaps in a memory controller ASIC).
As a result, although implementations could be quite different, GDB
would neither know nor care.

Andrew> For such targets, always sending the length would be easiest.
Andrew> Can any one see problems with sending the length regardless?

I was considering the possibility of interpreting the length field of
software breakpoints so as to represent address ranges.  The converse,
a breakpoint that traps whenever the PC falls outside a range, is
probably more useful though.
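Both variants reduce to the same one-line test in the stub.  A sketch,
with the function name and trap_outside flag invented here rather than
taken from any protocol:

```python
def should_stop(pc, lo, hi, trap_outside=True):
    """Decide whether a range breakpoint fires at the current PC.

    With trap_outside=True this is the 'converse' behaviour: stop
    whenever the PC leaves the half-open range [lo, hi).  With
    trap_outside=False it stops whenever the PC is inside the range."""
    inside = lo <= pc < hi
    return not inside if trap_outside else inside
```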

I'm also thinking about how to do more breakpoint processing in the
stub.  The obvious candidate is thread specific breakpoints --- it is
very inefficient for the target to stop and return control to GDB,
for GDB to query the current thread and determine that the breakpoint
is not appropriate, and then for GDB to continue execution.  If the
stub could determine whether the breakpoint had really tripped,
performance would be much improved.
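The filtering itself is cheap; the savings come from skipping the
protocol round trip.  A sketch of the check the stub would make on
each trap (the table layout and names are invented for illustration):

```python
# Hypothetical breakpoint table: address -> thread id the breakpoint
# applies to, or None for a global breakpoint.
BREAKPOINTS = {0x4000: None, 0x4010: 7}

def breakpoint_trips(pc, current_thread):
    """Return True if the stub should stop and report to GDB, False
    if it should silently resume, avoiding the query/continue round
    trip entirely."""
    if pc not in BREAKPOINTS:
        return False
    wanted = BREAKPOINTS[pc]
    return wanted is None or wanted == current_thread
```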

In fact, GDB may not need to be modified significantly.  Since the
breakpoint has fired and program state is going to be examined (either
interactively or via a breakpoint command script), the overhead of
(re-)confirming that the breakpoint is either global or for the
current thread should be negligible.  This would also allow "thinner"
stub implementations that don't contain per-thread management of
breakpoints, if code size is more important than performance.

Similarly, a simple (stack based?) scheme for evaluating conditional
breakpoints could be created, although I suspect this would require
significant changes to GDB internals.  It might be useful to nail down
(or at least think about) this even if it won't be implemented in the
near term, so it can be cleanly added to the protocol.

	--jtc

-- 
J.T. Conklin
RedBack Networks
