Why enforce sw_breakpoint_from_kind() implementation in GDBserver targets

Metzger, Markus T markus.t.metzger@intel.com
Mon Jun 15 08:54:47 GMT 2020

> >>> The ARC GDB client inserts the breakpoint by writing to memory (the
> >>> legacy way). With your explanations, I plan to add the Z0 packet
> >>> support to it.  Nevertheless, should it be still necessary to have
> >>> "sw_breakpoint_from_kind" in GDBserver as a mandatory method?
> >
> > Simon, I thought about this a little. Are we aiming for deprecating
> > the old way? Then I guess that's the way to go.
> If all the gdbserver targets we support do support Z0, then yes I think
> we could consider doing that.  How would we do it?  Make insert_point
> and remove_point virtual pure to force sub-classes to implement them
> with something meaningful?
> Note that this would only concern GDBserver, other server implementations
> of the remote protocol are free to support Z0 or not.  But we could decide
> that all GDBserver ports have to support it.

The Intel Graphics architecture uses breakpoint bits inside instructions.  There
is no single breakpoint opcode, such as INT3 on IA, for example.

The breakpoint can be ignored one time, which allows stepping over breakpoints
without having to remove them.  This obviously only works if the breakpoint bit
is set in the original instruction and the instruction is not replaced with a fixed
breakpoint pattern.

I've been looking into the Z packets and the insert_point ()/remove_point ()
target methods.  Since struct raw_breakpoint is opaque to targets, it does not
allow me to store a shadow copy of the original instruction - unless I extended
mem-break.cc to do that for me.

I ended up using the gdbarch methods.


