This is the mail archive of the gdb@sourceware.org mailing list for the GDB project.



Breakpoints in delay slots


Hi all,

There is an occasional issue when debugging programs on processors that use delay slots - in my case the SH4.

The problem occurs when a breakpoint is placed on the delay slot instruction. This can happen when this instruction happens to be the first instruction of a source line, or when the user sets the breakpoint on a specific address.

In the case of the SH4, the breakpoint instruction (at least the one we use) is illegal in a delay slot. This means that, instead of triggering the breakpoint, an illegal slot exception is raised, which the user program is expected to handle and which usually results in a panic.

In any case, even if the breakpoint were handled as normal, there is the problem of where the program should be resumed. It is incorrect to set the PC to the slot instruction because this will ignore the branch. The correct thing is to set the PC to the address of the branch/slot pair - i.e. 2 bytes back in the case of the SH4.
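To make the resume rule concrete, here is a minimal sketch of that fix-up. The function name and the in_delay_slot flag are hypothetical - this is not an existing GDB interface - but the arithmetic follows from the SH4's fixed 2-byte instruction size:

```c
#include <stdint.h>

/* Hypothetical fix-up: if the trapped PC is known to fall in a delay
   slot, resume at the branch instruction instead.  On SH4 every
   instruction is 2 bytes, so the branch immediately precedes its
   slot.  */
static uint32_t
resume_address (uint32_t trapped_pc, int in_delay_slot)
{
  if (in_delay_slot)
    return trapped_pc - 2;   /* back up to the branch/slot pair */
  return trapped_pc;
}
```

The hard part, of course, is knowing that in_delay_slot is true in the first place, which is the subject of the next paragraph.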

There is no general way to identify a delay slot from instruction analysis: any instruction may be preceded by data that merely looks like a branch with a slot, and scanning backwards risks reading addresses outside valid memory. So there is no way to avoid the situation in the first place. Similarly, there is no way to tell that a breakpoint just hit was in a slot unless you make a note of how it was hit.
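To illustrate why such a scan is only a heuristic, here is a sketch that classifies a 16-bit word as a possible SH4 delayed branch. The opcode masks are from the SH4 ISA (the list is not exhaustive); any data word that happens to match one of these patterns gives a false positive, which is exactly why this analysis cannot be trusted in general:

```c
#include <stdint.h>

/* Heuristic only: does this 16-bit word look like an SH4 delayed
   branch?  A match proves nothing - the word preceding an
   instruction may be data that happens to encode one of these
   patterns.  */
static int
looks_like_delayed_branch (uint16_t insn)
{
  if ((insn & 0xF000) == 0xA000) return 1;  /* BRA disp12 */
  if ((insn & 0xF000) == 0xB000) return 1;  /* BSR disp12 */
  if ((insn & 0xFF00) == 0x8D00) return 1;  /* BT/S disp8 */
  if ((insn & 0xFF00) == 0x8F00) return 1;  /* BF/S disp8 */
  if ((insn & 0xF0FF) == 0x402B) return 1;  /* JMP @Rn */
  if ((insn & 0xF0FF) == 0x400B) return 1;  /* JSR @Rn */
  if ((insn & 0xF0FF) == 0x0023) return 1;  /* BRAF Rn */
  if ((insn & 0xF0FF) == 0x0003) return 1;  /* BSRF Rn */
  if (insn == 0x000B) return 1;             /* RTS */
  return 0;
}
```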

I need a way to solve this problem. Any suggestions?

In a bare-machine context, I have access to the running program's exception handler, so I considered installing a handler that would identify exceptions caused by breakpoints in delay slots and trigger an artificial breakpoint. This would stop the program and return control to the debugger, but GDB would not recognise this as a breakpoint it knows about, so the registers and source code location would all be confused. Then, upon restarting the program, the handler would return to the branch/slot pair, but the breakpoint would still be there and the program would enter an infinite loop.

The above technique might work if GDB could be taught to understand the artificial breakpoint. Perhaps it could check unknown traps to see whether they occur at a particular symbol name, or whether there is a particular pattern at that location (something non-specific, defined by the *-tdep file), and then take steps to fix up the situation.
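The pattern check could be as simple as comparing the bytes at the trap address against a fixed marker. Everything in this sketch is an assumption for illustration - the marker bytes (here a little-endian "trapa #0x20; nop" pair, chosen arbitrarily) and the function name are placeholders, not an existing GDB interface:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical *-tdep-style check: given the code bytes read from
   the trap address, decide whether this unknown trap is our
   artificial breakpoint stub.  The marker below is an arbitrary
   placeholder (little-endian encoding of "trapa #0x20; nop"), not a
   real convention.  */
#define STUB_MAGIC "\x20\xc3\x09\x00"
#define STUB_MAGIC_LEN (sizeof (STUB_MAGIC) - 1)

static int
is_artificial_breakpoint (const uint8_t *code_at_trap, size_t len)
{
  if (len < STUB_MAGIC_LEN)
    return 0;
  return memcmp (code_at_trap, STUB_MAGIC, STUB_MAGIC_LEN) == 0;
}
```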

Alternatively, in some configurations at least, GDB could set a hidden breakpoint on the exception handler and somehow prevent the user program from ever seeing the exception. However, although this might work for me, it won't work for any configuration in which a remote stub uses the normal trap mechanism for breakpoints - exceptions in exception handlers are bad.

In Linux, or any other operating system where the program does not own the exception handler, part of this problem will have to be solved in the kernel. But I don't believe it can be fixed up so transparently that the debugger can't tell - there is still the issue of where to restart: GDB will tell the target to restart at the breakpoint address, not the branch address.

Andrew Stubbs

