Q: ugdb && watchpoints


In short: I do not understand what a software watchpoint should
actually do to be "correct" (as far as that is possible without
hardware support) and useful.

Just in case, ugdb implements watchpoints via single-stepping,
like gdb does with can-use-hw-watchpoints=0. The only difference
is that it doesn't report the step to gdb until it notices the
memory was changed. I do not see a more clever method.
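
Roughly, the per-step check I have in mind looks like the sketch
below. The names (sw_watchpoint, read_tracee_memory, ...) are made
up for illustration only; the real code lives on the ugdb/utrace
side, not in userspace C.

	#include <stdbool.h>
	#include <stddef.h>
	#include <string.h>

	/* Illustration only, not the actual ugdb code. */
	struct sw_watchpoint {
		unsigned long addr;
		size_t len;			/* <= sizeof(old_val) */
		unsigned char old_val[16];	/* last known contents */
	};

	/* Assumed helper: copy len bytes at addr from the tracee
	   into buf, return 0 on success. */
	extern int read_tracee_memory(unsigned long addr, void *buf,
				      size_t len);

	/* Called after the tracee completes one single-step. Returns
	   true if the watched memory changed, i.e. a T05watch stop
	   should be reported to gdb; otherwise the step is not
	   reported and stepping silently continues. */
	static bool sw_watchpoint_hit(struct sw_watchpoint *wp)
	{
		unsigned char cur[sizeof(wp->old_val)];

		if (read_tracee_memory(wp->addr, cur, wp->len) != 0)
			return false;	/* unreadable: keep stepping */

		if (memcmp(cur, wp->old_val, wp->len) == 0)
			return false;	/* unchanged: do not report */

		memcpy(wp->old_val, cur, wp->len);
		return true;		/* changed: report T05watch */
	}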



Now consider a multithreaded tracee and a single watchpoint.
Say we have two threads, T1 and T2, and a "long VAR".

	(gdb) watch VAR
	(gdb) c -a

Each thread does a step and checks whether VAR was changed. However,
it is not possible to figure out which of them changed VAR. I thought
it would be better if both threads reported T05watch to gdb; in that
case the user can look at both and see what the code/insn does.

But this doesn't help. Even if both threads report T05watch
simultaneously, gdb picks a "random" thread to report and "ignores"
all the other watch reports (this is because it updates its copy of
VAR after the first notification and always ignores a T05watch if
it doesn't see that VAR has changed).

So, what should ugdb do? It looks like it doesn't make sense to
report more than one T05watch to gdb; ugdb can pick a random thread
(say, the first one that noticed the change) for the report with the
same effect. This simplifies the code, but looks very ugly. OTOH,
whatever ugdb does, it can't improve things in this respect.



Now suppose we have a single thread and two watchpoints. The problem
is that a single instruction can change both, but there is no way to
report this: remote_parse_stop_reply() doesn't expect multiple
watchpoints, and there is only a single stop_reply->watch_data_address.

Just think of sys_read(): no matter how many watchpoints we have,
a single syscall insn can change them all.
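
For example, a whole stop reply has room for exactly one "watch"
pair (the address and thread id below are made up):

	T05watch:00000000006010a0;thread:p4d2.4d3;

so even if one insn/syscall touches several watched regions, only
one address can be reported.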



Finally, a multithreaded tracee and multiple watchpoints. What should
ugdb do if some thread detects the memory change after the step?
Which watchpoint should it report in the stop reply? What should the
other threads do when they notice the change? Again, I do not see
anything better than /dev/urandom to choose the thread/watchpoint pair.



I was even thinking about serializing; that is, ugdb schedules only
one thread to step at a time. This way at least we always know who
changed the memory. But this is non-trivial, very bad from the
performance pov, and doesn't work with syscalls.


Any advice is very much appreciated. Most probably there is no
clever solution: once a traced sub-thread detects that a watched
location was changed, it should mark this wp as "reported" for the
other threads and report it to gdb. IOW, we report a random thread
and a random wp.

Please confirm if this is what we want.

----------------------------------------------------------------------

Another question. I guess ugdb should implement hardware watchpoints
as well? Otherwise there is no improvement over gdbserver in the
likely case (at least I think that a-lot-of-wps is not that common).
But we only have Z2 for both kinds. So I assume ugdb should try to
use the hardware watchpoints, but silently fall back to emulation?
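
IOW, something like the handler below, where
arch_install_hw_watchpoint() and install_sw_watchpoint() are made-up
placeholders for whatever ugdb ends up calling:

	#include <stddef.h>

	/* Assumed helpers, not real ugdb functions. */
	extern int arch_install_hw_watchpoint(unsigned long addr,
					      size_t len);
	extern int install_sw_watchpoint(unsigned long addr, size_t len);

	/* Hypothetical handler for a "Z2,addr,kind" packet: try the
	   debug registers first, silently degrade to the single-step
	   emulation when they are exhausted or unusable; gdb gets
	   "OK" either way and cannot tell the difference. */
	static int handle_Z2_packet(unsigned long addr, size_t len)
	{
		if (arch_install_hw_watchpoint(addr, len) == 0)
			return 0;	/* got a debug register */

		return install_sw_watchpoint(addr, len);
	}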

(Btw, with or without gdbserver, hardware watchpoints do not work if
 the tracee changes the memory from inside a syscall. Perhaps
 gdb/gdbserver should use PTRACE_SYSCALL.)
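
(In gdbserver-ish userspace terms I mean something like the snippet
 below: stop at the syscall boundary and re-check the watched memory
 there. Illustration only, not a patch.)

	#include <sys/ptrace.h>
	#include <sys/types.h>
	#include <sys/wait.h>

	/* Resume the tracee so that it stops again at the next
	   syscall entry/exit, where the watched memory can be
	   re-checked even though the debug registers did not fire
	   for the in-kernel store. */
	static int stop_at_syscall_boundary(pid_t pid)
	{
		int status;

		if (ptrace(PTRACE_SYSCALL, pid, NULL, NULL) == -1)
			return -1;
		if (waitpid(pid, &status, 0) == -1)
			return -1;

		/* Stopped at syscall entry or exit: compare the
		   watched locations against their saved values here. */
		return WIFSTOPPED(status) ? 0 : -1;
	}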

The last (minor) problem: gdb never sends Z2 to ugdb if
default_region_ok_for_hw_watchpoint() thinks the size of the variable
is too large.

Oleg.

