This is the mail archive of the gdb-patches@sources.redhat.com mailing list for the GDB project.



Re: [rfa:breakpoint] Correctly count watchpoints


[I had an unscheduled e-mail outage]  My reply is attached.
--- Begin Message ---
>> Date: Mon, 30 Sep 2002 12:34:53 -0400
>> From: Andrew Cagney <ac131313@redhat.com>

>> Each watch element / location / value in the watchpoint expression is assumed to consume one watch resource.

> Given that this assumption doesn't hold on at least one very popular
> architecture, is it a useful assumption?
I think the model holds. It just leads to an inefficient allocation of watchpoint resources. On the i386, one watch resource is two registers.

Anyway, the problem you refer to is why I was thinking of re-defining TARGET_REGION_OK_FOR_HW_WATCHPOINT() so that it returns the number of watchpoint resources required to watch addr/len. If {&a, sizeof a} required two registers, it could return two.
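
To make that concrete, here is a minimal sketch of the semantics I have in mind. The function name and the CORE_ADDR stand-in are illustrative, not from the patch; the chunking mirrors how the i386 splits unaligned regions. With this, {&a, sizeof a} for an eight-byte `a' would return two:

/* Sketch: a counting variant of TARGET_REGION_OK_FOR_HW_WATCHPOINT
   for the i386.  Returns the number of debug registers needed to
   cover ADDR/LEN.  Each i386 debug register watches a 1, 2 or 4
   byte region aligned to its length.  */

typedef unsigned long CORE_ADDR;  /* stand-in for GDB's type */

static int
i386_region_resource_count (CORE_ADDR addr, int len)
{
  int count = 0;

  while (len > 0)
    {
      /* Greedily take the largest aligned chunk that fits at ADDR;
         each chunk consumes one debug register.  */
      int size;

      if (len >= 4 && (addr & 3) == 0)
        size = 4;
      else if (len >= 2 && (addr & 1) == 0)
        size = 2;
      else
        size = 1;

      count++;
      addr += size;
      len -= size;
    }

  return count;
}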

> But this is very hard or even impossible to do in practice.  For
> example, on an i386, if there are two watchpoints that watch the same
> 4-byte aligned int variable, you need only one debug register to watch
> them both, so counting each one as taking one resource is incorrect.
That is a bug. A further change would be to accumulate all the regions and eliminate any overlap from the count. I don't know how often this happens in real life.
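
Concretely (a sketch only, not part of the patch), eliminating overlap would mean collecting the regions somewhere, then sorting and merging them before anything is counted:

#include <stdlib.h>

struct region { unsigned long addr; int len; };

static int
compare_regions (const void *a, const void *b)
{
  const struct region *ra = a, *rb = b;
  return ra->addr < rb->addr ? -1 : ra->addr > rb->addr;
}

/* Merge overlapping or adjacent regions in place; return how many
   distinct regions remain.  Counting resources per merged region
   then avoids charging the same int twice.  */
static int
merge_regions (struct region *r, int n)
{
  int i, out = 0;

  qsort (r, n, sizeof *r, compare_regions);
  for (i = 0; i < n; i++)
    if (out > 0 && r[i].addr <= r[out - 1].addr + r[out - 1].len)
      {
        /* Overlaps the previous region; extend it if needed.  */
        unsigned long end = r[i].addr + r[i].len;
        if (end > r[out - 1].addr + r[out - 1].len)
          r[out - 1].len = end - r[out - 1].addr;
      }
    else
      r[out++] = r[i];

  return out;
}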

> But you cannot return the correct result unless you are presented with
> the entire list of watchpoints GDB would like to set.  Alas, GDB's
> application code examines the watchpoints one by one and queries the
> target vector about each one of them in order.  Thus, the target
> vector doesn't see the whole picture and therefore cannot give the
> right answer.
For an architecture to try to allocate watchpoint resources optimally, I don't think a list of ADDR:LEN pairs is sufficient (cf. the opencores code). Instead, it should be provided with all the watchpoint expressions.
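
Something along these lines is what I am picturing; every name here is hypothetical (struct expression is just GDB's existing parsed-expression type, left opaque, and struct region is reused from the sketch above):

struct expression;                /* GDB's parsed expression type */

struct watch_request
{
  struct expression *exp;         /* the full watchpoint expression */
  struct region *regions;         /* the ADDR:LEN pairs derived from it */
  int nregions;
};

/* The target sees every request at once and can plan globally;
   returns the number of requests it can actually satisfy.  */
typedef int (plan_watch_resources_ftype) (struct watch_request *reqs,
                                          int nreqs);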

> What is the value of the result you get if we _know_ in advance that
> it will be incorrect, sometimes grossly incorrect, in some not very
> rare cases?
I think the model is sufficient for the common case: a few independent variables and no complex expressions. Following through on the opencores code, an extension would be to let an architecture define its own, more complex model, overriding this default.

For instance, the hw_resources_used_count() function in my other patch could be made part of the architecture vector so that architectures, such as the i386, could override the default model using some other type of allocation scheme.
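
As a sketch (the spelling is illustrative only; no such architecture-vector entry exists today, and CORE_ADDR is the stand-in typedef from above):

typedef int (hw_resources_used_count_ftype) (CORE_ADDR addr, int len);

/* The default model: every watched region costs one resource.  */
static int
default_hw_resources_used_count (CORE_ADDR addr, int len)
{
  return 1;
}

/* An i386 architecture-vector init could then install a counting
   version such as i386_region_resource_count () sketched earlier.  */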

>> I think it would be helpful if, at least in maintainer mode, the user could see how many resources have been allocated to a watchpoint.

> If this is for maintainers, the count should be accurate.  The i386
> native debugging implements a maintainer-mode command to do that, but
> it manipulates target-side data, and only works after all watchpoints
> have been inserted.
True, there are several pieces of information:
- how many resources GDB thinks it is consuming
- how efficiently GDB is assigning those resources to hardware
Sounds like the information is watchpoint-model dependent.

>> (I've a sinking feeling that hardware breakpoints have the same problem ...).

> Indeed they do.
I'll revise the counts.

Andrew


--- End Message ---
