I think you give me too much credit!
What happened was I had a target without h/w watchpoints, I ran the
GDB testsuite and had a set of passes and fails. After some
investigation I realised that I'd neglected to mark the target as not
supporting h/w watchpoints in the board file.
Once I'd added the no-h/w-watchpoint flag in the board file I reran
the tests, and mostly things looked better: failures and unresolved
tests had become UNSUPPORTED.
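For reference, the board-file change was just the standard flag (the surrounding board file is obviously specific to my setup):

```tcl
# In the board file: tell the testsuite this target has no hardware
# watchpoint support, so tests that require h/w watchpoints report
# UNSUPPORTED instead of FAIL/UNRESOLVED.
set_board_info gdb,no_hardware_watchpoints 1
```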
However... in watchpoint-reuse-slot.exp a number of tests that used
to pass had gone away, so I went looking at the test script.
What I saw was that, although the test declared a need for h/w
watchpoints, it would run perfectly well without them.
You'll notice that with my change, if the board file says that h/w
watchpoints are supported, then we still look for the full "Hardware
watchpoint" pattern in the output.  That is, my change does not mean
the test would pass if GDB broke and h/w watchpoints silently became
s/w watchpoints (when they shouldn't).  I think that after my change
all targets that previously ran this test are just as well tested as
they ever were.
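To illustrate the idea (this is a sketch, not the actual patch; the variable and command names here are made up for the example):

```tcl
# Sketch only: choose the expected pattern based on the board's
# declared capabilities.  A board that claims h/w watchpoint support
# must still produce the full "Hardware watchpoint" message; only
# boards marked gdb,no_hardware_watchpoints accept the plain s/w
# "Watchpoint" message.
if {[target_info exists gdb,no_hardware_watchpoints]} {
    set wp_re "Watchpoint"
} else {
    set wp_re "Hardware watchpoint"
}
gdb_test "watch global_var" "$wp_re \[0-9\]+: global_var"
```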
But, with my change, we gain additional s/w watchpoint testing for
targets that don't support h/w watchpoints.  Is this testing anything
that's not covered elsewhere?  Honestly, I don't know.  There is
probably a lot of test duplication, but I can't guarantee that
there's nothing unique in here.
I guess my question is: what's the harm in broadening the test in
this way?  If I've missed something and this change could let a bug
slip into GDB, then absolutely, this is not acceptable.  But I can't
see how (yet)...