This is the mail archive of the gdb@sourceware.org mailing list for the GDB project.



Re: Test suite docs


> Date: Sat, 13 Jan 2007 12:26:11 +0200
> From: Eli Zaretskii <eliz@gnu.org>
> 
> This is my first experience running the test suite, and it is quite
> frustrating.  All I wanted was to run the tests before and after a
> change I'm about to suggest on gdb-patches.  Unfortunately, I ended up
> wasting my scarce free time on figuring out several gory details.
> 
> While I'm no newcomer to Free Software, and I expect to spend some
> time figuring out things on my own when it comes to using a new piece
> of software, the test suite makes it exceptionally hard, IMHO.  Some
> of the reasons are out of our control: the tests use several software
> packages (Dejagnu which uses Expect which uses TCL), so answers are
> potentially scattered across several unrelated packages, and the fact
> that none of them has GNU standard Info manuals (or at least I
> couldn't find them on fencepost.gnu.org) doesn't help.

DejaGnu has an info manual, although it isn't too helpful.  But really,
on a normal Unix-like system, once you've installed DejaGnu and its
dependencies, running the testsuite is as easy as typing "make
check-gdb" in the toplevel build directory.  Up to now, that has
always worked for me.
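For reference, a typical invocation looks something like the following; the test-script names are just illustrative examples:

```
# From the toplevel build directory, after configuring and building GDB:
make check-gdb

# To restrict the run to a few test scripts, pass RUNTESTFLAGS down to
# DejaGnu, e.g.:
make check-gdb RUNTESTFLAGS="gdb.base/break.exp gdb.base/list.exp"
```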

> But that's just one more reason to have a good user-level
> documentation in GDB to help overcome these difficulties.

I wonder if that effort isn't better spent on improving the DejaGnu
manual.

> Here are the questions I couldn't find answers to:
> 
>   . Where do I find the canonical results for my platform?

In theory one should not see any FAILs, and one should work on
eliminating any KFAILs.

> 
>     People talk about XFAILs and ``unexpected failures'', but there
>     seems to be no place to consult the expected results for all the
>     tests and see if what you get is okay or not.  The test suite
>     prints a summary of the tests, but how do I find out what are
>     those ``unexpected successes'' and ``expected failures''?  What
>     are those XPASS, XFAIL, UNTESTED, and other indications displayed
>     while the suite runs?

Apart from the obvious PASS and FAIL, we have:

XFAIL

  The test failed, but this was expected because of problems out of our
  control, for example OS or compiler bugs that cannot be easily
  worked around.

XPASS

  The test was expected to fail, but passed.  Shouldn't happen, but
  sometimes we accidentally fix bugs.  It could also be that an OS or
  compiler bug got fixed, and the testsuite needs to be adjusted to
  recognize that.

KFAIL

  The test was known to fail.  This is not a new bug but a known bug
  in gdb.

UNTESTED

  The system lacks functionality to run the test, for example because
  of a missing compiler, or an unimplemented feature in the OS or the
  particular GDB config under test.

>   . How do I compare two runs?  If diff'ing testsuite/gdb.sum is the
>     right way, it seems to not be documented anywhere, and gdb.sum
>     doesn't seem to be preserved across runs, so one must manually
>     copy it to avoid overwriting it.  Am I missing something?

This is what I always do.
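Concretely: copy gdb.sum aside after the baseline run, re-run "make check-gdb" with your patch applied, and diff the two files.  The demo below uses two tiny fabricated gdb.sum fragments (the test names are made up) just to show what the interesting diff lines look like:

```shell
# Fabricated miniature gdb.sum files standing in for two real runs.
printf 'PASS: gdb.base/break.exp: break main\nFAIL: gdb.base/foo.exp: run to main\n' > before.sum
printf 'PASS: gdb.base/break.exp: break main\nPASS: gdb.base/foo.exp: run to main\n' > after.sum

# Show only the result lines that changed between the two runs.
diff -u before.sum after.sum | grep '^[+-][A-Z]'
# -FAIL: gdb.base/foo.exp: run to main
# +PASS: gdb.base/foo.exp: run to main
```

The grep pattern skips diff's "---"/"+++" header lines and keeps only added or removed result lines, so a FAIL that turned into a PASS (or vice versa) stands out immediately.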

>   . How does one disable a specific test?  Suppose some test takes an
>     exceptionally long time -- how do I run the suite without it?

All tests should complete within a reasonable amount of time.  If you
see any FAILs because of timeouts, there's a reasonable chance it's
actually the test itself that is broken.

If running a test on a particular platform really is a bad idea, you
can add some code to make it bail out.  Many tests in
testsuite/gdb.arch do this.
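A typical bail-out, modeled loosely on what the gdb.arch tests do (the target pattern and message here are just illustrative), looks like this at the top of a test's .exp file:

```
# Hypothetical guard in a gdb.arch test script: skip everything
# unless we are testing an x86_64 target.
if {![istarget "x86_64-*-*"]} {
    verbose "Skipping x86_64-only test."
    return
}
```

Since the whole script is a Tcl file run by DejaGnu, an early return before any test is attempted simply drops the script from the run.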

>     gdbint.texinfo tells how to _run_ a specific test or a short list
>     of test, but that method is not practical for _disabling_ a small
>     number of tests and running all the rest.  gdbint.texinfo also
>     says something about not ``adding expected failures lightly'', but
>     keeps silent about how does one make a test an expected failure.
>     In general, the language in that section of gdbint assumes the
>     reader is already an experienced writer of Dejagnu tests, which is
>     not a good assumption for a manual.

Well, that information should be found in the DejaGnu manual,
shouldn't it?  In fact, that manual does document the setup_xfail
command.
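For example, in a test script you mark the next test as expected to fail on matching targets with something like the following (the target triplet and the test itself are illustrative; gdb_test is the GDB testsuite's own helper proc):

```
# Expect the next test to fail on all Solaris targets, e.g. because
# of a known OS limitation.
setup_xfail "*-*-solaris*"
gdb_test "print foo" " = 42"
```

A failure of that test is then reported as XFAIL instead of FAIL on the matching targets, and a pass as XPASS.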

