Re: [2.20] [2/6] Generate .test-result files for tests with special rules


On 01/09/2014 06:11 PM, Joseph S. Myers wrote:
> This patch extends the generation of PASS and FAIL status in
> .test-result files for individual tests to cover tests with their own
> custom makefile rules.
> [...]
> There are a few stylistic questions here:
>
> * Should $(evaluate-test) go on the same line as the previous part of
>   the command if it fits, or always on a separate line (or on a
>   separate line unless the whole thing fits on a single line)?  (Patch
>   1 uses a single line; this patch generally puts it on the same line
>   as the previous part of the command if it fits.)

IMO, the "on a separate line unless it all fits on one line" is a bit more readable; putting it on the same line when there are multiple lines makes it easy for me to miss.

With that said, this is not a strongly-held opinion, and I won't object to doing it other ways.
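For concreteness, a minimal sketch of the two placements (tst-foo and tst-bar are hypothetical tests, the rule bodies are illustrative rather than taken from the patch, and recipe lines need a literal tab):

$(objpfx)tst-foo.out: $(objpfx)tst-foo
        $(test-program-cmd) > $@ 2>&1; $(evaluate-test)

$(objpfx)tst-bar.out: $(objpfx)tst-bar tst-bar.input
        $(test-program-cmd) < tst-bar.input > $@ 2>&1; \
        $(evaluate-test)

The first rule fits on one line; the second already spans lines, so $(evaluate-test) gets a line of its own.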

> * Where && is used between commands, should it go at the start of a
>   line (normal GNU style, and used in this patch) or at the end of a
>   line (used in various places in existing makefiles), and if at the
>   start, should the continuation line be indented?

I find "at the start of the line" to, again, be more readable and easier to not miss. Likewise a preference for indentation, but I think consistency with normal GNU style trumps that if it says no indentation.

> * Should all tests that generate a .out file and then further examine
>   it (e.g. comparing with an expected file) have their makefile rules
>   split (in a separate patch) into part that generates the file and
>   part that examines its contents (the two parts each having their own
>   test results)?  This would reduce the need for use of && in test
>   commands.

This would also enable us to XFAIL the examination step separately from the execution step, which strikes me as valuable. In general I think the ideal is for the XFAIL processing to be able to detect when a test has failed in a different way than expected, and this is a place where that's easy.

A somewhat hypothetical example would be a local patch that changes the negative-NaN printf behaviour, which (I suppose) would break the comparison step, while we still want to make sure the test doesn't crash. I could easily imagine a similar case where the failure is a platform-specific bug and fixing the test so it passes is the wrong answer.
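To sketch what that split might look like (hypothetical names, not from the patch; each step records its own result via $(evaluate-test)):

$(objpfx)tst-quux.out: $(objpfx)tst-quux
        $(test-program-cmd) > $@ 2>&1; \
        $(evaluate-test)

$(objpfx)tst-quux-cmp.out: tst-quux.expected $(objpfx)tst-quux.out
        cmp $^ > $@ 2>&1; \
        $(evaluate-test)

The execution step could then stay a plain PASS requirement while only the -cmp step is XFAILed for a scenario like the one above.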

> * Should all the tests generating output files not named
>   <something>.out be changed (in a separate patch) to use the .out
>   naming convention?

IMO, yes. Conventions like this are a useful step towards a system that can download test output files from a remote target, when we eventually get there.

> Tested x86_64.

I haven't done a detailed review, but aside from the above formatting preferences this generally looks fine to me. I wonder if it's possible to get things to a point where new tests that don't run $(evaluate-test) are visibly broken in some way, so that we don't have to rely on manual checks to keep this correct and to catch rules that were missed. Perhaps "your test doesn't show up in the summary file" is sufficient.
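One possible shape for such a check, as a sketch (the target name, summary-file name, and result-line format here are all assumptions on my part, not from the patch):

check-test-results: $(objpfx)tests.sum
        for t in $(tests); do \
          grep -q -E "^X?(PASS|FAIL): $$t$$" $< \
            || { echo "*** no test result for $$t"; exit 1; }; \
        done

Anything that bypasses $(evaluate-test) would then fail this target instead of silently vanishing from the summary.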

- Brooks
