This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.


Re: [2.20] [3/6] Support expected failures in .test-result files


On 02/14/2014 08:30 AM, Joseph S. Myers wrote:
> On Thu, 13 Feb 2014, Carlos O'Donell wrote:
> 
>> IMO we're going to have to extend this to be more flexible for
>> XFAILS, and I think that data will have to be kept outside of
>> glibc.
> 
> If (as I think we should, but which is a separate matter) we start to use 
> XFAILs for architecture-specific cases, I think we should put as much in 
> glibc as possible.  That is, where we presently put information about 
> conditions for known failures on per-release wiki pages, as much as 
> possible of that should go in conditions in glibc for XFAILing tests (or 
> skipping relevant parts of them, etc.), and associated comments, so that 
> distributors have less work comparing failures with our lists of known 
> issues, and generally the out-of-the-box experience with the testsuite is 
> as good as possible.

I agree with that. However, my worry is that this will lead to an overly
complex meta-language for describing when XFAILs are expected. If instead a
distribution lists its XFAILs in a file with a simple, well-defined format,
that file can be dropped in as-is, without any meta-language describing when
an XFAIL is valid.
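
To make that concrete (and this is purely a sketch: the file name, the
one-test-per-line format, and the idea of post-processing the generated
.test-result files are my invention, not anything in this series), a
distribution could ship a flat list such as:

  # xfail.list: tests this distribution expects to fail,
  # one test name per line, '#' starts a comment.
  math/test-fenv
  nptl/tst-cancel24

and fold it in with a few lines of scripting instead of any conditional
meta-language, along these lines:

  #!/usr/bin/env python3
  # Hypothetical helper: rewrite FAIL -> XFAIL in test-result output for
  # any test named in a distribution-provided xfail.list.  The
  # "STATUS: testname" line format is an assumption for this sketch,
  # not something this series defines.
  import sys

  def load_xfails(path):
      with open(path) as f:
          return {line.strip() for line in f
                  if line.strip() and not line.startswith('#')}

  def apply_xfails(results_in, results_out, xfails):
      for line in results_in:
          status, _, test = line.rstrip('\n').partition(': ')
          if status == 'FAIL' and test in xfails:
              line = 'XFAIL: %s\n' % test
          results_out.write(line)

  if __name__ == '__main__':
      apply_xfails(sys.stdin, sys.stdout, load_xfails(sys.argv[1]))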

> If a failure is expected by more than one distributor, they shouldn't need 
> to track it separately; glibc should be automatically marking it expected 
> under whatever the relevant conditions are, sharing the work of updating 
> the expectations in glibc among the distributors.

It's the "whatever the relevant conditions are" part that worries me. Such
language leads me to believe we'll end up with lots and lots of XFAILs
carrying complex conditions, because we support many versions of gcc, many
versions of the kernel, and other related tools. Worse, the version numbers
don't mean the same thing across distributions that apply custom patches.
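
To put that worry in concrete terms, I imagine entries ending up looking
something like this (invented syntax, purely for illustration, not anything
proposed in the series):

  # Hypothetical conditional-XFAIL annotations, invented for illustration.
  xfail math/test-fenv     if arch == powerpc && kernel < 3.10
  xfail nptl/tst-cancel24  if gcc < 4.8
  xfail stdlib/tst-strtod  if arch == arm && gcc < 4.7

Every one of those version comparisons means something slightly different on
a distribution that backports fixes, which is exactly the maintenance burden
I'd like to avoid.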

I'm merely worried that this system will be overly complicated, though it
does afford us a great deal of detail about when and where to expect a
failure. Distributions could carry "one more" patch to adjust the XFAILs for
their own failures, and perhaps the common XFAILs get pushed upstream as you
suggest.

> (With such an XFAILing practice, as I noted, architecture / distribution 
> maintainers should then review XPASSes they see at release time to see if 
> any are obsolete.)

That's a good reason to have them in the tests.

> (Part of the point of this patch series is to obsolete various 
> distribution-specific means of generating lists of failures from a 
> testsuite run, replacing them by a single system for generating summaries 
> of test results to which all users and distributors can contribute 
> improvements.)

In that case, might we remove old XFAILs if, say, the kernel version
condition they depended on is no longer supported?

I admit I like the fundamentals of the design, but I worry about the
complexity of the implementation.

Cheers,
Carlos.

