cacosh and other math errors

Tue Mar 17 16:43:00 GMT 2009

Carlos O'Donell wrote:
> On Wed, Mar 11, 2009 at 9:44 AM, JohnT <> wrote:
>> Using tarballs from on system: Mandriva
>> 2006 Powerpack, i686, installed kernel 2.6.12, glibc 2.3.5, gcc 4.2.4,
>> gettext 0.17, binutils 2.18, gmp 4.2.4, mpfr 2.3.1.
>> After building glibc-2.9 with the following configurations, I got nearly
>> identical math failures involving the function cacosh, as well as
>> similar summary reports on math (accuracy) errors. I may follow up on
>> this after looking at the source code.
> The quality of glibc depends on the quality of the toolchain used during
> the build. What were the test results like for your gcc 4.2.4 and binutils
> 2.18?

Sorry for not responding promptly; too many things to do. Here are
summary results from the gcc and binutils builds. The gcc configuration is
very simple, just the installation prefix as I recall. I could post the
entire results if that wouldn't be too bulky, but these are probably the
most useful parts. These gcc results are better than those I got from v.
4.3.2, 4.2.2, and especially 4.2.3. Are there any generic test suites not
affiliated with a particular OS or compiler? A compiler test and a
libc test, for example?

                === gcc Summary ===

# of expected passes            42561
# of unexpected failures        7
FAIL: gcc.dg/cleanup-10.c execution test
FAIL: gcc.dg/cleanup-11.c execution test
FAIL: gcc.dg/cleanup-8.c execution test
FAIL: gcc.dg/cleanup-9.c execution test
FAIL: gcc.dg/vect/pr20122.c scan-tree-dump-times vectorized 1 loops 2
FAIL: execution test
FAIL: execution test
# of expected failures          116
# of unresolved testcases       1
# of untested testcases         28
# of unsupported tests          308
/home2/bild/gcc/xgcc  version 4.2.4

                === g++ Summary ===

# of expected passes            13627
# of expected failures          67
# of unsupported tests          86
/home2/bild/gcc/testsuite/g++/../../g++  version 4.2.4

                === gfortran Summary ===

# of expected passes            16282
# of expected failures          15
# of unsupported tests          16
/home2/bild/gcc/testsuite/gfortran/../../gfortran  version 4.2.4

                === objc Summary ===

# of expected passes            1806
# of expected failures          7
# of unsupported tests          24
/home2/bild/gcc/xgcc  version 4.2.4

                === libstdc++ Summary ===

# of expected passes            3852
# of unexpected failures        1
FAIL: abi_check
# of unexpected successes       1
# of expected failures          14
# of unsupported tests          316
make[4]: *** [check-DEJAGNU] Error 1
make[4]: Leaving directory
make[3]: *** [check-am] Error 2
make[3]: Leaving directory
make[2]: *** [check-recursive] Error 1
make[2]: Leaving directory `/home2/bild/i686-pc-linux-gnu/libstdc++-v3'
make[1]: *** [check-target-libstdc++-v3] Error 2
make[1]: Leaving directory `/home2/bild'
make: *** [do-check] Error 2

        === binutils Summary ===

# of expected passes        45

        === gas Summary ===

# of expected passes        185
../as-new 2.18

        === ld Summary ===

# of expected passes        439
# of expected failures        4
/home/dilbert/Download/utils/binutils-2.18/ld/ld-new 2.18

make[3]: Entering directory
./test-demangle < ../.././libiberty/testsuite/demangle-expected
./test-demangle: 765 tests, 0 failures
PASS: test-expandargv-0.
PASS: test-expandargv-1.
PASS: test-expandargv-2.
PASS: test-expandargv-3.

And that's the end of the summary statements from the gcc and binutils
tests. I don't know how many tests are supposed to be in libstdc++, or
whether they were all run before the test exited with the DejaGnu error.

>> The test-double.out file reported double functions as being without
>> inline functions, which are part of O3 optimization. Building with the
>> flag -O2 should be tried next. O3 adds to the size of the files too.
>> Related bug: needs work.
>> cat math/test-double.out displays the following:
> Regardless of optimization level, the test-idouble test exercises the inline
> math functions, and the test-double test calls the math library functions.
> It is expected that -O3 adds to the size. The -O3 optimization level
> adds several compiler passes which increase code size in an
> attempt to increase speed.
>> testing double (without inline functions)
>> Failure: Real part of: cacosh (0 - 0 i) == 0.0 - pi/2 i: Exception
>> "Invalid operation" set
>> Failure: Real part of: cacosh (inf - inf i) == inf - pi/4 i: Exception
>> "Invalid operation" set
>> Failure: Real part of: cacosh (0 + inf i) == inf + pi/2 i: Exception
>> "Invalid operation" set
>> Test suite completed:
>>  2999 test cases plus 2620 tests for exception flags executed.
>>  3 errors occurred.
> These are valid failures, you should try to debug these.

First I would need to learn the math behind the desired results, complex
analysis. I took a little course in it 30 years ago and haven't thought
anything about it since.

There were other concerns about accuracy in the math results. When the
errors occupy half of the significant digits, it makes me wonder whether
there are mixed-precision problems, or numbers not getting initialized
correctly in the 80-bit extended-precision registers an x86 processor
uses for double-precision calculation. Maybe there's garbage in the last
16 digits. I wouldn't be happy with more than one or two significant
digits of error, let alone half the field of digits. That's saying the
error is 16^11 or more times the size of the least significant digit of
the result. They might as well be random numbers. Or am I missing
something in statements like these errors in the sin calculation?

0 failures; 285 errors; error rate 0.03%
maximum error:   0000000000000040083d369e8c6c47a11
error in sin(1): 000000000000000000000012713da512a

>> Running make with the -k option tells make to continue in spite of
>> errors. After it finished, errors in math were reported, but apparently
>> nowhere else in the test results. Here's the summary of errors obtained
>> by grepping the math results for "error."
> I always run 'make -k check', it's better to see *all* of the errors and get
> an idea for the state of the entire testsuite before fixing one test.
> If a test fails, it is up to the developer to examine the test "out" file,
> which contains the details of the failure.
> Cheers,
> Carlos O'Donell.

More information about the Libc-help mailing list