Failure:
Test: cos (pi/2) == 1.082856673921913968223746169860580e-32
Result:
 is:          1.08285667392191396822e-32   0x1.c1cd129024e088a67cc7p-107
 should be:   1.08285667392191396822e-32   0x1.c1cd129024e088a67cc7p-107
 difference:  5.71253355584909492648e-62   0x1.78000000000000000000p-204
 ulp       :  376.0000
 max.ulp   :  0.0000
Maximal error of `cos' is    : 376 ulp
 accepted: 1 ulp

Failure:
Test: sincos (pi/2, &sin_res, &cos_res) puts 1.082856673921913968223746169860580e-32 in cos_res
Result:
 is:          1.08285667392191396822e-32   0x1.c1cd129024e088a67cc7p-107
 should be:   1.08285667392191396822e-32   0x1.c1cd129024e088a67cc7p-107
 difference:  5.71253355584909492648e-62   0x1.78000000000000000000p-204
 ulp       :  376.0000
 max.ulp   :  0.0000
Maximal error of `sincos' is : 376 ulp
 accepted: 1 ulp
Fixing this. If someone could run:

printf ("%.100000g\n", (long double)(M_PIl/2.0L));

on a system with IBM long double, that would be great. Then I will increase the number of digits in the expected answer to match the maximum precision for the type, e.g. ~106 digits.
That should be:

printf ("%.100000Lg\n", (long double)(M_PIl/2.0L));
OK, I have a PPC system set up to test this with now.

pi/2 rounded to an IBM long double is:

1.570796326794896619231321691639740613531845480547870673025773690344164545962257761857472360134124755859375L

An infinite-precision cos(x) of this value yields:

cos(1.570796326794896619231321691639740613531845480547870673025773690344164545962257761857472360134124755859375) = 1.0828566739219139682237461698605809743657180846737456545052536933566509276210257011310354810728258977667687179060601794311587878461976409993637130916602872593217973444675992784631595571310528753415771366618685685565006009288560082552130... × 10^-32

The highest-precision answer representable in an IBM long double is:

1.082856673921913968223746169860583539424442830597581817048602989306514317306816102445404953931906744377790888342946371404131442949282377978537095231104103731922805309295654296875e-32

Notice that the infinite-precision answer differs from the rounded IBM long double value only at around ~1e-64? That's no mistake: that is around 1 ulp, so it should be possible to attain this value. However, the PPC cos(x) implementation yields:

1.0828566739219139682237461698548710058685937356711019144992345158825349842613103435536910382894240130262349699496720470547777605274664125545314163900911808013916015625e-32

They differ at around the 29th digit, a difference of 5.71253355584909492648e-62. Comparing hex digits:

Is:        0x1.c1cd129024e088a67cc7402000p-107
Should be: 0x1.c1cd129024e088a67cc74020bcp-107

This shows the answer is not as precise as the IBM long double type would allow; a rough guess is that 8-12 bits are lost.

Therefore there is nothing wrong with the test: it is now correctly showing that there are 376 ulps of error in the IBM long double implementation of cos(x). Previously we compared against zero and used a huge ulp value, ignoring a lot of inaccuracy around zero.

Note: at the expected value for cos(~pi/2), 1 ulp is 1.5193e-64.

Do you see any flaws in that analysis?
The patch I'll probably apply converts all the expected results to hex literals with the maximum precision for the type, e.g.:

diff --git a/math/libm-test.inc b/math/libm-test.inc
index 0049fcd..f0eb81a 100644
--- a/math/libm-test.inc
+++ b/math/libm-test.inc
@@ -5337,23 +5337,23 @@ cos_test (void)
      to each type.  */
 #ifdef TEST_FLOAT
   /* 32-bit float.  */
-  TEST_f_f (cos, M_PI_2l, -4.371139000186241438857289400265215e-8L);
+  TEST_f_f (cos, M_PI_2l, -0x1.777a5cp-25L);
 #endif
 #if defined TEST_DOUBLE || (defined TEST_LDOUBLE && LDBL_MANT_DIG == 53)
   /* 64-bit double or 64-bit long double.  */
-  TEST_f_f (cos, M_PI_2l, 6.123233995736765886130329661375001e-17L);
+  TEST_f_f (cos, M_PI_2l, 0x1.1a62633145c07p-54L);
 #endif
 #if defined TEST_LDOUBLE && LDBL_MANT_DIG == 64
   /* 96-bit long double.  */
-  TEST_f_f (cos, M_PI_2l, -2.50827880633416601177866354016537e-20L);
+  TEST_f_f (cos, M_PI_2l, -0xe.ce675d1fc8f8cbbp-69L);
 #endif
 #if defined TEST_LDOUBLE && LDBL_MANT_DIG == 106
   /* 128-bit IBM long double.  */
-  TEST_f_f (cos, M_PI_2l, 1.082856673921913968223746169860580e-32L);
+  TEST_f_f (cos, M_PI_2l, 0x1.c1cd129024e088a67cc74020bcp-107L);
 #endif
 #if defined TEST_LDOUBLE && LDBL_MANT_DIG == 113
   /* 128-bit long double.  */
-  TEST_f_f (cos, M_PI_2l, 4.335905065061890512398522013021675e-35L);
+  TEST_f_f (cos, M_PI_2l, 0x1.cd129024e088a67cc74020bbea64p-115L);
 #endif
   TEST_f_f (cos, 0.75L, 0.731688868873820886311838753000084544L);
...
> Therefore there is nothing wrong with the test, it is now correctly showing
> that there are 376 ulps of error in the IBM long double implementation of
> cos(x).

Guess that's the point of this report.
(In reply to comment #4)
> > Therefore there is nothing wrong with the test, it is now correctly showing
> > that there are 376 ulps of error in the IBM long double implementation of
> > cos(x).
>
> Guess that's the point of this report.

It is. My thought was that perhaps I'd lost some precision in the constants used for the test, and that was indeed the case (this is something I'd been discussing with Rich and David). After fixing the constants there can be no doubt, unless I made a mistake in my analysis, that cos for IBM long double is indeed inaccurate.

Would you mind updating the ulps file and adding a comment to this test referencing this BZ#?
I cannot update the ulps file because I don't know the correct ulp.
(In reply to comment #6)
> I cannot update the ulps file because I don't know the correct ulp.

The correct ulp is the one displayed as `ulp' in the test-suite failure, e.g. 376 for IBM long double. Run `make regen-ulps', look at the result, and update the ulps file accordingly. Did I misunderstand you?

Cheers,
Carlos.
It doesn't make sense to use the ulp of a broken implementation.
(In reply to comment #8)
> It doesn't make sense to use the ulp of a broken implementation.

The implementation isn't broken, it's just inaccurate. What problems do we have leaving the test in place and using a large ulp value for IBM long double?

I noticed that Joseph also said he would like to disable cpow once we switch to accurate ulps, since the error in cpow is going to be a huge number of ulps. Is this just a policy issue? That we disable tests if the ulp values are too high?
It would be the first one with a huge ulp. This bug is about keeping it that way.
(In reply to comment #10)
> It would be the first one with a huge ulp. This bug is about keeping it that
> way.

Thanks, that makes sense. In that case the intermediate fix is to disable the test for IBM long double because cos is not sufficiently accurate?
I'll be disabling this test until we fix this issue.
Fixed for 2.18 by:

commit ed41ffefc3f947f14d565ea8d239ff2d31f6a7fe
Author: Joseph Myers <joseph@codesourcery.com>
Date:   Thu May 9 21:30:08 2013 +0000

    Fix ldbl-128ibm cos range reduction near pi/2 (bug 15359).