Comparisons of GNU math functions

JohnT jrt@worldlinc.net
Thu Mar 26 17:18:00 GMT 2009


Regarding the odd math results I reported not long ago, I was
wondering what the differences are between the math functions of the
GNU Scientific Library (libgsl, I think) and those in glibc.  Does
anyone compare the output of standard "production" libraries against
high-precision NIST/ISO reference values computed with perhaps 256
significant bits or more?
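For concreteness, here is a rough sketch of the kind of check I have
in mind, using MPFR as one readily available way to get a 256-bit
reference (the test point and the output format are my own
illustration, not anyone's actual test suite; MPFR_RNDN is the
rounding-mode name in current MPFR, older releases spell it GMP_RNDN):

    /* Compare glibc's sin() against a 256-bit MPFR reference.
       Build with something like: gcc check_sin.c -lmpfr -lgmp -lm */
    #include <math.h>
    #include <stdio.h>
    #include <mpfr.h>

    int main(void)
    {
        double x = 0.7853981633974483;   /* arbitrary point near pi/4 */
        double libc_result = sin(x);

        mpfr_t mx, mref;
        mpfr_init2(mx, 256);             /* 256-bit working precision */
        mpfr_init2(mref, 256);
        mpfr_set_d(mx, x, MPFR_RNDN);    /* exact: x is already a double */
        mpfr_sin(mref, mx, MPFR_RNDN);   /* high-precision reference */

        /* Round the reference to double and report the discrepancy. */
        double ref = mpfr_get_d(mref, MPFR_RNDN);
        printf("glibc:     %.17g\n", libc_result);
        printf("reference: %.17g\n", ref);
        printf("diff:      %g\n", libc_result - ref);

        mpfr_clears(mx, mref, (mpfr_ptr) 0);
        return 0;
    }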

One potential source of error, which gcc hints at in its warnings
about == and !=, is that floating-point comparisons are risky because
of the extended-precision bits some processors use by default for
calculations.  The first 64 bits of two results might be identical
while the remaining 16 differ, so a value held natively in an 80-bit
register would differ from the same value rounded to IEEE 64-bit
format.  An "immediate-mode" constant stored in a binary surely has
no more than 64 bits of precision, yet it may be the result of a
source-code expression whose intermediate value carried 80 internal
bits.  If GCC 4.3.x drops the least significant 16 bits when it folds
such an expression into a constant in the object code, that could
lead to errors.
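Here is a small sketch of that hazard; whether the comparison
actually fails depends on compiler, flags, and target, but on x86 it
is most likely visible with the x87 FPU in play (e.g. gcc -O0
-mfpmath=387):

    #include <stdio.h>

    int main(void)
    {
        /* volatile keeps the compiler from folding the division at
           compile time, forcing the FPU to do it at run time */
        volatile double x = 1.0, y = 3.0;
        double a = x / y;    /* stored to memory: rounded to 64 bits */

        /* On an x87 build, x / y here may be recomputed and compared
           while still in an 80-bit register, so the test can fail
           even though both sides come from the same expression. */
        if (a == x / y)
            puts("equal");
        else
            puts("not equal");   /* possible with extended precision */
        return 0;
    }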

A more suitable way than == or != to do floating-point comparisons in
glibc test routines might be to evaluate fabs(a - b) < epsilon, where
epsilon is the desired accuracy (the absolute value matters, since
a - b may be negative).  Are there any published standards on this
question?
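Something along these lines, say (fp_nearly_equal and its tolerances
are my own illustration, not an existing glibc or GSL interface):

    #include <math.h>

    /* Illustrative only: nonzero if a and b agree to within an
       absolute tolerance (useful near zero) or a relative tolerance
       scaled by the larger magnitude.  A fixed absolute epsilon by
       itself is only meaningful when the magnitudes of a and b are
       known in advance. */
    static int fp_nearly_equal(double a, double b,
                               double abs_eps, double rel_eps)
    {
        double diff = fabs(a - b);
        if (diff <= abs_eps)     /* absolute test, handles values near zero */
            return 1;
        return diff <= rel_eps * fmax(fabs(a), fabs(b));
    }

If I remember right, GSL itself provides gsl_fcmp() for approximate
comparison to a relative accuracy, so there is some precedent in the
libraries under discussion.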

John T




