Floating point problem on glibc-2.1.1
Trevor Johnson
trevor@jpj.net
Tue May 4 08:11:00 GMT 1999
> In fact, I heard that this problem was reported from the Fermi Lab, and
> they used the code like the following for test:
>
> #include <stdio.h>
> #include <math.h>
> main()
> {
> int j, j1;
> double dj, d=8.0;
>
> dj = (1.0 + log(d) / log(2.0));
^^^
> j = (int) dj;
> j1 = (int) (1.0 + log(d) / log(2.0));
^^^
A comment or two could have told us what these 1.0 constants are for and
where they came from. I am guessing they have to do with rounding--the
programmer seems to be saying "round up by one unless the result is
exactly an integer, but never round down"--much the same thing that
ceil() accomplishes. If what was meant was "round to the nearest
integer", then 0.5 would have been a better choice (though still not
IEEE round-to-nearest, I think).
> printf("j=%d, j1=%d, dj=%f\n", j, j1, dj);
> }
>
> In glibc-2.1.1, the result is
>
> j=4, j1=3, dj=4.000000
>
> but we know that the value of "j1" is not correct. So you mean this is
> the unreliable result?
With glibc 2.0.6, this gives the result you're looking for:
#include <math.h>
#include <stdio.h>

int main()
{
	printf("%i\n", (int) rint(log(8.0) / log(2.0)));
	return (0);
}
--
Trevor Johnson
More information about the Libc-alpha mailing list