On executing the following code (compiled with g++ 4.6.3 on an Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz running Ubuntu 12.04.1 LTS):

#include <cmath>
#include <stdio.h>
using namespace std;

int main() {
    float f = 4386557896710122704971697171202048.0;
    printf("%.16f\n", tan(f));
    return 0;
}

I get the output 33554432.0, whereas the expected output is 97800744.0. In hex, this means libm says tan(0x77584625) = 0x4C000000, but the result should be 0x4CBA8A45. The "correct" value was obtained from WolframAlpha (97800745.2669871207241704332677514328806018068212018723832), and at least one other compiler produces the right output (0x4CBA8A45 in hex, 97800744.0 in decimal) for the same program. I know the input is a bit unusual, but judging from its hex representation it is a valid, exactly representable floating-point number. Using double instead of float makes the problem go away.
Compiling and running the program, I get:

./a.out
97800744.0000000000000000

Which glibc version and which architecture is your program running on?
This was fixed in 2.17 with the following commit:

commit 7a845b2c237434d4aad790aaba3a973e24ea802f
Author: Joseph Myers <joseph@codesourcery.com>
Date:   Tue Jul 3 17:10:42 2012 +0000

    Fix float range reduction problems (bug 14283).

*** This bug has been marked as a duplicate of bug 14283 ***