
trig functions on i386 and x86-64


Recently I have been looking into how trigonometric functions are
computed on several platforms, and in some applications as well.
During this work I have observed a range of behaviour.

My question now is: what is the underlying rationale for glibc's
implementation of sine (and cosine) for 64-bit doubles on x86-64
being based on C code that combines argument reduction with a
Taylor-series evaluation of the function, instead of on the FSIN and
FPREM1 instructions used on i386 (which are also present in the Intel
64 and AMD64 instruction sets)? Is it only the accuracy problems of
FSIN, or something else?
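
To make precise what I mean by that combination, here is a minimal
sketch of my own (not glibc's actual code): reduce the argument by
the nearest multiple of Pi/2, then evaluate a truncated Taylor series
on the small remainder. All names and the truncation depth are mine,
and the deliberately naive reduction below is exactly the step a real
implementation has to do far more carefully:

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    /* Truncated Taylor series, adequate only for |x| <= Pi/4. */
    static double sin_poly(double x)
    {
        double x2 = x * x;
        return x * (1.0 + x2 * (-1.0/6.0 + x2 * (1.0/120.0
                 + x2 * (-1.0/5040.0 + x2 / 362880.0))));
    }

    static double cos_poly(double x)
    {
        double x2 = x * x;
        return 1.0 + x2 * (-1.0/2.0 + x2 * (1.0/24.0
                 + x2 * (-1.0/720.0 + x2 / 40320.0)));
    }

    /* Sketch of sin(x): find the nearest multiple k of Pi/2, reduce
       to r = x - k*Pi/2 in [-Pi/4, Pi/4], then pick the series and
       sign for that quadrant.  The subtraction here uses only a
       double-precision Pi, so it loses accuracy for large |x|;
       keeping many more bits of Pi in this step is what the careful
       C reduction code is for. */
    double sketch_sin(double x)
    {
        double k = nearbyint(x * (2.0 / M_PI));
        double r = x - k * (M_PI / 2.0);
        switch ((long long)k & 3) {
        case 0:  return  sin_poly(r);
        case 1:  return  cos_poly(r);
        case 2:  return -sin_poly(r);
        default: return -cos_poly(r);
        }
    }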

And if accuracy is the answer, why is the FSIN instruction still used
in libc on i386? Compatibility?



If I understand things correctly, in glibc on i386 sine is computed
with the Intel FSIN instruction (and cosine with FCOS), which accepts
arguments in the range ±2^63 (radians).  Apparently the FPU
instructions are not completely accurate, mainly because of an
insufficiently accurate internal approximation of Pi.  This
approximation is used to reduce the argument (range reduction) to a
smaller value that is suitable for a Taylor expansion to actually
compute the function value.  Within a small enough range (±Pi/4),
however, the FSIN instruction is sufficiently accurate.
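
For anyone who wants to check this themselves, here is a small test
harness of my own (GCC-style inline assembly, x86/x86-64 target
assumed; this is not how glibc calls the instruction). It invokes
FSIN directly and compares it with libm's sin() at the double closest
to Pi, where the true sine is tiny (about 1.2e-16) so any error in
the FPU's internal Pi shows up as a large relative error:

    #include <math.h>
    #include <stdio.h>

    /* Call the x87 FSIN instruction directly: the "=t"/"0"
       constraints put the argument on top of the FP stack and read
       the result back from there. */
    static double fsin_x87(double x)
    {
        double r;
        __asm__ ("fsin" : "=t" (r) : "0" (x));
        return r;
    }

    int main(void)
    {
        double x = 3.141592653589793;   /* nearest double to Pi */
        printf("FSIN:     %.17g\n", fsin_x87(x));
        printf("libm sin: %.17g\n", sin(x));
        return 0;
    }

(Compile with e.g. gcc test.c -lm.)  If the Pi-approximation
explanation is right, the two printed values should agree only in
their leading digits.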

I have read the thread
http://sourceware.org/ml/libc-alpha/2001-05/msg00246.html
and found some clues there, but those mails were written before
x86-64 existed, and they give no hints about the decisions made for
x86-64.

Anders Lennartsson

