Re: The state of glibc libm
- From: "Joseph S. Myers" <joseph at codesourcery dot com>
- To: Vincent Lefevre <vincent+gcc at vinc17 dot org>
- Cc: libc-alpha at sourceware dot org, gcc at gcc dot gnu dot org, Geert Bosch <bosch at adacore dot com>, Christoph Lauter <christoph dot lauter at lip6 dot fr>
- Date: Wed, 14 Mar 2012 14:40:06 +0000 (UTC)
- Subject: Re: The state of glibc libm
- References: <Pine.LNX.4.64.1202291655580.7156@digraph.polyomino.org.uk><20120314143045.GG3804@xvii.vinc17.org>
On Wed, 14 Mar 2012, Vincent Lefevre wrote:
> For double-double (IBM long double), I don't think the notion of
> correct rounding makes much sense anyway. Actually the double-double
> arithmetic is mainly useful for the basic operations in order to be
> able to implement elementary functions accurately (first step in
> Ziv's strategy, possibly a second step as well). IMHO, on such a
> platform, if expl() (for instance) just calls exp(), this is OK.
expl just calling exp - losing 53 bits of precision - seems rather
extreme. But I'd think it would be fine to say: when asked to compute
f(x), take x' within 10ulp of x, and return a number within 10ulp of
f(x'), where ulp is interpreted as if the mantissa were a fixed 106 bits
(fewer bits for subnormals, of course). (And as a consequence, accurate
range reduction for large arguments would be considered not to matter for
IBM long double; sin and cos could return any value in the range [-1, 1]
for sufficiently large arguments.)
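To make that criterion concrete, here is a small sketch (mine, not anything from glibc's test machinery; the function name is only for illustration) of measuring error in units of an ulp for a fixed 106-bit mantissa.  Subnormals and the 10ulp slack on the argument itself are ignored.

#include <math.h>
#include <stdio.h>

/* Sketch only: error of a computed long double value against a more
   accurate reference, in units of ulp where ulp is interpreted as if
   the mantissa were a fixed 106 bits.  Subnormal and zero references
   are not handled.  */
static double
ulp106_error (long double computed, long double reference)
{
  int e;
  frexpl (reference, &e);      /* reference = m * 2^e with 0.5 <= |m| < 1.  */
  long double one_ulp = ldexpl (1.0L, e - 106);
  return (double) (fabsl (computed - reference) / one_ulp);
}

int
main (void)
{
  /* Shows how many such ulps a plain double approximation to e is
     away from expl (1.0L).  */
  printf ("%g\n", ulp106_error (2.718281828459045L, expl (1.0L)));
  return 0;
}

A test could then accept a computed f(x) if this error is within 10 for f(x') at some x' that is itself within 10 such ulps of x.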
> > (b) Where functions do make attempts at being correctly rounded
> > (especially the IBM Accurate Mathematical Library functions), they tend to
> > be sufficiently slow that the slowness attracts bug reports. Again, this
> > would likely be addressed by new implementations that use careful error
> > bounds and information about worst cases to reduce the cost of being
> > correctly rounded.
>
> I'm not sure that the complaints are about worst cases. More probably
> software implementation vs hardware implementation in the average
> case. But a new software implementation (better on average) could
> help.
Various bugs do complain about particular cases being slow, as well as
about such things as sinf being slower than sin. For the latter, if you
automatically generate functions based not just on the type of the
function being generated but also on what wider types are available and
efficient in hardware, you could generate a version of sinf that uses
double or long double computations internally to speed things up.
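As a minimal sketch of that idea (mine, not glibc's actual sinf, and the name is only for illustration), on a target where double is fast in hardware:

#include <math.h>
#include <stdio.h>

/* Sketch only: evaluate the float sine in double and round once at
   the end.  double's 53 bits comfortably cover float's 24, and
   argument reduction is inherited from sin; the errno and exception
   behaviour of a real libm function is not addressed.  */
static float
sinf_via_double (float x)
{
  return (float) sin ((double) x);
}

int
main (void)
{
  printf ("%.9g\n", sinf_via_double (1.0f));
  return 0;
}

The same generation scheme could pick long double instead on targets where that is the fast wide type.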
--
Joseph S. Myers
joseph@codesourcery.com