This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: [PATCH] Reduce the maximum precision for exp and log
- From: Siddhesh Poyarekar <siddhesh dot poyarekar at gmail dot com>
- To: "Joseph S. Myers" <joseph at codesourcery dot com>
- Cc: Siddhesh Poyarekar <siddhesh at redhat dot com>, libc-alpha at sourceware dot org
- Date: Mon, 11 Mar 2013 19:21:25 +0530
- Subject: Re: [PATCH] Reduce the maximum precision for exp and log
- References: <20130228160427.GE2358@spoyarek.pnq.redhat.com> <Pine.LNX.firstname.lastname@example.org> <CAAHN_R16buVZqAJhGdS9p32ttayXDe2=20NCqce+6GWL2CNFemail@example.com>
On 28 February 2013 23:08, Siddhesh Poyarekar wrote:
> On 28 February 2013 22:38, Joseph S. Myers <firstname.lastname@example.org> wrote:
>> Please give details of your error analysis for the maximum inaccuracy in
>> these implementations in glibc that shows that, together with the above
>> worst-case figures, the error in the glibc implementation cannot exceed
>> the distance from a half-way value.
> I didn't think of doing that TBH since I assumed that the above
> observations hold true for any accurate multiprecision implementation.
> I realize now that our mp multiplication and division algorithms do
> have an error bound > 1 ULP, so I guess I need to prove that it
> doesn't make a difference here.
> I'll spend some more time on this and repost the patch with changes
> if needed.
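The concern about per-operation error bounds can be made concrete with a toy sketch (plain doubles, not glibc's mp routines, so this is only an illustration of the accumulation effect): each individual multiply below is correctly rounded to within 0.5 ulp, but a chain of them compounds, and the total error of the chained result, measured against an exact rational reference, can exceed the bound of any single operation.

```python
import math
import random
from fractions import Fraction

random.seed(0)
xs = [random.uniform(0.5, 2.0) for _ in range(1000)]

# Chained double-precision product: each individual multiply is
# correctly rounded (error <= 0.5 ulp), but the roundings compound.
acc = 1.0
for x in xs:
    acc *= x

# Exact reference product using rational arithmetic; Fraction(float)
# converts each double to its exact rational value.
exact = Fraction(1)
for x in xs:
    exact *= Fraction(x)

# Total error of the chained computation, in ulps of the final result.
err_ulps = abs(Fraction(acc) - exact) / Fraction(math.ulp(acc))
print(float(err_ulps))
```

This is why a bound on each mp operation does not by itself bound the final result: the proof has to track how the per-step errors combine through the particular sequence of operations used.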
I've not been able to convince myself that the findings of the paper
apply to glibc's libm. As I understand it, the paper concludes that
rounding a sufficiently accurate approximation of f(x) to N bits is
equivalent to rounding the exact value computed at infinite precision.
It says nothing, however, about the precision of the intermediate
computations involved in computing f(x), since that depends on the
method used to arrive at the result and may need to be more than N
bits. Does this make sense, or is there a known relationship between
intermediate computation precision and the final precision that I
don't know about?
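The intermediate-precision worry above is essentially the double-rounding problem, which a small sketch can demonstrate (exact arithmetic via Python's fractions; `round_to_nearest` is a toy helper, not glibc code): rounding a value first to a 64-bit intermediate significand and then to 53 bits can give a different answer than rounding the exact value to 53 bits in one step.

```python
from fractions import Fraction

def round_to_nearest(x: Fraction, p: int) -> Fraction:
    """Round positive x to a p-bit significand, ties to even."""
    # Find e so that x / 2**e lies in [2**(p-1), 2**p).
    e = x.numerator.bit_length() - x.denominator.bit_length() - p
    while x / Fraction(2) ** e >= 2 ** p:
        e += 1
    while x / Fraction(2) ** e < 2 ** (p - 1):
        e -= 1
    scaled = x / Fraction(2) ** e
    n = scaled.numerator // scaled.denominator   # truncated significand
    rem = scaled - n
    if rem > Fraction(1, 2) or (rem == Fraction(1, 2) and n % 2 == 1):
        n += 1
    return n * Fraction(2) ** e

# A value just above a 53-bit halfway point:
v = Fraction(1) + Fraction(1, 2 ** 53) + Fraction(1, 2 ** 65)

direct = round_to_nearest(v, 53)                         # 1 + 2**-52
via_64 = round_to_nearest(round_to_nearest(v, 64), 53)   # 1

print(direct == via_64)   # False: the intermediate rounding changed the result
```

Rounding to 64 bits first loses the 2**-65 bit that made v sit above the halfway point, so the second rounding resolves the resulting exact tie downward (ties to even). This is the kind of case an error analysis of the intermediate steps has to rule out.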