This is the mail archive of the gsl-discuss@sourceware.org mailing list for the GSL project.
Re: GSL K0/K1
- From: Patrick Alken <alken at colorado dot edu>
- To: <gsl-discuss at sourceware dot org>, Pavel Holoborodko <pavel at holoborodko dot com>
- Date: Fri, 25 Mar 2016 15:18:23 -0600
- Subject: Re: GSL K0/K1
- References: <56F42FED dot 8060500 at colorado dot edu> <56F4304C dot 3000907 at colorado dot edu> <56F5998F dot 40902 at lanl dot gov>
Yes, sorry for including you so late in the discussion! Pavel (cc'd)
found that some of the GSL Chebyshev coefficients copied over from
SLATEC were rounded incorrectly (i.e., just the last digit or two was
off), which caused quite a large error near x=2 for K0 and K1.
The original GSL / SLATEC implementations broke up the K0/K1 interval
into 3 pieces: [0,2], [2,8], and [8,inf).
Pavel developed new approximations using the intervals:
[0,1] (new rational approximation from Pavel)
[1,8] (new Chebyshev from Pavel)
[8,inf] (original GSL Chebyshev)
which avoids the troublesome point x=2 and has lower overall error
when compared with arbitrary-precision libraries over many random points.
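For reference, each piece of such a scheme is a truncated Chebyshev series evaluated on a variable mapped onto [-1,1], typically via a Clenshaw recurrence. A minimal sketch of that evaluation (illustrative only; the coefficients and intervals below are made up, not Pavel's, and GSL's own convention for the constant term differs slightly):

```python
def cheb_eval(coeffs, a, b, x):
    """Clenshaw recurrence for sum_k coeffs[k] * T_k(t),
    where t = (2x - a - b)/(b - a) maps [a, b] onto [-1, 1]."""
    t = (2.0 * x - a - b) / (b - a)
    t2 = 2.0 * t
    d1 = d2 = 0.0
    # run the recurrence from the highest coefficient down to c_1
    for c in reversed(coeffs[1:]):
        d1, d2 = t2 * d1 - d2 + c, d1
    return t * d1 - d2 + coeffs[0]

# toy example: coeffs [0, 0, 0, 1] is just T_3(t) = 4t^3 - 3t
print(cheb_eval([0.0, 0.0, 0.0, 1.0], -1.0, 1.0, 0.5))  # T_3(0.5) = -1
```

A per-interval approximation like the one above would then dispatch on x to the [0,1], [1,8], or [8,inf) series.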
In principle we could simply correct the SLATEC rounding errors, but
Pavel's new method has even lower error than the corrected Chebyshev series.
During exhaustive numerical testing with random values, Pavel found
fairly tight upper limits on the K0/K1 error in each of the 3
intervals, which is why we think we should change the error estimate to
the empirical values found through these numerical simulations.
Pavel found similar problems in I0/I1, which he is now working on correcting.
On 03/25/2016 02:03 PM, Gerard Jungman wrote:
As you say, this is the first I have seen of this.
The results look very good.
I briefly compared the code to the old version.
It looks like you changed the domains for some
of the expansions as well as the coefficients
and some of the fractional transformations.
So that's a lot of differences to track.
Just glancing at the differences, I cannot tell
why it is so much better. Do you know exactly what made the difference?
The main problem seemed to be for x in (1,2).
The big break in the 1/x plots at x=2 suggests
that there was a problem in the coefficients
in that domain. Chebyshev expansions should
not behave that way; they should provide
essentially uniform error over the domain.
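One way to see why a last-digit coefficient error shows up worst at an interval edge like x=2: perturbing a single coefficient c_k by delta shifts the computed series by exactly delta * T_k(t), which is bounded by |delta| everywhere on the mapped domain but attains that bound at the endpoints t = +/-1. A quick numerical illustration (k=5 chosen arbitrarily):

```python
import math

def T(k, t):
    # Chebyshev polynomial T_k(t) on [-1, 1]
    return math.cos(k * math.acos(t))

delta = 1e-15   # a "last rounded digit" error in one coefficient c_5
# induced error in the series value at a few points of the mapped domain;
# it grows toward the endpoint t = 1 (i.e. the edge of the x-interval)
for t in (0.0, 0.5, 1.0):
    print(t, abs(delta * T(5, t)))
```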
Was it some kind of typo in the tables?
That's what it looks like to me. But
maybe not... in fact I don't see how
that would have happened.
Maybe it was something more subtle. The basic method,
of course, goes back to SLATEC, so I suppose that
version has the same problems...?
We can and should have a discussion about error estimates
in general. But I will create another thread for that, so
as not to expand or hijack this topic.
As for the method and error estimates for K0 and K1, I am
trying to understand myself what is happening there. Most
of it is straightforward; the only problem is the domain
x <= 2.0. I wrote this code about 20 years ago and
haven't looked at it since. The assumption was that
the original SLATEC was basically best possible, so
it did not get a lot of scrutiny.
I looked at the SLATEC source. In the code I refer to "bk0.f",
although it seems to actually be called "besk0.f", so maybe
there have been further changes to SLATEC since then?
Anyway, SLATEC uses this method where it evaluates I0 for
some reason. So I do the same. I have no idea why they do
this. Maybe there is an original paper somewhere that
explains it. Clearly it is not necessary, and it's
much cleaner to avoid this dependency as well.
If the expansion was correct in the old code,
then the error must have been creeping in
through the evaluation of I0.
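For cross-checking that path, the standard small-x identity (Abramowitz & Stegun 9.6.13) shows how K0 is built from I0 plus an analytic correction series; this is presumably the structure SLATEC exploits, though not necessarily its exact arrangement. A self-contained sketch, adequate as a reference for x <= 2 (function names are mine):

```python
import math

EULER_GAMMA = 0.5772156649015329

def i0_small(x, terms=25):
    # power series I0(x) = sum_k (x^2/4)^k / (k!)^2, converges fast for x <= 2
    q = x * x / 4.0
    s = term = 1.0
    for k in range(1, terms):
        term *= q / (k * k)
        s += term
    return s

def k0_small(x, terms=25):
    # A&S 9.6.13:
    # K0(x) = -(log(x/2) + gamma) * I0(x) + sum_{k>=1} (x^2/4)^k / (k!)^2 * H_k
    # where H_k = 1 + 1/2 + ... + 1/k
    q = x * x / 4.0
    term, s, h = 1.0, 0.0, 0.0
    for k in range(1, terms):
        term *= q / (k * k)
        h += 1.0 / k
        s += term * h
    return -(math.log(x / 2.0) + EULER_GAMMA) * i0_small(x, terms) + s
```

If the I0 feeding this expression is itself mis-rounded, the error propagates directly into K0, which is exactly the scenario suspected above.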
Has anybody checked I0? If it is also messed up,
then maybe we have found the culprit.
The error estimate in that section is just an application
of some derivative estimates for that particular expression.
But why that expression is used is entirely a mystery to me.
A brief comment on the random testing: This is a good way
to estimate the coefficient in the uniform error bound.
But in principle it should not be necessary. There must
be rigorous bounds for these expansions based on the
uniformity of the Chebyshev expansions. Can the
necessary coefficients be evaluated directly as
part of the computation of the expansions? That
would be best of all. These error bounds could
be documented somewhere for posterity, at the very least in the source comments.
On 03/24/2016 12:22 PM, Patrick Alken wrote:
The Bessel function results look good. If everything is tested, you can
merge it all into the master branch. I'm cc'ing gsl-discuss so that this
discussion will start to be archived.
Regarding the error estimates, an attempt was made early on in GSL to
provide error estimates for all the special functions. From section 7.2
of the manual:
The field val contains the value and the field err contains an estimate
of the absolute error in the value.
I confess I don't fully understand all the error estimation, or even how
accurate the estimates are. I don't know if any studies were done to
verify the accuracy of these error estimates. One of the original GSL
developers (Gerard Jungman) wrote a lot of this code (I've cc'd him on
this email) - maybe he can help shed light on this.
Since you've already done exhaustive comparisons of your new Bessel
functions against arbitrary precision libraries, maybe you could simply
change the error estimate to be:
err = factor * GSL_DBL_EPSILON
where factor is the constant you've found in your error analysis. This
way the calculation would be very fast (and probably more accurate than
the current error estimation).
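A generic sketch of how such a factor is extracted from random testing (the function and reference below are toys standing in for the new K0/K1 and an arbitrary-precision library; the measured factor here is in units of relative error, so whether the final err should also be scaled by |val| depends on whether the empirical bound is relative or absolute):

```python
import math
import random

DBL_EPSILON = 2.220446049250313e-16  # value of GSL_DBL_EPSILON

def exp_taylor(x, n=20):
    # toy stand-in for the function under test (truncated Taylor series of exp)
    s = term = 1.0
    for k in range(1, n):
        term *= x / k
        s += term
    return s

random.seed(0)
factor = 0.0
for _ in range(10000):
    x = random.uniform(0.0, 1.0)
    ref = math.exp(x)            # stand-in for the arbitrary-precision reference
    rel = abs(exp_taylor(x) - ref) / (DBL_EPSILON * abs(ref))
    factor = max(factor, rel)

# the proposed estimate would then be: err = factor * DBL_EPSILON
print(factor)
```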
We can wait and see if Gerard has any suggestions on this too.
Gerard - since you haven't been following this email chain, Pavel has
found more accurate Chebyshev expansions for the Bessel K0/K1 functions
which have lower errors than the original GSL implementations when
compared with arbitrary precision libraries. Through lots of random
testing he has found upper limits on the errors of each function given
by some positive constant times GSL_DBL_EPSILON. We are wondering if it's
safe to change the error estimation of these functions (I don't know the
origin of the current calculations for these error estimates).