This is the mail archive of the libc-alpha@sourceware.org
mailing list for the glibc project.
Re: Minimum floating-point requirements
- From: Adhemerval Zanella <azanella at linux dot vnet dot ibm dot com>
- To: libc-alpha at sourceware dot org
- Date: Sun, 16 Feb 2014 20:41:48 -0300
- Subject: Re: Minimum floating-point requirements
- Authentication-results: sourceware.org; auth=none
- References: <Pine dot LNX dot 4 dot 64 dot 1401302108080 dot 12540 at digraph dot polyomino dot org dot uk> <Pine dot LNX dot 4 dot 64 dot 1402072347200 dot 12232 at digraph dot polyomino dot org dot uk> <OF54854818 dot C108092B-ON86257C7B dot 0063B8C0-86257C7B dot 006B6B53 at us dot ibm dot com> <Pine dot LNX dot 4 dot 64 dot 1402102231400 dot 26591 at digraph dot polyomino dot org dot uk> <CAGWvnyn-Cj4Mw4efQTs2MYFHhknyskAEznEqpGeYnb9rY3X4hg at mail dot gmail dot com> <Pine dot LNX dot 4 dot 64 dot 1402150136490 dot 31722 at digraph dot polyomino dot org dot uk> <CAGWvny=aJCdoQvC8q-dNvFdDNAqRCcZ7_adD=Sst8FDr0MN1Qg at mail dot gmail dot com> <Pine dot LNX dot 4 dot 64 dot 1402151656510 dot 6358 at digraph dot polyomino dot org dot uk> <20140216045946 dot GG184 at brightrain dot aerifal dot cx> <CAGWvny=9Jeippop9xuERzwgWL8+QbZiqQFhgxGNdAW0C=EnOLQ at mail dot gmail dot com> <20140216214623 dot GI184 at brightrain dot aerifal dot cx>
On 16-02-2014 18:46, Rich Felker wrote:
> On Sun, Feb 16, 2014 at 02:40:23PM -0500, David Edelsohn wrote:
>> On Sat, Feb 15, 2014 at 11:59 PM, Rich Felker <email@example.com> wrote:
>>> On Sat, Feb 15, 2014 at 05:21:29PM +0000, Joseph S. Myers wrote:
>>>> But I think this is a matter of imposing a decision about the PowerPC
>>>> "ecosystem" (see <https://www.gnu.org/philosophy/words-to-avoid.html>) on
>>>> glibc as much as imposing anything from glibc on anything else. And the
>>>> ultimate question is about the GNU system rather than that "ecosystem".
>>> Indeed. I see this issue as PowerPC folks imposing their legacy
>>> brokenness on everybody else (libc and application developers who have
>>> to work around it).
>> Every ABI has peculiarities and historical baggage. One of the
> The original powerpc ABI (which gcc still supports, and which we
> require gcc to be configured with for use with musl libc, since it
> requires IEEE types) simply has long double == double. The
> double-double nonsense was added long after it was known how bad it
> is, and it should never have been added in the first place, but
> presumably IBM fans pushed it through. So this is not just historical
> baggage but a relatively new imposition of a historical mistake onto
> the glibc powerpc ABI which used to be free of this mess.
>> strengths of the GNU Toolchain has been its acceptance of and
>> accommodation of many different ISAs, ABIs and OSes. That is one of
> There's a difference between accepting and accommodating legitimate
> differences between cpu archs that don't affect the ability to satisfy
> the contracts applications expect, and accommodating a nonsensical
> type pushed by IBM folks that's not even a native type provided by the
> hardware but just a lazy, poorly designed, but fast way of getting
> more precision by using a hybrid hard/soft-float approach to operate
> on a pair of hardware doubles.
We already know how you feel about IBM long double and how musl always gets it right.
Let's move on. The fact is that IBM long double has been the de facto ABI for powerpc
for some time, despite its idiosyncrasies.
I'd like to focus on the practical side: you wrote 'I make this complaint about IBM
double-double as a floating point programmer who specifically has to work around
its brokenness'. Do you have an empirical example where you had to circumvent the
issues with IBM long double to make your code work properly? I'm asking because
I sincerely would like to know of a real-world case where IBM long double is causing
problems.
As David said, the original aim of IBM long double was not really IEEE conformance:
it was to provide a type with somewhat more precision than double that could benefit
from the double hardware, *but* with some trade-offs. Double-double arithmetic was
never expected to behave with IEEE conformance; if you check the literature,
it is suggested every time as a complement to double operations, *not* a replacement.
However, IBM chose it for long double, and although you can argue that was a bad
decision, let's focus on the current issue. What David is arguing, and I tend to agree
with him, is: what would be the real gains in trying to patch IBM long double? What
would the performance implications of such changes be? Would it still serve the
original purpose of providing a relatively fast type with relatively more precision
than a software implementation would?
For instance, Joseph's patch http://gcc.gnu.org/ml/gcc-patches/2014-01/msg00157.html
showed some performance degradation on pow and exp in my experiments (about 8%
using the glibc benchtests). That is because we use expl inside exp, and the patches
don't help anything in this case but do hurt performance. And these are the issues we
are trying to focus on at IBM right now: will the gains of making this type behave
more closely to an IEEE-conformant type outweigh the possible performance cost of a
type that was intended to have trade-offs?
And currently I also tend to agree with David: Joseph's current goal is to push a
modification that *he* sees as important.