Bug 4943 - Inconsistent rounding behaviour for sprintf and IEEE doubles

Status: RESOLVED WORKSFORME
Product: glibc  Component: libc  Version: 2.3.4
Priority: P2  Severity: normal
Platform: i386 (Linux)
Reported: 2007-08-20 14:22:05 +0000 by paul
Modified: 2014-07-04 16:02:23 +0000
CC: drepper.fsp, glibc-bugs, vincent-srcware

Comment 0 by paul, 2007-08-20 14:22:05 +0000
Sample source code and output demonstrating the problem (compile with -lm):

#include <stdio.h>
#include <math.h>

static inline double dp(double d, int p)
{
    double f = pow(10, p);
    return round(d * f) / f;
}

int main(int a_argc, char *a_argv[])
{
    char l_buf[256];
    double d = 2597.525;

    sprintf(l_buf, "%.15f", d);        printf("a) %.3f=%s\n", d, l_buf);
    sprintf(l_buf, "%.2f", d);         printf("b) %.3f=%s\n", d, l_buf);
    sprintf(l_buf, "%.2f", dp(d, 2));  printf("c) %.3f=%s\n", d, l_buf);

    d = 2597.625;
    sprintf(l_buf, "%.15f", d);        printf("d) %.3f=%s\n", d, l_buf);
    sprintf(l_buf, "%.2f", d);         printf("e) %.3f=%s\n", d, l_buf);
    sprintf(l_buf, "%.2f", dp(d, 2));  printf("f) %.3f=%s\n", d, l_buf);

    return 0;
}
Output:
a) 2597.525=2597.525000000000091
b) 2597.525=2597.53
c) 2597.525=2597.53
d) 2597.625=2597.625000000000000
e) 2597.625=2597.62
f) 2597.625=2597.63
Note that where the value 2597.625 is stored precisely, the output for sprintf
is inconsistent with that expected for the default selected IEEE rounding mode.
Comment 1 by vincent-srcware, 2007-09-19 13:28:49 +0000
I don't think there is a bug. The IEEE-754 standard doesn't seem to specify to
which decimal value a halfway case should be rounded. As for consistency, the C
function round() requires halfway cases to be rounded away from zero, but the
IEEE-754 binary rounding rules require halfway cases to be rounded to an even
mantissa.
Comment 2 by paul, 2007-09-19 17:48:23 +0000
(In reply to comment #1)
> I don't think there is a bug. The IEEE-754 standard doesn't seem to specify to
> which decimal value a halfway case should be rounded. As for consistency, the C
> function round() requires halfway cases to be rounded away from zero, but the
> IEEE-754 binary rounding rules require halfway cases to be rounded to an even
> mantissa.
I believe it is easy to miss the wood for the trees here.
When one speaks of "rounding", one simply refers to the rule which decides how
to promote one level of precision (e.g. 25.5) to the next (26).
The IEEE floating point arithmetic processor is programmable to allow a choice
in the rules one prefers depending on circumstance. In this specific
circumstance, the formatting of a number for display, I would expect the
promotion of the value 25.5 to 26, rather than 25, which is indeed the result in
99% of the cases. printf("%.0f", floor(25.5)) should give me the value 25,
whilst printf("%.0f", ceil(25.5)) should give the value 26.
When a precision of 2 decimal places is specified to the printf() function, the
value 2597.625 (stored precisely to 3 decimals of precision) should logically be
promoted to the value 2597.63, and not 2597.62. This is in fact the case for
the Microsoft C library, MSVCRT.DLL.
Whether or not you choose to describe this as "unexpected behaviour" or less
diplomatically, as a "bug", the behaviour is certainly inconsistent, as shown in
the example.
Comment 3 by vincent-srcware, 2007-09-19 21:22:32 +0000
I don't see any inconsistency. The round, trunc, floor and ceil functions have
their own rules. In the round-to-nearest mode (i.e. the default rounding mode),
the glibc chooses to use the IEEE-754 even-rounding rule to round halfway cases
for printf, as shown by the following program:
#include <stdio.h>

int main (void)
{
    double x;
    for (x = -8.5; x <= 8.5; x += 1.0)
        printf ("%4g %.0f\n", x, x);
    return 0;
}
which outputs:
-8.5 -8
-7.5 -8
-6.5 -6
-5.5 -6
-4.5 -4
-3.5 -4
-2.5 -2
-1.5 -2
-0.5 -0
0.5 0
1.5 2
2.5 2
3.5 4
4.5 4
5.5 6
6.5 6
7.5 8
8.5 8
This seems perfectly correct to me.
Comment 4 by paul, 2007-09-20 00:01:39 +0000
(In reply to comment #3)
> I don't see any inconsistency....
Thank you for making my point for me. Your exact same code compiled with MinGW
for Windows produces the following output:
$ gcc -o tst.exe tst.cpp
paul@MINIEPC /i/ldev
$ ./tst.exe
-8.5 -9
-7.5 -8
-6.5 -7
-5.5 -6
-4.5 -5
-3.5 -4
-2.5 -3
-1.5 -2
-0.5 -1
0.5 1
1.5 2
2.5 3
3.5 4
4.5 5
5.5 6
6.5 7
7.5 8
8.5 9
paul@MINIEPC /i/ldev
$
You still don't spot the inconsistency?
Rounding toward even numbers is mathematical nonsense.
Comment 5 by vincent-srcware, 2007-09-20 01:02:33 +0000
(In reply to comment #4)
> Thank you for making my point for me. Your exact same code compiled
> with MinGW for Windows produces the following output:
This is a different C library under Windows, not the glibc.
> You still don't spot the inconsistency?
No inconsistencies. You get different results because the C standard doesn't
specify these cases and the C libraries are different. FYI, the Mac OS X libc
and the Solaris 2.7 libc both give the same results as glibc. So, if there is a
library that needs to change in order to make the results the same everywhere,
it is MinGW, not glibc.
> Rounding toward even numbers is mathematical nonsense.
You may think that, but this doesn't make a standard. IEEE-754 rules are already
well-established. So, it would be a bit difficult for you to make things change.
Comment 6 by paul, 2007-09-20 07:03:54 +0000
(In reply to comment #5)
> (In reply to comment #4)
> > Thank you for making my point for me. Your exact same code compiled
> > with MinGW for Windows produces the following output:
>
> This is a different C library under Windows, not the glibc.
The C library used under Windows is, as I have already pointed out, MSVCRT.DLL.
You will note that my bug report is against glibc. I am well aware of the
source of the problem. MinGW links against the standard Microsoft runtime.
> > You still don't spot the inconsistency?
>
> No inconsistencies. You get different results because the C standard doesn't
> specify these cases and the C libraries are different. FYI, the Mac OS X libc
> and the Solaris2.7 libc both give the same results as glibc. So, if there is a
> library that would need to change in order to make the results the same
> everywhere is MinGW, not the glibc.
I have 20 years in the financial services industry and 25 years in development.
This little "peculiarity" of glibc has caused us no end of trouble. It is
a significant deterrent to our use of glibc.
Whilst I could believe what you say for Mac OS, as it derives from FreeBSD and
quite likely uses glibc, I would query your statement about Solaris2.7. Even if
Solaris indeed does exhibit this behaviour, there is still no need to assume
that it is correct behaviour. One would be better off emulating a library which
has 90% following in the industry rather than one which sees 5% usage.
> > Rounding toward even numbers is mathematical nonsense.
>
> You may think that, but this doesn't make a standard. IEEE-754 rules are already
> well-established. So, it would be a bit difficult for you to make things change.
Oh, I don't think change is too difficult. Just taking the content of this bug
report and publishing it to, say, OSNews, or perhaps even Linux Gazette, would
have the whole open source community behind me. The entire Microsoft development
community clearly already agrees with me. I do not stand alone.
Please note: I am not suggesting that you change the IEEE-754 specification, or
even the default IEEE rounding mode. I AM suggesting that when an 8 byte
floating point number is output using printf(), one would expect output as per
the Microsoft library.
This has got nothing at all to do with IEEE arithmetic specifications, internal
binary storage or the chosen default IEEE mode for rounding the results of
calculations. It has everything to do with the implementation of the C99
function printf() and the conversion of an IEEE number verbatim to a character
array representation.
To get a bit more detailed, the IEEE-754 specification for an 8 byte floating
point number stores a sign bit, 11 bits for the exponent and 52 bits for the
mantissa (53 significant bits, counting the implicit leading bit). It goes on to
specify a choice of four rounding modes to determine the result of mathematical
calculations when precision is lost. It is clearly more accurate in accountancy
or banking systems to round towards an even mantissa (not an even integer, an
even mantissa) as the error averages out.
Conversely, and most notably, IEEE-754 does not specify what the expected output
should be for the C functions sprintf(), fprintf() and printf().
Comment 7 by drepper.fsp, 2007-09-20 13:57:31 +0000
> I would query your statement about Solaris2.7. Even if
> Solaris indeed does exhibit this behaviour, there is still no need to assume
> that it is correct behaviour. One would be better off emulating a library which
> has 90% following in the industry rather than one which sees 5% usage.
This proves that a) you're an idiot and b) you are to the highest degree egotistical.
The current behavior is correct, widely used, existing practice on all platforms
which try to adhere to standards. And yet you want a change because some other
system where the authors never spend a second thinking about standards does it
differently and you've written your code to match that expectation.
Go away!
Comment 8 by paul, 2007-09-20 14:33:05 +0000
That was a very nice answer to get to a very real query. Thank you.
You and Microsoft really cannot both be right about this one, you know. Is it
really that hard to admit you are wrong?
No, Sir, I will not "go away" as you put it. Consider for a moment the
implications: I write an app that reports financial statements using Windows
and sprintf(). I later port the whole shananigan to Linux. You don't need a
degree in astrophysics to guess which library will get kicked.
The current behaviour is correct only in how the FPU deals with rounding the
error at the 15th decimal of precision. That behaviour, and indeed the IEEE
specification, has absolutely nothing to do with what printf() produces when
asked to represent a current value, which for 8 byte doubles is always stored to
15 decimals of precision, as text.
I contend that you are yourself
a) Egotistical and
b) an idiot.
Furthermore, you have no clue as to the damage you are causing to the credibility
of GLibC as a library, and the open source community as a whole with your
ridiculous attitude.
No. I will not go away. I will take this one as far as it needs to go. I will
stick to you like a leech until you either give me a very good reason for why
GLibC differs from Microsoft, or you fix the problem.
I am not intimidated by you or your childish responses. If it's credibility you
think you have, let us see how well you like having your comments published.
Kindest regards
Comment 9 by vincent-srcware, 2007-09-20 15:04:55 +0000
(In reply to comment #8)
> You and Microsoft really cannot both be right about this one, you know.
Something you don't seem to have understood: the standards do not specify how
the halfway cases should be rounded. So, yes, glibc and Microsoft can be both
right while yielding different results.
> The current behaviour is correct only in how the FPU deals with rounding the
> error at the 15'th decimal of precision. That behaviour, and indeed the IEEE
> specification has absolutely nothing to do with what printf() produces when
> asked to represent a current value, which for 8 byte doubles is always stored
> to 15 decimals of precision, as text.
The IEEE754-1985 standard also (partly) specifies binary to decimal conversions
(see Section 5.6).
Comment 10 by paul, 2007-09-20 15:55:49 +0000
(In reply to comment #9)
Ah, a spot of sanity and logical argument again rather than hurling insults. Cool.
> (In reply to comment #8)
> > You and Microsoft really cannot both be right about this one, you know.
>
> Something you don't seem to have understood: the standards do not specify how
> the halfway cases should be rounded. So, yes, glibc and Microsoft can be both
> right while yielding different results.
I actually took the time to scan the IEEE doc once I started to get some idea as
to what was going on here. Perhaps you are right that I misunderstood at first.
However:
The standards doc describes exactly what should happen with halfway cases. It
says that numbers should be rounded to the closest value, or if it is precisely
halfway, that the number should be rounded to the closest even value.
What the document is describing has nothing to do with printf(), but
everything to do with the results of arithmetic operations. That 15th decimal
of precision is always in doubt; there is inevitably an error associated with it.
What we do know is that for a large enough set of operations, the probability
for error will be the same for each operation, so essentially we can prevent the
error from being additive by rounding towards the closest even number in cases
where the representation is on a boundary.
Yes, you are theoretically correct that the behaviour of the same function in
two disparate libraries may differ without either implementation being wrong.
Another way of saying this is that the two libraries simply follow different
specifications. However, this raises the question: which specification are we
following for this library, and which does Microsoft implement?
I would assume that Microsoft does at least loosely follow a standard. I believe
GLibC follows the C99 standard.
Nevertheless, sprintf() behaviour should be specified in the C99 spec, and not
the IEEE-754 spec, even though the C99 spec must deal with IEEE values.
When I speak of consistency, and getting back to my original example, the
results of printf("%.2f", 2597.525) and printf("%.2f", 2597.625) should not be
determined by the fact that the FPU can store 2597.625 precisely. Note that
following the "round to closest even value" rule should have produced 2597.52
and 2597.62 if that were the issue here (.02 being the closest even value).
Instead, we see 2597.53 and 2597.62.
What we really have is a number, 2597.625, which may be stored precisely with no
error at all. The practical question is: will the FTSE and their clients be
happy with us rounding that index value down to 2597.62 just because the FPU can
store that value precisely, whilst in the next breath rounding the index value
2597.525 up to 2597.53 because it cannot?
I don't know whether I am missing some obscure logic or reasoning here, and I am
the first to admit that I am no specialist in this area.
Please. I really really really want a solution here. I am not purposely trying
to be pig headed. Apologies for reciprocating in my previous response, but I do
not appreciate being called egotistical and an idiot.
> > The current behaviour is correct only in how the FPU deals with rounding the
> > error at the 15'th decimal of precision. That behaviour, and indeed the IEEE
> > specification has absolutely nothing to do with what printf() produces when
> > asked to represent a current value, which for 8 byte doubles is always stored
> > to 15 decimals of precision, as text.
>
> The IEEE754-1985 standard also (partly) specifies binary to decimal conversions
> (see Section 5.6).
Didn't see that one- will check. Thanks.
Comment 11 by vincent-srcware, 2007-09-21 15:13:51 +0000
(In reply to comment #10)
> Nevertheless, sprintf() behaviour should be specified in the C99 spec, and not
> the IEEE-754 spec - even though the c99 spec must deal with IEEE values.
Before I give some information about current standards and what is in the 754r
draft (since there have been some changes on this subject), I note that in your
bug report (what you posted on 2007-08-20), you just mentioned IEEE, but not
C99: "the output for sprintf is inconsistent with that expected for the default
selected IEEE rounding mode." This makes your arguments difficult to follow...
Now, C99 describes the behavior of the printf family in 7.19.6.1#13. For the
examples given in this bug report, C99 says that the result should be correctly
rounded. C99 does not say which result should be returned in case of a tie, but
in practice, it relies on the definition given in IEEE-754.
In 5.6, IEEE754-1985 also requires correct rounding for binary <-> decimal
conversions (though the range conditions are a bit different). This is
consistent with C99, and the result is specified by the "rounding direction".
IEEE754-1985 defines only one round-to-nearest mode, but in binary only! I
assume that's a defect that has been fixed in a later version (see below). Of
course, it can be *implicitly* generalized to decimal, but not the halfway cases
(the even-rounding notion wasn't introduced at that time: it was just said that
the least significant bit had to be 0, which makes sense only in binary).
LIA-2 says that halfway cases are implementation-defined, but if iec_559F =
true, then the semantics is completely defined by IEC 60559 (which is, AFAIK,
IEEE-754, possibly revised): "ties are rounded to even last digit". It also says
that this is the rounding used by the conversions. So, glibc conforms to LIA-2 +
IEC 60559, but not Microsoft's library.
Now, the latest draft (1.4.0) of IEEE754r (which will probably be more or less
the final version) defines two round-to-nearest modes: roundTiesToEven and
roundTiesToAway (I think this is self-explanatory). It also says:
"An implementation of this standard shall provide roundTiesToEven. It shall be
the default rounding-direction mode for results in binary formats. The default
rounding-direction mode for results in decimal formats is language-defined, but
should be roundTiesToEven. A decimal implementation of this standard shall
provide roundTiesToAway as a user-selectable rounding-direction mode. The
rounding mode roundTiesToAway is optional for binary."
This completely justifies the glibc behavior, at least currently. Of course, the
C standard doesn't yet define which rounding-direction mode is used for decimal
results, but when the choice is made (if it ever is), it will probably be
roundTiesToEven, as suggested by IEEE754r and LIA-2.
Finally, I don't think many programs rely on roundTiesToAway behavior for
binary-to-decimal conversion in halfway cases, as most binary values are not
representable exactly in decimal. Indeed, whatever choice is made for halfway
cases, there are problems rounding values of the form xxx.525 to binary and back
to decimal with 2 fraction digits (depending on xxx, you'll obtain either xxx.52
or xxx.53). You really need decimal arithmetic or some form of emulation.
Comment 12 by paul, 2007-09-21 22:31:27 +0000
(In reply to comment #11)
> (In reply to comment #10)
> > Nevertheless, sprintf() behaviour should be specified in the C99 spec, and not
> > the IEEE-754 spec - even though the c99 spec must deal with IEEE values.
>
First, I have to thank you for the effort and time you have spent answering my
query. It might not be the answer I would like, but it is a very complete
answer. I will leave the body of your response reasonably intact to preserve
context, and add my comments interspersed.
> Before I give some information about current standards and what is in the 754r
> draft (since there have been some changes on this subject), I note that in your
> bug report (what you posted on 2007-08-20), you just mentioned IEEE, but not
> C99: "the output for sprintf is inconsistent with that expected for the default
> selected IEEE rounding mode." This makes your arguments difficult to follow...
My humble apologies. This was not my intention. I simply assumed that the
problem had something to do with the IEEE rounding mode (correctly, it would
appear) and stated that in my bug report.
> Now, C99 describes the behavior of the printf family in 7.19.6.1#13. For the
> examples given in this bug report, C99 says that the result should be correctly
> rounded. C99 does not say which result should be returned in case of a tie, but
> in practice, it relies on the definition given in IEEE-754.
This is interesting. I wonder whether we could dig up a definition for the
phrase "correctly rounded". You already know my opinion on what is meant by the
phrase. Again, are we not assuming too much about the scope of the IEEE-754?
From what I understand, that document is describing how to round the result of
calculations, not how to round decimal numbers arithmetically.
> In 5.6, IEEE754-1985 also requires correct rounding for binary <-> decimal
> conversions (though the range conditions are a bit different). This is
> consistent with C99, and the result is specified by the "rounding direction".
> IEEE754-1985 defines only one round-to-nearest mode, but in binary only! I
> assume that's a defect that has been fixed in a later version (see below). Of
> course, it can be *implicitly* generalized to decimal, but not the halfway cases
> (the even-rounding notion wasn't introduced at that time: it was just said that
> the least significant bit had to be 0, which makes sense only in binary).
Wow. That's an amazing amount of detail.
> LIA-2 says that halfway cases are implementation-defined, but if iec_559F =
> true, then the semantics is completely defined by IEC 60559 (which is, AFAIK,
> IEEE-754, possibly revised): "ties are rounded to even last digit". It also says
> that this is the rounding used by the conversions. So, glibc conforms to LIA-2 +
> IEC 60559, but not Microsoft's library.
I really don't have a problem with rounding to nearest even, if it is only
dealing with the 15th decimal place for double precision floating points. When
we speak of rounding here, I do not believe it has anything at all to do with
binary to decimal conversion.
To the best of my knowledge, all 8 byte floating point values are stored to 15
decimals of precision (or 52 binary digits of precision). The only instance
where it makes sense to round to nearest even value, is in the context of
averaging error for calculation results.
> Now, the latest draft (1.4.0) of IEEE754r (which will probably be more or less
> the final version) defines two round-to-nearest modes: roundTiesToEven and
> roundTiesToAway (I think this is self-explanatory). It also says:
>
> "An implementation of this standard shall provide roundTiesToEven. It shall be
> the default rounding-direction mode for results in binary formats. The default
> rounding-direction mode for results in decimal formats is language-defined, but
> should be roundTiesToEven. A decimal implementation of this standard shall
> provide roundTiesToAway as a user-selectable rounding-direction mode. The
> rounding mode roundTiesToAway is optional for binary."
>
> This completely justifies the glibc behavior, at least currently. Of course, the
> C standard doesn't define yet which rounding-direction mode is used for decimal
> results, but when the choice will be made (if it is made), it will probably be
> roundTiesToEven, as suggested by IEEE754r and LIA-2.
One might assume that, but in practice math just doesn't work that way.
Changing the rules for rounding math equations is like defining apple=pear and
then selling pear juice instead of apples. Earlier on you say that C99 says
that the numbers "should be correctly rounded" when converted to decimal. I
don't see any ambiguity in that statement. I don't see any need for the C99
spec to provide any further clarification...
> Finally, I don't think that many programs rely on a roundTiesToAway behavior of
> binary-to-decimal conversion in halfway cases as most binary values are not
> representable exactly in decimal. Indeed, whatever choice is made for halfway
> cases, there are problems to round values of the form xxx.525 to binary and back
> to decimal with 2 fraction digits (depending on xxx, you'll obtain either xxx.52
> or xxx.53). You really need a decimal arithmetic or some form of emulation.
As you say, there are many more numbers that are inexactly stored than those
which are exact. It took a while for us to find the fact that our reports were
sometimes generating the wrong numbers. It works most of the time. Our problem
is that those reports are being used by investment houses to make decisions
sometimes worth billions, and we just cannot have those decisions made against
wrong numbers.
Comment 13 by vincent-srcware, 2007-09-21 23:43:09 +0000
(In reply to comment #12)
> > Now, C99 describes the behavior of the printf family in 7.19.6.1#13. For the
> > examples given in this bug report, C99 says that the result should be
> > correctly rounded. C99 does not say which result should be returned in case
> > of a tie, but in practice, it relies on the definition given in IEEE-754.
>
> This is interesting. I wonder whether we could dig up a definition for the
> phrase "correctly rounded".
It is defined by C99 in the section "Terms and definitions". Perhaps you should
take a bit time to read the standards.
> You already know my opinion on what is meant by the
> phrase. Again, are we not assuming too much about the scope of the IEEE-754?
> From what I understand, that document is describing how to round the result of
> calculations, not how to round decimal numbers arithmetically.
IEEE-754 covers the arithmetic operations, the square root, binary format
conversions, binary FP <-> integer, *and* binary <-> decimal conversions (I've
already mentioned Section 5.6 twice, have you read it?). All of these operations
basically correspond to a math function over the real numbers, then a rounding
operation.
> As you say, there are many more numbers that are inexactly stored than those
> which are exact. It took a while for us to find the fact that our reports
> were sometimes generating the wrong numbers. It works most of the time. Our
> problem is that those reports are being used by investment houses to make
> decisions sometimes worth billions, and we just cannot have those decisions
> made against wrong numbers.
Before reporting inexistent bugs, I think you should really learn more about
standards and computer arithmetic, probably starting with:
http://www2.hursley.ibm.com/decimal/
Comment 14 by paul, 2007-09-22 07:51:46 +0000
(In reply to comment #13)
> (In reply to comment #12)
> > > Now, C99 describes the behavior of the printf family in 7.19.6.1#13. For the
> > > examples given in this bug report, C99 says that the result should be
> > > correctly rounded. C99 does not say which result should be returned in case
> > > of a tie, but in practice, it relies on the definition given in IEEE-754.
> >
> > This is interesting. I wonder whether we could dig up a definition for the
> > phrase "correctly rounded".
>
> It is defined by C99 in the section "Terms and definitions". Perhaps you should
> take a bit time to read the standards.
>
> > You already know my opinion on what is meant by the
> > phrase. Again, are we not assuming too much about the scope of the IEEE-754?
> > From what I understand, that document is describing how to round the result of
> > calculations, not how to round decimal numbers arithmetically.
>
> IEEE-754 covers the arithmetic operations, the square root, binary format
> conversions, binary FP <-> integer, *and* binary <-> decimal conversions (I've
> already mentioned Section 5.6 twice, have you read it?). All of these operations
> basically correspond to a math function over the real numbers, then a rounding
> operation.
>
> > As you say, there are many more numbers that are inexactly stored than those
> > which are exact. It took a while for us to find the fact that our reports
> > were sometimes generating the wrong numbers. It works most of the time. Our
> > problem is that those reports are being used by investment houses to make
> > decisions sometimes worth billions, and we just cannot have those decisions
> > made against wrong numbers.
>
> Before reporting inexistent bugs, I think you should really learn more about
> standards and computer arithmetic, probably starting with:
> http://www2.hursley.ibm.com/decimal/
Oh dear. There you guys go again, inferring that I am an idiot. Quite
honestly, all of your explanations thus far fail to address
printf("%.2f", 2597.525);
2597.53
printf("%.2f", 2597.625);
2597.52
You can call "works for me", "perfectly sensible" and "according to spec" all
you like, but converting 2597.525000000000091 as stored in binary to decimal
gives 2597.53, whilst converting 2597.625000000000000 to decimal gives 2597.62.
You say that C99 does not say what to do with halfway cases. C99 says the
number should be correctly rounded. The correctly rounded number for 2597.625
is 2597.63. You say that the IEEE spec, says that it should be rounded to
"nearest even". I say that IEEE cannot re-define the rules of decimal
mathematics. Even if they could, the number 2597.525 should then be rounded to
2597.52 in order to be consistent. But, you say, the number 2597.525 is
inexactly stored, and is thus being rounded to the closest. I say that if
Microsoft can address this problem, then so can you.
You have now explained your reasoning thoroughly. The fact that I do not agree
with your reasoning, and in this instance do agree with Microsoft's, and the
rest of the world's reasoning does not make me an idiot.
Comment 15 by madcoder, 2007-09-22 09:11:27 +0000
Subject: Re: Inconsistent rounding behaviour for sprintf and IEEE doubles
On Thu, Sep 20, 2007 at 06:33:29 +0000, paul at inet dot co dot za wrote:
> ------- Additional Comments From paul at inet dot co dot za 2007-09-20 14:33 -------
> No, Sir, I will not "go away" as you put it. Consider for a moment the
> implications: I write an app that reports financial statements using Windows
> and sprintf(). I later port the whole shananigan to Linux. You don't need a
> degree in astrophysics to guess which library will get kicked.
On Sat, Sep 22, 2007 at 11:52:01AM +0000, paul at inet dot co dot za wrote:
> Oh dear. There you guys go again, inferring that I am an idiot. Quite
> honestly, all of your explanations thus far fail to address
>
> printf("%.2f", 2597.525);
> 2597.53
>
> printf("%.2f", 2597.625);
> 2597.52
I suppose you meant .62 ...
> You can call "works for me", "perfectly sensible" and "according to spec" all
> you like, but converting 2597.525000000000091 as stored in binary to decimal
> gives 2597.53, whilst converting 2597.625000000000000 to decimal gives 2597.62.
Yeah and this is correct, because rounding is performed following IEEE
754, and IEEE 754 uses round to nearest by default, see
http://en.wikipedia.org/wiki/IEEE_754#Rounding_floating-point_numbers
for the short explanation, which says that in case of a number that
falls midway, it's sometimes rounded towards -∞, sometimes towards +∞. If
you don't get this, then it's your loss, but the behavior is correct.
> You say that C99 does not say what to do with halfway cases. C99 says the
> number should be correctly rounded. The correctly rounded number for 2597.625
> is 2597.63. You say that the IEEE spec says that it should be rounded to
> "nearest even". I say that IEEE cannot re-define the rules of decimal
> mathematics. Even if they could, the number 2597.525 should then be rounded to
> 2597.52 in order to be consistent. But, you say, the number 2597.525 is
> inexactly stored, and is thus being rounded to the closest. I say that if
> Microsoft can address this problem, then so can you.
If you want to write a financial application like you said, then you
shouldn't have used floating point in the first place. Floating point is
designed so that errors due to inexact storage of the least significant
digits of your number are evenly distributed across a computation, so that
you don't lose too much precision.
But for a financial application, where you don't have to deal with
fractions smaller than 10^-4 or 10^-6 of a unit, you should have used
fixed-point arithmetic, so that you don't have to rely on unspecified
rounding behaviour.
You wrote your application based on false assumptions, don't put the
burden on the glibc, you are wrong in the first place.
19219
16
paul
2007-09-22 09:39:45 +0000
(In reply to comment #15)
> Subject: Re: Inconsistent rounding behaviour for sprintf and IEEE doubles
>
>
> On Thu, Sep 20, 2007 at 06:33:29 +0000, paul at inet dot co dot za wrote:
> > ------- Additional Comments From paul at inet dot co dot za 2007-09-20 14:33 -------
> > No, Sir, I will not "go away" as you put it. Consider for a moment the
> > implications: I write an app that reports financial statements using Windows
> > and sprintf(). I later port the whole shenanigan to Linux. You don't need a
> > degree in astrophysics to guess which library will get kicked.
>
> On Sat, Sep 22, 2007 at 11:52:01AM +0000, paul at inet dot co dot za wrote:
> > Oh dear. There you guys go again, inferring that I am an idiot. Quite
> > honestly, all of your explanations thus far fail to address
> >
> > printf("%.2f", 2597.525);
> > 2597.53
> >
> > printf("%.2f", 2597.625);
> > 2597.52
>
> I suppose you meant .62 ...
I suppose so, yes.
>
> > You can call "works for me", "perfectly sensible" and "according to spec" all
> > you like, but converting 2597.525000000000091 as stored in binary to decimal
> > gives 2597.53, whilst converting 2597.625000000000000 to decimal gives 2597.62.
>
> Yeah and this is correct, because rounding is performed following IEEE
> 754, and IEEE 754 uses round to nearest by default, see
> http://en.wikipedia.org/wiki/IEEE_754#Rounding_floating-point_numbers
> for the short explanation, which says that in case of a number that
> falls midway, it's sometimes rounded towards -∞ sometimes towards +∞. If
> you don't get this, then it's your loss, but the behavior is correct.
>
> > You say that C99 does not say what to do with halfway cases. C99 says the
> > number should be correctly rounded. The correctly rounded number for 2597.625
> > is 2597.63. You say that the IEEE spec says that it should be rounded to
> > "nearest even". I say that IEEE cannot re-define the rules of decimal
> > mathematics. Even if they could, the number 2597.525 should then be rounded to
> > 2597.52 in order to be consistent. But, you say, the number 2597.525 is
> > inexactly stored, and is thus being rounded to the closest. I say that if
> > Microsoft can address this problem, then so can you.
>
> If you want to write a financial application like you said, then you
> shouldn't have used floating point in the first place. Floating point is
> designed so that errors due to inexact storage of the least significant
> digits of your number are evenly distributed across a computation, so that
> you don't lose too much precision.
... and I do not have a problem with this. FP is the ideal format for us to use.
> But for a financial application, where you don't have to deal with
> fractions smaller than 10^-4 or 10^-6 of a unit, you should have used
> fixed-point arithmetic, so that you don't have to rely on unspecified
> rounding behaviour.
>
> You wrote your application based on false assumptions, don't put the
> burden on the glibc, you are wrong in the first place.
>
What a cop out.
We have been using FP successfully for over a decade. FP is accurate for up to
15 decimal places- way more than we need. The problem is not accuracy here. The
problem has to do with conversion of a number from the binary to a text
representation- a problem which the Microsoft library does not have.
Please note:
2597.525000000000091 to 15 decimals of precision is
2597.52500000000
- whichever rounding rules you choose to apply. FP makes no claims at all
beyond 15 decimals of precision.
Is it in any way clearer, if I write
2597.52500000000 == 2597.53
and
2597.62500000000 == 2597.62
How can you claim consistency?????
19220
17
schwab
2007-09-22 09:57:52 +0000
The number 2597.525 does not exist. There are no numbers between
2597.5249999999996362 and 2597.5250000000000909 in IEEE double.
19221
18
paul
2007-09-22 10:09:31 +0000
(In reply to comment #17)
> The number 2597.525 does not exist. There are no numbers between
> 2597.5249999999996362 and 2597.5250000000000909 in IEEE double.
>
You are wrong. All numbers between
2597.5249999999996362 and
2597.5250000000000909 are
2597.52500000000 stored to 15 decimals of precision.
You are incorrect to assume that there exists a floating point number such as
2597.5250000000000909
since this is simply
2597.52500000000 stored with an error of 0.0000000000000909
19222
19
madcoder
2007-09-22 10:39:14 +0000
Subject: Re: Inconsistent rounding behaviour for sprintf and IEEE doubles
On Sat, Sep 22, 2007 at 02:09:47PM +0000, paul at inet dot co dot za wrote:
>
> ------- Additional Comments From paul at inet dot co dot za 2007-09-22 10:09 -------
> (In reply to comment #17)
> > The number 2597.525 does not exist. There are no numbers between
> > 2597.5249999999996362 and 2597.5250000000000909 in IEEE double.
> >
>
> You are wrong. All numbers between
> 2597.5249999999996362 and
> 2597.5250000000000909 are
> 2597.52500000000 stored to 15 decimals of precision.
>
> You are incorrect to assume that there exists a floating point number such as
> 2597.5250000000000909
> since this is simply
> 2597.52500000000 stored with an error of 0.0000000000000909
No it's not. It's what you _think_ it is. But not what the FPU knows
it is.
Just think about this one second. What if you had stored
2597.52500000000009 and not 2597.5250000 ? It would have made no
difference, because for your FPU, 2597.5250000, 2597.52500000000009 and
2597.5250000000000909 have the same representation.
So when you ask for the rounded value the libc has to round
2597.5250000000000909, not 2597.52500000000009, not 2597.525 but
2597.5250000000000909 because it's what is stored in your double value.
So it rounds to XXX.53, because default rounding scheme is Nearest
even.
0.125 (or 0.625 or XXXX.625) is another story, because 0.125 indeed is
an existing value in the IEEE world, but it falls right in between 0.12
and 0.13. And here, IEEE says that "Nearest even" rounding will round
half of the time up, half of the time down. For 0.125 it rounds down;
try 0.0125, and it rounds up. And this is done on purpose to spread the
errors evenly across different computations.
STOP ARGUING that 2597.5250000000000909 is 2597.52500000000 stored to
15 decimals of precision. You're utterly wrong. 2597.5250* does not
exist in IEEE, so once stored, it becomes 2597.5250000000000909. The FPU
has no way to know you meant 2597.525; it sees 2597.5250000000000909,
and 2597.5250000000000909 only. Not {2597.5250000000000909, but only the
first 15 digits are meaningful}.
19223
20
paul
2007-09-22 10:55:27 +0000
(In reply to comment #19)
> Subject: Re: Inconsistent rounding behaviour for sprintf and IEEE doubles
>
> On Sat, Sep 22, 2007 at 02:09:47PM +0000, paul at inet dot co dot za wrote:
> >
> > ------- Additional Comments From paul at inet dot co dot za 2007-09-22 10:09 -------
> > (In reply to comment #17)
> > > The number 2597.525 does not exist. There are no numbers between
> > > 2597.5249999999996362 and 2597.5250000000000909 in IEEE double.
> > >
> >
> > You are wrong. All numbers between
> > 2597.5249999999996362 and
> > 2597.5250000000000909 are
> > 2597.52500000000 stored to 15 decimals of precision.
> >
> > You are incorrect to assume that there exists a floating point number such as
> > 2597.5250000000000909
> > since this is simply
> > 2597.52500000000 stored with an error of 0.0000000000000909
>
> No it's not. It's what you _think_ it is. But not what the FPU knows
> it is.
>
> Just think about this one second. What if you had stored
> 2597.52500000000009 and not 2597.5250000 ? It would have made no
> difference, because for your FPU, 2597.5250000, 2597.52500000000009 and
> 2597.5250000000000909 have the same representation.
>
> So when you ask for the rounded value the libc has to round
> 2597.5250000000000909, not 2597.52500000000009, not 2597.525 but
> 2597.5250000000000909 because it's what is stored in your double value.
>
> So it rounds to XXX.53, because default rounding scheme is Nearest
> even.
>
> 0.125 (or 0.625 or XXXX.625) is another story, because 0.125 indeed is
> an existing value in the IEEE world, but it falls right in between 0.12
> and 0.13. And here, IEEE says that "Nearest even" rounding will round
> half of the time up, half of the time down. For 0.125 it rounds down;
> try 0.0125, and it rounds up. And this is done on purpose to spread the
> errors evenly across different computations.
>
> STOP ARGUING that 2597.5250000000000909 is 2597.52500000000 stored to
> 15 decimals of precision. You're utterly wrong. 2597.5250* does not
> exist in IEEE, so once stored, it becomes 2597.5250000000000909. The FPU
> has no way to know you meant 2597.525; it sees 2597.5250000000000909,
> and 2597.5250000000000909 only. Not {2597.5250000000000909, but only the
> first 15 digits are meaningful}.
This is utter rubbish. Of course the FPU, and you too, have a way of knowing
the precision. It is stated as per the specification as 15 decimals. Assuming
anything after the 15th digit for the purposes of conversion to text is
nonsense. The error portion (as stored after the 15th digit) is useful only
when using the number in further calculations, where the rounding to closest or
rounding to nearest even on boundary applies.
Was all the shouting to try to stop me from pointing out something important
that you folks are clearly missing? No amount of shouting is going to change the
fact that IEEE doubles store at most 15 digits precisely. How about you pass
this along to someone you know who may be competent in mathematics, who will
verify what I am saying, rather than just sprouting at the gills.
19224
21
paul
2007-09-22 11:05:31 +0000
(In reply to comment #19)
> 0.125 (or 0.625 or XXXX.625) is another story, because 0.125 indeed is
> an existing value in the IEEE world, but it falls right in between 0.12
> and 0.13. And here, IEEE says that "Nearest even" rounding will round
> half of the time up, half of the time down. For 0.125 it rounds down,
> try for 0.0125 it rounds up. And this is done on purpose to spread the
> errors evenly accross different computations.
This part warrants a separate response.
0.125 is an "existing value" in your definition as it is precisely stored.
Because of the fact that it is precisely stored, the value does not require
rounding in the IEEE sense at all. IEEE rounding only comes into effect when
trying to store the result of a calculation, where that result cannot in turn be
stored precisely. The FPU takes care of all of that, and should not concern you.
So, for example, say you divide the precisely stored value 0.125 by 10. The FPU
might not be able to store the result 0.0125 precisely, and would therefore apply
FPU default rounding before storing the result.
This has absolutely nothing to do with conversion to text.
19225
22
madcoder
2007-09-22 11:15:32 +0000
Subject: Re: Inconsistent rounding behaviour for sprintf and IEEE doubles
On Sat, Sep 22, 2007 at 02:55:50PM +0000, paul at inet dot co dot za wrote:
> Was all the shouting to try to stop me from pointing out something important
> that you folks are clearly missing? No amount of shouting is going to change the
> fact that IEEE doubles store at most 15 digits precisely.
Okay, you definitely don't understand floating-point arithmetic. The
fact that a floating-point number has 15 digits of precision means that you
can store any 15 significant digits into your value, and the resulting
stored value will be different if those 15 digits are different. Meaning
that, up to 15 significant digits, the operation that takes a given
number and computes its IEEE 754 representation is injective.
IT DOES NOT ASSURE THAT AFTER THOSE 15 DIGITS, THE OTHER DIGITS ARE
SET TO 0. If you believe so, you never understood what IEEE 754 is
about.
When you store your number, the real value has a 909 after the 15th
significant digit. And it's exactly the whole point of the discussion:
YOU SHOULD NOT USE FLOATING POINT FOR FINANCIAL AND OTHER EXACT VALUES.
That's what fixed point is for.
> How about you pass this along to someone you know who may be competent
> in mathematics, who will verify what I am saying, rather than just
> sprouting at the gills.
I know someone called Vincent Lefèvre. And for the record, in France
IEEE 754 numbers are taught at the MSc level, and I have an MSc in Computer
Science, so I pretty much know what I'm talking about, thank you.
19226
23
paul
2007-09-22 11:39:52 +0000
(In reply to comment #22)
> Subject: Re: Inconsistent rounding behaviour for sprintf and IEEE doubles
>
> On Sat, Sep 22, 2007 at 02:55:50PM +0000, paul at inet dot co dot za wrote:
> > Was all the shouting to try to stop me from pointing out something important
> > that you folks are clearly missing? No amount of shouting is going to change the
> > fact that IEEE doubles store at most 15 digits precisely.
>
> Okay, you definitely don't understand floating-point arithmetic. The
> fact that a floating-point number has 15 digits of precision means that you
> can store any 15 significant digits into your value, and the resulting
> stored value will be different if those 15 digits are different. Meaning
> that, up to 15 significant digits, the operation that takes a given
> number and computes its IEEE 754 representation is injective.
>
> IT DOES NOT ASSURE THAT AFTER THOSE 15 DIGITS, THE OTHER DIGITS ARE
> SET TO 0. If you believe so, you never understood what IEEE 754 is
> about.
I never said that the other digits are set to zero. I said that they represent
the error. All IEEE doubles are stored to exactly the same precision of 15
decimal places, and are accurate to that point.
Since they represent the error, only the first 15 digits are pertinent to text
conversions.
Oh... and please hold with the "you don't understand floating point" stuff. It
is irritating, and I understand perfectly well, thank you.
> When you store your number, the real value has a 909 after the 15th
> significant digit. And it's exactly the whole point of the discussion:
> YOU SHOULD NOT USE FLOATING POINT FOR FINANCIAL AND OTHER EXACT VALUES.
> That's what fixed point is for.
... another repeating theme "You should not be using floating point" is another
irritation and completely irrelevant. "Should not be using the GNU C library
for floating point" might be closer to the truth.
Floating point is precise enough for me, as I have already stated. I am not
concerned about accuracy of calculations here, it is text representation which
concerns me.
> > How about you pass this along to someone you know who may be competent
> > in mathematics, who will verify what I am saying, rather than just
> > sprouting at the gills.
>
> I know someone called Vincent Lefèvre. And for the record, in France
> IEEE 754 numbers are taught at the MSc level, and I have an MSc in Computer
> Science, so I pretty much know what I'm talking about, thank you.
Vincent has been extraordinarily good at discussing this query. If you do not
want me questioning your competence, please do not question mine. It always
seems the last resort in these discussions: "if you don't agree with him, but
cannot argue his point, try to discredit him."
19227
24
vincent-srcware
2007-09-22 13:02:37 +0000
Paul, you really make every one (including me) think that you do not understand
floating-point arithmetic. Double-precision numbers are binary numbers, not
decimal numbers. Saying "15-digit precision" is misleading; you should say
"53-bit precision" since their representation is in binary. Now, if you have so
much confidence in Microsoft's C library, I suggest that you try the following
program:
#include <stdio.h>
int main (void)
{
volatile double x = 2597.525;
volatile double y = 5000.525;
printf ("%.2f\n%.2f\n", x, y);
return 0;
}
(please do not change the numbers), and see if you get "consistent" results for
x and for y. And again, I suggest that you read
http://www2.hursley.ibm.com/decimal/
(in particular the FAQ).
19231
25
paul
2007-09-22 20:19:57 +0000
(In reply to comment #24)
> And again, I suggest that you read
> http://www2.hursley.ibm.com/decimal/
> (in particular the FAQ).
I have read this document, thanks. FP is accurate enough for my purposes in all
calculations. Once the result needs to be displayed though, I have an entirely
different situation. Yes, of course I have work-arounds, as shown in my
original example. Why are these work-arounds necessary though?
> Paul, you really make every one (including me) think that you do not
> understand floating-point arithmetic.
This is the strangest thing. I probably understand it better than most, which
is really scary when you think about it. Since the late 80's I could quote the
IEEE storage format without a second thought.
Perhaps it is not common to have people query these things. Everyone just seems
to accept that "floating point is inaccurate" and are happy to leave it at that.
When someone comes along and rocks the boat people get all upset.
> Double-precision numbers are binary numbers, not decimal numbers. Saying
> "15-digit precision" is misleading;
I thought I was using the correct terminology when I said "15 decimals of
precision" or phrased differently, "15 decimal digits of precision"...
> you should say "53-bit precision" since their representation is in binary.
well, not quite 53 bits, rather 52. The extra bit is implied simply because the
FPU does not store the value 0. When you break it down, though, it's not even
52 bits, as you cannot ignore calculation and storage error. The largest
mantissa is (decimal) 9007199254740991 which is 16 digits. We know that there
is always an error associated with storage, and therefore we cannot consider
more than 15 decimals (decimal digits) to be accurate.
> Now, if you have so much confidence in Microsoft's C library, I suggest that
> you try the following program:
Erg. Confidence? I think my words were "Microsoft does not seem to have this
problem"...
> #include <stdio.h>
> int main (void)
> {
> volatile double x = 2597.525;
> volatile double y = 5000.525;
> printf ("%.2f\n%.2f\n", x, y);
> return 0;
> }
>
> (please do not change the numbers), and see if you get "consistent" results for
> x and for y.
Ok, Vincent, you are right. Well done. In this case, even my "work-around" fails.
MS stores the value
5000.525
as
5000.524999999999600
Ignoring the error, the value
5000.52499999999
being less than
5000.525
rounds to
5000.52
(closest)
Linux produces the identical result.
Should the FPU be ignoring the error when promoting 5000.5249999999996 to
5000.52499999999 to account for precision? Or should it not instead have
promoted the value to 5000.52500000000? I cannot answer that. Looks like a true
case of FP inaccuracy to me.
This is very different from the case of 2597.625 where we had an exactly stored
value being rounded towards the even value simply because it was exactly stored
and on a boundary.
Seems no-one can get this right. I will see what Microsoft have to say about
this. They don't have the "round to nearest even on boundaries" argument to
throw my way. I will certainly inform you as to the outcome.
Their result is certainly inconsistent with your earlier example:
#include <stdio.h>
int main (void)
{
double x;
for (x = -8.5; x <= 8.5; x += 1.0)
printf ("%4g %.0f\n", x, x);
return 0;
}
[Microsoft output]
-8.5 -9
-7.5 -8
-6.5 -7
-5.5 -6
-4.5 -5
-3.5 -4
-2.5 -3
-1.5 -2
-0.5 -1
0.5 1
1.5 2
2.5 3
3.5 4
4.5 5
5.5 6
6.5 7
7.5 8
8.5 9
19232
26
vincent-srcware
2007-09-22 23:09:41 +0000
(In reply to comment #25)
> Since the late 80's I could quote the IEEE storage format without a second
> thought.
Knowing the format is not the same as knowing how FP works.
> Perhaps it is not common to have people query these things. Everyone just
> seems to accept that "floating point is inaccurate" and are happy to leave
> it at that.
People are working to make it better. Since you need an arithmetic for financial
applications, you should know that work has been done in this way: decimal
floating-point, with some specific functions, defined in IEEE754r. But the glibc
doesn't support them yet (and decimal support is quite recent and incomplete in
GCC).
> > Double-precision numbers are binary numbers, not decimal numbers. Saying
> > "15-digit precision" is misleading;
>
> I thought I was using the correct terminology when I said "15 decimals of
> precision" or phrased differently, "15 decimal digits of precision"...
FYI, you need 17 digits to make sure that no information is lost in the
conversions. Your "15 decimal digits of precision" is no more than an
approximation.
> > you should say "53-bit precision" since their representation is in binary.
>
> well, not quite 53 bits, rather 52.
53, as you need to include the first bit 1 (to be consistent with other bases).
See the value of the C macro DBL_MANT_DIG on your platform, for instance.
> When you break it down, though, it's not even 52 bits, as you cannot ignore
> calculation and storage error.
No, you mix up "accuracy" and "precision". The precision is always 53 bits for
the double format (well, not taking subnormals into account). After
calculations, the precision remains the same, but the accuracy can decrease (and
it will, in general).
> MS stores the value
> 5000.525
> as
> 5000.524999999999600
This is not 5000.524999999999600, but a 53-bit value. The exact value (written
in decimal) is: 5000.524999999999636202119290828704833984375. But if you
understand floating point, you should know that and you should know why my
program returned the value you obtained. Note that my program is not some random
one, I could write it and find values that gave different rounding directions
because I understand floating point.
19233
27
paul
2007-09-23 07:19:05 +0000
(In reply to comment #26)
> (In reply to comment #25)
> > Since the late 80's I could quote the IEEE storage format without a second
> > thought.
>
> Knowing the format is not the same as knowing how FP works.
<snip>
> FYI, you need 17 digits to make sure that no information is lost in the
> conversions. Your "15 decimal digits of precision" is no more an approximation.
<snip>
> > When you break it down, though, it's not even 52 bits, as you cannot ignore
> > calculation and storage error.
>
> No, you mix up "accuracy" and "precision". The precision is always 53 bits for
> the double format (well, not taking subnormals into account). After
> calculations, the precision remains the same, but the accuracy can decrease (and
> it will, in general).
<snip snip >
> one, I could write it and find values that gave different rounding directions
> because I understand floating point.
Vincent, now you are just arguing semantics. You cannot propel yourself forward
by patting yourself on the back.
Your example does not produce an unexpected result due to a rounding error (as
per the examples in my previous query) but rather produces an unexpected result
due to an actual inaccuracy in the storage.
>This is not 5000.524999999999600, but a 53-bit value. The exact value (written
>in decimal) is: 5000.524999999999636202119290828704833984375. But if you
>understand floating point, you should know that...
Sorry, Vincent, but I do not agree with that bit of blurb. The actual value
when written in decimal cannot take more than 15 decimal digits into
consideration. The correct way to interpret the value is
5000.524999999999636 == 5000.52499999999
with a possible error of
0.000000000009636
If you round the value 5000.524999999999 to 3 decimal points, you get 5000.525.
If you round the number 5000.524999999999 to 2 decimal points, you get 5000.52.
This is not due to inconsistency, but inaccuracy.
In fact floating point storage format (mantissa, exponent and sign) is no
different from scientific notation which is the decimal equivalent of the binary
floating point, other than that the precision of floating point numbers is fixed
at 15 decimal digits of precision.
But if you understood floating point, you would know this.
For now, it seems that a pocket calculator does a better job of converting
binary floating point to decimal text representation than either GLibC,
MSVCRT.DLL, or Vincent Lefèvre.
19235
28
vincent-srcware
2007-09-23 09:32:54 +0000
(In reply to comment #27)
> Sorry, Vincent, but I do not agree with that bit of blurb. The actual value
> when written in decimal cannot take more than 15 decimal digits into
> consideration. The correct way to interpret the value is
> 5000.524999999999636 == 5000.52499999999
> with a possible error of
> 0.000000000009636
This is ridiculous and shows that you still haven't understood anything.
> For now, it seems that a pocket calculator does a better job of converting
> binary floating point to decimal text representation than either GLibC,
> MSVCRT.DLL or Vincent Lefèvre .
FYI, most pocket calculators use decimal arithmetic, thus do not do any binary
<-> decimal conversion. But now, I hope that you understand at least that you
need decimal arithmetic.
19236
29
paul
2007-09-23 10:20:47 +0000
(In reply to comment #28)
> (In reply to comment #27)
> > Sorry, Vincent, but I do not agree with that bit of blurb. The actual value
> > when written in decimal cannot take more than 15 decimal digits into
> > consideration. The correct way to interpret the value is
> > 5000.524999999999636 == 5000.52499999999
> > with a possible error of
> > 0.000000000009636
>
> This is ridiculous and shows that you still haven't understood anything.
... and your comment shows that you have a very biased and narrow perspective.
>
> > For now, it seems that a pocket calculator does a better job of converting
> > binary floating point to decimal text representation than either GLibC,
> > MSVCRT.DLL or Vincent Lefèvre .
>
> FYI, most pocket calculators use decimal arithmetic, thus do not do any binary
> <-> decimal conversion. But now, I hope that you understand at least that you
> need decimal arithmetic.
I have stated repeatedly that FP arithmetic is just fine for me. Why do you
insist that I need to use an arbitrary-precision library? I do not need more than 15
(In fact I need no more than 10) significant decimal digits in my calculations,
which IEEE double supposedly provides.
My problem is not with the binary representation of the number; it is with the
apparently incompetent conversion of those numbers to ASCII text representation
inside a buffer for display.
19239
30
paul
2007-09-23 16:41:24 +0000
Ok, Vincent;
Semantics aside, and disregarding all comments to the contrary, here is a small
app which does binary to decimal conversion for all examples we have had to
date. "Money where the mouth is" sort of stuff. No doubt it could be pulled
apart in any number of ways, but it substantiates the theory and proves that
what I have been saying is possible.
I have not fully tested this against all cases, but those we have discussed here
all produce mathematically sound conversions.
#include <math.h>
#include <string.h>
#include <stdio.h>
void tobuf(size_t max, int *len, char *buf,
double x, int precision, double max_prec, double carry)
{
int sign = x < 0; // remember the sign
double q = pow10(-precision); // current mask
double y = x==0?0:fmod(fabs(x), q); // modulus
double l_div = round(y*max_prec)/max_prec+carry; // significant digit
int l_dec = (int)round(l_div*10/q); // round to decimal
carry = l_dec>=10?l_div:0; // carry forward?
l_dec = l_dec % 10; // this decimal
x = x>0?x-y:x+y; // subtract modulus
if (fabs(x) > 0) // recurse while |x| > 0
tobuf(max, len, buf, x, precision-1, max_prec, carry);
else { // x == 0 - first digit
if (*len >= max) return;
if (sign) { buf[*len] = '-'; *len = *len + 1; }
if (*len >= max) return;
if (precision == 0) { buf[*len] = '0'; *len = *len + 1; }
}
// for first and subsequent digits, add the digit to the buffer
if (*len < max && precision==0) { buf[*len] = '.'; *len = *len + 1; }
if (*len >= max) return;
buf[*len] = '0' + l_dec;
*len = *len + 1;
}
int dbl2buf(size_t max, char *buf, double x, int precision) {
const int DECIMALS=15;
int max_dec = DECIMALS-(int)(trunc(log10(fabs(x)))+1); // max significant digits
double max_prec = pow10(max_dec); // magnitude for precision loss
int len = 0; // buffer length init
double y = x==0?0:fmod(fabs(x), 1/max_prec); // determine error
double l_carry = round(y*max_prec)/max_prec; // error is carried forward
if (x != x) { strncpy(buf, "NAN", max); return 0; }
if ((x-x) != (x-x)) { strncpy(buf, "INF", max); return 0; }
tobuf(max, &len, buf, x, precision-1, max_prec, l_carry); // fill in buffer
buf[len] = 0; // terminate buffer
return len; // return buffer length used
}
int main (void)
{
int n;
double x;
char buf[64];
x = 5000.525; dbl2buf(sizeof(buf), buf, x, 2); printf("%.15f = %s\n", x, buf);
x = 2596.625; dbl2buf(sizeof(buf), buf, x, 2); printf("%.15f = %s\n", x, buf);
x = 2596.525; dbl2buf(sizeof(buf), buf, x, 2); printf("%.15f = %s\n", x, buf);
for (x = -8.5; x <= 8.5; ++x) {
dbl2buf(sizeof(buf), buf, x, 0); printf("%.15f = %s\n", x, buf);
}
for (x = -8.5; x <= 8.5; ++x) {
dbl2buf(sizeof(buf), buf, x, 24); printf("%.15f = %s\n", x, buf);
}
return 0;
}
_________________________________________
Program output:
5000.524999999999636 = 5000.53
2596.625000000000000 = 2596.63
2596.525000000000091 = 2596.53
-8.500000000000000 = -9
-7.500000000000000 = -8
-6.500000000000000 = -7
-5.500000000000000 = -6
-4.500000000000000 = -5
-3.500000000000000 = -4
-2.500000000000000 = -3
-1.500000000000000 = -2
-0.500000000000000 = -1
0.500000000000000 = 1
1.500000000000000 = 2
2.500000000000000 = 3
3.500000000000000 = 4
4.500000000000000 = 5
5.500000000000000 = 6
6.500000000000000 = 7
7.500000000000000 = 8
8.500000000000000 = 9
-8.500000000000000 = -8.500000000000000000000000
-7.500000000000000 = -7.500000000000000000000000
-6.500000000000000 = -6.500000000000000000000000
-5.500000000000000 = -5.500000000000000000000000
-4.500000000000000 = -4.500000000000000000000000
-3.500000000000000 = -3.500000000000000000000000
-2.500000000000000 = -2.500000000000000000000000
-1.500000000000000 = -1.500000000000000000000000
-0.500000000000000 = -0.500000000000000000000000
0.500000000000000 = 0.500000000000000000000000
1.500000000000000 = 1.500000000000000000000000
2.500000000000000 = 2.500000000000000000000000
3.500000000000000 = 3.500000000000000000000000
4.500000000000000 = 4.500000000000000000000000
5.500000000000000 = 5.500000000000000000000000
6.500000000000000 = 6.500000000000000000000000
7.500000000000000 = 7.500000000000000000000000
8.500000000000000 = 8.500000000000000000000000
Kindest regards,
19242
31
paul
2007-09-23 18:53:53 +0000
... and this version deals correctly with the outside cases...
#include <math.h>
#include <string.h>
#include <stdio.h>
//______________________________________________________________________
// Utility function converts an IEEE double precision number to a
// fixed precision decimal format stored in a buffer.
void tobuf(size_t max, int *len, char *buf,
double x, int precision, double max_prec, double carry)
{
int sign = x < 0; // remember the sign
double q = pow10(-precision); // current mask
double y = x==0?0:fmod(fabs(x), q); // modulus
double l_div = round(y*max_prec)/max_prec+carry; // significant digit
int l_dec = (int)round(l_div*10/q); // round to decimal
carry = l_dec>=10?l_div:0; // carry forward?
l_dec = l_dec % 10; // this decimal
x = x>0?x-y:x+y; // subtract modulus
if (fabs(x) > 0) // recurse while |x| > 0
tobuf(max, len, buf, x, precision-1, max_prec, carry);
else { // x == 0 - first digit
if (*len >= max) return;
if (sign) { buf[*len] = '-'; *len = *len + 1; }
if (*len+1 <= max && precision >= 0) {
buf[*len] = '0'; *len = *len + 1;
buf[*len] = '.'; *len = *len + 1;
}
while (precision-- > 0) {
buf[*len] = '0'; *len = *len + 1;
if (*len >= max) return;
}
precision = -1; // don't place another period
}
if (*len <= max && precision == 0) {
buf[*len] = '.'; *len = *len + 1;
}
// for first and subsequent digits, add the digit to the buffer
if (*len >= max) return;
if (l_dec < 0) l_dec = 0;
buf[*len] = '0' + l_dec;
*len = *len + 1;
}
//______________________________________________________________________
// Convert the value x to a decimal representation stored in a buffer
int dbl2buf(size_t max, char *buf, double x, int precision) {
const int DECIMALS=15; // max significant digits
int max_dec = DECIMALS-(int)(trunc(log10(fabs(x)))+1);
double max_prec = pow10(max_dec); // magnitude for precision loss
int len = 0; // buffer length init
double y = x==0?0:fmod(fabs(x), 1/max_prec); // determine error
double l_carry = round(y*max_prec)/max_prec; // error is carried forward
if (x != x) { strncpy(buf, "NAN", max); return 0; }
if ((x-x) != (x-x)) { strncpy(buf, "INF", max); return 0; }
tobuf(max, &len, buf, x, precision-1, max_prec, l_carry); // fill in buffer
buf[len] = 0; // terminate buffer
return len; // return buffer length used
}
19243
32
vincent-srcware
2007-09-23 19:31:29 +0000
(In reply to comment #29)
> I have stated repeatedly, that FP arithmetic is just fine for me. Why do you
> insist that I need to use an arb precision library?
I have said *decimal* arithmetic, not arbitrary precision arithmetic.
And FYI, your program fails to work correctly on my Linux/ppc machine:
5000.524999999999636 = 5000.53
2596.625000000000000 = 2596.62
2596.525000000000091 = 2596.52
-8.500000000000000 = -8
-7.500000000000000 = -8
-6.500000000000000 = -7
-5.500000000000000 = -6
-4.500000000000000 = -5
-3.500000000000000 = -3
-2.500000000000000 = -2
-1.500000000000000 = -1
-0.500000000000000 = -0
0.500000000000000 = 0
1.500000000000000 = 1
2.500000000000000 = 2
3.500000000000000 = 3
4.500000000000000 = 5
5.500000000000000 = 6
6.500000000000000 = 7
7.500000000000000 = 8
8.500000000000000 = 8
-8.500000000000000 = -8.500000000000030000000000
-7.500000000000000 = -7.500000000000040000000007
-6.500000000000000 = -6.500000000000040000000007
-5.500000000000000 = -5.500000000000030000000007
-4.500000000000000 = -4.500000000000030000000007
-3.500000000000000 = -3.500000000000010000000000
-2.500000000000000 = -2.500000000000010000000000
-1.500000000000000 = -1.500000000000000000000000
-0.500000000000000 = -0.500000000000000000000000
0.500000000000000 = 0.500000000000000000000000
1.500000000000000 = 1.500000000000000000000000
2.500000000000000 = 2.500000000000010000000000
3.500000000000000 = 3.500000000000010000000000
4.500000000000000 = 4.500000000000030000000007
5.500000000000000 = 5.500000000000030000000007
6.500000000000000 = 6.500000000000040000000007
7.500000000000000 = 7.500000000000040000000007
8.500000000000000 = 8.500000000000030000000000
19244
33
paul
2007-09-23 19:54:35 +0000
(In reply to comment #32)
> And FYI, your program fails to work correctly on my Linux/ppc machine:
Hmmm. I don't have a PPC machine available to test against. It works on my side
on an Intel architecture for a variety of test cases, so it must be something
architecture-specific. What byte order is the PPC? Big-endian, perhaps? I don't
know whether that may be related. I can probably test on SUN/Solaris.
I don't know whether you are aware, but Gnumeric, which I use frequently, also
apparently uses the GNU C library floating point; it displays problems similar
to those we have discussed here. Magic spreadsheet program, though.
Regards, and thanks for all your persistence and advice. Particularly on an
issue long since marked "resolved".
19286
34
paul
2007-09-25 06:51:01 +0000
Vincent, one last thing: Would you mind testing the following on your PPC?
Just a one-line change, effectively; the other changes are just buffer-overrun
traps.
Many thanks,
Paul
#include <math.h>
#include <string.h>
#include <stdio.h>
//______________________________________________________________________
// Utility function converts an IEEE double precision number to a
// fixed precision decimal format stored in a buffer.
void tobuf(size_t max, int *len, char *buf,
double x, int precision, double max_prec, double carry)
{
int sign = x < 0; // remember the sign
double q = pow(10,-precision); // current mask
double y = x==0?0:fmod(fabs(x), q); // modulus
double l_div = round(y*max_prec)/max_prec+carry; // significant digit
int l_dec = (int)round(l_div*10/q); // round to decimal
carry = l_dec>=10?l_div:0; // carry forward?
l_dec = l_dec % 10; // this decimal
x = x>0?x-y:x+y; // subtract modulus
if (fabs(x) > 0) // recurse while |x| > 0
tobuf(max, len, buf, x, precision-1, max_prec, carry);
else { // x == 0 - first digit
if (*len+1 < max && sign) { buf[*len] = '-'; *len = *len + 1; }
if (*len+2 < max && precision >= 0) {
buf[*len] = '0'; *len = *len + 1;
buf[*len] = '.'; *len = *len + 1;
}
while (*len+1 < max && precision-- > 0) {
buf[*len] = '0'; *len = *len + 1;
}
precision = -1; // don't place another period
}
if (*len+1 < max && precision == 0) {
buf[*len] = '.'; *len = *len + 1;
}
// for first and subsequent digits, add the digit to the buffer
if (*len+1 >= max) return;
if (l_dec < 0) l_dec = 0;
buf[*len] = '0' + l_dec;
*len = *len + 1;
}
//______________________________________________________________________
// Convert the value x to a decimal representation stored in a buffer
int dbl2buf(size_t max, char *buf, double x, int precision) {
const int DECIMALS=15; // max significant digits
int max_dec = DECIMALS-(int)(trunc(log10(fabs(x)))+1);
double max_prec = pow(10,max_dec); // magnitude for precision loss
int len = 0; // buffer length init
double y = x==0?0:fmod(fabs(x), 1/max_prec); // determine error
double l_carry = round(y*max_prec)/max_prec; // error is carried forward
x = x>0?x-y:x+y; // subtract modulus
if (x != x) { strncpy(buf, "NAN", max); return 0; }
if ((x-x) != (x-x)) { strncpy(buf, "INF", max); return 0; }
tobuf(max, &len, buf, x, precision-1, max_prec, l_carry); // fill in buffer
buf[len] = 0; // terminate buffer
return len; // return buffer length used
}
int main (void)
{
int n;
double x;
char buf[64];
x = 5000.525; dbl2buf(sizeof(buf), buf, x, 2); printf("%.15f = %s\n", x, buf);
x = 2596.625; dbl2buf(sizeof(buf), buf, x, 2); printf("%.15f = %s\n", x, buf);
x = 2596.525; dbl2buf(sizeof(buf), buf, x, 2); printf("%.15f = %s\n", x, buf);
for (x = -8.5; x <= 8.5; ++x) {
dbl2buf(sizeof(buf), buf, x, 0); printf("%.15f = %s\n", x, buf);
}
for (x = -8.5; x <= 8.5; ++x) {
dbl2buf(sizeof(buf), buf, x, 24); printf("%.15f = %s\n", x, buf);
}
return 0;
}
19287
35
vincent-srcware
2007-09-25 10:43:07 +0000
(In reply to comment #34)
> Vincent, one last thing: Would you mind testing the following on your PPC?
Still wrong for this one:
5000.524999999999636 = 5000.53
But note:
* Some functions like pow are not necessarily correctly rounded, and the
accuracy may be quite bad, so that it is not a good idea to use them.
* I recall that the current rounding mode must be taken into account, and this
is even clearer with a recent correction in the ISO C standard. See
http://www.open-std.org/jtc1/sc22/wg14/www/docs/dr_286.htm
"Change Annex F.5 Binary-decimal conversion: Paragraph 2: 'correctly rounded' to
'correctly rounded (which honors the current rounding mode)'." (but this was
more or less implicit).
If you want to round halfway cases away from 0, you need to wait for this
rounding direction to be implemented in the processor (only then may glibc
need to be updated), or write your own routines.
* You need to test your routines on much more values (e.g. very small, very
large, worst cases...).
19288
36
paul
2007-09-25 11:00:58 +0000
(In reply to comment #35)
> (In reply to comment #34)
> > Vincent, one last thing: Would you mind testing the following on your PPC?
>
> Still wrong for this one:
>
> 5000.524999999999636 = 5000.53
Hey, thanks, Vincent!
> But note:
>
> * Some functions like pow are not necessarily correctly rounded, and the
> accuracy may be quite bad, so that it is not a good idea to use them.
> * I recall that the current rounding mode must be taken into account, and this
> is even clearer with a recent correction in the ISO C standard. See
> http://www.open-std.org/jtc1/sc22/wg14/www/docs/dr_286.htm
> "Change Annex F.5 Binary-decimal conversion: Paragraph 2: 'correctly rounded' to
> 'correctly rounded (which honors the current rounding mode)'." (but this was
> more or less implicit).
> If you want to round halfway cases away from 0, you need to wait for this
> rounding direction to be implemented in the processor (only then may glibc
> need to be updated), or write your own routines.
Great, thanks again for the info and the reference. I appreciate your
exceptional patience, expertise and willingness to help.
> * You need to test your routines on much more values (e.g. very small, very
> large, worst cases...).
Will do. I have done some of these, but in no way can the testing I have done
thus far be called "exhaustive". I suspect that there will be more than a few
instances where it breaks.
Regards,
Paul
19436
37
paul
2007-10-03 10:03:47 +0000
Hi, Vincent
I have, as you suggested, performed some extensive testing of my binary-to-decimal
conversion procedure. There have been a few changes, mostly cosmetic, but one
specific bug fix where the recursion terminated prematurely on a zero value
with an existing carry.
Please consider the following code:
________________________________________________________________________
#include <math.h>
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
//______________________________________________________________________
// Utility function converts an IEEE double precision number to a
// fixed precision decimal format stored in a buffer.
void tobuf(size_t max, int *len, char *buf,
double x, int precision, double max_prec, double carry)
{
int sign = x < 0; // remember the sign
double q = pow(10,-precision); // current mask
double y = x==0?0:fmod(fabs(x), q); // modulus
double l_div = round(y*max_prec)/max_prec+carry; // significant digit
int l_dec = (int)round(l_div*10/q); // round to decimal
carry = l_dec>=10?l_div:0; // carry forward?
l_dec = l_dec % 10; // this decimal
x = x>0?x-y:x+y; // subtract modulus
if (fabs(carry) > 0 || fabs(x) > 0) // recurse while |x| > 0
tobuf(max, len, buf, x, precision-1, max_prec, carry);
else { // x == 0 - first digit
if (*len+1 < max && sign) buf[(*len)++] = '-';
if (*len+2 < max && precision >= 0) {
buf[(*len)++] = '0';
buf[(*len)++] = '.';
}
while (*len+1 < max && precision-- > 0)
buf[(*len)++] = '0';
}
if (*len+1 < max && precision == 0)
buf[(*len)++] = '.';
// for first and subsequent digits, add the digit to the buffer
if (*len+1 >= max) return;
if (l_dec < 0) l_dec = 0;
buf[(*len)++] = '0' + l_dec;
}
//______________________________________________________________________
// Convert the value x to a decimal representation stored in a buffer
int dbl2buf(size_t max, char *buf, double x, int precision) {
const int DECIMALS=15; // max significant digits
int max_dec = DECIMALS-(int)(trunc(log10(fabs(x)))+1);
double max_prec = pow(10,max_dec); // magnitude for precision loss
int len = 0; // buffer length init
double y = x==0?0:fmod(fabs(x), 1/max_prec); // determine error
double l_carry = round(y*max_prec)/max_prec; // error is carried forward
x = x>0?x-y:x+y; // subtract modulus
if (x != x) { strncpy(buf, "NAN", max); return 0; }
if ((x-x) != (x-x)) { strncpy(buf, "INF", max); return 0; }
tobuf(max, &len, buf, x, precision-1, max_prec, l_carry); // fill in buffer
buf[len] = 0; // terminate buffer
return len; // return buffer length used
}
//______________________________________________________________________
// Test the dbl2buf function.
int main (void)
{
int n, nfails=0, nspfails=0;
double x;
char buf1[64];
char buf2[64];
char buf3[64];
for(n=0;n<10000000;n++){
x = random() / (double)random();
dbl2buf(sizeof(buf1), buf1, x, 15);
x = atof(buf1); // initialise test value, precision 15.
dbl2buf(sizeof(buf2), buf2, x, 15);
snprintf(buf3, sizeof(buf3), "%.15f", x);
if (strcmp(buf1, buf2)) {
nfails++;
}
else if (strcmp(buf2, buf3)) {
nspfails++;
}
}
printf("%i random floating point values tested\n", n);
printf(" Number of dbl2buf failures: %i\n", nfails);
printf(" Number of sprintf failures: %i\n", nspfails);
return 0;
}
Results:
10000000 random floating point values tested
Number of dbl2buf failures: 0
Number of sprintf failures: 394383
____________________________________________________________________________
Please be aware that this bug report is now linked from an article published
this month (October) in the Linux Gazette.
Kind regards,
Paul Sephton
19437
38
madcoder
2007-10-03 10:37:04 +0000
Subject: article "A Question Of Rounding" in issue #143 (Was: Inconsistent rounding behaviour for sprintf and IEEE doubles)
Hi,
I would like to report that the article "A Question Of Rounding" in
your issue #143 is completely misleading, because its author doesn't
understand how floating point works. Hence you published an article that
is particularly wrong.
The author doesn't understand that a rounding occurs when you store a
double value. Hence statements like:
] When converting a value 3.5 (stored at a precision of 15) to text,
or
] when converting the value 2.5, sprintf() produces not the expected
] value 3, but the value 2!
may confuse any reader who isn't IEEE754 aware. What happens for real
is that when you store any value, it is replaced with the nearest
representable IEEE754 number, hence not necessarily the number which was
written by the programmer. E.g. the example he gives in his totally
bogus "late" addendum about 5000.525 actually stores
5000.52499999999964 in the double value, and the program then only
works with 5000.52499999999964 (and not "5000.525 with a 15 digit
precision" as he states in the glibc bug report; his mental
representation of IEEE754 numbers isn't accurate and does not fit how
they work).
What he also doesn't understand is that when the double value is an
exact IEEE754 value (2.5 and 3.5, for example, really are 2.5 and 3.5 in
IEEE754), then if you ask for a rounding of a value that is _just_ in the
middle of a given range, IEEE754 uses the round-to-even method, to
distribute errors in both directions[0].
And that's why when you ask for:
printf("%g:%.2g %g:%.2g\n", 0.125, 0.125, 0.225, 0.225);
it yields:
0.125:0.12 0.225:0.23
This is totally correct and expected. What the author doesn't
understand is that when you write monetary/financial/... software,
IEEE754 is not to be used, because it _will_ generate this kind of
issue; fixed point with accurate rounding behaviour has to be used
instead. Floating point is not good enough for those problems,
because when you mean 5000.5250 USD you definitely don't want your
program to use 5000.52499999999964 instead "because it's near enough".
I'm sorry, but this article is very wrong, and gives false information
on a matter that isn't well understood by many programmers, hence I
beg you to remove this article, for the sake of the teachers who
already have to fight against enough preconceived ideas about IEEE 754
numbers.
best regards,
[0] http://en.wikipedia.org/wiki/Rounding#Round-to-even_method
19441
39
paul
2007-10-03 11:47:54 +0000
(In reply to comment #38)
> Subject: article "A Question Of Rounding" in issue #143 (Was: Inconsistent
> rounding behaviour for sprintf and IEEE doubles)
>
> Hi,
>
> I would like to report that the article "A Question Of Rounding" in
> your issue #143 is completely misleading, because its author doesn't
> understand how floating point works. Hence you published an article that
> is particularly wrong.
Madcoder, old boy, perhaps you have not paid enough attention to what I have
been saying. From your own comments here, I doubt you are qualified to judge
whether or not my understanding of IEEE representation is sufficient.
It seems that you yourself don't understand the right to freedom of speech.
Feel free to disagree with me, even vehemently- but you can't stop me from
saying what I want to.
I am well aware of the IEEE rounding modes etc. I am equally aware of the rules
of decimal arithmetic.
When a standard library cannot convert a value with max precision 15 decimals
from text to a double precision and back to the same text then I couldn't give a
hoot about IEEE rounding modes- your binary to text conversion is broken.
Your head is so deeply buried in the pile of standards documents that you can't
see reason. Standards are like statistics. You can get them to say just about
anything you like.
Refer to the program listing above for a binary to decimal conversion procedure
that actually works.
19443
40
vincent-srcware
2007-10-03 12:31:51 +0000
(In reply to comment #39)
> It seems that you yourself don't understand the right to freedom of speech.
> Feel free to disagree with me, even vehemently- but you can't stop me from
> saying what I want to.
You can say whatever you want on your web page. But for the Linux Gazette, the
editor will decide, and I hope there will be enough complaints against such a
wrong article.
> When a standard library cannot convert a value with max precision 15 decimals
> from text to a double precision and back to the same text then I couldn't give a
> hoot about IEEE rounding modes- your binary to text conversion is broken.
Where did you see it cannot do that? Do you have an example?
(Note that "%.15f" outputs 15 digits after the decimal point, so this may be
more than 15 digits, taking into account those before the decimal point. You
should use "%.15g" if you want 15 significant digits.)
19450
41
paul
2007-10-03 15:07:03 +0000
(In reply to comment #40)
>
> Where did you see it cannot do that? Do you have an example?
>
> (Note that "%.15f" outputs 15 digits after the decimal point, so this may be
> more than 15 digits, taking into account those before the decimal point. You
> should use "%.15g" if you want 15 significant digits.)
Hint: "Paragraph 2: Conversions involving IEC 60559 formats follow all pertinent
recommended practice. In particular, conversion between any supported IEC 60559
format and decimal with DECIMAL_DIG or fewer significant digits is correctly
rounded, which assures that conversion from the widest supported IEC 60559
format to decimal with DECIMAL_DIG digits and back is the identity function."
x = 9.074439913906690
printf("%.15f", x);
9.074439913906691
x = 9.074439913906691
printf("%.15f", x);
9.074439913906691
19451
42
vincent-srcware
2007-10-03 15:26:13 +0000
(In reply to comment #41)
> Hint: "Paragraph 2: Conversions involving IEC 60559 formats follow all
> pertinent recommended practice. In particular, conversion between any
> supported IEC 60559 format and decimal with DECIMAL_DIG or fewer significant
> digits is correctly rounded, which assures that conversion from the widest
> supported IEC 60559 format to decimal with DECIMAL_DIG digits and back is the
> identity function."
I agree with that. But note that this paragraph says that:
base 2 -> base 10 -> base 2
must be the identity function. Of course, this doesn't apply to:
base 10 -> base 2 -> base 10
> x = 9.074439913906690
> printf("%.15f", x);
> 9.074439913906691
This is a "base 10 -> base 2 -> base 10" conversion sequence.
If you want "base 10 -> base 2 -> base 10" to be the identity function, you
shouldn't use more than 15 significant decimal digits. In your example above,
you have 16 significant digits.
19452
43
paul
2007-10-03 17:21:23 +0000
(In reply to comment #42)
> I agree with that. But note that this paragraph says that:
> base 2 -> base 10 -> base 2
> must be the identity function. Of course, this doesn't apply to:
> base 10 -> base 2 -> base 10
>
> > x = 9.074439913906690
> > printf("%.15f", x);
> > 9.074439913906691
>
> This is a "base 10 -> base 2 -> base 10" conversion sequence.
>
> If you want "base 10 -> base 2 -> base 10" to be the identity function, you
> shouldn't use more than 15 significant decimal digits. In your example above,
> you have 16 significant digits.
Correct. However, since my conversion understands that a maximum of 15
significant digits is implicit in IEEE doubles, it knows that it is not sensible
to output the value 9.074439913906691 in the first place. It is therefore the
identity function regardless of the requested precision, or whether we go
base10->base2->base10 or base2->base10->base2:
char buf[64];
x = 9.074439913906691
dbl2buf(sizeof(buf), buf, x, 15);
printf("%s\n", buf);
9.074439913906690
x = 9.074439913906690
dbl2buf(sizeof(buf), buf, x, 15);
printf("%s\n", buf);
9.074439913906690
dbl2buf(sizeof(buf), buf, x, 20);
printf("%s\n", buf);
9.07443991390669000000
Why is this not better?
Whilst I realise that some people would love to see as a decimal representation
of the binary number,
printf("%.32f", 9.07443991390669);
=>9.07443991390669069119212508667260
...and are used to seeing this, all of the extra digits after significant digit
15 are misleading. They really do not make any sense in the decimal context.
People seem to think that the decimal digits after significant digit 15 somehow
contribute to the precision of the value, and are meaningful, where in fact they
are not meaningful in the decimal world, since the binary number cannot be
represented accurately as decimal beyond a precision of 15.
Effectively, irrational numbers base 2 are not the same set of irrational
numbers base 10. It is illogical in my mind to try to represent anything as
decimal beyond significant digit 15, providing the conversion properly rounds
"according to pertinent recommended practice" any digits after DECIMAL_DIG.
We know that representing the decimal mantissa 907443991390669 in its binary
equivalent takes up 52 bits. The 53rd bit can only hold that much information.
Yet we repeatedly continue to divide the remainder to make up all the rest of
those digits...
Isn't that silly? Even worse, misleading?
19463
44
vincent-srcware
2007-10-04 00:47:47 +0000
(In reply to comment #43)
> Correct. However, since my conversion understands that a maximum of 15
> significant digits is implicit in IEEE doubles,
A double is represented in binary and has 53 significant digits (bits) in base
2. Speaking of significant digits in base 10 for a binary number doesn't make
much sense, as there isn't a canonical definition. For instance, if you want to
make sure to distinguish any two double-precision numbers after a conversion
into base 10, you need 17 decimal digits (BTW, this is explicitly stated in the
IEEE-754 standard -- check it if you don't believe me).
> Whilst I realise that some people would love to see as a decimal representation
> of the binary number,
> printf("%.32f", 9.07443991390669);
> =>9.07443991390669069119212508667260
>
> ...and are used to seeing this, all of the extra digits after significant digit
> 15 are misleading. They really do not make any sense in the decimal context.
The number is represented in binary, but it is after all a real number. As a
real number, it can make sense to display it with whatever number of digits. For
instance, if one computes 2^100, the number is an exact power of 2, so that
outputting all its digits in decimal is completely meaningful.
> People seem to think that the decimal digits after significant digit 15 somehow
> contribute to the precision of the value, and are meaningful, where in fact
> they are not meaningful in the decimal world, since the binary number cannot
> be represented accurately as decimal beyond a precision of 15.
You haven't understood the difference between "precision" and "accuracy". See
the example above. Though 2^100 is represented in a double with a 53-bit
*precision*, its *accuracy* is infinite (since the exact value is represented).
19465
45
paul
2007-10-04 07:52:43 +0000
(In reply to comment #44)
> (In reply to comment #43)
> > Correct. However, since my conversion understands that a maximum of 15
> > significant digits is implicit in IEEE doubles,
>
> A double is represented in binary and has 53 significant digits (bits) in base
> 2. Speaking of significant digits in base 10 for a binary number doesn't make
> much sense, as there isn't a canonical definition. For instance, if you want to
> make sure to distinguish any two double-precision numbers after a conversion
> into base 10, you need 17 decimal digits (BTW, this is explicitly stated in the
> IEEE-754 standard -- check it if you don't believe me).
>
> > Whilst I realise that some people would love to see as a decimal representation
> > of the binary number,
> > printf("%.32f", 9.07443991390669);
> > =>9.07443991390669069119212508667260
> >
> > ...and are used to seeing this, all of the extra digits after significant digit
> > 15 are misleading. They really do not make any sense in the decimal context.
>
> The number is represented in binary, but it is after all a real number. As a
> real number, it can make sense to display it with whatever number of digits. For
> instance, if one computes 2^100, the number is an exact power of 2, so that
> outputting all its digits in decimal is completely meaningful.
>
> > People seem to think that the decimal digits after significant digit 15 somehow
> > contribute to the precision of the value, and are meaningful, where in fact
> > they are not meaningful in the decimal world, since the binary number cannot
> > be represented accurately as decimal beyond a precision of 15.
>
> You haven't understood the difference between "precision" and "accuracy". See
> the example above. Though 2^100 is represented in a double with a 53-bit
> *precision*, its *accuracy* is infinite (since the exact value is represented).
Ok, Vincent. I have taken more than enough of your (and others') valuable time
here. I don't believe there's a single argument I can add to further explore
this topic. If you at this stage still believe that it is correct to use the
default rounding mode in conversions to text, there is nothing I can say here to
change your mind, is there? :-)
I am not going to bash your point of view, as it is a valid one. If I could ask
one thing, it would be that you don't bash mine, as it too is valid.
On a parting note,
Accuracy : The state of being accurate; freedom from mistakes, this
exemption arising from carefulness; exact conformity to
truth, or to a rule or model; precision; exactness; nicety;
correctness; as, the value of testimony depends on its
accuracy.
[1913 Webster]
Precision: The quality or state of being precise; exact limitation;
exactness; accuracy; strict conformity to a rule or a
standard; definiteness.
[1913 Webster]
The way I understand it is that accuracy is defined by precision. Therefore,
the phrase "the number is accurate to a precision of..." is used to describe the
confidence one might have in a number. The same number accurate to a given
level of precision might be inaccurate to another level of precision.
My understanding is that there is no such thing as "infinite accuracy" when
applied to storing decimal values in floating point or going the other way
around. Further, that the maximum level of precision for a decimal number
stored in binary format is finite, and set at DECIMAL_DIG, or 15 for IEEE doubles.
Much of the misunderstanding in this discussion has been due to the fact that I
kept harping on decimal numbers. Your conclusion is that I don't understand
floating point. In defense, I can only say that the topic of conversation here
has from the start been the textual representation of floating point numbers.
Whenever someone tried to tell me about binary storage, I would ignore that
comment as it is off topic. This exacerbated the misunderstandings and
consequent strong feelings that I was ignoring people's input.
Perhaps, in fact certainly, I have been unclear in many of my comments,
incorrectly assuming a certain shared frame of reference or mindset on the part
of the reader. For that, and the heated debates they sometimes sparked, I
apologise.
Please be aware that my intention was not to attack the GNU C library, its
integrity, nor that of its developers. My intention was to improve it. My
reference to the behaviour of the Microsoft library was to highlight the
differences, not to express my support for it as a whole. Whilst I do tend to
support its implementation of the sprintf() function over that of GLibC, you
have perfectly explained your perspective on the matter.
We could probably argue back and forth for weeks on this topic, and even derive
some enjoyment from the argument. You would probably win most of those
arguments, since you are clearly the expert in this field. I cannot infringe
on more of your time though, and wish to thank you for that which you have
already given.
Best regards and wishes for the future
Paul Sephton
19519
46
redhat.com+Bugzilla
2007-10-05 19:20:13 +0000
Paul Sephton's statements are consistent with a 15-decimal-digit model of
arithmetic and a non-standard rounding rule. E.g., suppose sprintf behaved
this way when passed a floating-point format string and an IEEE 754
double-precision number x:
Set y to the 15-decimal-digit number nearest to x. (This requires y be
maintained in some format capable of representing decimal numbers.) (To
simplify discussion, I omit consideration of underflow and overflow.)
Round y according to the format string, with ties rounded away from zero.
I believe this would produce the behavior he desires.
Of course, neither of these is the way that IEEE 754 floating-point arithmetic
or C's sprintf function is specified to behave.
IEEE 754 defines a floating-point number in terms of a sign s, a biased
exponent e, and a fraction f. It also refers to a significand, which, for
normal numbers, equals 1+f. Paul Sephton used the term "mantissa," but that is
incorrect. A mantissa is the fractional part of a logarithm. If x is a
normally representable positive number, E is an integer, and 2**E <= x <
2**(E+1), then the significand of x is x / 2**E and the mantissa of x is
log[b](x) - floor(log[b](x)), for some logarithm base b, often ten.
Quoting ANSI/IEEE Std 754-1985, section 3.2.2, with some changes due to limited
typography, the value of a double-precision number is "(-1)**s * 2**(e-1023) *
(1.f)". That is the exact value represented. That is the entirety of the
representation. IEEE 754 says nothing about considering it to be a
15-decimal-digit number. Any assertion that an IEEE 754 floating-point number
represents other numbers, such as a small interval around the number, has no
basis in the standard.
I will use the hexadecimal floating constant notation defined in C ISO/IEC
9899:TC2 6.4.4.2. In this notation, a number 0x1.234p56 stands for 0x1.234 *
2**56, that is, (1 + 2/16 + 3/16**2 + 4/16**3) * 2**56.
Nothing in the 1985 IEEE 754 specification indicates that double-precision
numbers stand for 15-decimal-digit numbers. According to IEEE 754, if a
double-precision number has sign bit 0, exponent 11, and fraction:
0x.44b0ccccccccd,
0x.44b4, or
0x.44b7333333333,
then the floating-point number represents, respectively:
1 * 2048 * 0x1.44b0ccccccccd,
1 * 2048 * 0x1.44b4, or
1 * 2048 * 0x1.44b7333333333.
In decimal, the number is exactly:
2597.52500000000009094947017729282379150390625,
2597.625, or
2597.72499999999990905052982270717620849609375.
Observe that this completely explains the existing sprintf behavior:
"2597.525" in source is translated at compile time to the floating-point
number 0x1.44b0ccccccccdp+11. This number is passed to sprintf to be
converted according to the format string "%.2f". sprintf examines
0x1.44b0ccccccccdp+11, which is
2597.52500000000009094947017729282379150390625, and it has a choice of
rounding it to "2597.52" or "2597.53". Since
2597.52500000000009094947017729282379150390625 is closer to 2597.53,
sprintf rounds to "2597.53".
"2597.625" in source is translated to 0x1.44b4p11, which is 2597.625.
Given the choices of "2597.62" and "2597.63", they are equally far from
2597.625. sprintf uses its rule for ties which is to round to even, so
it produces "2597.62".
"2597.725" in source is translated to 0x1.44b7333333333p11, which is
2597.72499999999990905052982270717620849609375. Given the choices of
"2597.72" and "2597.73", 2597.72 is closer, so sprintf produces "2597.72".
This also demonstrates there are two problems producing the behavior Paul
Sephton desires. One is that sprintf rounds ties in a way Paul Sephton does
not like. The second problem is that IEEE 754 double-precision does not
represent 15-decimal-digit numbers exactly. We can see this because even if
sprintf's rule for ties were changed, "2597.725" in source results in passing a
number smaller than 2597.725 to sprintf, so there is no tie, and sprintf rounds
it down.
Is there any basis for adopting a 15-decimal-digit model of arithmetic? This
is not part of the 1985 IEEE 754 specification. Paul Sephton does not cite any
source for his statements regarding the behavior of floating-point arithmetic.
The IEEE 754 standard says that floating-point numbers represent (-1)**s *
2**(e-1023) * (1.f). It does not say they represent anything else. Nothing in
the standard tells us that 0x1.44b0ccccccccdp+11 represents 2597.525.
There is no basis for treating IEEE 754 floating-point numbers as
15-decimal-digit numbers.
It is true that many people think of 2597.525 as being represented by
0x1.44b0ccccccccdp+11. When they type "2597.525" into source or in a text
string that is converted to a double-precision number, they get
0x1.44b0ccccccccdp+11. I expect this result leads them to think that
0x1.44b0ccccccccdp+11 represents 2597.525. Nevertheless, it does not.
0x1.44b0ccccccccdp+11 is only an approximation of 2597.525. It is a number
in its own right; it is not the number it happens to approximate for a
particular user.
When sprintf receives 0x1.44b0ccccccccdp+11 or 0x1.44b7333333333p11, it has no
way of knowing the user intends for these to be 2597.525 or 2597.725. It can
only interpret them with the values that IEEE 754 says they have, which are
2597.52500000000009094947017729282379150390625 and
2597.72499999999990905052982270717620849609375.
Paul Sephton stated, "I would assume that Microsoft does at least loosely
follow a standard." "Assume" is the correct word; he does not give any basis
for this belief. I know that Microsoft C neither conforms to the 1999 C
standard nor attempts to, and I do not have information that it conforms to the
earlier C standard or to IEEE 754.
Paul Sephton wrote: "Please. I really really really want a solution here."
From the above, we know what the two problems are and we can offer a solution:
sprintf is obeying its specifications and will not do what you want. You must
write your own code to produce the behavior you desire.
I suspect the behavior Paul Sephton desires can be produced in his application
by passing x * (1 + 0x1p-52) to sprintf in lieu of x. This makes certain
assumptions about the nature of the values he has -- it is not the same as
converting the double-precision argument to 15 decimal digits and then
rounding, but it may produce the same results in his situation. If a
double-precision number passed to sprintf is the double-precision number
nearest a 15-decimal-digit number, then I think (have not checked thoroughly)
that passing x * (1 + 0x1p-52) instead will yield the same result as if sprintf
were somehow passed the 15-decimal-digit number in a decimal format. This is
because the replacement number corrects both problems: It changes 2597.625 to
a slightly higher number, avoiding the problem with ties, and it changes the
double-precision number just below 2597.725 to a number slightly higher,
avoiding the second problem. However, if a number being passed to sprintf is
more than one ULP away from a 15-decimal-digit number, this method can fail.
Paul Sephton refers to "the rules of decimal mathematics" but gives no
citation for the rounding rule he proposes.
In summary:
If we use IEEE 754's statements about the values that double-precision
numbers represent, then sprintf's behavior is fully explained.
If we use Paul Sephton's statements about floating-point numbers being
15-decimal-digit numbers, then they conflict with IEEE 754 and are
inconsistent with sprintf's behavior.
This speaks for itself and is entirely sufficient to resolve the matter.
19521
47
paul
2007-10-05 19:52:55 +0000
(In reply to comment #46)
> This speaks for itself and is entirely sufficient to resolve the matter.
What can I say, other than "Thank you"? I do indeed regard this matter as resolved.
Kind regards,
Paul Sephton