This is the mail archive of the cygwin mailing list for the Cygwin project.

Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]

RE: Precision of doubles and stdio

On 06 March 2006 17:36, Phil Betts wrote:

[  This is *still* nothing to do with cygwin.  It's a newlib issue.  I have
set the Reply-To to take this thread to the talk list and removed the other
lists and Jim and Roberto from the Cc line.  ]

> I'm absolutely amazed that you are a professor of computer science!

  I'm not, but then again, I've been paying attention to what Roberto has
posted before, and therefore have some idea of his level of expertise and
experience, whereas you are assuming that you know all there is to know and
therefore do not need to check your beliefs against reality.

> If I had written software that relied on the _exact_ meaning of the
> least
> significant digit of a floating point number (either at university, or
> at
> work), I would have been the subject of ritual humiliation!

  See, the problem with indulging in this sort of posturing is that you better
be /very/ sure that you're right, or you're going to make a fool of yourself.

  In this case, you have not revealed your great intellect: you have revealed
that you have failed to understand what Roberto is doing in his application,
and why it is valid.  You have mistaken your lack of knowledge for an insight
into necessity.

  Bad call.

> You should NOT be using floating point numbers for such an application.
> Floating point numbers are _approximate_ representations of the
> continuous number series that shares the same approximate value.  That
> the IEEE format is an integer multiplied by some power of two is an
> implementation detail.

  You see, not only does your grandmother already /know/ how to suck eggs, but
she's also been doing it for years, has learnt a whole load of techniques and
methods that you have never even imagined, and in this particular case has a
far more thorough grounding in the fundamental concepts.

  Floating point numbers are not approximate anything.  They are exactly
precise representations of a subset of the rational numbers.  Those numbers
can be exactly represented in floating point.  Other numbers cannot.  What you
choose to do about those other numbers is up to you.  You can collapse the
range centered around each precisely-representable number onto that number, or
you can use the next lowest exact number, or the next highest; then each
precise FP number represents a small range on the real number line.  And
in Roberto's application, he is using them to represent - with exact precision
- upper and lower error limits for his calculations.  You see, Roberto is
doing far more advanced maths than anything you have taken into consideration.
He is not just blindly throwing some number into a FP representation, passing
a couple of operators over it, and then trying to printf a result.  He is
measuring and tracking the error and uncertainty in those calculations,
starting with the potential range of error in the initial FP representation,
and taking into account the operations that are performed on it, in order to
have absolutely certain upper and lower bounds on the margin-of-error for
the final result.  That is something he most certainly can do precisely.
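Roberto's actual code isn't shown in this thread, but the technique described is
interval arithmetic.  Here's a minimal sketch of the core idea in Python (used
purely for illustration; `math.nextafter` needs Python 3.9+): widen a
correctly-rounded result by one ulp in each direction to get guaranteed bounds.

```python
import math

# Sketch of interval-style bound tracking (not Roberto's actual code).
# IEEE-754 guarantees x + y is the representable double nearest the true
# sum, so stepping one ulp down/up brackets the exact mathematical result.
x, y = 0.1, 0.2                    # neither value is exactly representable
s = x + y                          # correctly rounded to the nearest double
lo = math.nextafter(s, -math.inf)  # certainly <= the true sum of x and y
hi = math.nextafter(s, math.inf)   # certainly >= the true sum of x and y
print(lo, "<= true sum <=", hi)
```

Note that `lo` and `hi` are themselves exact doubles, which is the whole point:
the *bounds* are perfectly precise even though the quantity they bracket is not
representable.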

> Floating point hardware is frequently not 100%
> accurate in the LSB of the mantissa due to internal rounding errors and
> the datasheets will often tell you so.

  This is utter fantasy.  Floating point hardware either conforms to IEEE754,
which specifies the exact algorithms to be used in different rounding modes,
or it is broken.  There is no room for ambiguity in the standard.  It
specifies guard and rounding bits precisely.  It does not say that any part of
the calculation may be random.  IEEE-754 is entirely deterministic, and every
implementation should produce EXACTLY the same results, down to the last bit.
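To see that determinism concretely, here is a small check (Python, just for easy
inspection of the bit pattern).  Any IEEE-754-conforming platform must produce
this identical 64-bit result:

```python
import struct

# 0.1 + 0.2 must round to exactly one double, bit for bit, on every
# conforming implementation -- there is nothing random about the LSB.
s = 0.1 + 0.2
print(struct.pack(">d", s).hex())  # 3fd3333333333334, everywhere
print(s == 0.3)                    # False: 0.3 rounds to a different double
```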

> That being said, the thing which completely floors me is that you are
> relying on behaviour which is clearly beyond that given in the language
> specification.  

  Really?  Exactly what and where?  Nobody has yet pointed to a precise
paragraph in the C language spec.  If you are as gobsmacked as you claim to
be, you should be pretty damn sure exactly what behaviour he is relying on and
in what way the standard says it is unspecified/undefined; I'm sure you've already
referred to your copy of the standard to refresh your memory and make sure you
aren't mistaken, no?

> This is one of the most rudimentary mistakes in
> programming.  Frankly, this is beyond belief for someone in your
> position.
> C doubles are accurate to 16 or 17 decimal places, if you expect 48
> significant figures then you deserve all the bad results you get.

  This claim (ISTM) is based on the fallacious line of reasoning that if you
can represent an N-bit binary integer with D decimal digits, you can also
represent an N-bit binary fraction (the mantissa) with D decimal digits. 

  Alas, you cannot.

  A 3 bit binary number - assuming all three of those bits to be above the
binary point - can represent the numbers 0 to 7, and hence requires only one
decimal sig.fig., as indeed you might expect from computing (log10(2) * 3) and
getting 0.903.  I agree with you there.

  However, a 3 bit floating point mantissa is aligned below the binary point.
It can represent the binary numbers 0.000 - 0.111, where the place immediately
after the point is worth 0.5, the next lower place 0.25, and the lowest 0.125.
The values it can represent are from the set (0, 0.125, 0.25, 0.375, 0.5,
0.625, 0.75, 0.875), and as you can clearly see, many of them require three
significant figures.
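Those eight values can be enumerated directly (a throwaway Python snippet):

```python
# A 3-bit binary fraction takes the values k/8 for k = 0..7.  Every one of
# them is exact in binary, yet several need three decimal digits to write out.
print([k / 8 for k in range(8)])
```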

  Now please explain why a 53-bit mantissa - a /fractional/ number - can
necessarily be represented with only 16 or 17 decimal places.
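One concrete data point, shown with Python's `decimal` and `fractions` modules
(both convert a double's value exactly): the nearest double to 0.1 is the
rational 3602879701896397 / 2**55, and writing that fraction out exactly in
decimal takes 55 digits.  The 16-or-17 figure is only the number of digits
needed to *round-trip* a double, not to spell out its exact value - and exact
values are what matter when the numbers are error bounds.

```python
from decimal import Decimal
from fractions import Fraction

# Decimal(float) displays the double's exact rational value, no rounding.
print(Decimal(0.1))   # 0.1000000000000000055511151231257827021181583404541015625
print(Fraction(0.1))  # 3602879701896397/36028797018963968, i.e. n / 2**55
```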

> That you should then choose a public forum (and the wrong one at that!)
> to complain about this is astounding.

  What's even more astounding is how you choose to grandstand on this subject
when you clearly aren't a specialist and haven't studied it at all.

> Ask yourself this: if your brain surgeon uses an axe, will the inquest
> find the axe at fault or the surgeon?

  I'd rather have someone doing surgery on me who goes to university and
studies and does research and experiments and asks people questions, than
someone who just /thinks/ they know it all and whose overconfidence leads them
not to bother checking their facts and to (mis)assume they have understood a
problem based on a superficial once-over in which they only see whatever fits
with their idea of the facts.  You just completely failed to see Roberto's
statement that he was using FP numbers to represent bounds-of-uncertainty, did
you?

Can't think of a witty .sigline today....
