This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.


Re: IEEE128 binary float to decimal float conversion routines



On 11/17/2015 08:03 PM, Joseph Myers wrote:
> I read the paper Christoph helpfully pointed out.  But heuristically, if 
> you have a 128-bit input, you can expect some input values for which, on 
> conversion to binary, the initial 24 bits are followed by a 1 and then 
> about 127 0s before other nonzero bits (or likewise by a 0 followed by 
> about 127 1s), just by random chance, so you expect to need about 24 + 
> 128 bits of internal precision for the conversion to get a result that 
> rounds correctly when truncated to float.

Joseph, can you elaborate on this a bit further? I agree with your point that
you need more precision to properly convert, but I'm having trouble following
this bit. 128-bit input == IEEE decimal128? binary == IEEE binary32?
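The heuristic can be illustrated with an exact-arithmetic sketch (a hypothetical Python illustration using rationals, not the glibc conversion code): take a value that sits exactly halfway between two adjacent binary32 values, then perturb it by a bit roughly 150 binary places down. The correctly rounded result flips, so any conversion that examines fewer than about 24 + 128 bits of the intermediate value cannot decide the rounding.

```python
from fractions import Fraction

def round_binary32(x: Fraction) -> Fraction:
    """Round a rational x >= 1 to the nearest 24-bit-significand
    (binary32-style) value, round-to-nearest, ties-to-even.
    Overflow and subnormals are ignored; this is only a sketch."""
    assert x >= 1
    # Find e with 2**e <= x < 2**(e+1).
    e = 0
    while Fraction(2) ** (e + 1) <= x:
        e += 1
    # Scale so the 24-bit significand is the integer part: scaled in [2^23, 2^24).
    scaled = x / Fraction(2) ** (e - 23)
    n = scaled.numerator // scaled.denominator
    rem = scaled - n
    # Round to nearest; on an exact tie, round to even.
    if rem > Fraction(1, 2) or (rem == Fraction(1, 2) and n % 2 == 1):
        n += 1
    return n * Fraction(2) ** (e - 23)

# Exact halfway case: 1 + 2^-24 lies midway between 1 and 1 + 2^-23,
# so ties-to-even rounds it down to 1.
a = Fraction(1) + Fraction(1, 2 ** 24)
# A perturbation ~150 bits down tips the rounding upward.
b = a + Fraction(1, 2 ** 150)

print(round_binary32(a))  # 1
print(round_binary32(b))  # 8388609/8388608, i.e. 1 + 2^-23
```

The two inputs agree in their first ~149 bits, yet round to different binary32 values, which is the sense in which about 24 + 128 bits of the converted value must be in hand before truncating to float.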

