
Re: Debugger support for __float128 type?


Mark Kettenis wrote:
> > B.t.w. is there interest in fixing this problem for Intel?  I notice
> > there is a GDB bug open on the issue, but nothing seems to have happened
> > so far: https://sourceware.org/bugzilla/show_bug.cgi?id=14857
> 
> Perhaps you should start with explaining what __float128 actually is
> on your specific platform?  And what long double actually is.
> 
> I'm guessing long double is what we sometimes call an IBM long
> double, which is essentially two IEEE double-precision floating point
> numbers packed together, and that __float128 is an attempt to fix
> history and have a proper IEEE quad-precision floating point type ;).
> And that __float128 isn't actually implemented in hardware.

Right, that's the current situation on PowerPC.  (On Intel, long double
is the 80-bit IEEE extended type, padded to either 12 bytes (32-bit)
or 16 bytes (64-bit), while __float128 is IEEE quad-precision.)
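
For concreteness, here's a small C program showing the two types side
by side.  This is just a sketch assuming GCC on x86-64 Linux; printing
a __float128 value needs libquadmath (link with -lquadmath):

    #include <stdio.h>
    #include <quadmath.h>

    int
    main (void)
    {
      long double ld = 1.0L / 3.0L;   /* x86: 80-bit extended, padded.  */
      __float128 q = 1.0Q / 3.0Q;     /* IEEE quad (binary128).  */
      char buf[128];

      printf ("sizeof (long double) = %zu\n", sizeof ld);
      printf ("sizeof (__float128)  = %zu\n", sizeof q);

      /* libquadmath provides quadmath_snprintf for __float128.  */
      quadmath_snprintf (buf, sizeof buf, "%.36Qg", q);
      printf ("__float128:  %s\n", buf);
      printf ("long double: %.21Lg\n", ld);
      return 0;
    }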
 
> I fear that the idea that it is possible to determine the floating
> point type purely from the size is fairly deeply engrained into the
> GDB code base.  Fixing this won't be easy.  The easiest thing to do
> would probably be to define a separate ABI where long double is IEEE
> quad-precision.  But the horse is probably already out of the barn on
> that one...

Actually, I think the GDB side should be reasonably straightforward
to fix.  We can already decide on the correct floating-point format
at the point where a type is initially defined, and the length-based
detection of the format is only done for those types initially defined
without a format.  Currently, most of the "GDB-internal" types already
provide the format (or can easily be fixed to do so), but the types
defined by debug info do not.
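
As a minimal sketch of what "providing the format" looks like on the
GDB side (using names from gdbarch.h/gdbtypes.h; the actual tdep code
differs in detail, and example_init_abi is a made-up name), an
architecture's init routine can declare the built-in long double
format up front:

    static void
    example_init_abi (struct gdbarch *gdbarch)
    {
      /* Built-in "long double" on Linux/PowerPC today: 128-bit IBM
         double-double.  With the format given explicitly here, no
         length-based guessing is ever needed for this type.  */
      set_gdbarch_long_double_bit (gdbarch, 128);
      set_gdbarch_long_double_format (gdbarch,
                                      floatformats_ibm_long_double);
    }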

However, there's no reason why e.g. dwarf2read couldn't be changed to
simply set the floating-point format directly, provided DWARF carries
enough information for some new architecture-specific routine to
detect the appropriate format.
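
A hypothetical sketch of what that could look like; neither this
helper nor the gdbarch_floatformat_for_type hook exists today, so the
names are purely illustrative:

    static struct type *
    dwarf2_init_float_type (struct objfile *objfile, int bits,
                            const char *name)
    {
      struct gdbarch *gdbarch = get_objfile_arch (objfile);
      struct type *type = init_type (TYPE_CODE_FLT,
                                     bits / TARGET_CHAR_BIT, 0,
                                     name, objfile);

      /* Hypothetical new gdbarch hook: map (name, bit size) to a
         floatformat.  A NULL result keeps today's length-based
         fallback.  */
      const struct floatformat **fmt
        = gdbarch_floatformat_for_type (gdbarch, name, bits);
      if (fmt != NULL)
        TYPE_FLOATFORMAT (type) = fmt;

      return type;
    }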

> Making the decision based on the name is probably the easiest thing
> to do.  But keep in mind that other OSes that currently don't support
> IBM long doubles, and where long double is the same as double, may
> want to define long double to be IEEE quad-precision floating point
> on powerpc.

Right.  So there are three somewhat separate issues:

- Code explicitly uses the new __float128 type.  Since the __float128
  type can only come from debug info, once we detect the format based
  on debug info, this case is covered.  It should also always be safe
  to recognize __float128 by name, since it will always be the 128-bit
  IEEE format (the sketch after this list illustrates such name-based
  detection).

- We have a "long double" type provided by debug info of the current
  executable.  Again, if we can detect the format from debug info,
  everything should work even if "long double" is defined differently
  on different OSes.  (It could be 64-bit IEEE, 128-bit IBM long double,
  or 128-bit IEEE, I guess.)  As long as we cannot reliably detect the
  format from debug info, we'll have to fall back on the built-in type
  (see the next item).

- We debug an executable whose debug info does *not* provide "long
  double", but the user uses the "long double" built-in type provided
  by GDB.  In this case, we'd ideally want to detect the OS/ABI and set
  the built-in type accordingly.  When we decide to switch the definition
  of long double on Linux/PowerPC at some point in the future, ideally
  there would be some way to detect this new ABI in the executable
  (some header bit, maybe).  There's still time to define this.
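
To make the name-based rules above concrete, here's a hypothetical
sketch of the architecture-side mapping for PowerPC.  The hook and its
name are illustrative only; the floatformats_* tables are the ones GDB
already has:

    static const struct floatformat **
    ppc_floatformat_for_type (struct gdbarch *gdbarch,
                              const char *name, int len)
    {
      /* __float128 is always the 128-bit IEEE format, so matching it
         by name is safe.  */
      if (name != NULL && strcmp (name, "__float128") == 0)
        return floatformats_ieee_quad;

      if (name != NULL && strcmp (name, "long double") == 0)
        {
          /* Today's Linux/PowerPC ABI: 128-bit IBM double-double.  A
             future ABI (signalled by, say, a header bit that is still
             to be defined) could switch this to
             floatformats_ieee_quad.  */
          if (len == 128)
            return floatformats_ibm_long_double;
          /* A 64-bit long double is plain IEEE double.  */
          if (len == 64)
            return floatformats_ieee_double;
        }

      /* NULL means: fall back to length-based detection or the
         built-in type.  */
      return NULL;
    }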

Bye,
Ulrich

-- 
  Dr. Ulrich Weigand
  GNU/Linux compilers and toolchain
  Ulrich.Weigand@de.ibm.com

