More SSE infrastructure

Richard Henderson
Wed Jul 5 10:56:00 GMT 2000

On Wed, Jul 05, 2000 at 03:44:42PM +0200, Mark Kettenis wrote:
> GDB uses the range info to determine the size of an object, even for
> global symbols where it might be able to get the size from the symbol
> table.  So using 0 and -1 wouldn't produce anything useful.

Ok.  I figured it couldn't be that easy.

> Giving the correct upper and lower bounds does work (tested on Solaris
> 2.6, with a recent GDB snapshot and egcs-2.91.66, where I added the
> stab for a 128-bit type by hand).

Next question: will GDB accept negative octal constants for
signed 128-bit types?  E.g. -017777.  I surely don't want to
do decimal output, since that would require libgmp on my side,
which we don't want to assume is available.

It wouldn't be the end of the world if we wound up considering
all such types unsigned in the debugger, but if it's possible...
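[For what it's worth, octal output sidesteps the bignum problem entirely:
an octal digit is just a 3-bit group, so a 128-bit bound can be emitted
with word shifts alone, no multi-precision division.  A sketch, assuming
the value arrives as two 64-bit halves; the function names here are
illustrative, not anything in GCC:]

```c
#include <stdint.h>
#include <string.h>

/* Convert the unsigned 128-bit value (hi:lo) to octal text in buf.
   Each octal digit is the low 3 bits, so only shifts are needed --
   no libgmp.  buf must hold at least 44 bytes (ceil(128/3) + NUL). */
char *u128_to_octal(uint64_t hi, uint64_t lo, char *buf)
{
    char tmp[44];
    int i = 43;
    tmp[i] = '\0';
    do {
        tmp[--i] = '0' + (lo & 7);      /* low 3 bits = one octal digit */
        lo = (lo >> 3) | (hi << 61);    /* 128-bit right shift by 3 */
        hi >>= 3;
    } while (hi | lo);
    return strcpy(buf, &tmp[i]);
}

/* Signed variant: negate a negative two's-complement value and prefix
   '-', as the lower bound of a signed 128-bit range would need.
   buf must hold at least 45 bytes. */
char *s128_to_octal(int64_t hi, uint64_t lo, char *buf)
{
    if (hi < 0) {
        uint64_t nlo = 0 - lo;                     /* negate low word */
        uint64_t nhi = ~(uint64_t)hi + (lo == 0);  /* carry into high word */
        buf[0] = '-';
        u128_to_octal(nhi, nlo, buf + 1);
        return buf;
    }
    return u128_to_octal((uint64_t)hi, lo, buf);
}
```

[Whether GDB's stabs reader parses a leading '-' on an octal bound is
exactly the open question above, so this only settles the producer side.]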

> If printing correct lower and upper bounds is too hard for GCC, there
> are alternatives.  If the lower bound is 0 and the upper bound is a
> negative number, GDB assumes the size of the type (in bytes) is the
> absolute value of the upper bound.  I've verified that emitting:
> .stabs "__m128:t(0,20)=r(0,20);0;-16;",128,0,0,0
> does indeed work.  The GNU stabs info file suggests that this is a
> Convex convention.
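[The quoted convention, as a consumer would apply it to the ";0;-16;"
bounds in that stab, can be sketched as follows.  This is a hypothetical
helper for illustration, not GDB's actual source:]

```c
/* Convex convention for a stabs range "r...;LOWER;UPPER;": a lower
   bound of 0 with a negative upper bound means the type occupies
   -UPPER bytes.  Returns the size in bytes, or -1 when the convention
   does not apply and the size must be derived from the real bounds. */
long convex_range_size(long lower, long upper)
{
    if (lower == 0 && upper < 0)
        return -upper;     /* e.g. ";0;-16;" => a 16-byte type */
    return -1;             /* ordinary bounds: not this convention */
}
```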

Interesting to know.  However, I would imagine that not all
system debuggers that consume stabs accept such a thing, so
it'd be better to go with printing proper bounds if possible.
