This is the mail archive of the
binutils@sourceware.cygnus.com
mailing list for the binutils project.
Re: A very "strange" bug in gcc 2.96
- To: drepper at cygnus dot com, ian at zembu dot com
- Subject: Re: A very "strange" bug in gcc 2.96
- From: Mike Stump <mrs at windriver dot com>
- Date: Mon, 22 May 2000 10:42:46 -0700 (PDT)
- Cc: binutils at sourceware dot cygnus dot com, egcs at egcs dot cygnus dot com, hjl at lucon dot org, mark at codesourcery dot com
> Date: 21 May 2000 13:42:42 -0700
> From: Ian Lance Taylor <ian@zembu.com>
> To: drepper@cygnus.com
> Note that the reason the new optimization fails is that the
> assembler translates the jmp into a short branch instruction. If
> the assembler translated the jmp into a 32 bit branch with a reloc,
> then I believe everything would still work correctly. At least on
> the i386. However, doing this would disable part of the
> optimization.
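(To make the i386 numbers in the quoted description concrete: a short
branch carries an 8-bit signed displacement, while a branch with room
for a reloc carries a 32-bit one, so the reachable ranges differ by
orders of magnitude. A quick sketch of that arithmetic in Python; the
helper name is mine, not anything from binutils:)

```python
def signed_reach(bits):
    """Byte range reachable by an N-bit signed PC-relative displacement."""
    return -(1 << (bits - 1)), (1 << (bits - 1)) - 1

# i386 short branch (jmp rel8): one displacement byte
print(signed_reach(8))   # (-128, 127)

# i386 near branch (jmp rel32): four bytes, enough room for a reloc
print(signed_reach(32))  # (-2147483648, 2147483647)
```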
Thanks for the description; I certainly didn't understand the reason
from the other posts. Given this description, I think the default
should be off for targets where the reloc has fewer bits than the
instruction it is replacing (if the number is sufficiently small).
I'd rather place the burden on the people who want this on to
complete the job and arrange for at least as many reloc bits, or
place the burden on the people who want an optimization that can
cause their code to fail to link to ask for it explicitly.
Now, what is sufficiently small? Fewer than 16 bits probably is;
30 or more probably isn't.
I've seen enough problems at 22 to make me want to shy away from it.
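(For a sense of what these bit counts buy you: a rough sketch of the
reach of an N-bit signed displacement, with an optional scale factor
since many RISC targets count instruction words rather than bytes.
The helper is illustrative only, not a real binutils interface:)

```python
def byte_reach(bits, scale=1):
    """Reach in bytes of an N-bit signed displacement, optionally
    scaled (scale=4 for targets that count 4-byte instruction words)."""
    half = 1 << (bits - 1)
    return (-half * scale, (half - 1) * scale)

# Word-scaled reach at the bit widths under discussion:
print(byte_reach(16, scale=4))  # (-131072, 131068), about +/-128K
print(byte_reach(22, scale=4))  # (-8388608, 8388604), about +/-8M
print(byte_reach(30, scale=4))  # (-2147483648, 2147483644), about +/-2G
```

Even a few megabytes of reach is easy to exceed in a large link, which
is why a 22-bit displacement can still bite.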
I generally don't like compilers that generate short code and then
ask you to recompile with other options when that code choice fails.
I think this goes against the spirit of the FSF coding conventions.