This is the mail archive of the mailing list for the binutils project.
Re: New ELF linker code added to GNU binutils
"Joseph S. Myers" <firstname.lastname@example.org> writes:
> * The linker is implementing external ABI specifications, which enable
> interoperation with third-party compilers and assemblers and debuggers and
> other tools and operating systems that execute the resulting binaries, and
> assertions "this relocation in the input file causes a GOT entry to be
> generated in the output file against this symbol with this relocation" are
> relevant as well.
Good point. I do agree that these should also be tested. I think
that testing these in a reasonable manner will require some helper
program which can dig directly into the output file. Running
regexps on nm/objdump/readelf output is inappropriate.
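As a sketch of what such a helper might look like, the following digs into ELF64 section headers directly with struct rather than pattern-matching readelf output. It is illustrative only, not a proposed gold test; the offsets follow the standard little-endian ELF64 layout, and the function names are invented:

```python
import struct

SHT_RELA = 4  # section type for RELA relocation sections

def parse_section_headers(data):
    """Return (sh_name, sh_type, sh_offset, sh_size, sh_entsize) tuples
    from a little-endian ELF64 image held in memory."""
    assert data[:4] == b'\x7fELF', "not an ELF file"
    e_shoff, = struct.unpack_from('<Q', data, 0x28)        # section header table offset
    e_shentsize, e_shnum = struct.unpack_from('<HH', data, 0x3a)
    sections = []
    for i in range(e_shnum):
        off = e_shoff + i * e_shentsize
        sh_name, sh_type = struct.unpack_from('<II', data, off)
        sh_offset, sh_size = struct.unpack_from('<QQ', data, off + 0x18)
        sh_entsize, = struct.unpack_from('<Q', data, off + 0x38)
        sections.append((sh_name, sh_type, sh_offset, sh_size, sh_entsize))
    return sections

def count_relocs(data):
    """Count relocation entries across all SHT_RELA sections, the kind of
    assertion a dump-style test would make about linker output."""
    total = 0
    for _, sh_type, _, sh_size, sh_entsize in parse_section_headers(data):
        if sh_type == SHT_RELA and sh_entsize:
            total += sh_size // sh_entsize
    return total
```

A real helper would of course also walk the symbol and string tables so a test could assert "this GOT entry was generated against this symbol", but the structure-parsing approach is the same.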
> (The present gold testsuite looks like it only has target-independent C++
> tests; target-dependent tests of each input relocation are in my view
> desirable, whether they are written as execution tests or not. The linker
> is not used just with GCC-generated or binutils-generated objects;
> certainly not just with those generated by a particular GCC version.)
I have no objection to target dependent tests of input relocations.
One good way to start on this would be to use unit tests: write a little
framework around Target::relocate_section which would pass in various
relocations. Then we could drive that framework from a script to
verify the results.
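A table-driven harness along those lines might look like the sketch below. Here apply_reloc is a hypothetical stand-in for what a framework around Target::relocate_section would compute; the formulas themselves are the x86-64 psABI ones (S + A for R_X86_64_64, S + A - P for R_X86_64_PC32):

```python
# psABI relocation type numbers for x86-64.
R_X86_64_64 = 1
R_X86_64_PC32 = 2

def apply_reloc(r_type, S, A, P):
    """Compute the relocated value for symbol value S, addend A, place P.
    A stand-in for driving the real target relocation code."""
    if r_type == R_X86_64_64:
        return (S + A) & 0xFFFFFFFFFFFFFFFF   # S + A, 64-bit absolute
    if r_type == R_X86_64_PC32:
        return (S + A - P) & 0xFFFFFFFF       # S + A - P, 32-bit PC-relative
    raise ValueError("unhandled relocation type")

# Each case a script could feed in: (type, S, A, P, expected result).
CASES = [
    (R_X86_64_64,   0x400000, 0,  0,        0x400000),
    (R_X86_64_PC32, 0x400100, -4, 0x400000, 0xFC),
]

for r_type, S, A, P, expected in CASES:
    assert apply_reloc(r_type, S, A, P) == expected
```

The point of the table is that a script can grow it per target without touching the framework, which is what makes target-dependent coverage of each input relocation tractable.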
In general I do think that we can test pretty much everything we care
about with C/C++ code. The main thing I see that needs to be added to
the existing testsuite in this regard is running the compiler with
different options (e.g., the x86_64 -mcmodel options). Cases where
there is some relocation which gcc never generates but other tools do
are going to be pretty rare.
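As a sketch of that kind of option-matrix testing, a hypothetical driver could rebuild the same test case once per x86_64 code model; the source file name and output naming here are invented for illustration:

```python
# The three x86_64 code models gcc accepts via -mcmodel.
CODE_MODELS = ["small", "medium", "large"]

def compile_command(model, src="reloc_test.c"):
    """Build the gcc command line for one code model; the file names
    are placeholders, not part of any existing testsuite."""
    return ["gcc", "-c", f"-mcmodel={model}", src,
            "-o", f"reloc_test_{model}.o"]

# One compile per code model; a real driver would then link each object
# and run the output-file checks on every result.
commands = [compile_command(m) for m in CODE_MODELS]
```

Each code model makes gcc emit different relocation types (e.g. PC-relative versus 64-bit absolute), so the same C source exercises distinct linker code paths.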
> * Also relevant are tests that the linker implements a particular
> optimization, where more than one output is correct but some outputs may
> not be optimized.
> * Tests that the linker implements a CPU erratum fix, if done with dump
> files, don't rely on having a CPU with the erratum in order to test the
> fix. (See <http://sourceware.org/ml/binutils/2007-01/msg00233.html> for
> example with rationale for such a fix present in binutils. The dump tests
> therein did subsequently show up a bug elsewhere in the linker with a bad
> qsort comparison function when tested on Windows host. If you don't like
> erratum fixes in the linker, see the more recent --fix-v4bx and
> --fix-v4bx-interworking options, where no CPU bug is involved but
> execution tests would still only be effective on certain CPUs.)
I agree that some tests require examining the linker output file.
Existing gold tests like ver_matching_test.sh and script_test_3.sh already do exactly that.
>> Of course, a consequence is that the testsuite only works for native
>> development (apart from the unittests, of which there are currently
>> only two). I recognize that this is a real deficiency for cross
>> development. I would prefer to address that by using some sort of
>> script to run the program in the remote environment. It would be fine
>> with me if that script uses DejaGNU.
> You need this for running programs on the host (the linker, compiler etc.
> themselves), which may not be the build system, as well as for running
> programs on the target.
This can all be controlled by make variables today.
> - And about how to link programs for different boards. (Remember all the
> "generic" ELF targets where by design $target-gcc will not link by
> default, without a linker script selecting particular board support code.
> DejaGnu board files contain the information about how to link.)
> Then there's the DejaGnu .sum and .log output format and scripts set up to
> process it. I think it's a good idea even for non-DejaGnu testsuites to
> produce output files in this format with the results of each test
> assertion. (It's possible of course to process the output of "make" - I
> do so even for the glibc testsuite, which is much worse in this regard -
> but testsuites can be helpful and create output in the usual form as well.)
I tried to match the output format in the gold testsuite. It doesn't
create .sum and .log files, but it does generate PASS and FAIL lines.
It does not currently generate UNSUPPORTED lines, though.
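A minimal sketch of how those PASS/FAIL lines could be rolled up into a DejaGnu-style .sum summary; the input line format and helper name are assumptions of mine, and only the DejaGnu summary labels are standard:

```python
from collections import Counter

# The summary labels DejaGnu prints at the end of a .sum file.
DEJAGNU_LABELS = {
    "PASS": "# of expected passes",
    "FAIL": "# of unexpected failures",
    "UNSUPPORTED": "# of unsupported tests",
}

def summarize(lines):
    """Append DejaGnu-style totals to a list of 'STATUS: testname' lines."""
    counts = Counter(line.split(":", 1)[0] for line in lines if ":" in line)
    out = list(lines)
    for status, label in DEJAGNU_LABELS.items():
        if counts[status]:
            out.append(f"{label}\t{counts[status]}")
    return out

report = summarize(["PASS: script_test_3",
                    "FAIL: ver_matching_test",
                    "PASS: two_file_test"])
```

Writing the result to a .sum file would let the existing DejaGnu post-processing scripts consume gold's results unchanged.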
> Yes, arbitrary testsuites can be wrapped so they use DejaGnu support for
> running programs on hosts and targets, and the output postprocessed into
> DejaGnu format. But every testsuite being randomly and arbitrarily
> different from every other is a pain to deal with, which is why I prefer
> testsuites to use the same standard infrastructure as each other; and in
> practical terms DejaGnu is that standard infrastructure for toolchain
> testing, even if QMTest (or your-preferred-test-harness-here) is in some
> ways more theoretically elegant.
DejaGNU has to be stopped somewhere.