This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: [PATCHv2, MIPS] Add support for O32 FPXX and program header based ABI information
- From: "Joseph S. Myers" <joseph at codesourcery dot com>
- To: Matthew Fortune <Matthew dot Fortune at imgtec dot com>
- Cc: Will Newton <will dot newton at linaro dot org>, Andrew Pinski <pinskia at gmail dot com>, Richard Sandiford <rdsandiford at googlemail dot com>, Rich Fuhler <Rich dot Fuhler at imgtec dot com>, "macro at codesourcery dot com" <macro at codesourcery dot com>, "libc-alpha at sourceware dot org" <libc-alpha at sourceware dot org>
- Date: Wed, 14 May 2014 16:46:02 +0000
- Subject: Re: [PATCHv2, MIPS] Add support for O32 FPXX and program header based ABI information
- References: <6D39441BF12EF246A7ABCE6654B0235352F38D at LEMAIL01 dot le dot imgtec dot org>
On Wed, 14 May 2014, Matthew Fortune wrote:
> I have not yet tested all the FP ABI combinations but have covered the
> ones which are related to FPXX. I am working through the rest as well
> as n32/n64 ABIs.
I'd like to understand how all the various combinations of ABIs of objects
and old and new compilers, binutils and libc (and kernel?) work. Could
you add information to the wiki page
<https://dmz-portal.mips.com/wiki/MIPS_O32_ABI_-_FR0_and_FR1_Interlinking>
about this?
That is:
* glibc might predate the new feature (and so implicitly require FR=0); it
might postdate the feature, in which case it might be built with any
combination of old or new GCC and old or new binutils. If built with old
GCC, it must be presumed to require FR=0; I don't know about the (new GCC,
old binutils) combination. (This is 5 cases for glibc.)
* The executable, and non-glibc shared libraries, might also be built with
any combination of old or new GCC and old or new binutils - and one
executable or shared library might contain a mixture of objects built with
different tools. (This is 4 cases for the tools building each .o file.
But at least the new GCC, new binutils case divides into FR=0, FR=1 and
interworking, so at least 6 cases. Then the executable could have an
arbitrary nonempty subset of those 6 cases - some of course would give
link-time errors - and likewise a shared library.) Then there's the
question of what tools linked the executable or shared library, separate
to what built the objects going into it - but one might say that the
link-time tools must be at least as recent as the compile-time ones, so
this doesn't add many more cases.
* However, a .o file requiring FR=1 may be presumed to be built with at
least new GCC, given that the old definition of -mfp64 is being abandoned.
And I think the following requirements apply:
* If any object requires FR=1, either it must get FR=1 or there must be an
error at static or dynamic link time.
* Likewise, FR=0.
* If all the objects' requirements are compatible, there must not be
errors except in the case where a new object is passed to an old static or
dynamic linker that gives an error because it doesn't understand or can't
handle a new feature used in the new object.
* It should be possible to use new GCC and binutils to build objects /
executables / shared libraries (not requiring FR=1) that work with old
glibc. This does not mean new FR=0 .o files need to be linkable with
older binutils than the binutils that produced them, just that the final
linked executables and shared libraries should be compatible with older
glibc if that's the C library linked against at static link time and there
are no FR=1 requirements.
So what is the logic that ensures that executables or shared libraries
containing a .o file requiring FR=1 cannot be loaded by an old dynamic
linker? What about by a new dynamic linker built with an older GCC (as
glibc will then require FR=0, though without explicit markings to that
effect)?
There are lots of different cases for combinations of objects - the wiki
page needs to explain the reasoning that all of those cases are properly
covered. I'd guess it should discuss what combinations of GCC and
binutils will allow objects requiring FR=1, or objects allowing
interlinking, at all, and how (old objects, new objects requiring FR=0,
new objects requiring FR=1, new objects allowing interlinking) are (a)
distinguished as .o files, (b) linked, (c) distinguished as executables
and shared libraries - and then go on to how the requirements are
determined by the dynamic linker in a way that allows for old executables
and shared libraries, and what it is about new executables and shared
libraries that means old dynamic linkers won't handle them. Some
information is there, but it doesn't really seem to deal with the case of
mixed objects built with different tools.
Then, how have these cases been tested? It's probably not possible to
integrate tests that require at least two different toolchains to build
into the glibc testsuite, but I'd like to see the testsuite for these
combinations posted. Without a proper automated testsuite that covers
mixing of old and new objects - as well as things such as verifying setjmp
etc. work properly in the presence of mode changes - it's very hard to be
confident in the patch.
From what I've listed you have at least 5 cases for glibc times 2^6-1 for
the executable times 2^6-1 for a shared library it uses - even if actually
it's more than 2^6-1 the numbers are small enough (unlikely to be more
than a million tests - I've generated larger sets of tests than that
before when verifying ABI compatibility issues) for exhaustive testing that
the combinations give errors exactly when they should to be feasible. And
practically, the numbers could be reduced a lot by splitting things into
(a) verifying that each of the 2^6-1 combinations of .o files produces the
right ELF headers in the linked .so or executable, or is rejected when
appropriate, (b) just checking the different cases for those headers in
runtime tests.
(It's possible the old-dynamic-linker case can be handled by setting the
ABI version, depending on how far the bitrot discussed in
<https://sourceware.org/ml/libc-alpha/2014-01/msg00375.html> (which I
referred to in <https://sourceware.org/ml/binutils/2014-04/msg00237.html>)
extends. Or if that won't work, making FR=1 objects contain a reference
to a new symbol glibc exports at version GLIBC_2.20 would work.)
(I'm a bit less concerned about ensuring new .o files are rejected by the
old static linker - anyway, that's not a glibc issue - although it's
certainly good if they are, at least if they require FR=1, rather than
being quietly linked to an executable or shared library that appears to
require FR=0 when actually it requires FR=1 or has internally
contradictory requirements.)
> I would also like to add in another feature to check for the presence
> of MSA in an object and reject it if HWCAP_MIPS_MSA is not set. With
> that in place users can construct MSA and non-MSA optimised libraries
> and place the MSA library first in the search path and get the best
> supported by the host. This is possible because the MSA extension
> makes no changes to the calling convention. Does that sound OK?
What do you mean by "presence of MSA in an object"?
It's normal and OK for code to do things like
  if (msa_present)
    func_msa ();
  else
    func_non_msa ();
where the two functions are in different source files, built with
different options. Or to do the equivalent with IFUNCs. Or to use the
"target" GCC attribute to have the functions built with different options
in the same .o file. So the presence of MSA instructions in an object
file can't be taken to indicate user intent that the final linked
executable or shared library requires MSA. Do you have any existing
examples of such runtime rejection on other architectures?
The correct way to handle MSA and non-MSA libraries is to include
HWCAP_MIPS_MSA in HWCAP_IMPORTANT so that the dynamic linker will
automatically search appropriate subdirectories of shared library
directories.
Another possible issue with this patch:
* I don't think any floating-point asms should be compiled in for the
__mips_soft_float case (or equivalently, they should be conditioned on
__mips_hard_float) - for soft-float, the assembler may reject hard-float
instructions. Most of the new code is irrelevant in that case (though it
would be nice to reject hard-float libraries in soft-float ld.so, if the
new ELF information makes that possible).
* Floating-point asms also won't work when glibc is built as MIPS16, so
some files may need building -mno-mips16, or __attribute__ ((nomips16))
added to relevant functions, if it isn't already there.
--
Joseph S. Myers
joseph@codesourcery.com