This is the mail archive of the libc-hacker@cygnus.com mailing list for the glibc project.



Re: Multilib and Linux.


> From: hjl@lucon.org (H.J. Lu)
> Date: Mon, 19 Apr 1999 23:12:19 -0700 (PDT)
> 
> Hi, Folks,
> 
> Linux will move to multilib soon. I know egcs supports multilib,
> but I have some doubts about how well it will work on Linux. ELF/PPC
> has 2 ABIs, but I don't know of any Linux/PPC distribution that
> supports both; I don't think the "nof" ABI is supported. I don't
> think it is trivial to build such a distribution, and I don't even
> know if it is necessary for Linux. I'd like to hear people's views
> on it.

OK.  For powerpc, there are three issues that I think you mean by
'multilib'.  They are:

1. FPU vs. no FPU
2. 32-bit vs. 64-bit
3. big-endian vs. little-endian.

The first of these is supported by Linux, but at present not very
well.  But it's not something that someone would want to change
between executables; almost all ppc linux systems have FPUs, and
normally if you wanted to stay binary compatible you would use kernel
FPU emulation.  It's only an issue for people who would like to avoid
software emulation at the kernel level.  They end up
including emulation code in every application instead, so it doesn't
seem to save any space and it's not significantly faster; but it seems
to work for them.  In any case, these people are not interested in
running or compiling code for both FPU and no-FPU on the same machine,
except as cross-compilation (and it's probably cross-compilation
between x86 linux and no-FPU ppc linux).

The second and third of these, as far as I know, are not supported by
the kernel yet.  Eventually, I expect they will be.  However, it's
actually simpler than it looks.

If you use little-endian, you are usually doing so for a particular
application (like x86 emulation), and so it's not too terrible to ask
that you keep a separate development environment to handle this, and
link the result statically or put it in its own tree (you would
probably want to do one or both of these anyway).

The only differences between the 64-bit environment and the 32-bit
environment on a 64-bit ppc chip are:

a) Addresses are now 64 bits;
b) The carry flag is set at 2^64 rather than 2^32 (a carry out of
   bit 63 instead of bit 31).

So you would only use 64-bit mode if you needed more than 4G or so of
address space in a single process, or you were doing lots of
multiple-precision calculations.  Because of (a), though, the
environments would be binary incompatible, so again you need a
completely different tree.

Note that it's not just libraries, but include files, binary data
files (you probably want to have a 64-bit 'long', if you have 64-bit
pointers), perl and python converted headers, and so on.  So it's
really much easier to think of them as two completely different
compilation environments, and since one of them is for specialised
applications it makes sense to make the 32-bit-mode one the
`native' environment.

This is different to Merced, where I understand that code using the
native Merced instruction set will run faster than equivalent x86
code, so it would be better to have the 64-bit environment the native
one.  But this is tricky, because it means you have to have a much
more complete set of libraries; for instance, you'll probably want to
have the X shared libraries working in the x86 environment (and
talking to the Merced-based X server, of course).  I leave this
problem to the Intel people.

Of course, code written for 64-bit ppc chips running in 32-bit mode
won't run on a 32-bit ppc chip, because it'll expect 64-bit registers
and so on; but this is no different to saying that code written for
MMX Intel chips won't run on 486s, and we know how to deal with that
sort of thing.

-- 
Geoffrey Keating <geoffk@ozemail.com.au>

