This is the mail archive of the mailing list for the crossgcc project.

See the CrossGCC FAQ for lots more information.

Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: [crosstool-NG] Design discussion

On Saturday 11 April 2009 00:26:30 Thomas Charron wrote:
> On Sat, Apr 11, 2009 at 1:13 AM, Rob Landley <> wrote:
> > On Friday 10 April 2009 23:14:33 Thomas Charron wrote:
> >>   And those of us who care about bare metal?
> >
> > I've used a jtag to install a bootloader and linux kernel on bare metal,
> > so presumably you mean you want to build something other than linux
> > system to install on that bare metal?  (Such as building busybox against
> > newlib/libgloss?)
>   I'm talking about bare metal.  Typically, these systems have no more
> RAM than is present on the processor.  Like, 64k.  There is no
> bootloader, no busybox, and most specifically, no OS.

Yeah, I've encountered those, and written code for them.  Often in assembly, 
since with those constraints you need every byte and thus haven't got 
the luxury of coding in C, so I'm not sure how it's relevant here.  (Last I 
checked, gcc only supported 32 bit and higher targets as a policy decision, 
which rules out the z80 and such.)

I'm impressed by the way the OpenBios guys use C code directly from ROM before 
the DRAM controller is set up.  They put the CPU cache into direct mapped 
writeback mode, zero the first few cache lines, set the stack pointer to the 
start of the address range they just dirtied, and then jump to C code and 
make sure to never touch ANY other memory until the DRAM controller is up and 
stabilized.  I.e., they're using the cache as their stack, so they can 
initialize the DRAM controller from C code instead of having to do it in 
assembly.

Neat trick, I thought.  But they don't consider it cross compiling, any more 
than the linux bootup code to set up page tables and jump from 16 bits to 32 
bits was cross compiling...

> > How does this differ from building a very complicated bootloader, or
> > linking against a different C library?  (If you're building a complete
> > new system on the bare metal, do you particularly care about binutils or
> > gcc versions other than "fairly recent"?)
>   Yes.  Since GCC is generally tested on 'real' systems, some versions
> perform differently than others depending on the target processor
> itself.

Some variants of gcc are broken, yes.

> In some cases, a version of GCC simply won't 
> work at all for a given processor.

Yes, that's why you need a version of GCC that's capable of outputting code 
for the processor.  (So how do _you_ configure stock gcc+binutils source to 
output code for z80 or 8086 targets?  Yes, I'm aware of and I'm also aware it's based on gcc 
2.7 and hasn't been updated in 11 years.)

> >>   There is no single toolchain.  That's an assumption that works for
> >> *your* environment.
> >
> > Could be, but I still don't understand why.  Care to explain?
>   See above.  Bare metal *ISN'T* running Linux on a small box.  It's
> another beast entirely.

Yes, I know.  You can't natively compile on a target that can't run a 
compiler.  I agree.  How does that mean you need more than one compiler 
targeting the same hardware?

I think you're confusing two different points I've made.  The first is "You 
should be able to have a usable, somewhat generic cross compiler for a given 
target architecture"; the second is "When your target is Linux you should be 
able to build natively under emulation, and thus avoid cross compiling."  Those are 
two completely different arguments.

(And the second argument never claimed that the emulator and the target 
hardware you actually deployed would have exactly the same hardware, any more 
than Ubuntu's build servers and my laptop have exactly the same hardware.  If 
I build a boot floppy image from my laptop, am I cross compiling?  But it's 
still a different argument.)

> > I've built many different strange things with generic-ish toolchains, and
> > other than libgcc* being evil without --disable-shared, swapping out the
> > built-in libraries and headers after the toolchain is built is fairly
> > straightforward (for C anyway).  You can do it as a wrapper even.
> > (I suppose you could be referring to languages other than C?  C++ is a
> > bit more complicated, but then it always is.  Java is its own little
> > world, but they put a lot of effort into portability.  Haven't poked at
> > Fortran since the 90's.)
>   Specifically, I've been working with a mashed in version of newlib,
> and newlib-lpc.  In those cases, you actually don't use the GNU C
> library at all.

Yes, that would be the swapping out the built-in libraries and headers part, 
above.  This can be done after the compiler is built with a fairly simple 
wrapper.  (If gcc wasn't constructed entirely out of unfounded assumptions, 
you wouldn't even need to wrap it.)
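
As a sketch of what such a wrapper can look like (REAL_GCC and SYSROOT are 
made-up names for this example, not part of any real toolchain; the flags 
themselves are standard gcc options; REAL_GCC defaults to "echo" here so the 
sketch prints the command it would run instead of compiling anything):

```shell
# Hypothetical wrapper sketch: point an already-built gcc at a different
# C library's headers and libs without rebuilding the compiler.
REAL_GCC="${REAL_GCC:-echo arm-unknown-linux-gcc}"
SYSROOT="${SYSROOT:-/path/to/newlib-root}"

gcc_wrap() {
  # Suppress gcc's built-in header and library search paths, then supply
  # replacements from $SYSROOT.  -nostdinc, -isystem, -nostdlib, and -L
  # are all standard gcc flags.
  $REAL_GCC -nostdinc -isystem "$SYSROOT/include" \
            -nostdlib -L "$SYSROOT/lib" "$@"
}

gcc_wrap hello.c -lc
```

Run as-is it just echoes the command line it would exec; set REAL_GCC to your 
actual cross compiler to use it for real.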

C compilers only have a half dozen interesting search paths.  The four 
nontrivial ones are two #include paths (one for the compiler's built-in 
headers ala stdarg.h, and one for the system headers), and two library paths 
(one for the compiler's built-in libraries ala libgcc, and one for the system 
libraries).   The two trivial ones are the search $PATH to find the linker 
and assembler and so on, and the files specified on the command line (which 
is why it cares about the current directory).

This is a slight oversimplification, and assumes you can find crt1.o and 
friends in the library search path.  It also assumes that if you have a 
non-elf output format you'll either supply your own linking tools or have a 
tool that converts an ELF file into your desired format (such as binflat or 
the kernel's various zImage generators).  But generally these days, those 
assumptions are true.

Doing "gcc hello.c" on a gcc built with --disable-shared actually works out to 
a command line something like:

gcc -nostdlib -Wl,--dynamic-linker,/lib/ \
  -Wl,-rpath-link,/path/to/lib -L/path/to/lib -L/path/to/gcc/lib \
  -nostdinc -isystem /path/to/include -isystem /path/to/gcc/include \
  /path/to/lib/crti.o /path/to/gcc/lib/crtbegin.o /path/to/lib/crt1.o \
  hello.c -lgcc -lc -lgcc /path/to/gcc/lib/crtend.o /path/to/lib/crtn.o

That's telling gcc "no, you actually _don't_ know where anything is, here it 
all is explicitly".  (Yes, -lgcc is in there twice.  Long story.)

The builds of the linux kernel and uClibc already do this: they feed -nostdinc 
and -nostdlib to the compiler and then explicitly feed in the header files 
and library paths they want.  (The tricky part is that shared libgcc is 
horrible... but it's also optional, and the reason the "--static-libgcc" 
flag exists.)

It's much easier to get the behavior you want out of the compiler if you 
understand what the compiler is actually _doing_.  Unfortunately gcc does not 
make this easy, but in theory the compiler just bounces off its search paths 
to find uClibc or newlib or whatever C library you point it at, and adds some 
.o files when 
linking an executable.  It shouldn't care at build time what C library it's 
using, and in fact I don't actually build and install uClibc until _after_ 
I've built binutils and gcc.  (Yes gcc cares about things it shouldn't, but 
you can whack it on the nose with a rolled up newspaper until it stops.)

This should all be simple and straightforward.  Sometimes it isn't, but it can 
be _fixed_.

> Additionally, you can use newlib and gcc to compile 
> C++ applications (however, no STL, etc support).  I also have a small
> side project to try to move the uclibc++ libraries to bare metal.

Sounds interesting.  I'm sure Garrett would love to see your patches when 
you're done.

GPLv3 is to GPLv2 what Attack of the Clones is to The Empire Strikes Back.

For unsubscribe information see
