Re: [crosstool-NG] Design discussion


On Monday 06 April 2009 15:11:03 Yann E. MORIN wrote:
> On Sunday 05 April 2009 14:15:56 Rob Landley wrote:
> > > This post is to present the overall design of crosstool-NG, how and why
> > > I came up with it, and to eventually serve as a base for an open
> > > discussion on the matter.
> >
> > For comparison, my Firmware Linux project also makes cross compilers,
>
> So, it boils down to comparing _your_ project with _mine_.
> I am all for comparing, but I'm afraid we do not have the same goals.

I'm using my project as an example of how I would have designed a system to 
solve similar problems.  It's probably a bad habit, but it's easy to fall 
back on a context in which I've already worked through these issues to my own 
satisfaction.

> You've argued against cross-compiling more often than not, and you did make
> good points at it. And, from what I understand, FWL is your proof of
> concept that cross-compiling can be avoided.
>
> And you proved it. Partially. But that's not the point.

That's not even the point I was trying to make here.

My cross-compiler.sh script builds reusable cross compilers.  It's wrapped to 
be relocatable (so you can extract the prebuilt binary into an arbitrary 
location, such as your user home directory, and use it from there), and it's 
tarred up and uploaded as a set of prebuilt binary tarballs which are 
compiled for 32-bit x86 and statically linked against uClibc, which is about 
as portable as I can make them:

  http://impactlinux.com/fwl/downloads/binaries/cross-compiler/host-i686/

(That should run on 32 bit hosts, and on 64 bit hosts without even the 32-bit 
support libraries installed.  It does require a 2.6 kernel, though.)

I wouldn't have bothered to wrap, package, and upload them if I didn't expect 
people to want to use those outside the context of building mini-native and 
system-image files.
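
For example, using one of those prebuilt tarballs looks roughly like this 
(the armv4l names are just illustrative, grab whichever target you actually 
need):

  # Extract the prebuilt cross compiler anywhere, e.g. your home directory:
  tar xjf cross-compiler-armv4l.tar.bz2 -C "$HOME"
  # The wrapper is relocatable, so just call the tools by path
  # (or add the bin/ directory to your $PATH):
  "$HOME"/cross-compiler-armv4l/bin/armv4l-gcc -static hello.c -o hello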

Part of what I'm trying to point out is that the script that creates those 
cross compilers is 150 lines of bash.  (There's supporting code which 
provides functions for conceptually simple and irrelevant things such as 
downloading and extracting source tarballs, rm -rf them afterwards, checking 
for errors, creating a tarball of the resulting binaries, and so on.  But I 
tried very hard to make sure you don't have to read any of the supporting 
code to understand and modify what the script is doing.)

If someone wants to do cross compiling, and prefers to build their cross 
compiler from source instead of downloading prebuilt binary toolchains, the 
only configuration decision that user is required to make is "what target do 
you want".  Every other configuration knob is optional and they're not 
presented with them unless they go looking for them.

That's as simple as I know how to make it, both to implement and to set up 
and use.

The "build natively under emulation" thing is a complete red herring in the 
comparison I'm trying to make.  I get easily distracted into talking about it 
because I've spent so much time working on it, but it's not what I'm _trying_ 
to talk about here.

> Just my turn to rant a little bit ;-)

Go for it!

> For many, cross-compiling can't be 
> avoided. Running under an emulator is biased. Your ./configure might
> detect some specifics of the machine it's running on (the emulator) that
> might prove wrong on the real hardware, Or the other way around, miss
> some specifics of the real hardware that the emulator does not provide.

This is back on the "to cross compile or not" tangent, but since you asked:

That's 99% a kernel issue.  Your ./configure primarily probes for userspace 
packages you have installed (what library APIs can I call and link against), 
and those don't really depend on hardware.  That's just what's installed in 
your root filesystem.

You could just as easily say "I can't build this package on a standard PC 
server because this package is designed to work on a laptop with a battery it 
can monitor, and an accelerometer and a 3D card and wireless internet, none 
of which this server has".  If that sort of thing stopped you from _building_ 
it, distros like Red Hat would have trouble supporting laptops.

When building natively the base architecture should match, but the associated 
peripherals don't matter much during the build.  (You can't test the result 
without that hardware, but you should be able to compile and even install 
it.)

If you have counter-examples, I'm all ears.

That said, I'll grant there are times cross compiling can't be avoided.  You 
need to cross compile to reproducibly _get_ a native environment starting 
from an arbitrary host.  You may not have an emulator or powerful enough 
native hardware to build natively (the Xilinx MicroBlaze comes to mind).  
Setting up a native build environment may be overkill to build something like 
a statically linked version of the linux 
kernel's "Documentation/networking/ifenslave.c" that you're adding to an 
existing filesystem supplied by the device manufacturer.  (Or you just may 
have many years of experience doing it and prefer that approach because it's 
what you're comfortable with. :)

> So, we're stuck. Or are we?
>
> ( Just a side note before I go on. I do not use the same meanings as you
>   do for the following words:
>     build (machine) : the machine that _builds_ stuff
>     host (machine)  : the machine that _runs_ stuff
>     target (machine): the machine the stuff _generates_ code for

Um, I'm confused: are these the meanings I use for these words, the meanings 
you use for these words, or the meaning the GCC manual ascribes to these 
words?

>   For a compiler (binutils, gcc, gdb), the three make sense, but for other
>   programs (glibc, bash, toybox...), only the first two do.
> )

Tangent: not an issue specific to crosstool-ng. :)

Actually even for a compiler like tinycc there are only two interesting ones.  
The fact that gcc did three of them is because gcc is more complicated than 
it actually needs to be, because it thinks it must rebuild itself under 
itself.

Note that at compile time, the compiler tells your C code what host you're 
building on via predefined macros like __arm__ and __i386__.  (I.E. the 
old "$ARCH-gcc -dM -E - < /dev/null" trick.)  Your compiler already knows 
what host you're building on, and you can #include <endian.h> to query that 
and even do conditional #includes of different .c files if you need different 
code for different hosts.  (The more complicated endianness detection dance 
busybox does in include/platform.h should detect endianness for BSD and 
Mac OS X and even Digital Unix.  During the compile, in a header file.)
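
For instance, asking an arm-targeting compiler about itself (armv4l-gcc is 
just a stand-in for whatever cross or native gcc you're using):

  # Dump the macros the compiler predefines for the code it compiles:
  armv4l-gcc -dM -E - < /dev/null | grep -i arm
  # prints lines like:  #define __arm__ 1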

So you should _never_ need to specify --build, because it can autodetect it.  
It just doesn't.

More generally, most portable programs shouldn't care all that much about the 
host they're running on, and that includes compilers.  This is true for the 
same reason that if the host processor type affects the output when you run a 
gif to jpeg converter, something is wrong.  If a program that converts known 
input files into known output files doesn't perform exactly the same function 
when it's running on arm as when it's running on x86, then it's _broken_.  
Compilers are fundamentally just programs that convert input files (C code) 
into output files (.o, .a, .so, executable, etc).  (Yeah, they suck in 
implicit input like libraries and header files from various search paths, but 
a docbook->pdf converter sucks in stylesheets and fonts in addition to the 
explicit xml input, and nobody thinks there's anything MAGIC about it.)  The 
fact those output files may (or may not) be runnable on the current system is 
irrelevant.  Things like sed and awk can produce shell scripts as their output, 
which are runnable on the current system if you set the executable bit.  It's 
not _special_.

The distinction between "--host" and "--target" exists because gcc wants to 
hardwire into its build system the ability to do a canadian cross.  You could 
just as easily do this by hand: first build a cross compiler on your current 
machine targeting "--host", and then cross compile with that to build a 
new compiler, this time configured to target "--target".
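
Spelled out as a sketch (HOST and TARGET are placeholder tuples, and the 
just-built $HOST-gcc has to be in your $PATH for the second step):

  # Step 1: build a compiler that runs on the current machine and targets HOST:
  ./configure --target=$HOST && make && make install
  # Step 2: use it to cross compile a second compiler that runs on HOST
  # and generates code for TARGET:
  ./configure --host=$HOST --target=$TARGET && make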

The distinction between "--build" and "--host" exists because the gcc build 
wants to rebuild itself with itself, even when doing a canadian cross.  It 
doesn't want to convert source code into an executable using the available 
compiler.  It doesn't trust the available compiler.  It wants to build a 
temporary version of itself, and then build a new version of itself with that 
temporary version to PURIFY itself from the VILE TAINT of the host compiler, 
and then build itself a _third_ time JUST TO BE SURE.

But why it can't autodetect --build even then, as described above, is a 
mystery for the ages...

I personally don't humor gcc.  A compiler is just a program, and it should 
build the way normal programs do, and if I have to hit its build system with 
a large rock repeatedly to make it agree with me, I'm ok with that.

> One of the problems I can see with FWL is how you end up with a firmware
> image that runs on the target platform (say my WRT54GL ;-) ), and contains 
> only what is needed to run it, without all that native build environment
> that will definitely not fit in the ~7MiB available in there.

Ok, I should have been more clear:

My project builds cross compilers.  I'm mostly trying to compare the cross 
compilers I build against the cross compilers your system builds, and what's 
involved to get a usable cross compiler out of each system.  (The fact that 
my cross compilers are produced as a side effect and yours are the focus of 
your project is a side issue, although I may have allowed myself to get 
distracted by it.)

If you just want a cross compiler and to take it from there yourself, you can 
just grab the cross compiler tarball the build outputs and use it to build 
your own system.  That's why it's tarred up in the first place.

I admit I've been lazy and said "run ./build.sh", which does extra stuff, 
instead of saying "run download.sh, host-tools.sh, and cross-compiler.sh, in 
that order.  The second two take the target $ARCH as an argument, the first 
one doesn't."  But that's because I'm not really trying to teach you how to 
use my build system, I'm just using it as an example of how creating a cross 
compiler can be simplified to the point where it can more or less be elided 
in passing.  I could trivially make a shell script wrapper that does that for 
you, or teach ./build to take a "--just-cross" command line argument.  I just 
haven't bothered.
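
Following the order above, with mipsel as the example target, that would be 
something like:

  ./download.sh              # grab and sha1sum-check the source tarballs
  ./host-tools.sh mipsel     # set up the sanitized host build environment
  ./cross-compiler.sh mipsel # build (and tar up) the cross compiler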

(To answer your actual question, if you're not interested in building a new 
system in the native environment under qemu, you can always 
do "NATIVE_TOOLCHAIN=none ./mini-native.sh mipsel" and then add more stuff to 
the bare busybox+uClibc directory yourself before 
running "SYSIMAGE_TYPE=squashfs ./system-image.sh mipsel", although the 
bootloader is still your problem.  A slightly cleaner way to do it would be 
to create a hw- target for the wrt54gl, see hw-wrt610n for an example.  
Several users have also modified their local copy of mini-native.sh to add 
extra packages they want to build.  But that's a tangent.)

> My point of view is that:
> 1- you need correct build tools, of which a correctly isolated toolchain
> 2- you build your packages and install them in a rootfs/ directory
>    (rootfs/ will contain only your packages' installed files, and is missing
>     the toolchain libs)
> 3- you use a copy of rootfs/ which you populate with libs from the toolchain
> 4- you use that populated copy to build your firmware images
>
> Of course, if your packages are not cross-compile friendly, you may have
> problems. But nowadays, most common packages do cross-compile neatly.

I've seen several large scale cross compiling efforts, from Timesys's 
TSRPM-based build system through Gentoo Embedded.  They all tend to top out 
at the same ~600 packages that can be made to cross compile with enough 
effort.  (Although each new version of these packages tends to subtly break 
stuff that _used_ to work on various targets.  For example, in the past year 
Python shipped a release or two that didn't work out of the box on mips and 
Perl broke on a couple non-x86 targets.  Little obscure packages like that, 
which obviously nobody really uses...)

The debian repository has somewhere north of 30,000 packages.  So somewhere 
under 2% of the available packages support cross compiling (and a _lot_ of 
effort goes into making even that much continue to cross compile as each new 
version comes out), which is why I don't consider it a general solution.

Luckily, most embedded systems are happy restricting themselves to this 
existing subset.

> I have seen only a few of them requiring carefully crafted ./configure
> options or a few patches here and there (ltrace is such a sucker).
>
> For the record, I have some experience in that field as well ;-), as I've
> been doing exactly this stuff for the past four years as my day-time job,
> and I've played with LFS and cross-LFS for the previous three years or so.
>
> Note: crosstool-NG was *not* written at my day-time job, but on my own
> spare time (which gave some frictions here at home from time to time...)

FWL is my hobby project too.  I've gotten some sponsored time to work on it 
over the years, but altogether that's maybe 10% of the total time I've put 
into it.

> > so I
> > have some experience here too.
>
> And, I do acknowledge your experience and your technical merit.
> You know it, let others know it as well. :-)

I'd like to be clear that I'm not denigrating your experience or expertise 
here either.  Your project works and people use it.  I'm just saying I either 
wouldn't have done it that way, or don't see why you did it that way, and you 
_did_ ask for details. :)

> > My project is carefully designed in layers so you don't have to use the
> > cross compilers I build.  It should be easy to use crosstool-ng output to
> > build the root filesystems and system images that the later scripts
> > produce.  (How easy it actually is, and whether there's any benefit in
> > doing so, is something I haven't really looked into yet.)  The point is
> > the two projects are not actually directly competing, or at least I don't
> > think they are.
>
> The main goals differ, but the underlying reason is the same: be able to
> build stuff that will run on an alien machine. crosstool-NG is limited to
> building the required tools (actual compiler, plus a few debug utilities),
> while FWL aims at building a native build environment.

Tangent.

I think that cross compiling will continue to be hard in general, even after 
you've got a working compiler, so my goal is to bridge to a different native 
build environment.  (I.E. Getting a reliably working cross compiler _is_ a 
hard part of cross compiling, but it's not the _only_ hard part.)

That said, some people want to tackle the hard part for themselves, so giving 
them a known working cross compiler saves a huge amount of hassle.  (It can 
take months to learn how to build one yourself, since there's so many subtly 
_wrong_ ways to do it.)  And if all you're building for the target is a 
couple of static "hello world" binaries, cross compiling's quite reasonable.

It doesn't scale very well, and breaks easily, but assuming you _are_ doing it 
(which is what this list is about)...

> > I came at it from a different background.  I was playing with Linux From
> > Scratch almost from the beginning,
>
> LFSer as well in the 2001-2004 era. Went as far as using it as my daily
> workstation using KDE. Yeah, I was *that* insane at the time. But taught
> me a whole lot in the end.

Tangent.

I generally had my hands full building server stuff, and didn't play with x11 
much.  (I built it a couple times, but I needed my laptop to _work_, 
including PDF and audio support and wireless networking and so on.)

Keep meaning to poke at building x.org from source, but they split it into so 
many different pieces that I'd need to get something like 10 packages working 
to run an xterm...

> [--SNIP the genesis of FWL--]
>
> > I looked at crosstool circa 2004-ish, but was turned off by the way it
> > replicated huge amounts of infrastructure for every single dot release of
> > every component.  (I remember it having separate patches, separate build
> > scripts, and so on.  I don't even remember what it did per-target.)
>
> How can you avoid having one patchset for each version of each component?

Ok, here's a design decision we disagree on.  Is there a significant advantage 
to supporting multiple versions of the same components, and does it outweigh 
the downsides?

In general, if a new package version doesn't do something an old version did 
(including "be small enough" for the embedded parts), the new version should 
probably be _fixed_.  (It's certainly something you want to know about.)

Fragmenting the tester base isn't useful.  There's never enough testing to 
find all the bugs, or enough developers to implement everything you want to 
do, so the next best thing you can do is have all your testers testing the 
same thing and focus the development effort on fixing that one thing.

Fixes can only be pushed upstream against the _current_ version of 
packages.  (Testing the current version of rapidly changing projects with 
major unfinished features, such as uClibc and the linux kernel, is especially 
useful, because they're the most likely to cause strange subtle breakage in 
some obscure package or other.)  Testing that early while the developers 
still remember what they changed recently is a good thing.

Having the same behavior across different targets allows automated regression 
testing that isn't just a laundry list of special cases.

What are the corresponding advantages of supporting multiple versions?

> Of course, FWL uses only the latest versions available (which is wrong, it
> still uses gcc-4.1.2 for philosophical reasons)

Tangent.

I keep meaning to go to 4.2.1 but every time I hit a bug or a missing feature, 
I test 4.2.1 to see if it fixes it, and I have _never_ found any bug that 
4.2.1 fixed or new feature that 4.2.1 supports which 4.1.2 doesn't.  I should 
just bite the bullet and switch anyway, but so far I just haven't found an 
excuse other than "higher number".  (Not even "this supports a hardware 
target that the other one doesn't".  I keep _expecting_ to find one, but I've 
been looking on and off for two years now.  I'd have just upgraded anyway if 
I didn't have to rewrite the armv4 soft float patch for the new version...)

> > Along the way I wrote this:
> >   http://landley.net/writing/docs/cross-compiling.html
>
> But, of all packages I've been using, most are *now* cross-compile friendly
> (with some notable exceptions) and the ones that gave me the most headaches
> were the ones coming from companies who don't grok the terms "open" and
> "free as in speech". *Those* were real suckers.

Tangent.

I bump into packages that don't want to cross compile all the time, and I 
already have people using my build system to compile packages I'm not 
personally messing with.

For example, currently uClibc++ is getting quite a workout from Vladimir 
Dronnikov at http://uclibc.org/~dvv/ building cmake and nmap and so on 
against it, and pushing bug reports upstream to Garrett.  (Mark also figured 
out how to make uClibc++ work on arm eabi over the weekend, which required 
another largeish patch.)

Apparently our experiences differ here.

> > >  a- ease overall maintenance
> > >  b- ease configuration of the toolchain
> > >  c- support newer versions of components
> > >  d- add new features
> > >  e- add alternatives where it was available
> >
> > Can't really argue with those goals, although mostly because they're a
> > bit vague.
>
> What do you mean, "vague" (I understand the word, it's the same in french)?
> The question is really: what in the above list qualifies as "vague"?

Tangent.

The important part of that sentence was "Can't really argue with those goals", 
and the rest of what I'm about to say here is really irrelevant, but since 
you asked (feel free to skip this bit, it's not real objections):

I was confused by "add alternatives where it was available".  (Alternate 
package versions?  Alternate features?  Alternate configuration methods?)  I 
myself tend to lean towards the "do one thing and do it well" approach, so I 
try to make sure each alternative is justified and worth bothering the users 
to make a decision about.  To me, too _many_ alternatives means your project 
isn't well-focused.  From a user interface perspective, I tend to expect the 
default response to any "Now what do I do?" question to be "stop bothering me 
and get back to work".  (You have to let them override/customize the default 
behavior if they think you're doing it _wrong_, but pestering them about it 
up front isn't necessarily helpful.  Could be a stylistic difference here.)

You state "ease overall maintenance" but then go on to explicitly say "support 
newer versions of components" and "add new features" separately...  so what's 
left in maintenance that those two don't cover?  (Make it easy to fix bugs, 
maybe?)  That confused me a bit on the first reading too.

My answer to the rest is to question "how".  Ease configuration... how?  I 
chose to ease configuration by having as little of it as possible and making 
what there was completely optional, you chose to ease configuration by making 
it very granular and doing a configuration menu GUI with nine sub-menus.  
Both presumably support the same goal in completely opposite ways, which 
means the goal itself seems a bit nebulous to me because it doesn't 
define "easy".

Thus the specific design choices were likely to be more interesting, and I 
expected them to come up later in the same post, so I preferred to argue with 
them when I got to them.

Again, you asked. :)

> > My current build system has very careful boundaries.
>
> Call it an API?

No, what I'm getting at is different from an API.  Read Rob Pike and Brian 
Kernighan's 1983 usenix paper (often called "Cat -v considered harmful").

Intro here: http://harmful.cat-v.org/cat-v/
Full paper here: http://harmful.cat-v.org/cat-v/unix_prog_design.pdf

These days we'd use the phrase "feature creep" in the discussion.  When 
designing my project I was very clear on what it would _not_ do.

Crosstool is a lot better than some about defining its boundaries.  You 
_didn't_ get sucked into becoming an entire distro generator like 15 others 
out there.

User interface is a separate (albeit related) issue.

> > This is why my current system is very carefully delineated.  I know
> > exactly what it does NOT do.  It builds the smallest possible system
> > capable of rebuilding itself under itself.  I.E. it bootstraps a generic
> > development environment for a target, within which you can build
> > natively.  It has to do some cross compiling to do this, but once it's
> > done you can _stop_ cross compiling, and instead fire up qemu and build
> > natively within that.
>
> Except that it in fact does cross-compiling, as it is escaping the qemu
> via distcc to call the cross tools on the build machine. :-/

My goal is to eliminate the _need_ for anybody else to do cross compiling, not 
the _ability_. :)

Tangent:

The distcc acceleration trick doesn't require any of the packages being built 
to be cross-aware, thus it doesn't restrict you to the ~600 packages that are 
already cross-aware.  As far as the packages being built are concerned, 
they're building fully natively.  (And in theory, distcc could be calling out 
to other qemu instances, or native hardware.  The fact that it _isn't_ is 
purely an implementation detail.)

> > What
> > I'm mostly disagreeing with is your assumptions.
>
> There are two things:
> - the goal
> - the assumptions made to reach that goal
>
> Both make the "why". What I came up with makes the "how".
>
> As for the goal, I wanted to be able to build dependable
> (cross-)toolchains. On the assumptions, I saw that I could not rely on
> binutils/gcc/glibc/... to build easily (which is quite the case), and that
> I need a kind of framework to make them build seamlessly.

Tangent:

One big difference between our projects' goals (and strangely enough I wound 
up siding with buildroot on this one) is that I chose to only build uClibc, 
while your build offers glibc cross compiled to various targets.

(This isn't a criticism, it's a difference in scope.  Your project doesn't build 
native system images as part of its mandate, mine doesn't build glibc.  From 
a certain point of view, the C library is part of the target platform, as 
much as the processor, endianness, or the OABI/EABI decision on arm, so if I 
_was_ going to support it I'd just add extra targets.  From the vantage of 
the build scripts it's just one more package, and the "whether or not to support 
multiple versions of the same package" thing doesn't come up when it's not 
the same package.)

> Now we can discuss this, but the original discussion was not to see if the
> "how" was the best way to answer the "why".
>
> > In my case, I separated my design into layers, the four most interesting
> > of which are:
> > download.sh - download all the source code and confirm sha1sums
> > cross-compiler.sh - create a cross compiler for a target.
>
> crosstool-NG stops here. And strives at offering more options than your
> solution.
>
> A great many people are stuck with a specific version of any one or more
> of the components (gcc/glibc/...) for historical reasons I am not ready
> to discuss. Having a tool that can not cope with earlier versions is
> not helpful.

Yes and no.  If they stuck with earlier versions of the components, why didn't 
they stick with earlier versions of the cross compiler (or earlier versions 
of the cross compiler build system that build cross compilers with those 
components back when they were current)?  Why would you upgrade some packages 
and be "stuck" with others?

Binutils and gcc are something of a special case because they don't affect the 
resulting target system much.  Code built with gcc 4.3 and 4.1 should be able 
to seamlessly link together.  (If it doesn't, there's a bug.  Yes, even C++ 
according to the ABI, although I wouldn't personally trust it.)

You can upgrade the installed kernel without changing the kernel headers.

Ah, I get it.  One of the things I hadn't noticed about your design before now 
is the assumption that you _won't_ be building a new system to install, but 
that you must build a cross compiler that matches an existing binary image, 
to which you'll incrementally be adding packages.

That's a lot harder task than the one I chose to deal with, and explains 
rather a lot of the complexity of your build system.

Hmmm, one of the reasons I was uncomfortable with your build system is I 
couldn't quite figure out the goals of the project, and that just helped a 
lot.  The _strength_ of crosstool is if you need to supplement an existing 
root filesystem without rebuilding any of the parts of it that are already 
there.  In that case, you may need fine-grained selection of all sorts of 
little details in order to get it to match up precisely, details which would 
be completely irrelevant if your goal was to just "build a system for this 
target".

Ok, that makes a lot more sense now.

> > mini-native.sh - build a root filesystem containing a native toolchain
> > system-image.sh - package the root filesystem into something qemu can
> > boot
>
> Those two _use_ the above, they are not part of it.

Exactly. :)

> > > The first step was to split up this script into smaller ones, each
> > > dedicated to building a single component. This way, I hoped that it
> > > would be easier to maintain each build procedure on its own.
> >
> > I wound up breaking the http://landley.net/code/firmware/old version into
> > a dozen or so different scripts.  My earlier versions the granularity was
> > too coarse, in that one the granularity got too fine.  I think my current
> > one has the granularity about right; each script does something
> > interesting and explainable.
>
> So are the scripts in scripts/build/: each is dedicated to building a
> single piece of the toolchain, and each can be replaced without the others
> noticing (or so it should be the case).

The problem I encountered was that doing this made it significantly more 
difficult to follow the logic, especially the build prerequisites.  (One of 
the harder parts is figuring out what order stuff needs to be built in.  The 
cross-gcc stuff winds up building a lot of things twice to get a clean 
toolchain.)

That said, you're not really trying to avoid this kind of complexity, because 
being fiddly and granular seems to be the point.  (I still think you've gone 
overboard in a few cases; there's no reason to care about the -pipe option 
of gcc.)

> > Notice there is _nothing_ target-specific in there.  All the target
> > information is factored out into sources/targets.  The build scripts
> > _do_not_care_ what target you're building for.
>
> That's the same in crosstool-NG: the configuration and the wrapper scripts
> set up some variables that the build scripts rely upon to build their
> stuff.

Although which scripts get called in which order is dependent on 
your .config...

> The target specific configuration is generic, but can be overridden by
> target-specific code (eg. ARM can override the target tuple to append
> "eabi" to the tuple if EABI is enabled, and to not add it if not enabled;
> this can *not* be done in a generic way, as not all architectures
> behave the same in this respect).

It can be fairly generic, but the target tuple varies per target no matter 
what you do.  Some targets (ala blackfin the first time I tried it) won't 
give you a -linux and have to do -elf instead, yet Linux builds on 'em.

One dirty trick I pulled is having the _host_ tuple be `uname -m`-walrus-linux 
and the target tuple be variants of $ARCH-unknown-linux.  Since "unknown != 
walrus", it never did the "oh you're not really cross compiling, lemme short 
circuit the logic" thing which used to screw up uClibc on a gcc host.  
They've since patched that specific case by expecting uClibc in the tuple 
(even though I don't build the C library until _after_ I build the compiler 
so technically that decision hasn't been made yet), but in general I like the 
build to continue to use the one codepath I've most thoroughly tested and not 
drastically change its behavior behind my back.  A variant of the "all 
targets should behave as similarly as possible" thing.
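
A sketch of that trick, for the curious ($ARCH is the target architecture 
name the build was given):

  HOST_TUPLE="$(uname -m)-walrus-linux"   # where the toolchain will run
  TARGET_TUPLE="${ARCH}-unknown-linux"    # what it generates code for
  ./configure --host="$HOST_TUPLE" --target="$TARGET_TUPLE"
  # Even when `uname -m` and $ARCH match, "walrus" != "unknown", so configure
  # never takes the "you're not really cross compiling" shortcut.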

> > Ok, a few questions/comments that come to mind here:
> >
> > 1) Why do we have to install your source code?  The tarball we download
> > from your website _is_ source code, isn't it?  We already chose where to
> > extract it.  The normal order of operations is "./configure; make; make
> > install". With your stuff, you have to install it in a second location
> > before you can configure it.  Why?  What is this step for?
>
> I don't understand. crosstool-NG is like any other package:
> - it requires some pre-existing stuff in your environment, hence the
>   "./configure" step
> - you have to build it, hence the "make" step (although this is only
>   sed-ing a few place-holders here and there)
> - you have to install it to use it, hence the "make install" step
> - you add its location/bin to the PATH

But the result is source code, which you then compile to get _another_ binary.  
If it was "like any other package" you would download, ./configure, make, 
make install, and the result would be the binary you actually run (I.E. the 
cross compiler).

Instead you download the package, configure it, make, install, and then you 
configure it AGAIN, make AGAIN, and install AGAIN.

I.E. you need to install your source code before you build it.  I find that 
odd.  (There is the ./configure --local thing, but why isn't that the 
default?)
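
Spelled out, the two rounds look something like this (/some/place is just a 
placeholder prefix; menuconfig and build are ct-ng's own subcommands):

  # Round 1: configure, build, and install crosstool-NG itself:
  ./configure --prefix=/some/place
  make
  make install
  export PATH="/some/place/bin:$PATH"
  # Round 2: now configure and build the thing you actually wanted:
  ct-ng menuconfig
  ct-ng build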

Once upon a time linux systems tended to have a common copy of the linux 
source code, installed in /usr/src/linux.  They did that by extracting the 
tarball into that location.

> Then you can run ct-ng, from anywhere and you build your toolchain.
> I may repeat myself, but do you expect to build your own program in the
> gcc source tree?

Yes, if I could.  (I admit gcc and binutils check for this and error out, 
never did figure out why.)  I build everything else in its own source tree, 
including the kernel, uClibc, busybox, and so on.  This is actually the 
default way to build most packages, the FSF ones are unusual.  (That's part 
of the ./configure; make; make install thing.)

I tend not to see FSF designs as a good example of anything.  As Linus says 
near the start of the kernel's Documentation/CodingStyle:

  First off, I'd suggest printing out a copy of the GNU coding standards,
  and NOT read it.  Burn them, it's a great symbolic gesture.

> > 2) Your configuration menu is way too granular.  You ask your users
> > whether or not to use the gcc "-pipe" flag.  What difference does it
> > make?  Why ask this?  Is there a real benefit to bothering them with
> > this, rather than just picking one?
>
> I will answer this in an answer to your other post, if you will.

I'll catch up.  Might take it off list if people continue to get annoyed by 
the discussion being too long, though.

> > I want to do a more detailed critique here, but I had to reinstall my
> > laptop a couple weeks ago and my quick attempt to bring up your
> > menuconfig only made it this far:
> >
> > ./configure --prefix=/home/landley/cisco/crosstool-ng-1.3.2/walrus
> > Computing version string... 1.3.2
> > Checking for '/bin/bash'... /bin/bash
> > Checking for 'make'... /usr/bin/make
> > Checking for 'gcc'... /usr/bin/gcc

I note that if any of these aren't there, the build will die very early on 
with an error that it couldn't find the appropriate command, so explicitly 
checking for them seems a bit redundant.  (Doesn't actually _hurt_, but it 
seems unnecessary.  Judgement call, that.  Possibly a matter of personal 
taste.)

> > Checking for 'gawk'... not found
> > Bailing out...
> >
> > I note that Ubuntu defaults to having "awk" installed, why you _need_ the
> > gnu version specifically is something I don't understand.
>
> I could not make my awk script work with mawk, which is the default under
> the obscure distribution I am using (Debian, I think). So I fall back to
> installing gawk. But that was an *enormous* error. Its main use is to try
> to build a correct tsocks setup given the options. *That* is purely insane.
> It should be going away. No, it /should/ not be going away. It *is* going
> away.
>
> The fact that I shoehorned proxy settings in crosstool-NG is an error,
> granted, but because I'm using GNU extensions in there, so I must check
> for GNU awk.

My design approach is to ruthlessly minimize complexity.  If I'm not sure 
something is going to be there on all systems, I try to figure out if I can 
do without it or build it from source.

The approach you've taken is to require the user to build up their system to a 
minimum set of requirements.

> > For example, you require libtool.  Why are you checking for libtool?
>
> crosstool-NG itself does not require libtool. The components that it builds
> will use it if they find it. But if the version is too old, the build will
> break (I think it was mpfr at fault there, but am not sure), instead of
> simply ignoring it.
>
> So I have also to ensure a correct environment for the components I build.

That's a good reason for checking that the libtool that's installed isn't too 
old, but not a good reason for failing if libtool isn't there at all (which 
as I understand it would still mean your cross compilers build correctly).

As I said, I trimmed the $PATH to remove everything that wasn't actually used.  
Fairly draconian approach to making the build reliable, I know. :)

> > I note
> > that libtool exists to make non-elf systems work like ELF, I.E. it's a
> > NOP on Linux, so it's actually _better_ not to have it installed at all
> > because libtool often screws up cross compiling.
>
> But what if it *is* already installed?

Your test should be able to distinguish "bad version installed" from "not 
installed at all".
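
Something along these lines (the version threshold is made up for 
illustration; the point is that the two cases get different treatment):

  if ! command -v libtool > /dev/null 2>&1; then
    echo "libtool not installed: fine, the components just won't use it."
  elif ! libtool --version | head -n 1 | grep -q ' 2\.'; then
    echo "libtool is installed but too old, and that breaks the build."
    exit 1
  fi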

> The end-user is free to install  
> whatever he/she wants on his/her computer, no? It's just that I want to
> check that the environment is sane before going on any further.

They could change it after you run ./configure.  Downgrade it and install the 
broken version, or upgrade to some new version you've never heard of that's 
buggy.  New changes to your environment can break things, fact of life.  You 
can move the sanity tests to the start of each build if that bothers you, 
but "libtool is not installed" is not, in this context, a bad thing that 
needs to be fixed.

> The fact that libtool sucks is totaly irrelevant to the problem.
> And yes, it sucks.
>
> > (In my experience, when a project
> > is designed to do nothing and _fails_ to successfully do it, there's a
> > fairly high chance it was written by the FSF.  One of the things my
> > host-tools.sh does is make sure libtool is _not_ in the $PATH, even when
> > it's installed on the host.
>
> Oh, come-on... ;-) My libtool is in /usr/bin. Do you want to remove
> /usr/bin from my PATH? You'll end-up missing a lot of stuff, in this case.

I add that stuff to my path explicitly, knowing exactly what I need to build.

As I said, it's a draconian approach and I wasn't saying I expect other people 
to be that extreme. :)

> OK, so now moving on to answer your other post(s)... Took me about two
> hours trying to answer this one... :-(

Yeah, this has gotten really long really fast.  Not entirely surprised that 
happened when we both started talking about our big hobby projects. :)

I still have to reply to the second half of your first message... :)

Rob
-- 
GPLv3 is to GPLv2 what Attack of the Clones is to The Empire Strikes Back.


