


Re: More crosstool-0.42-glibc-2.4-gcc-4.1.0-nptl


On 5/11/06, Robert Schwebel <r.schwebel@pengutronix.de> wrote:
I've tried it and it doesn't work. Perl is fundamentally broken by
design with regard to cross compiling, and the Perl crew doesn't want to
do anything about it.

IIRC it takes Perl to compile Perl (at least until the Sixth Coming); it just happens that the (mini)Perl that it takes to compile Perl can be compiled without Perl. There were good reasons for this back in the day, and you do have other options now; for instance, you can build the Glasgow Haskell compiler, compile Pugs with it, and induce it to translate Perl 5 to Perl 6 and run that. Child's play, neh?

Python itself works; I haven't tried libxml2 integration. But you are
surely right about the general problem.

Is there a cross variant of distutils? That would be ultra-nifty.
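
The closest thing I know of is abusing the environment variables that distutils' Unix compiler already honors (CC, LDSHARED, CFLAGS); roughly this sort of thing, with a made-up toolchain prefix and paths:

    # Poor man's cross distutils: point the compiler variables that the
    # Unix compiler class reads at a cross toolchain and run build_ext.
    # "arm-linux-gcc" and the setup.py location are placeholders.
    import os
    import subprocess

    env = dict(os.environ)
    env.update({
        "CC": "arm-linux-gcc",
        "LDSHARED": "arm-linux-gcc -shared",
        "CFLAGS": "-Os",
    })

    # Build the extension with the cross tools; target headers and libs
    # still have to be supplied by hand (extra -I/-L in CFLAGS/LDSHARED).
    subprocess.call(["python", "setup.py", "build_ext", "--inplace"], env=env)

That only covers compiling and linking the extension itself; anything setup.py decides by poking the host Python's sysconfig is still wrong, which is exactly the part a real cross distutils would have to solve.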


How did you know that I already tried to cross-compile TAO in PTXdist? ;)

Best C++ compiler stress test I know of, except maybe KDE. 'Course you gave it a shot, whether or not you care about CORBA. Make it work and I'll buy you a beer. Actually, make that a case of beer.

Packaging is a good thing, although I have the impression that at the
moment none of the mainstream packaging mechanisms (rpm, deb) are the
right thing for embedded. That's why we currently use ipkg, although the
code is a horrible mess.

Yep. You might also look at LRP's packaging system, which is about the simplest thing that could possibly work. It would be very satisfying on systems like ours in which the rootfs is decompressed into a ramdisk on boot anyway.

One design decision of PTXdist (vs. just using Debian) was full
configurability. If you go this way, you'll end up with one distribution
variant for one customer project. Full configuration doesn't work well
with precompiled binaries. But it lets us build 4 MB root images
including the kernel, whereas a standard minimal Debian x86 installation
is somewhere around 200 MB these days. That's still too much to fit into
some NOR flash.

That's not really a function of dpkg; it's easy enough to change some compiler defaults and a couple of debhelper scripts to remove debugging symbols and man pages and other irrelevancies, then rebuild with few or no source changes to get about the same stuff that goes into a .ipk. Using even cruder methods I got a pretty full ARM userspace down to <50MB just by removing docs and such from a debootstrap made with stock sarge binaries, at which point I stopped fiddling with it because I had lots of flash. That's without even bothering with busybox and tinylogin and dropbear, which of course you want to do anyway on real systems for the sake of RAM footprint.
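
"Cruder methods" here means barely more than a script that deletes directories; in that spirit, something like the following, with the rootfs path and the list of victims obviously incomplete:

    # Crude post-debootstrap pruning: drop docs, man/info pages and unused
    # locales from the target root filesystem. The paths are illustrative,
    # not a recipe.
    import os
    import shutil

    ROOTFS = "/work/arm-rootfs"
    DOOMED = ["usr/share/doc", "usr/share/man", "usr/share/info"]
    KEEP_LOCALES = ("en",)

    for d in DOOMED:
        shutil.rmtree(os.path.join(ROOTFS, d), ignore_errors=True)

    locale_dir = os.path.join(ROOTFS, "usr/share/locale")
    if os.path.isdir(locale_dir):
        for name in os.listdir(locale_dir):
            if not name.startswith(KEEP_LOCALES):
                shutil.rmtree(os.path.join(locale_dir, name), ignore_errors=True)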

Well, usually this quickly leads to a situation where speed and space are
the arguments (although I know this is a temporary argument, as
hardware becomes more and more powerful).

Heavyweight build procedures don't necessarily imply heavyweight target binaries, and finite hardware isn't always an argument against rich languages either. If I really needed an ORB on ARM and didn't have hardware to burn, my time would be far better invested in porting Fast Address Space Switching to kernel 2.6.x than in code-bumming TAO or ORBit, let alone trying to make them fit into a cross-compiling straitjacket. And if you know what you're doing you can probably make your SOAP server (or whatever is slinging XML) faster and more compact in Python than in C, since the performance of a real system is often dominated by cache<->DRAM bandwidth and Python bytecode and data structures are impressively compact. Take a look at the dict implementation inside Python 2.4 some time, and ponder the consequences of immutable strings for locality when walking a syntax tree.
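
To make the locality point concrete: because strings are immutable, CPython can intern them, so a tree whose nodes all mention the same identifier holds one shared string object and a pile of small pointers rather than thousands of copies. A toy illustration (the identifier names are invented, obviously):

    # Toy illustration: equal, interned strings collapse to a single object,
    # so repeated identifiers in a big data structure cost one pointer each.
    import sys

    try:
        intern_ = sys.intern      # Python 3
    except AttributeError:
        intern_ = intern          # Python 2 builtin

    # Pretend these identifiers were pulled out of thousands of AST nodes.
    names = [intern_("sensor_%d" % (i % 4)) for i in range(10000)]

    # Only four distinct string objects are actually alive.
    print(len(set(map(id, names))))    # -> 4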

Next time I need to do something like that, though, I will probably
use OCaml, if I can figure out how to argue past the difficulty of
finding someone to maintain it afterwards.  Strongly typed functional
languages rock, especially when their performance doesn't suck.  You
don't happen to have cross-compilation procedures for the OCaml
run-time, do you?  An incantation to build a cross ocamlopt would be
even better.

We usually try to design our customer software in a way that it can be
debugged on the host, not on the target. I can compile PTXdist projects
completely with host-gcc instead of cross-gcc, just by running
something like "ptxdist --native go". This method usually breaks down
when you have to access hardware (which is very often the case with
our embedded / automation projects), but then you are lost with other
methods anyway. At least with things like Ethernet (and with CAN as
well) you can use UML's virtual networks.

Running on other processors is also a useful column in the test matrix; it's another kind of proxy for future platforms, and sometimes it makes developers' lives a lot easier. It also gives you access to tools that you may not have on the target arch, emulator or no emulator; I mentioned valgrind and oprofile, and the profiling API in JDK 1.5 also comes to mind. But for the kinds of things I build, none of these is a full substitute for having a sophisticated test and debug environment for the actual target binaries.

Maybe you have to stub out some device interfaces _outside_ your
binary and provide remote shims to others; I am fond of tunneling with
named pipes, and with qemu I can even proxy ioctls and message queues
over to the target without having to stick proxy drivers in the
kernel.  But you can't catch timeslice-dependent race conditions, let
alone toolchain bugs, by building and running native on your amd64 dev
box.  And when you inherit (as I recently have) a big, opaque
autobuilder system for release binaries, and it's not clear why it's
failing for QA in a way that it didn't on your desk, you want to be
able to exercise the bits that QA has under a microscope.
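
In case it helps to picture the shim idea, it's rarely fancier than this kind of thing; the device name and the line-oriented "protocol" are made up for the example:

    # Stubbing a device *outside* the binary: the application opens a named
    # pipe where it would normally open /dev/something, and a separate stub
    # process feeds it canned (or remotely proxied) data.
    import os

    FAKE_DEV = "/tmp/fake-adc"          # stands in for e.g. /dev/adc0

    if not os.path.exists(FAKE_DEV):
        os.mkfifo(FAKE_DEV)

    pid = os.fork()
    if pid == 0:
        # Child: the stub "driver". This end could just as well forward each
        # request to the real target over ssh or a qemu channel.
        with open(FAKE_DEV, "w") as dev:
            for sample in (512, 513, 511):
                dev.write("%d\n" % sample)
        os._exit(0)

    # Parent: the application under test, none the wiser that its "device"
    # is a pipe with a script on the other end.
    with open(FAKE_DEV) as dev:
        for line in dev:
            print("read sample: %s" % line.strip())
    os.waitpid(pid, 0)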

Thanks :-) I very much understand your approach. I'm just asking myself
whether the community shouldn't decide for or against the ability to
cross compile. If everyone goes your way, cross-compilation support in
OSS tools gets less and less testing and ends up like so many of those
sourceforge projects out there that never gained critical mass.

It doesn't have to be an either/or. Structuring your build system to support cross-compiling is good discipline. It makes you think through toolchain requirements and compile-time vs. run-time resource tests, and it encourages a kind of compile-time regression testing that is too often neglected: when you add a conditionally compiled feature or support for new hardware, can you turn it off with a flag and get the same preprocessor output you got before? If not, you are going to have to go through a complete QA cycle just to make sure your change doesn't break anything when disabled. (Guess who got bit by this one last week when merging someone else's code onto the branch for an upcoming release.)
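
The check itself is mechanical; roughly the following, where the source list, include path and the config macro obviously stand in for whatever your build system already knows:

    # "Same preprocessor output with the feature disabled" check: preprocess
    # each file in the old and the new tree with the new feature forced off
    # and complain about any difference. All names here are placeholders.
    import subprocess

    CPPFLAGS = ["-DCONFIG_FEATURE_FOO=0", "-Iinclude"]
    SOURCES = ["drivers/widget.c", "core/main.c"]

    def preprocess(tree, src):
        # -P drops #line markers so pure layout shifts don't show up as diffs.
        p = subprocess.Popen(["gcc", "-E", "-P"] + CPPFLAGS + [src],
                             cwd=tree, stdout=subprocess.PIPE)
        out, _ = p.communicate()
        if p.returncode != 0:
            raise RuntimeError("cpp failed on %s in %s" % (src, tree))
        return out

    for src in SOURCES:
        if preprocess("old-tree", src) != preprocess("new-tree", src):
            print("output changed with the feature disabled: %s" % src)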

Unit tests that run during the build process are also good discipline.
Ditto smoke tests under simulated load.  Double ditto constructing a
fresh, self-contained build and test environment on a regular basis.
Taken together, these things are much more easily accomplished with a
native toolchain running inside an emulator box than in any other way.
And if you're hard-core, you can build everything both native and
cross and diff the binaries to confirm that they're identical modulo
timestamps and things.  It's not too good for gcc; I aspire someday to
reach that level of discipline in my embedded work.  In the meantime,
I'm glad that some people do it each way and submit patches when
things break.
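
Diffing the binaries "modulo timestamps and things" is the only fiddly part of that; in practice it means scrubbing the sections that carry build metadata before comparing, something like this (which sections have to go depends on your toolchain):

    # Compare a native build and a cross build of the same binary while
    # ignoring build metadata: strip the .comment section (compiler version
    # strings and friends) with objcopy, then hash what's left.
    import hashlib
    import subprocess
    import tempfile

    def scrubbed_digest(path):
        with tempfile.NamedTemporaryFile() as tmp:
            subprocess.check_call(["objcopy", "--remove-section=.comment",
                                   path, tmp.name])
            with open(tmp.name, "rb") as f:
                return hashlib.sha1(f.read()).hexdigest()

    native = scrubbed_digest("build-native/foo")
    cross = scrubbed_digest("build-cross/foo")
    print("identical" if native == cross else "differ")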

Cheers,
- Michael

P. S.  Yes, I was kidding about putting Perl 6 on an embedded system.
You can clean the coffee off your monitor now.  :-)


