Newbie questions - compiling Linux exes on Windows OS

Brian Dessent
Mon Jan 2 09:42:00 GMT 2006

Daniel White wrote:

> Is it inefficient in terms of transferring data between
> server and user, or in the processing and manipulation of
> data itself on the server? For example, I want to run a
> conversion script where the user sends a single MIDI or a
> picture to be converted. This single file is sent to the server
> for complicated processing with maths and stuff, but no data
> is actually being communicated between server and user until
> the very end - where a single converted file is easily sent
> to the user. So perhaps the inefficiency you speak of isn't
> an issue in this case?

CGI is inefficient because a new process must be created for each
request.  For every hit to the server, a process has to be forked and
exec'd, initialize, process the data, send the result back to the
webserver, and then terminate.  This is horrendously inefficient, so
using C to do CGI for performance reasons makes little sense, and hardly
anybody has done it that way in the last decade.
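
The per-request cost can be made concrete with a small timing sketch
(Python here, purely for illustration): spawning a fresh interpreter
process for each "request", as CGI does, versus calling a handler that
is already loaded.  handle_request() is a hypothetical stand-in, not a
real conversion script.

```python
# Compare the CGI model (a fresh process per request) with the
# resident-interpreter model (the handler is already loaded).
import subprocess
import sys
import time

def handle_request(data):
    # Stand-in for the actual per-request work.
    return data.upper()

REQUESTS = 20

# CGI-style: fork/exec a new interpreter process per request.
start = time.perf_counter()
for _ in range(REQUESTS):
    subprocess.run([sys.executable, "-c", "print('x'.upper())"],
                   capture_output=True, check=True)
cgi_time = time.perf_counter() - start

# Resident-interpreter style: the code is already in memory.
start = time.perf_counter()
for _ in range(REQUESTS):
    handle_request("x")
resident_time = time.perf_counter() - start

print(f"per-process: {cgi_time:.3f}s, in-process: {resident_time:.6f}s")
```

On any machine the per-process loop is slower by orders of magnitude,
and that gap is pure startup overhead, independent of what the handler
actually computes.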

All modern web servers use modular interpreters (such as mod_php,
mod_perl, or mod_python), which means that the language runtime runs
inside the web server itself.  When a request comes in, the interpreter
is already running and can serve it immediately.  Some even have
accelerators that cache the parsed/tokenized version of the scripts so
that startup cost is almost zero.  This means that a modern web server
running a scripting language can run circles around something doing
plain CGI, even if the CGI target is written in C -- because for most
tasks the bottleneck is I/O (network, disk, database), not CPU.
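
As a rough illustration of the resident-interpreter model, here is a
minimal WSGI application using only Python's standard library (WSGI
plays a role analogous to the mod_* embedding described above): once the
module is loaded, serving a request is just a function call.

```python
# A minimal WSGI application: the interpreter and the app stay resident
# in the server process, so each request is a function call -- no
# fork/exec, no interpreter startup.
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    body = b"converted output would go here\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Exercise the app directly, the way a resident server would:
environ = {}
setup_testing_defaults(environ)   # fill in the standard WSGI keys
status_holder = {}
def start_response(status, headers):
    status_holder["status"] = status
result = b"".join(app(environ, start_response))
print(status_holder["status"], result)
```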

> The reason I want to use C/C++ is because I eventually also want to
> make the program as a standalone product so people can use it offline
> as well as online. But to avoid the hassle of compilation every
> time I want to try the program out, I might use perl or PHP
> after all. In this case, is perl or PHP closer to simple C/C++ code?
> One other factor to use perl or PHP is the ease of use required
> to test a program on my own PC. The last thing I want to do is keep
> uploading the program to my server in the bug-testing stage to see
> if the program works. I'd rather just test it on my own system,
> and upload it at the end.

You don't have to upload the script every time.  Just run PHP or perl on
your local machine.  There's nothing Linux-specific about these
languages.
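
To make the local-testing point concrete, a sketch (with a hypothetical
convert() standing in for the real MIDI/image conversion): keep the
conversion logic in an ordinary function, separate from any web-facing
glue, and exercise it on your own machine before anything is uploaded.

```python
# Keep the conversion logic web-agnostic so it can be tested locally.
# convert() is a placeholder transform standing in for the real maths.
def convert(data: bytes) -> bytes:
    return bytes(reversed(data))

# Local "bug-testing stage": runs on your own PC, no server needed.
sample = b"fake midi bytes"
out = convert(sample)
assert convert(out) == sample  # this placeholder round-trips
print("local test passed")
```

The same convert() can later be called from a CGI wrapper, a WSGI app,
or a standalone command-line tool without changing the logic itself.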

> I use CoreFTP to upload to my site using SSH. If what you're saying
> is as simple as I think it is, is there any instant way of performing
> a compile on a file (using a shell or something), or will I need
> to enquire further to obtain information to obtain the location
> of the server's compiler exe?

You need ssh (shell) access.  It sounds like at the moment you are using
scp or sftp, which use the same protocol but are not the same thing. 
For instance, some hosts set up the system to use a restricted shell so
that you can scp but you cannot ssh to an interactive shell.

> Right, I'll probably stick with the compiler on my host. But out of
> interest, if I compiled as linux/unix exe, wouldn't that be a
> generic exe that would be compatible with most unix/linux
> setups? If I can compile a Windows exe and assume it to run
> on most Windows setups, then why can't I do the same for linux/unix?
> After all, my (image/MIDI) conversion program will only use simple
> maths and file accessing commands to read and write data to a file.

It doesn't work that way at all.  Windows is a single unified platform
that runs on a very small number of architectures, and it is controlled
by a single entity.  Linux runs on dozens of different architectures and
has a multitude of different versions of the C and C++ libraries, some
of which are not compatible with one another.  It can be compiled and
modified by anyone, so the particular combination of
compiler/libraries/kernel/architecture means there can be hundreds of
different permutations.  But in the Linux world this is seen as an
advantage, since most software is distributed as source, so binary
compatibility is irrelevant.  And when software isn't distributed as
source, it goes through a distribution that has absolute control over
all of these variables, from kernel version to C library version to
compiler version, etc.  The closest you can come to universality is by
distributing a static binary, and that is a) not always technically
possible, b) inefficient, and c) somewhat frowned upon and seen as a
last resort for closed-source applications.  Again, most *nix people
expect to be given source, not binaries.

I think you are underestimating the quality of modern scripting
languages like PHP/Python/Ruby/Perl.  I strongly suggest that you write
your code in one of them.  Only after you can demonstrate a clear
performance bottleneck should you resort to C, and even then it's
usually only necessary for the most performance critical parts of your
code.  All of the above languages make it pretty easy to implement
specific parts of your code in C, if necessary.
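
As a sketch of that last point, Python's ctypes (one of several ways to
mix C into a script) can call straight into compiled C code; here the C
math library's cos() stands in for a performance-critical routine you
might eventually write yourself.

```python
# Call a routine from an already-compiled C library via ctypes.
# libm's cos() is a stand-in for your own performance-critical C code.
import ctypes
import ctypes.util

# find_library locates the C math library; the literal soname is a
# Linux/glibc fallback for illustration.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # executes the C implementation, not Python code
```

Perl (XS), PHP (C extensions), and Ruby have comparable mechanisms, so
starting in a scripting language does not lock you out of C later.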

From a distribution standpoint, for *nix people a perl script is about a
million times easier to distribute than a binary.  For Windows folks
this is not the case, but there are tools that will package a script and
an interpreter into one exe -- py2exe, for example.  And for Windows
users especially it's quite irrelevant: they are used to being given an
installer, so if that installer installs an interpreter plus script
files, they really don't care that the program is not a single .exe
file.  All that matters is that they can click on something on the Start
menu.  In short, there is no reason why a program written in any of the
above languages cannot be distributed in standalone form to Windows
users.


Want more information?  See the CrossGCC FAQ.

More information about the crossgcc mailing list