This is the mail archive of the mailing list for the glibc project.

Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]

Re: glibc conditioning


I tested again on all the machines I have access to. The following

    double d=0.3;
    int i = (int)(1000*d);

On Linux alone, i is 299; on every other machine (Solaris, AIX,
HP-UX, FreeBSD), i is 300. I suppose this is just one more
demonstration that Linux behaves differently from everyone else in
floating-point operations.

There is no point in arguing with me about what the strictly correct
answer is according to some standard. The point is that, as an
OS/library user, I want consistent results. We have millions of lines
of code running on various types of workstations, and it is simply
unthinkable to re-debug previously mature code, whose authors are no
longer around, on Linux because of this low-level incompatibility.

Our software is heavily numerical in nature. I understand that
writing code that compares doubles or casts them to integers is
playing with fire. Unfortunately, we have to live with our history
and our legacy. For now I have to stop working on the Linux port and
regret that I cannot enjoy the speed of Intel computers. The floating-
point problems show up in thousands of lines of code written by
people who are no longer around.

So, please: if you consider people like me among your customers, the
ones who want to bring EDA tools (a $3 billion annual business) onto
Linux, then make the glibc results consistent with what we are
familiar with and what we expect. If instead you insist on the letter
of the standard and leave me with the unthinkable job of fixing
legacy code, my only option is to run away.

