Why are libm functions slow on some inputs?

The GNU C Library includes a math library that contains a considerable amount of code donated by IBM. The IBM code uses specialized algorithms to compute an approximate result for a given input to a specific mathematical function. In some cases, higher precision is required during the computation of intermediate results in order to produce an accurate final result. There is in fact a lot of academic research attempting to prove the maximum intermediate precision required to produce an output of a given precision (these proofs are generally per function). If an intermediate result requires more precision than the hardware provides, the function simulates the required precision using what is called integer multi-precision: if you need 100 bits, you gang together enough integers to represent 100 bits and operate on those larger numbers with the same specialized algorithms. Eventually the 100-bit result is rounded down to the size of float, double, or long double, depending on the function called. Thus a particular input may require higher-precision intermediate calculations, which in turn use the slower integer multi-precision code to compute an accurate result. Without the higher intermediate precision, the accuracy of these functions would be terrible.

Hopefully this explains why libm functions are slow for some inputs: those inputs trigger cases where intermediate results need higher precision, and when that precision exceeds what the hardware provides, the slower integer multi-precision support code must be used. You can detect whether you are hitting a slow path by using the libm systemtap probe points placed on the slow paths of several libm functions; the probe trigger information can then be used to tweak your code to avoid the slow paths.

Are there ways to make this faster or better? Certainly. For example, the higher-precision intermediate results could be computed by ganging together floating-point registers instead of integer registers. This is not currently implemented in glibc, but it is something an interested volunteer could help with to speed up slow libm functions. Alternatively, the precision requirements of the functions could be lowered: the community is looking at providing an alternate implementation of libm, perhaps selected by {{{-ffast-math}}}, which skips the slow paths at the expense of accuracy and provides faster results.

Frequently Asked Questions about the GNU C Library

This document tries to answer questions a user might have when installing and using glibc. Please make sure you read this before sending questions or bug reports to the maintainers.

The GNU C library is very complex. The installation process has not been completely automated; there are too many variables. You can do substantial damage to your system by installing the library incorrectly. Make sure you understand what you are undertaking before you begin.

Contents

  1. Frequently Asked Questions about the GNU C Library
    1. Compiling glibc
      1. What systems does the GNU C Library run on?
      2. What tools do I need to build GNU libc?
      3. What version of the Linux kernel headers should be used?
      4. When I run `nm -u libc.so' on the produced library I still find unresolved symbols. Can this be ok?
      5. What are these `add-ons'?
      6. My kernel emulates a floating-point coprocessor for me. Should I enable --with-fp?
      7. Why do I get messages about missing thread functions when I use librt? I don't even use threads.
      8. I get failures during `make check'. What should I do?
      9. What is symbol versioning good for? Do I need it?
      10. How can I compile on my fast ix86 machine a working libc for an older and slower ix86? After installing libc, programs abort with "Illegal Instruction".
      11. `make' fails when running rpcgen the first time, what is going on? How do I fix this?
      12. Why do I get: `#error "glibc cannot be compiled without optimization"' when trying to compile GNU libc with GNU CC?
    2. Installation and configuration issues
      1. How do I configure GNU libc so that the essential libraries like libc.so go into /lib and the others into /usr/lib?
      2. Do I need to use GNU CC to compile programs that will use the GNU C Library?
      3. Looking through the shared libc file I haven't found the functions `stat', `lstat', `fstat', and `mknod' and while linking on my Linux system I get error messages. How is this supposed to work?
      4. Programs using libc have their messages translated, but other behavior is not localized (e.g. collating order); why?
      5. How do I create the databases for NSS?
      6. Even statically linked programs need some shared libraries which is not acceptable for me. What can I do?
      7. I need lots of open files. What do I have to do?
      8. Why shall glibc never get installed on GNU/Linux systems in /usr/local?
    3. Source and binary incompatibilities
      1. The prototypes for `connect', `accept', `getsockopt', `setsockopt', `getsockname', `getpeername', `send', `sendto', and `recvfrom' are different in GNU libc from any other system I saw. This is a bug, isn't it?
      2. Why don't signals interrupt system calls anymore?
      3. I've got errors compiling code that uses certain string functions. Why?
      4. I get compiler messages "Initializer element not constant" with stdin/stdout/stderr. Why?
      5. I get some errors with `gcc -ansi'. Isn't glibc ANSI compatible?
      6. I can't access some functions anymore. nm shows that they do exist but linking fails nevertheless.
      7. The sys/sem.h file lacks the definition of `union semun'.
      8. My program segfaults when I call fclose() on the FILE* returned from setmntent(). Is this a glibc bug?
      9. I get "undefined reference to `atexit'".
    4. Miscellaneous
      1. How can I set the timezone correctly?
      2. What other sources of documentation about glibc are available?
      3. How can I find out which version of glibc I am using at the moment?
      4. Context switching with setcontext() does not work from within signal handlers.
    5. New FAQ Entries that are not complete yet
      1. What are the accuracy goals for libm functions?
      2. Why are libm functions slow on some inputs?
      3. Why no strlcpy / strlcat?
      4. Compiling for ARM fails
      5. How do I build a binary that works on older GNU/Linux distributions?
      6. How do I build glibc on Ubuntu (list other distros here with similar problems)?
      7. After installation of glibc 2.15, I cannot compile GCC anymore

Compiling glibc

What systems does the GNU C Library run on?

Please see the file README for up-to-date details.

The GNU C Library supports these configurations for using Linux kernels:

  • i[4567]86-*-linux-gnu
  • x86_64-*-linux-gnu
  • powerpc-*-linux-gnu Hardware floating point required
  • powerpc64-*-linux-gnu
  • s390-*-linux-gnu
  • s390x-*-linux-gnu
  • sh[34]-*-linux-gnu Requires Linux 2.6.11 or newer
  • sparc*-*-linux-gnu
  • sparc64*-*-linux-gnu

Additional configurations are part of the ports directory, see the README for details.

What tools do I need to build GNU libc?

You need:

  • GCC, both the c compiler and the c++ compiler (for the testsuite)
  • GNU binutils
  • GNU make
  • perl
  • GNU awk
  • GNU sed
  • On Linux: The header files of the Linux kernel

Developers that modify glibc might need additionally:

  • gperf
  • GNU autoconf
  • GNU gettext
  • GNU texinfo

For details, see the manual section on "Tools for Compilation" or read the INSTALL file in the glibc sources.

What version of the Linux kernel headers should be used?

The headers from the most recent Linux kernel should be used. The headers used while compiling the GNU C library and the kernel binary used when using the library do not need to match. The GNU C library runs without problems on kernels that are older than the kernel headers used. The other way round (compiling the GNU C library with old kernel headers and running on a recent kernel) does not necessarily work as expected. For example you can't use new kernel features if you used old kernel headers to compile the GNU C library.

Even if you are using an older kernel on your machine, we recommend you compile GNU libc with the most current kernel headers. That way you won't have to recompile libc if you ever upgrade to a newer kernel. To tell libc which headers to use, give configure the --with-headers switch (e.g. --with-headers=/usr/src/linux-3.3/include).

To install the Linux kernel headers, run make headers_install in the kernel source tree. This is described in the kernel documentation.

When I run `nm -u libc.so' on the produced library I still find unresolved symbols. Can this be ok?

Yes, this is ok. There can be several kinds of unresolved symbols:

  • magic symbols automatically generated by the linker. These have names like __start_* and __stop_*

  • symbols starting with _dl_* come from the dynamic linker
  • weak symbols, which need not be resolved at all (fabs for example)

Generally, you should make sure you find a real program which produces errors while linking before deciding there is a problem.

What are these `add-ons'?

To enhance glibc there are additional add-ons which might be distributed as separate packages. Currently the nptl and libidn add-ons are part of glibc, while the ports add-on is a separate package.

To use these packages as part of GNU libc, just unpack the tarfiles in the libc source directory and tell the configuration script about them using the --enable-add-ons option. If you give just --enable-add-ons configure tries to find all the add-on packages in your source tree. If you want to select only a subset of the add-ons, give a comma-separated list of the add-ons to enable:

  • configure --enable-add-ons=nptl,libidn

for example.

Add-ons can add features (including entirely new shared libraries), override files, provide support for additional architectures, and just about anything else. The existing makefiles do most of the work; only some few stub rules must be written to get everything running.

Most add-ons are tightly coupled to a specific GNU libc version. Please check that the add-ons work with the version of GNU libc you use.

With glibc 2.2 the crypt add-on and with glibc 2.1 the localedata add-on were integrated into the normal glibc distribution; crypt and localedata are therefore no longer add-ons. Also, the linuxthreads add-on is obsolete now that nptl is used.

My kernel emulates a floating-point coprocessor for me. Should I enable --with-fp?

This is only relevant for certain platforms like PowerPC or MIPS. The configuration of GNU libc must be consistent with the ABI that your compiler uses: both must be configured the same way.

An emulated FPU is just as good as a real one, as far as the C library and compiler are concerned. You only need to say --without-fp, and configure your compiler accordingly, if your machine has no way to execute floating-point instructions.

People who are interested in squeezing the last drop of performance out of their machine may wish to avoid the trap overhead by doing so.

Why do I get messages about missing thread functions when I use librt? I don't even use threads.

In this case you probably mixed up your installation. librt uses threads internally and has implicit references to the thread library. Normally these references are satisfied automatically but if the thread library is not in the expected place you must tell the linker where it is. When using GNU ld it works like this:

  • gcc -o foo foo.c -Wl,-rpath-link=/some/other/dir -lrt

The directory `/some/other/dir' should contain the thread library. `ld' will use the given path to find the implicitly referenced library while not disturbing any other link path.

I get failures during `make check'. What should I do?

The testsuite should compile and run cleanly on your system; every failure should be looked into. Depending on the failures, you probably should not install the library at all.

You should consider reporting it in bugzilla providing as much detail as possible. If you run a test directly, please remember to set up the environment correctly. You want to test the compiled library - and not your installed one. The best way is to copy the exact command line which failed and run the test from the subdirectory for this test in the sources.

There are some failures which are not directly related to the GNU libc:

  • Some compilers produce buggy code. No compiler gets single precision complex numbers correct on Alpha. Otherwise, gcc-3.2 should be ok.
  • The kernel might have bugs. For example the tst-cpuclock2 test needs a fix that went in Linux 3.1 (patch).

What is symbol versioning good for? Do I need it?

Symbol versioning solves problems that are related to interface changes. One version of an interface might have been introduced in a previous version of the GNU C library but the interface or the semantics of the function has been changed in the meantime. For binary compatibility with the old library, a newer library needs to still have the old interface for old programs. On the other hand, new programs should use the new interface. Symbol versioning is the solution for this problem. The GNU libc uses symbol versioning by default unless it gets disabled via a configure switch.

We don't advise building without symbol versioning, since you lose binary compatibility - forever! The binary compatibility you lose is not only against the previous version of the GNU libc but also against all future versions. This means that you will not be able to execute programs that others have compiled.

How can I compile on my fast ix86 machine a working libc for an older and slower ix86? After installing libc, programs abort with "Illegal Instruction".

glibc and gcc might generate some instructions on your machine that aren't available on an older machine. You've got to tell glibc that you're configuring for e.g. i586 by specifying i586 as your machine, for example:

  • ../configure --prefix=/usr i586-pc-linux-gnu

You also need to tell gcc to generate only i586 code: add `-mcpu=i586' (just -m586 doesn't work) to your CFLAGS.

Note that i486 is the oldest supported architecture since nptl needs atomic instructions and those were introduced with i486.

`make' fails when running rpcgen the first time, what is going on? How do I fix this?

The first invocation of rpcgen is also the first use of the recently compiled dynamic loader. If there is any problem with the dynamic loader it will more than likely fail to run rpcgen properly. This could be due to any number of problems.

The only real solution is to debug the loader and determine the problem yourself. Please remember that for each architecture there may be various patches required to get glibc HEAD into a runnable state. The best course of action is to determine if you have all the required patches.

Why do I get: `#error "glibc cannot be compiled without optimization"' when trying to compile GNU libc with GNU CC?

There are a couple of reasons why the GNU C library will not work correctly if it is not compiled with optimization.

In the early startup of the dynamic loader (_dl_start), before relocation of the PLT, you cannot make function calls. You must inline the functions you will use during early startup, or call compiler builtins (__builtin_*).

Without optimizations enabled GNU CC will not inline functions. The early startup of the dynamic loader will make function calls via an unrelocated PLT and crash.

Without auditing the dynamic linker code it would be difficult to remove this requirement.

Another reason is that nested functions must be inlined in many cases to avoid executable stacks.

In practice there is no reason to compile without optimizations, therefore we require that GNU libc be compiled with optimizations enabled.

Installation and configuration issues

How do I configure GNU libc so that the essential libraries like libc.so go into /lib and the others into /usr/lib?

Like all other GNU packages GNU libc is designed to use a base directory and install all files relative to this. The default is /usr/local, because this is safe (it will not damage the system if installed there). If you wish to install GNU libc as the primary C library on your system, set the base directory to /usr (i.e. run configure --prefix=/usr <other_options>).

Some systems like Linux have a filesystem standard which makes a difference between essential libraries and others. Essential libraries are placed in /lib because this directory is required to be located on the same disk partition as /. The /usr subtree might be found on another partition/disk. If you configure for Linux with --prefix=/usr, then this will be done automatically.

To install the essential libraries which come with GNU libc in /lib on systems other than Linux, one must explicitly request it. Autoconf has no option for this, so you have to use a `configparms' file (see the `INSTALL' file for details). It should contain:

slibdir=/lib
sysconfdir=/etc

The first line specifies the directory for the essential libraries, the second line the directory for system configuration files.

Do I need to use GNU CC to compile programs that will use the GNU C Library?

In theory, no; the linker does not care, and the headers are supposed to check for GNU CC before using its extensions to the C language.

However, there are currently no ports of glibc to systems where another compiler is the default, so no one has tested the headers extensively against another compiler. You may therefore encounter difficulties. If you do, please report them as bugs.

Also, in several places GNU extensions provide large benefits in code quality. For example, the library has hand-optimized, inline assembly versions of some string functions. These can only be used with GCC.

Looking through the shared libc file I haven't found the functions `stat', `lstat', `fstat', and `mknod' and while linking on my Linux system I get error messages. How is this supposed to work?

Believe it or not, stat and lstat (and fstat, and mknod) are supposed to be undefined references in libc.so.6! Your problem is probably a missing or incorrect /usr/lib/libc.so file; note that this is a small text file now, not a symlink to libc.so.6. It should look something like this:

GROUP ( libc.so.6 libc_nonshared.a )

Programs using libc have their messages translated, but other behavior is not localized (e.g. collating order); why?

Translated messages are automatically installed, but the locale database that controls other behaviors is not. You need to run localedef to install this database, after you have run `make install'. For example, to set up the French Canadian locale, simply issue the command

    localedef -i fr_CA -f ISO-8859-1 fr_CA

Please see localedata/README in the source tree for further details.

How do I create the databases for NSS?

If you have an entry "db" in /etc/nsswitch.conf you should also create the database files. The glibc sources contain a Makefile which does the necessary conversion and calls to create those files. The file is `db-Makefile' in the subdirectory `nss' and you can call it with `make -f db-Makefile'. Please note that not all services are capable of using a database.

Even statically linked programs need some shared libraries which is not acceptable for me. What can I do?

NSS (for details just type `info libc "Name Service Switch"') won't work properly without shared libraries. NSS allows using different services (e.g. NIS, files, db, hesiod) by just changing one configuration file (/etc/nsswitch.conf) without relinking any programs. The only disadvantage is that now static libraries need to access shared libraries. This is handled transparently by the GNU C library.

A solution is to configure glibc with --enable-static-nss. In this case you can create a static binary that will use only the services dns and files (change /etc/nsswitch.conf for this). You need to link explicitly against all these services. For example:

  gcc -static test-netdb.c -o test-netdb \
    -Wl,--start-group -lc -lnss_files -lnss_dns -lresolv -Wl,--end-group

The problem with this approach is that you've got to link every static program that uses NSS routines with all those libraries.

In fact, one cannot say anymore that a libc compiled with this option is using NSS. There is no switch anymore. Therefore it is highly recommended not to use --enable-static-nss since this makes the behaviour of the programs on the system inconsistent.

I need lots of open files. What do I have to do?

This is first of all a kernel issue. The kernel defines with OPEN_MAX the limit on the number of simultaneously open files and with FD_SETSIZE the number of usable file descriptors. To allow more open files you need to change these values and recompile the kernel. You don't necessarily need to recompile the GNU C library, since the only place where OPEN_MAX and FD_SETSIZE really matter in the library itself is the size of fd_set, which is used by select.

The GNU C library is now select free. This means it internally has no limits imposed by the fd_set type; everywhere the functionality is needed, the poll function is used instead.

If you increase the number of file descriptors in the kernel you don't need to recompile the C library.

You can always get the maximum number of file descriptors a process is allowed to have open at any time using

        number = sysconf (_SC_OPEN_MAX);

This will work even if the kernel limits change.

Why shall glibc never get installed on GNU/Linux systems in /usr/local?

The GNU C compiler treats /usr/local/include and /usr/local/lib in a special way: these directories are searched before the system directories. On GNU/Linux the system directories /usr/include and /usr/lib contain a (possibly different) version of glibc, and mixing files from different glibc installations is not supported and will break, so you risk breaking your complete system. If you want to test a glibc installation, use another directory as the argument to --prefix. If you want to install this glibc version as the default version, overriding the existing one, use --prefix=/usr and everything will go in the right places.

Source and binary incompatibilities

The prototypes for `connect', `accept', `getsockopt', `setsockopt', `getsockname', `getpeername', `send', `sendto', and `recvfrom' are different in GNU libc from any other system I saw. This is a bug, isn't it?

No, this is no bug. GNU libc already follows the Single Unix specifications (and I think the POSIX.1g draft which adopted the solution). The type for a parameter describing a size is socklen_t.

Why don't signals interrupt system calls anymore?

By default GNU libc uses the BSD semantics for signal(), unlike Linux libc 5 which used System V semantics. This is partially for compatibility with other systems and partially because the BSD semantics tend to make programming with signals easier.

There are three differences:

  • BSD-style signals that occur in the middle of a system call do not affect the system call; System V signals cause the system call to fail and set errno to EINTR.
  • BSD signal handlers remain installed once triggered. System V signal handlers work only once, so one must reinstall them each time.
  • A BSD signal is blocked during the execution of its handler. In other words, a handler for SIGCHLD (for example) does not need to worry about being interrupted by another SIGCHLD. It may, however, be interrupted by other signals.

There is general consensus that for `casual' programming with signals, the BSD semantics are preferable. You don't need to worry about system calls returning EINTR, and you don't need to worry about the race conditions associated with one-shot signal handlers.

If you are porting an old program that relies on the old semantics, you can quickly fix the problem by changing signal() to sysv_signal() throughout. Alternatively, define _XOPEN_SOURCE before including <signal.h>.

For new programs, the sigaction() function allows you to specify precisely how you want your signals to behave. All three differences listed above are individually switchable on a per-signal basis with this function.

If all you want is for one specific signal to cause system calls to fail and return EINTR (for example, to implement a timeout) you can do this with siginterrupt().

I've got errors compiling code that uses certain string functions. Why?

glibc has special string functions that are faster than the normal library functions. Some of the functions are additionally implemented as inline functions and others as macros. This might lead to problems with existing code, but it is explicitly allowed by ISO C.

The optimized string functions are only used when compiling with optimizations (-O1 or higher). The behavior can be changed with two feature macros:

  • __NO_STRING_INLINES: Don't do any string optimizations.

  • __USE_STRING_INLINES: Use assembly language inline functions (might increase code size dramatically).

Since some of these string functions are now additionally defined as macros, code like "char *strncpy();" doesn't work anymore (and is unnecessary, since <string.h> has the necessary declarations). Either change your code or define __NO_STRING_INLINES.

Another problem in this area is that gcc still has problems on machines with very few registers (e.g., ix86). The inline assembler code can require almost all the registers and the register allocator cannot always handle this situation.

One can disable the string optimizations selectively. Instead of writing

        cp = strcpy (foo, "lkj");

one can write

        cp = (strcpy) (foo, "lkj");

This disables the optimization for that specific call.

I get compiler messages "Initializer element not constant" with stdin/stdout/stderr. Why?

Constructs like:

static FILE *InPtr = stdin;

lead to this message. This is correct behaviour with glibc, since stdin is not a constant expression. Please note that a strict reading of ISO C does not allow constructs like the above.

One of the advantages of this is that you can assign to stdin, stdout, and stderr just like to any other global variable (e.g. stdout = my_stream;), which can be very useful with custom streams that you can write with libio (but beware that this is not necessarily portable). The reason it is implemented this way was versioning problems with the size of the FILE structure.

To fix those programs you've got to initialize the variable at run time. This can be done, e.g. in main, like:

   static FILE *InPtr;
   int main(void)
   {
     InPtr = stdin;
   }

or by constructors (beware this is gcc specific):

   static FILE *InPtr;
   static void inPtr_construct (void) __attribute__((constructor));
   static void inPtr_construct (void) { InPtr = stdin; }

I get some errors with `gcc -ansi'. Isn't glibc ANSI compatible?

The GNU C library is compatible with the ANSI/ISO C standard. If you're using `gcc -ansi', the glibc includes which are specified in the standard follow the standard. The ANSI/ISO C standard defines what has to be in the include files - and also states that nothing else should be in the include files (btw. you can still enable additional standards with feature flags).

The GNU C library is conforming to ANSI/ISO C - if and only if you're only using the headers and library functions defined in the standard.

I can't access some functions anymore. nm shows that they do exist but linking fails nevertheless.

With the introduction of versioning in glibc 2.1 it is possible to export only those identifiers (functions, variables) that are really needed by application programs and by other parts of glibc. This way a lot of internal interfaces are now hidden. nm will still show those identifiers, but marks them as internal. ISO C states that identifiers beginning with an underscore are internal to the libc. An application program normally shouldn't use those internal interfaces (there are exceptions, e.g. __ivaliduser). If a program uses these interfaces, it's broken. These internal interfaces might change between glibc releases or be dropped completely.

The sys/sem.h file lacks the definition of `union semun'.

Nope. This union has to be provided by the user program. Former glibc versions defined it, but that was an error, since the definition does not make much sense on reflection. The standards describing the System V IPC functions define it this way, and programs must therefore be adapted.

My program segfaults when I call fclose() on the FILE* returned from setmntent(). Is this a glibc bug?

No. Don't do this. Use endmntent(), that's what it's for.

In general, you should use the correct deallocation routine. For instance, if you open a file using fopen(), you should deallocate the FILE * using fclose(), not free(), even though the FILE * is also a pointer.

In the case of setmntent(), it may appear to work in most cases, but it won't always work. Unfortunately, for compatibility reasons, we can't change the return type of setmntent() to something other than FILE *.

I get "undefined reference to `atexit'".

This means that your installation is somehow broken. The situation is the same as for stat(), fstat(), etc (see question 2.7). Investigate why the linker does not pick up libc_nonshared.a.

If a similar message is issued at runtime this means that the application or DSO is not linked against libc. This can cause problems since atexit() is not exported anymore.

Miscellaneous

How can I set the timezone correctly?

You first have to install yourself the timezone database, it is hosted at http://www.iana.org/time-zones.

Then, simply run the tzselect shell script, answer the questions, and make a symlink /etc/localtime pointing to /usr/share/zoneinfo/NAME, where NAME is the value tzselect prints at the end. That's all; you never again have to worry. Instead of the system-wide setting in /etc/localtime, you can also set the {{{TZ}}} environment variable.

The GNU C Library supports the extended POSIX method for setting the TZ variable; this is documented in the manual.

What other sources of documentation about glibc are available?

The glibc manual is part of glibc; it is also available online.

The Linux man-pages project documents the Linux kernel and the C library interfaces.

The official home page of glibc is at http://www.gnu.org/software/libc.

The glibc wiki is at http://sourceware.org/glibc/wiki/HomePage.

For bugs, the glibc project uses the sourceware bugzilla with component 'glibc'.

How can I find out which version of glibc I am using at the moment?

If you want to find out the version from the command line, simply run the libc binary. This is probably not possible on all platforms, but where it is, simply locate the libc shared library and start it as an application. On Linux, for example:

        /lib/libc.so.6

This will produce all the information you need.

What always will work is to use the API glibc provides. Compile and run the following little program to get the version information:

#include <stdio.h>
#include <gnu/libc-version.h>
int main (void) { puts (gnu_get_libc_version ()); return 0; }

This interface can, of course, also be used to perform tests at runtime if that should be necessary.

Context switching with setcontext() does not work from within signal handlers.

XXX: Is the following still correct?

The Linux implementations of setcontext() (IA-64 and S390 so far) support only synchronous context switches. There are several reasons for this:

  • UNIX provides no other (portable) way of effecting a synchronous context switch (also known as co-routine switch). Some versions support this via setjmp()/longjmp() but this does not work universally.

  • As defined by the UNIX '98 standard, the only way setcontext() could trigger an asynchronous context switch is if this function were invoked on the ucontext_t pointer passed as the third argument to a signal handler. But according to draft 5, XPG6, XBD 2.4.3, setcontext() is not among the set of routines that may be called from a signal handler.

  • If setcontext() were to be used for asynchronous context switches, all kinds of synchronization and re-entrancy issues could arise and these problems have already been solved by real multi-threading libraries (e.g., POSIX threads).

  • Synchronous context switching can be implemented entirely at user level, and less state needs to be saved and restored than for an asynchronous context switch. It is therefore useful to distinguish between the two types of context switches. Indeed, some application vendors are known to use setcontext() to implement co-routines on top of normal (heavier-weight) pre-emptable threads.

It should be noted that if someone were dead set on using setcontext() on the third arg of a signal handler, then IA-64 Linux could support this via a special version of sigaction() which arranges that all signal handlers start executing in a shim function which takes care of saving the preserved registers before calling the real signal handler and restoring them afterwards. In other words, we could provide a compatibility layer which would support setcontext() for asynchronous context switches. However, given the arguments above, I don't think that makes sense. setcontext() provides a decent co-routine interface and we should just discourage any asynchronous use (which just calls for trouble at any rate).

New FAQ Entries that are not complete yet

The following entries are not part of the existing FAQ in the glibc git repository. Feel free to add entries here and those will be moved later to the right place.

What are the accuracy goals for libm functions?

See a libc-alpha message discussing these goals in detail. Except for functions such as sqrt, fma and rint that are specified to be bound to particular IEEE 754 operations and that have results (including exceptions raised) that are fully defined to be correctly rounding for all rounding modes, libm functions are not intended to be correctly rounding, are not intended to have errors below 1ulp (they may have errors of up to a few ulp on some inputs), and are not intended to be monotonic on regions where the underlying mathematical function is monotonic. A draft set of C bindings to IEEE 754-2008 is under development, part of which (TS 18661-4) is expected to define standard names such as crsin for correctly rounding functions, and in future glibc may provide some such functions under such names.

Why are libm functions slow on some inputs?

The GNU C Library includes a math library that contains a considerable amount of code donated by IBM. The IBM code uses specialized algorithms to compute an approximate result for a given input to a specific mathematical function. In some cases, higher precision is required during the computation of intermediate results in order to produce an accurate final result; there is in fact a lot of academic research attempting to prove the maximum intermediate precision required to produce an output of a given precision (these proofs are generally per function).

If an intermediate result requires higher precision than is available in hardware, the function simulates the required precision using what is called integer multi-precision: if you need 100 bits, you gang together enough integers to simulate 100 bits and operate on those larger numbers using the same specialized algorithms. Eventually the 100-bit result is rounded down to the size of float, double, or long double, depending on the function called. Thus some inputs require higher-precision intermediate calculations, which in turn use slower integer multi-precision values to compute an accurate result. Without the higher intermediate precision, the accuracy of the functions would be terrible.

You can detect whether you are calling a slow path by using the libm systemtap probe points for the slow paths in several libm functions. It is our expectation that you can then use the probe trigger information to tweak your code to avoid the slow paths. The community is also looking at providing an alternate, faster implementation of libm, perhaps selected by -ffast-math, which skips the slow paths at the expense of accuracy.

Why no strlcpy / strlcat?

The strlcpy and strlcat functions have been promoted as a way of copying strings more safely, to avoid buffer overruns when retrofitting large bodies of existing code without understanding the code in detail. Annex K of the C11 standard defines optional functions strcpy_s and strcat_s that serve a similar need, albeit less efficiently and with different calling conventions. Unfortunately, in practice these functions can cause trouble, as their intended use encourages silent data truncation, adds complexity and inefficiency, and does not prevent all buffer overruns in the destinations. New standard library functions should reflect good existing practice, and since it is not clear that these functions are good practice they have been omitted from glibc.

Compiling with gcc -D_FORTIFY_SOURCE can catch many of the errors that these functions are supposed to catch, without having to modify the source code. Also, if efficiency is not paramount the snprintf function can often be used as a portable substitute for these functions.

Compiling for ARM fails

You need to use the ports add-on.

How do I build a binary that works on older GNU/Linux distributions?

(with the answer pointing to LSB, with information about distro LSB packages).

How do I build glibc on Ubuntu (list other distros here with similar problems)?

Some distribution compilers enable -fstack-protector by default. The GNU C Library cannot be compiled with it, so you need to add "-fno-stack-protector -U_FORTIFY_SOURCE" to CFLAGS.

After installation of glibc 2.15, I cannot compile GCC anymore

Note: it may be useful to have a similar new question regarding the siginfo_t changes and libgcc build failures; existing GCC releases (predating Thomas's patches) won't build with current glibc because of that.

None: FAQ (last edited 2013-11-28 22:22:49 by CarlosODonell)