This is the mail archive of the glibc-bugs mailing list for the glibc project.
[Bug ports/3775] New: kernel's zlib code upgrade triggers glibc>=2.4 misbehaviour
- From: "ya-cbou at yandex dot ru" <sourceware-bugzilla at sourceware dot org>
- To: glibc-bugs at sources dot redhat dot com
- Date: 20 Dec 2006 21:21:11 -0000
- Subject: [Bug ports/3775] New: kernel's zlib code upgrade triggers glibc>=2.4 misbehaviour
- Reply-to: sourceware-bugzilla at sourceware dot org
During the 2.6.18 release cycle, these changes (1) were made to the mainline kernel:
31925c8857ba17c11129b766a980ff7c87780301 [PATCH] Fix ppc32 zImage inflate
b762450e84e20a179ee5993b065caaad99a65fbf [PATCH] zlib inflate: fix function
0ecbf4b5fc38479ba29149455d56c11a23b131c0 move acknowledgment for Mark Adler to
4f3865fb57a04db7cca068fed1c15badc064a302 [PATCH] zlib_inflate: Upgrade library
code to a recent version
These changes trigger glibc >= 2.4 misbehaviour: applications segfault on use of libutil (and probably some other libraries) when libutil is placed on a compressed filesystem. Confirmed with both EABI and the old ABI, on different ARM boards.
To reproduce, follow these steps:
1. Grab test.c from http://article.gmane.org/gmane.linux.ports.arm.kernel/28068
(I'll also attach it)
2. arm-linux-gcc test.c -lutil -o test
On target with glibc>=2.4 and kernel>=2.6.18:
3. mkcramfs /lib testfs
4. mount -t cramfs testfs /mnt/testfs -o loop
5. LD_LIBRARY_PATH=/mnt/testfs ./test
(segfault or "illegal instruction" expected, no core dumped)
On the second run there will be no segfault, which shows that glibc operates properly on the library file once it is cached. As further proof, you can "cat
/mnt/testfs/libutil.so.1 > /dev/null" prior to running test, and it will not segfault.
The bug is also reproducible using a JFFS2 filesystem with zlib compression. It can be reproduced several times without a reboot via an umount/mount sequence, which flushes the file cache.
The bug is not reproducible with glibc-2.3.x, nor when using LD_BIND_NOW. Reverting changes (1) from the kernel also eliminates the glibc misbehaviour.
It is also known that the first and second runs of md5sum on the libutil file produce the same results, which puts the kernel's new zlib code almost above suspicion: cramfs/jffs2 use only one inflation/deflation path inside the kernel.
The new kernel zlib code is a bit faster, and this could trigger a race or timing issue, as suggested by these commands:
(/tmp is tmpfs, fast)
LD_DEBUG=all LD_LIBRARY_PATH=/mnt/testfs ./test 2> /tmp/2 <- segfaults
LD_DEBUG=all LD_LIBRARY_PATH=/mnt/testfs ./test <- not segfaults
I.e. (IMHO) the printfs that write to the slow terminal delay the dynamic loader's execution, and thus we do not see the segfault.
Of course, it could still be a kernel bug rather than a glibc one, but we are stuck at this point and are asking for help.
Summary: kernel's zlib code upgrade triggers glibc>=2.4 misbehaviour
AssignedTo: roland at gnu dot org
ReportedBy: ya-cbou at yandex dot ru
CC: glibc-bugs at sources dot redhat dot com
GCC build triplet: i686-pc-linux-gnu
GCC host triplet: arm-unknown-linux
GCC target triplet: arm-unknown-linux