When a program using select() starts tracking file descriptors at or above
1024 (FD_SETSIZE), the FD_* macros index past the end of the fixed-size
fd_set vector (128 bytes), writing to whatever lies beyond it and
corrupting the stack or heap.
This is a known, old issue. Examples:
It is perfectly valid to use select() on a user-allocated vector that IS large
enough to hold the fds being tracked, but it seems that glibc should take
some proactive measures to help applications that are not checking FD_SETSIZE.
It was pointed out that SSH, for example, uses this to work around the issue:
fdset = (fd_set *)xcalloc(howmany(maxfd + 1, NFDBITS), sizeof(fd_mask));
Some ideas could be to flag FD_ZERO as dangerous, or to check sizeof(...) on
the fd_set argument.
I would love to see a reasonable approach to protecting applications that aren't
prepared for RLIMIT_NOFILE to be >1024. :)
Also being tracked here: https://bugs.launchpad.net/bugs/386558
select is what it is. Every program using it must be considered buggy.
Could I convince you to revisit this bug? This issue is currently being hit by some enterprise-sized daemons (lots of open fds). The biggest problem is that almost every use of select() is wrong, so fixing them all in a timely manner is rather impractical. Some projects, like Samba, have already moved to poll(), but they're now hitting fd issues in various libraries.
I do agree that this is a library bug, but given the situation, I think it could make sense to add a fix to glibc to prevent buggy select() use from overwriting arbitrary bits of memory. It's obvious that most projects don't use select() properly, even though its correct use is documented in the man pages.
(In reply to comment #2)
> Could I convince you to revisit this bug?
No. Any such change breaks existing code since there are programs which redefine the set size and do other stupid things.
Wouldn't it be reasonable to range-check the file descriptor when security-related feature test macros (perhaps FORTIFY_SOURCE) are enabled?
By the way, POSIX specifies that passing fd values greater than or equal to FD_SETSIZE to the FD_* macros/functions results in undefined behavior, so programs which want to *try* using select with higher fds should do it by allocating an *array of fd_set objects* with (maxfd+FD_SETSIZE)/FD_SETSIZE elements, then performing operations like FD_SET(fd%FD_SETSIZE, &fds[fd/FD_SETSIZE]); -- this also avoids dependency on nonstandard and nonportable macros like NFDBITS.
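The array-of-fd_set scheme described above can be sketched as follows (large_fd_set and the helper names are hypothetical, chosen for illustration only):

```c
#include <stdlib.h>
#include <sys/select.h>

/* Portable workaround: an array of fd_set objects, indexed so that
 * every FD_* macro only ever sees an fd strictly below FD_SETSIZE. */
typedef struct {
    fd_set *sets;
    int     nsets;
} large_fd_set;

static int large_fdset_init(large_fd_set *l, int maxfd)
{
    l->nsets = (maxfd + FD_SETSIZE) / FD_SETSIZE;
    l->sets  = calloc(l->nsets, sizeof(fd_set));
    return l->sets ? 0 : -1;
}

static void large_fdset_set(large_fd_set *l, int fd)
{
    FD_SET(fd % FD_SETSIZE, &l->sets[fd / FD_SETSIZE]);
}

static int large_fdset_isset(large_fd_set *l, int fd)
{
    return FD_ISSET(fd % FD_SETSIZE, &l->sets[fd / FD_SETSIZE]);
}
```

Because each FD_* invocation only ever sees an index in [0, FD_SETSIZE), this pattern stays within the POSIX contract and survives fortified builds.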
FD_SET fortification was implemented in glibc 2.15.
Author: Ulrich Drepper <email@example.com>
Date: Thu Sep 8 19:48:47 2011 -0400
Add range checking for FD_SET, FD_CLR, and FD_ISSET
These checks break programs compiled with _FORTIFY_SOURCE that allocate fd_sets on the heap. Allocating enlarged sets this way has long been supported by Linux, all the BSDs, and many commercial Unixes as a way to avoid the FD_SETSIZE limit.
Please consider revising the checks to detect explicitly allocated fd_sets, or add a preprocessor flag to disable the check.