The pthread_once algorithm (see nptl/sysdeps/unix/sysv/linux/pthread_once.c after the pthread_once unification) has an ABA issue related to the fork generation: if an initialization is interrupted, the process then forks 2^30 times (30 bits of once_control are used for the fork generation), and we try to initialize again, we can deadlock because we can no longer distinguish the in-progress and interrupted cases. This should be rather unlikely to hit in practice, given that a thread would need to stay suspended across exactly 2^30 fork calls; also, any subsequent pthread_once call on the same pthread_once instance will wake the futex_wait again.
(In reply to Torvald Riegel from comment #0)
> The pthread_once algorithm (see nptl/sysdeps/unix/sysv/linux/pthread_once.c
> after the pthread_once unification) has an ABA issue related to the fork
> generation: If an initialization is interrupted, we then fork 2^30 times (30
> bits of once_control are used for the fork generation), and try to
> initialize again, we can deadlock because we can't distinguish the
> in-progress and interrupted cases anymore.
>
> This should be rather unlikely to hit in practice given that we need to be
> suspended for exactly 2^30 fork calls; also, any subsequent pthread_once
> call to the same pthread_once instance will wake the futex_wait again.

My problem with "unlikely" is that eventually this leads to security issues. At the very least I'd like to know the cost of detecting the overflow and asserting, which prevents deadlock by killing the process.
I don't see a significant difference between a deadlock and killing the process, at least from a security perspective. With the latter, a user might become aware of the problem without starting a debugger; with the former, just another pthread_once call would repair the deadlock, making this a benign failure (performance aside).