This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: [PATCH 4/4] rseq registration tests (v2)
- From: Carlos O'Donell <codonell at redhat dot com>
- To: Florian Weimer <fweimer at redhat dot com>
- Cc: Mathieu Desnoyers <mathieu dot desnoyers at efficios dot com>, carlos <carlos at redhat dot com>, Joseph Myers <joseph at codesourcery dot com>, Szabolcs Nagy <szabolcs dot nagy at arm dot com>, libc-alpha <libc-alpha at sourceware dot org>, Thomas Gleixner <tglx at linutronix dot de>, Ben Maurer <bmaurer at fb dot com>, Peter Zijlstra <peterz at infradead dot org>, "Paul E. McKenney" <paulmck at linux dot vnet dot ibm dot com>, Boqun Feng <boqun dot feng at gmail dot com>, Will Deacon <will dot deacon at arm dot com>, Dave Watson <davejwatson at fb dot com>, Paul Turner <pjt at google dot com>
- Date: Fri, 5 Apr 2019 13:38:48 -0400
- Subject: Re: [PATCH 4/4] rseq registration tests (v2)
On 4/5/19 11:27 AM, Florian Weimer wrote:
> * Carlos O'Donell:
>> On 4/5/19 6:01 AM, Florian Weimer wrote:
>>> * Carlos O'Donell:
>>>> The above commit is a good example of a failure to provide a comment
>>>> that explains the intent of the implementation, so you have no idea
>>>> why 1MiB was selected. Magic numbers should have comments, and a
>>>> patch like the one you reference would not be accepted today.
>>>>
>>>> The only real worry we have with testing is the thread reap rate,
>>>> which seems to be slow in the kernel; we have sometimes seen the
>>>> kernel be unable to clone new threads for this reason. Even then, on
>>>> the worst architecture, hppa, I can create ~300 threads in a test
>>>> without any problems.

>>> Delayed reaping in the kernel (after signaling thread exit) does *not*
>>> affect the stack allocation. With a valid test, the stack is queued for
>>> reuse. Only kernel-side data structures stick around.

>> Unless you run out of mappings? The kernel must handle CLONE_CHILD_CLEARTID
>> in a timely fashion or glibc will be unable to free the stacks, and the
>> cache could grow beyond its maximum size (note that free_stacks() is only
>> a one-shot attempt to bring the cache back under the limit and does not
>> need to succeed).

> The reaping problem is that we get the CLONE_CHILD_CLEARTID notification
> (and the application spawns a new thread) before the old thread is
> completely gone. It's no longer running, so we can safely remove the
> stack, but not all kernel data structures have been deallocated at that
> point.

A call to pthread_join does not re-evaluate the stack cache limits and does
not free anything from the cache.

Therefore you can have hundreds of threads exit, go through free_stacks(),
fail to free their stacks, and *then* hit pthread_join(), still fail to
free any stacks (because we don't re-reap the stacks there; is that a flaw
in our __deallocate_stack() implementation?), and then try to do some other
operation that requires memory and run out.

> Or do we have tests that spawn detached threads in a loop, expecting not
> to exceed the thread/task limit of the user? That's a different problem
> and probably an invalid test.

No, that only happens in our 4 test cases and doesn't involve many threads.