
Re: [PATCH 4/4] rseq registration tests (v2)


On 4/5/19 11:27 AM, Florian Weimer wrote:
> * Carlos O'Donell:
>
>> On 4/5/19 6:01 AM, Florian Weimer wrote:
>>> * Carlos O'Donell:

>>>> The above commit is a good example of a failure to provide a comment
>>>> that gives intent for the implementation, and therefore you have no
>>>> idea why 1MiB was selected.  Magic numbers should have comments, and
>>>> a patch like the one you reference would not be accepted today.
>>>>
>>>> The only real worry we have with testing is the thread reap rate, which
>>>> seems to be slow in the kernel; sometimes we have seen the kernel be
>>>> unable to clone new threads for this reason.  Even then, on the worst
>>>> architecture, hppa, I can create ~300 threads in a test without any
>>>> problems.
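
(For illustration: a minimal sketch of this kind of create/join stress test.
The 300-iteration count echoes the hppa figure above; the error handling and
names are assumptions, not code from the actual glibc tests.)

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void *
thread_func (void *arg)
{
  return arg;
}

int
main (void)
{
  for (int i = 0; i < 300; ++i)
    {
      pthread_t thr;
      int err = pthread_create (&thr, NULL, thread_func, NULL);
      if (err != 0)
        {
          /* EAGAIN here would be the reap-rate symptom: exited but
             not-yet-reaped tasks still count against kernel limits.  */
          fprintf (stderr, "pthread_create: %s (iteration %d)\n",
                   strerror (err), i);
          exit (EXIT_FAILURE);
        }
      if (pthread_join (thr, NULL) != 0)
        exit (EXIT_FAILURE);
    }
  return 0;
}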

>>> Delayed reaping in the kernel (after signaling thread exit) does *not*
>>> affect the stack allocation.  With a valid test, the stack is queued for
>>> reuse.  Only kernel-side data structures stick around.

>> Unless you run out of mappings? The kernel must handle CLONE_CHILD_CLEARTID
>> in a timely fashion or glibc will be unable to free the stacks, and the
>> cache could grow beyond the maximum limit (note that free_stacks() is only
>> a one-shot attempt to lower the limit and does not need to succeed).
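
(Aside, to make the contract explicit: a sketch of the CLONE_CHILD_CLEARTID
handshake being relied on here.  The struct and helper below are hypothetical
stand-ins for glibc's internal struct pthread bookkeeping, not the real nptl
code.)

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for the descriptor kept with each cached stack;
   the real struct pthread in nptl is far more elaborate.  */
struct cached_stack
{
  atomic_int tid;              /* The kernel writes 0 here at thread exit
                                  (CLONE_CHILD_CLEARTID) and then does a
                                  FUTEX_WAKE on the word.  */
  void *base;                  /* Stack mapping and its length.  */
  size_t size;
  struct cached_stack *next;   /* Free-list linkage.  */
};

/* The stack may be unmapped or reused only once the kernel has cleared
   the CTID word; until then, the exiting thread may still be running on
   this very stack.  */
static bool
stack_is_reusable (struct cached_stack *st)
{
  return atomic_load (&st->tid) == 0;
}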

> The reaping problem is that we get the CLONE_CHILD_CLEARTID notification
> (and the application spawns a new thread) before the old thread is
> completely gone.  It's no longer running, so we can safely remove the
> stack, but not all kernel data structures have been deallocated at that
> point.

A call to pthread_join does not re-evaluate the stack cache limits and does
not free anything from the cache.

Therefore you can have hundreds of threads exit, go through free_stacks(),
fail to free their stacks, *then* hit pthread_join() and still fail to
free any stacks (because we don't re-reap the stacks there; is that a
flaw in our __deallocate_stack() implementation?), and then attempt some
other operation that requires memory and run out.
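
To make the failure mode concrete, here is a simplified, hypothetical sketch
of the one-shot trimming described above, reusing the cached_stack stand-in
from the earlier sketch.  It is an illustration only, not the actual
allocatestack.c code (there the descriptor lives on the stack mapping
itself, so unmapping the stack also frees it):

#include <sys/mman.h>

static struct cached_stack *stack_cache;   /* cached-stack free list  */
static size_t stack_cache_actsize;         /* bytes currently cached  */

static void
free_stacks_once (size_t limit)
{
  struct cached_stack **p = &stack_cache;
  while (*p != NULL && stack_cache_actsize > limit)
    {
      struct cached_stack *st = *p;
      /* Skip stacks whose CTID word the kernel has not cleared yet;
         the exited thread may still be running on them.  */
      if (!stack_is_reusable (st))
        {
          p = &st->next;
          continue;
        }
      *p = st->next;
      stack_cache_actsize -= st->size;
      munmap (st->base, st->size);
    }
  /* One shot: if every entry had to be skipped, the cache stays over
     the limit, and nothing (pthread_join included) retries the trim.  */
}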

> Or do we have tests that spawn detached threads in a loop, expecting not
> to exceed the thread/task limit of the user?  That's a different problem
> and probably an invalid test.

No, that only happens in our 4 test cases and doesn't involve many threads.
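
(For reference, the detached-thread pattern in question would look roughly
like this hypothetical reduction, not one of the actual tests:)

#include <pthread.h>

static void *
worker (void *arg)
{
  return arg;
}

/* The probably-invalid pattern: spawn detached threads in a loop with
   nothing synchronizing against thread exit.  Not-yet-reaped tasks still
   count toward the user's task limit (RLIMIT_NPROC), so pthread_create
   can fail with EAGAIN through no fault of glibc.  */
int
spawn_detached_loop (int n)
{
  pthread_attr_t attr;
  int err = pthread_attr_init (&attr);
  if (err != 0)
    return err;
  pthread_attr_setdetachstate (&attr, PTHREAD_CREATE_DETACHED);
  for (int i = 0; i < n && err == 0; ++i)
    {
      pthread_t thr;
      err = pthread_create (&thr, &attr, worker, NULL);
    }
  pthread_attr_destroy (&attr);
  return err;   /* typically EAGAIN once the limit is hit */
}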
--
Cheers,
Carlos.

