This is the mail archive of the glibc-bugs@sourceware.org mailing list for the glibc project.



[Bug libc/16291] feature request: provide simpler ways to compute stack and tls boundaries


http://sourceware.org/bugzilla/show_bug.cgi?id=16291

--- Comment #47 from Rich Felker <bugdal at aerifal dot cx> ---
On Tue, Feb 04, 2014 at 02:18:14PM +0000,
konstantin.s.serebryany at gmail dot com wrote:
> Properly catching thread exit is a challenge by itself.
> Today we are using yet another hack to catch thread exit -- I wonder
> if you could suggest a better approach.
> I added a relevant section to the wiki page above.

I don't see the lack of a hook-based approach for DTLS destruction as
a new problem, just another symptom of your lack of a good approach to
catching thread exit. Your use case is harder than the usual one
(which can be achieved simply by wrapping pthread_create to use a
special start function that installs a cancellation handler and then
calls the real start function) because you want to catch the point
where the thread is truly dead (all dtors having been called, DTLS
freed, etc.), which is, formally, supposed to be invisible to
application code (i.e. atomic with respect to thread exit).
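
For the usual case, the pthread_create wrapping looks roughly like the
following untested sketch. on_thread_exit() stands in for whatever
hook your tool wants called; note that it necessarily fires before the
TSD dtors and DTLS teardown, which is exactly the limitation described
above.

#define _GNU_SOURCE
#include <pthread.h>
#include <dlfcn.h>
#include <stdlib.h>
#include <errno.h>

/* Hypothetical tool callback: runs when the start function returns or
   the thread is cancelled, but before dtors/DTLS teardown. */
extern void on_thread_exit(void);

typedef int (*create_fn)(pthread_t *, const pthread_attr_t *,
                         void *(*)(void *), void *);

struct start_args {
    void *(*real_start)(void *);
    void *real_arg;
};

static void exit_hook(void *unused)
{
    (void)unused;
    on_thread_exit();
}

static void *start_wrapper(void *p)
{
    struct start_args a = *(struct start_args *)p;
    free(p);
    void *ret;
    /* Cleanup handlers also run on pthread_exit and cancellation. */
    pthread_cleanup_push(exit_hook, NULL);
    ret = a.real_start(a.real_arg);
    pthread_cleanup_pop(1);   /* run the hook on normal return too */
    return ret;
}

int pthread_create(pthread_t *td, const pthread_attr_t *attr,
                   void *(*start)(void *), void *arg)
{
    static create_fn real_create;
    if (!real_create)
        real_create = (create_fn)dlsym(RTLD_NEXT, "pthread_create");

    struct start_args *a = malloc(sizeof *a);
    if (!a)
        return EAGAIN;
    a->real_start = start;
    a->real_arg = arg;
    return real_create(td, attr, start_wrapper, a);
}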

I'm not yet sure what the right approach to this is, but I'm skeptical
of providing a public API to allow applications to observe a state
that's not supposed to be observable.

> > or a call to dlclose. 
> 
> When dlclose happens in one thread, we need to do something with DTLS in 
> all threads, which is tricky, if at all possible, w/o knowing 
> how exactly glibc itself handles this case. 
> A hook-based approach will not have this problem.

If the TLS query API works correctly, you should not care how glibc
implements it internally. You should just trust the results to be
correct after dlclose returns, so that wrapping dlclose to call the
real dlclose and then re-query after it returns just works.
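
Roughly, as an untested sketch (tool_requery_tls_ranges() stands in
for re-reading the boundaries of every known thread through whatever
query API ends up being exported):

#define _GNU_SOURCE
#include <dlfcn.h>

/* Hypothetical: re-read stack/TLS ranges for all known threads. */
extern void tool_requery_tls_ranges(void);

int dlclose(void *handle)
{
    static int (*real_dlclose)(void *);
    if (!real_dlclose)
        real_dlclose = (int (*)(void *))dlsym(RTLD_NEXT, "dlclose");

    int ret = real_dlclose(handle);

    /* By the time the real dlclose has returned, the query results
       must already reflect any DTLS that was freed. */
    tool_requery_tls_ranges();
    return ret;
}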

Of course this has a race window where the memory has already been
freed but you don't know it yet. I'm not sure if you care about that,
but if you do, I think the right approach is just to wrap mmap and
malloc so that you can see if they allocate a range you thought
belonged to something else, and if so, patch up your records.
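
Something like the following untested sketch, where
range_overlaps_recorded() and forget_recorded_range() stand in for
your own bookkeeping of ranges you believe are live stacks or DTLS
blocks:

#define _GNU_SOURCE
#include <dlfcn.h>
#include <stddef.h>
#include <sys/mman.h>

/* Hypothetical bookkeeping over ranges the tool believes belong to
   some thread's stack or (D)TLS. */
extern int  range_overlaps_recorded(const void *p, size_t len);
extern void forget_recorded_range(const void *p, size_t len);

static void check_reuse(const void *p, size_t len)
{
    /* If the allocator handed back memory still on record as someone
       else's stack/DTLS, that record is stale -- patch it up. */
    if (p && range_overlaps_recorded(p, len))
        forget_recorded_range(p, len);
}

void *malloc(size_t n)
{
    static void *(*real_malloc)(size_t);
    if (!real_malloc)
        real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");
    void *p = real_malloc(n);
    check_reuse(p, n);
    return p;
}

void *mmap(void *addr, size_t len, int prot, int flags, int fd, off_t off)
{
    static void *(*real_mmap)(void *, size_t, int, int, int, off_t);
    if (!real_mmap)
        real_mmap = (void *(*)(void *, size_t, int, int, int, off_t))
                    dlsym(RTLD_NEXT, "mmap");
    void *p = real_mmap(addr, len, prot, flags, fd, off);
    check_reuse(p == MAP_FAILED ? NULL : p, len);
    return p;
}

A real interposer also has to break the dlsym/malloc recursion (e.g.
with a small static bootstrap buffer) and track free/munmap; that is
omitted here.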


