This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.


Re: [PATCH v6] Also use l_tls_dtor_count to decide on object unload (BZ #18657)


On Thu, Jul 23, 2015 at 04:01:43PM -0400, Carlos O'Donell wrote:
> However, it remains that we use "load lock" and "load_lock" to talk about
> "dl_load_lock".
> 
> I would prefer that all references be made to the real name of the lock
> e.g. "dl_load_lock", so that in the event that some day we split the lock in
> two we don't have to go through and clarify which of the two load locks
> we're talking about.

Done, I've pushed this now:

Siddhesh

commit a81a00ff94a43af85f7aefceb6d31f3c0f11151d
Author: Siddhesh Poyarekar <siddhesh@redhat.com>
Date:   Fri Jul 24 06:09:47 2015 +0530

    Mention dl_load_lock by name in the comments
    
    Mention dl_load_lock by name instead of just 'load lock' in the
    comments.  This makes it unambiguous which lock we're talking about.

diff --git a/ChangeLog b/ChangeLog
index f1b7bd7..d43a564 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,8 @@
+2015-07-24  Siddhesh Poyarekar  <siddhesh@redhat.com>
+
+	* stdlib/cxa_thread_atexit_impl.c: Use the lock name dl_load_lock
+	instead of just saying load lock in the comments.
+
 2015-07-23  Roland McGrath  <roland@hack.frob.com>
 
 	* sysdeps/unix/Subdirs: Moved ...
diff --git a/stdlib/cxa_thread_atexit_impl.c b/stdlib/cxa_thread_atexit_impl.c
index 8e26380..2d5d56a 100644
--- a/stdlib/cxa_thread_atexit_impl.c
+++ b/stdlib/cxa_thread_atexit_impl.c
@@ -25,14 +25,15 @@
    combinations of all three functions are the link map list, a link map for a
    DSO and the link map member l_tls_dtor_count.
 
-   __cxa_thread_atexit_impl acquires the load_lock before accessing any shared
-   state and hence multiple of its instances can safely execute concurrently.
+   __cxa_thread_atexit_impl acquires the dl_load_lock before accessing any
+   shared state and hence multiple of its instances can safely execute
+   concurrently.
 
-   _dl_close_worker acquires the load_lock before accessing any shared state as
-   well and hence can concurrently execute multiple of its own instances as
+   _dl_close_worker acquires the dl_load_lock before accessing any shared state
+   as well and hence can concurrently execute multiple of its own instances as
    well as those of __cxa_thread_atexit_impl safely.  Not all accesses to
-   l_tls_dtor_count are protected by the load lock, so we need to synchronize
-   using atomics.
+   l_tls_dtor_count are protected by the dl_load_lock, so we need to
+   synchronize using atomics.
 
    __call_tls_dtors accesses the l_tls_dtor_count without taking the lock; it
    decrements the value by one.  It does not need the big lock because it does
@@ -51,8 +52,8 @@
    it is safe to unload the DSO.  Hence, to ensure that this does not happen,
    the following conditions must be met:
 
-   1. In _dl_close_worker, the l_tls_dtor_count load happens before the DSO
-      is unload and its link map is freed
+   1. In _dl_close_worker, the l_tls_dtor_count load happens before the DSO is
+      unloaded and its link map is freed
    2. The link map dereference in __call_tls_dtors happens before the
       l_tls_dtor_count dereference.
 
@@ -122,7 +123,7 @@ __cxa_thread_atexit_impl (dtor_func func, void *obj, void *dso_symbol)
 
   /* This increment may only be concurrently observed either by the decrement
      in __call_tls_dtors since the other l_tls_dtor_count access in
-     _dl_close_worker is protected by the load lock.  The execution in
+     _dl_close_worker is protected by the dl_load_lock.  The execution in
      __call_tls_dtors does not really depend on this value beyond the fact that
      it should be atomic, so Relaxed MO should be sufficient.  */
   atomic_fetch_add_relaxed (&lm_cache->l_tls_dtor_count, 1);
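
For readers following along, here is a minimal standalone sketch of the
synchronization scheme the comments above describe.  The names (struct dso,
tls_dtor_count, register_dtor, run_dtor, may_unload) are illustrative
stand-ins rather than glibc internals; dl_load_lock itself is elided and only
the atomic ordering between the three paths is shown.

/* Sketch of the l_tls_dtor_count synchronization described above.
   Illustrative only; not the actual glibc implementation.  */
#include <stdatomic.h>
#include <stdbool.h>

struct dso
{
  atomic_uint tls_dtor_count;	/* Stand-in for l_tls_dtor_count.  */
};

/* Registration path (analogous to __cxa_thread_atexit_impl): runs with the
   big lock (dl_load_lock) held, so the only concurrent access is the
   decrement in run_dtor and a relaxed increment is sufficient.  */
void
register_dtor (struct dso *map)
{
  atomic_fetch_add_explicit (&map->tls_dtor_count, 1, memory_order_relaxed);
}

/* Destructor path (analogous to __call_tls_dtors): does not take the lock.
   The release decrement ensures that the preceding dereference of MAP
   happens before the count can be observed as zero by the unload path.  */
void
run_dtor (struct dso *map)
{
  /* ... dereference MAP and run the registered destructor here ...  */
  atomic_fetch_sub_explicit (&map->tls_dtor_count, 1, memory_order_release);
}

/* Unload path (analogous to _dl_close_worker): runs with the lock held.
   The acquire load pairs with the release decrement, so a zero count
   guarantees the destructor side is done with MAP before the DSO is
   unloaded and its link map freed.  */
bool
may_unload (struct dso *map)
{
  return atomic_load_explicit (&map->tls_dtor_count,
			       memory_order_acquire) == 0;
}

The acquire load in may_unload pairing with the release decrement in run_dtor
is what provides conditions 1 and 2 listed in the comment above.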

