This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: [PATCH v2 3/3] Mutex: Optimize adaptive spin algorithm


On 04/25/2018 04:56 AM, Kemi Wang wrote:
@@ -124,21 +125,24 @@ __pthread_mutex_lock (pthread_mutex_t *mutex)
        if (LLL_MUTEX_TRYLOCK (mutex) != 0)
  	{
  	  int cnt = 0;
+	  int max_cnt = MIN (__mutex_aconf.spin_count,
+			mutex->__data.__spins * 2 + 100);
+
+      /* MO read while spinning */
+      do
+        {
+         atomic_spin_nop ();
+        }
+      while (atomic_load_relaxed (&mutex->__data.__lock) != 0 &&
+            ++cnt < max_cnt);
+        /* Try to acquire the lock if lock is available or the spin count
+         * is run out, call into kernel to block if fails
+         */
+      if (LLL_MUTEX_TRYLOCK (mutex) != 0)
+        LLL_MUTEX_LOCK (mutex);
+      mutex->__data.__spins += (cnt - mutex->__data.__spins) / 8;
+    }

The indentation is off. Comments should end with a “. ” (dot and two spaces). Multi-line comments do not start with “*” on subsequent lines. We don't use braces when we can avoid them. Operators such as “&&” should be on the following line when breaking up lines.

Why is the LLL_MUTEX_TRYLOCK call still needed? Shouldn't an unconditional call to LLL_MUTEX_LOCK be sufficient?

But the real question is whether the old way of doing CAS in a loop is beneficial on other, non-Intel architectures. You either need to get broad consensus from the large SMP architectures (various aarch64 implementations, IBM POWER and Z), or somehow make this opt-in at the source level.

Ideally, we would also get an ack from AMD for the x86 change, but they seem not particularly active on libc-alpha, so that may not be realistic.

Thanks,
Florian
