This is the mail archive of the libc-hacker@sourceware.cygnus.com mailing list for the glibc project.

Note that libc-hacker is a closed list. You may look at the archives of this list, but subscription and posting are not open.



Re: [PATCH] membar for sparc64


On Mon, Apr 17, 2000 at 11:16:36PM -0700, Ulrich Drepper wrote:
> Jakub Jelinek <jakub@redhat.com> writes:
> 
> > 2000-04-17  Jakub Jelinek  <jakub@redhat.com>
> > 
> > 	* sysdeps/sparc/sparc64/pt-machine.h (MEMORY_BARRIER): Make sure all
> > 	stores before MEMORY_BARRIER complete before all stores after it and
> > 	similarly with loads.
> 
> Can you take a look at this again, now with the WRITE_MEMORY_BARRIERs
> in place?

stbar is deprecated and does something else: it is functionally equivalent to
membar #StoreStore, i.e. roughly Alpha's wmb.  So I've defined MEMORY_BARRIER
to be a full barrier (like Alpha's mb) and introduced READ_MEMORY_BARRIER,
which enforces Load->Load and Load->Store ordering.

2000-04-18  Jakub Jelinek  <jakub@redhat.com>

	* sysdeps/sparc/sparc64/pt-machine.h (MEMORY_BARRIER): Use membar,
	not stbar.
	(READ_MEMORY_BARRIER): Define.
	* spinlock.c (__pthread_spin_unlock): Use READ_MEMORY_BARRIER, not
	MEMORY_BARRIER.
	* internals.h (READ_MEMORY_BARRIER): Define if not defined in sysdep
	headers.

--- libc/linuxthreads/sysdeps/sparc/sparc64/pt-machine.h.jj	Tue Apr 18 08:13:13 2000
+++ libc/linuxthreads/sysdeps/sparc/sparc64/pt-machine.h	Tue Apr 18 08:36:56 2000
@@ -38,9 +38,11 @@ testandset (int *spinlock)
 
 
 /* Memory barrier; default is to do nothing */
-/* FIXME: is stbar OK, or should we use the more general membar instruction?
-   If so, which mode to pass to membar? */
-#define MEMORY_BARRIER() __asm__ __volatile__("stbar" : : : "memory")
+#define MEMORY_BARRIER() \
+     __asm__ __volatile__("membar #LoadLoad | #LoadStore | #StoreLoad | #StoreStore" : : : "memory")
+/* Read barrier.  */
+#define READ_MEMORY_BARRIER() \
+     __asm__ __volatile__("membar #LoadLoad | #LoadStore" : : : "memory")
 /* Write barrier.  */
 #define WRITE_MEMORY_BARRIER() \
      __asm__ __volatile__("membar #StoreLoad | #StoreStore" : : : "memory")
--- libc/linuxthreads/spinlock.c.jj	Tue Apr 18 08:13:13 2000
+++ libc/linuxthreads/spinlock.c	Tue Apr 18 08:38:36 2000
@@ -122,12 +122,12 @@ again:
        several iterations of the while loop.  Some processors (e.g.
        multiprocessor Alphas) could perform such reordering even though
        the loads are dependent. */
-    MEMORY_BARRIER();
+    READ_MEMORY_BARRIER();
     thr = *ptr;
   }
   /* Prevent reordering of the load of lock->__status above and
      thr->p_nextlock below */
-  MEMORY_BARRIER();
+  READ_MEMORY_BARRIER();
   /* Remove max prio thread from waiting list. */
   if (maxptr == (pthread_descr *) &lock->__status) {
     /* If max prio thread is at head, remove it with compare-and-swap
--- libc/linuxthreads/internals.h.jj	Tue Apr 18 08:13:13 2000
+++ libc/linuxthreads/internals.h	Tue Apr 18 08:38:02 2000
@@ -359,10 +359,13 @@ static inline pthread_descr thread_self 
 
 /* If MEMORY_BARRIER isn't defined in pt-machine.h, assume the architecture
    doesn't need a memory barrier instruction (e.g. Intel x86).  Some
-   architectures distinguish between normal/read and write barriers.  */
+   architectures distinguish between full, read and write barriers.  */
 
 #ifndef MEMORY_BARRIER
 #define MEMORY_BARRIER()
+#endif
+#ifndef READ_MEMORY_BARRIER
+#define READ_MEMORY_BARRIER() MEMORY_BARRIER()
 #endif
 #ifndef WRITE_MEMORY_BARRIER
 #define WRITE_MEMORY_BARRIER() MEMORY_BARRIER()

	Jakub
