This is the mail archive of the libc-alpha@sources.redhat.com mailing list for the glibc project.



linuxthreads may deadlock with __pthread_sig_debug enabled


If the application blocks or masks __pthread_sig_debug and then
creates a thread, the manager thread can't notify the debugger about
the new thread, so the debugger never resumes the manager and thread
creation deadlocks.  This patch fixes that.

The problem was originally observed on frv-uclinux, but it occurs on
i686-pc-linux-gnu as well when rda (the Remote Debug Agent), rather
than gdb itself, is used to ptrace the process.

Ok to install?

Index: libpthread/linuxthreads/ChangeLog
from  Alexandre Oliva  <aoliva@redhat.com>

	* signals.c (pthread_sigmask): Don't ever block or mask
	__pthread_sig_debug.

Index: libpthread/linuxthreads/signals.c
===================================================================
RCS file: /var/cvs/uClibc/libpthread/linuxthreads/signals.c,v
retrieving revision 1.4
diff -u -p -r1.4 signals.c
--- libpthread/linuxthreads/signals.c 3 Mar 2003 20:57:18 -0000 1.4
+++ libpthread/linuxthreads/signals.c 5 Apr 2004 21:31:40 -0000
@@ -38,9 +38,13 @@ int pthread_sigmask(int how, const sigse
     case SIG_SETMASK:
       sigaddset(&mask, __pthread_sig_restart);
       sigdelset(&mask, __pthread_sig_cancel);
+      if (__pthread_sig_debug > 0)
+	sigdelset(&mask, __pthread_sig_debug);
       break;
     case SIG_BLOCK:
       sigdelset(&mask, __pthread_sig_cancel);
+      if (__pthread_sig_debug > 0)
+	sigdelset(&mask, __pthread_sig_debug);
       break;
     case SIG_UNBLOCK:
       sigdelset(&mask, __pthread_sig_restart);
-- 
Alexandre Oliva             http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   aoliva@{redhat.com, gcc.gnu.org}
Free Software Evangelist  oliva@{lsd.ic.unicamp.br, gnu.org}
