Question about sigset.h
Neale.Ferguson@softwareAG-usa.com
Fri Jul 20 08:27:00 GMT 2001
In sysdeps/unix/sysv/linux/bits/sigset.h there are the following lines:
/* A `sigset_t' has a bit for each signal. */
#define _SIGSET_NWORDS (1024 / (8 * sizeof (unsigned long int)))
Which results in a value of 16 in a 64-bit environment or 32 in a 32-bit
one. In signal.h the number of signals is defined as 64. Should there
be a correspondence between these values? The Linux kernel uses the
NSIG value to determine how large a signal mask is and thus comes up
with a 1 or 2 (64/32-bit). This isn't much of a problem until the kernel
builds a ucontext_t using its values and passes it to an application
built using the glibc value. I'm not sure whether there's a fault here or,
if so, where the problem belongs; I'm simply after someone who may
be able to explain the origin of the sigset.h value. I'm assuming a
POSIX spec sets out a maximum number of signals of this order.
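For what it's worth, a small test program (just a sketch, assuming glibc on
Linux, and taking the 64 below from the kernel's NSIG mentioned above) shows
the size difference between glibc's sigset_t and the mask the kernel uses:

#include <stdio.h>
#include <signal.h>

int main (void)
{
  /* glibc: _SIGSET_NWORDS words of unsigned long, i.e. 1024 bits.  */
  printf ("glibc sigset_t: %lu bytes\n",
          (unsigned long) sizeof (sigset_t));

  /* kernel: NSIG (64) bits, i.e. one 64-bit word or two 32-bit words.  */
  printf ("kernel mask:    %lu bytes\n",
          (unsigned long) (64 / 8));
  return 0;
}

On my box this prints 128 bytes for the glibc type versus 8 bytes for the
kernel mask, which is the mismatch I'm asking about.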
TIA... Neale.