This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: What *is* the API for sched_getaffinity? Should sched_getaffinity always succeed when using cpu_set_t?
- From: Florian Weimer <fweimer at redhat dot com>
- To: "Michael Kerrisk (man-pages)" <mtk dot manpages at gmail dot com>, "Carlos O'Donell" <carlos at redhat dot com>, Roland McGrath <roland at hack dot frob dot com>
- Cc: KOSAKI Motohiro <kosaki dot motohiro at gmail dot com>, libc-alpha <libc-alpha at sourceware dot org>, linux-man at vger dot kernel dot org
- Date: Wed, 01 Jul 2015 14:37:40 +0200
- Subject: Re: What *is* the API for sched_getaffinity? Should sched_getaffinity always succeed when using cpu_set_t?
- Authentication-results: sourceware.org; auth=none
- References: <51E42BFE dot 7000301 at redhat dot com> <51E4A0BB dot 2070802 at gmail dot com> <51E4A123 dot 9070001 at gmail dot com> <51E6F3ED dot 8000502 at redhat dot com> <51E6F956 dot 5050902 at gmail dot com> <51E714DE dot 6060802 at redhat dot com> <CAHGf_=oZW3kNA3V-9u+BZNs3tL3JKCsO2a0Q6f0iJzo=N4Wb8w at mail dot gmail dot com> <51E7B205 dot 3060905 at redhat dot com> <20130722214335 dot D9AFF2C06F at topped-with-meat dot com> <51EDB378 dot 8070301 at redhat dot com> <558D6171 dot 1060901 at gmail dot com> <558DB0A0 dot 2040707 at gmail dot com>
On 06/26/2015 10:05 PM, Michael Kerrisk (man-pages) wrote:
> +.SS Handling systems with more than 1024 CPUs
> +The
> +.I cpu_set_t
> +data type used by glibc has a fixed size of 128 bytes,
> +meaning that the maximum CPU number that can be represented is 1023.
> +.\" FIXME . See https://sourceware.org/bugzilla/show_bug.cgi?id=15630
> +.\" and https://sourceware.org/ml/libc-alpha/2013-07/msg00288.html
> +If the system has more than 1024 CPUs, then calls of the form:
> + sched_getaffinity(pid, sizeof(cpu_set_t), &mask);
> +will fail with the error
> +.BR EINVAL ,
> +the error produced by the underlying system call for the case where the
> +.I mask
> +size specified in
> +.I cpusetsize
> +is smaller than the size of the affinity mask used by the kernel.
I think it is best to leave this as unspecified as possible. Kernel
behavior already changed once, and I can imagine it changing again.
Carlos and I tried to get clarification of the future direction of the
kernel interface. No reply so far, unless I missed something.
> +The underlying system calls (which represent CPU masks as bit masks of type
> +.IR "unsigned long\ *" )
> +impose no restriction on the size of the mask.
> +To handle systems with more than 1024 CPUs, one must dynamically allocate the
> +.I mask
> +argument using
> +.BR CPU_ALLOC (3)
> +and manipulate the mask using the "_S" macros described in
> +.BR CPU_ALLOC (3).
> +Using an allocation based on the number of online CPUs:
> + cpu_set_t *mask = CPU_ALLOC(CPU_ALLOC_SIZE(
> + sysconf(_SC_NPROCESSORS_CONF)));
I believe this is incorrect in several ways:
CPU_ALLOC uses the raw CPU counts. CPU_ALLOC_SIZE converts from the raw
count to the size in bytes. (This API is misdesigned.)
sysconf(_SC_NPROCESSORS_CONF) is not related to the kernel CPU mask
size, so it is not the correct value.
> +is probably sufficient, although the value returned by the
> +.BR sysconf ()
> +call can in theory change during the lifetime of the process.
> +Alternatively, one can obtain a value that is guaranteed to be stable for
> +the lifetime of the process by probing for the size of the required mask using
> +.BR sched_getaffinity ()
> +calls with increasing mask sizes until the call does not fail with the error
> +.BR EINVAL .
This is the only possible way right now if you do not want to read the
kernel's CPU mask size from /proc or /sys.
It's also worth noting that the system call and the glibc function have
different return values: the raw system call returns the number of bytes
copied into the mask buffer, while the glibc wrapper returns 0 on success.
Florian Weimer / Red Hat Product Security