This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: What *is* the API for sched_getaffinity? Should sched_getaffinity always succeed when using cpu_set_t?


On 07/21/2015 05:03 PM, Michael Kerrisk (man-pages) wrote:
> Hello Florian,
> 
> Thanks for your comments, and sorry for the delayed follow-up.
> 
> On 07/01/2015 02:37 PM, Florian Weimer wrote:
>> On 06/26/2015 10:05 PM, Michael Kerrisk (man-pages) wrote:
>>
>>> +.SS Handling systems with more than 1024 CPUs
>>> +The
>>> +.I cpu_set_t
>>> +data type used by glibc has a fixed size of 128 bytes,
>>> +meaning that the maximum CPU number that can be represented is 1023.
>>> +.\" FIXME . See https://sourceware.org/bugzilla/show_bug.cgi?id=15630
>>> +.\" and https://sourceware.org/ml/libc-alpha/2013-07/msg00288.html
>>> +If the system has more than 1024 CPUs, then calls of the form:
>>> +
>>> +    sched_getaffinity(pid, sizeof(cpu_set_t), &mask);
>>> +
>>> +will fail with the error
>>> +.BR EINVAL ,
>>> +the error produced by the underlying system call for the case where the
>>> +.I mask
>>> +size specified in
>>> +.I cpusetsize
>>> +is smaller than the size of the affinity mask used by the kernel.
>>
>> I think it is best to leave this as unspecified as possible.  Kernel
>> behavior already changed once, and I can imagine it changing again.
> 
> Hmmm. Something needs to be said about what the kernel is doing though.
> Otherwise, it's hard to make sense of this subsection. Did you have a
> suggested rewording that removes the piece you find problematic?

What about this?

“If the kernel affinity mask is larger than 1024 then
…
is smaller than the size of the affinity mask used by the kernel.
Depending on the system CPU topology, the kernel affinity mask can
be substantially larger than the number of active CPUs in the system.”

I.e., make clear that the size of the mask can be quite different from
the CPU count.

>    Handling systems with more than 1024 CPUs
>        The  underlying  system calls (which represent CPU masks as bit
>        masks of type unsigned long *) impose  no  restriction  on  the
>        size of the CPU mask.  However, the cpu_set_t data type used by
>        glibc has a fixed size of 128 bytes, meaning that  the  maximum
>        CPU  number that can be represented is 1023.  If the system has
>        more than 1024 CPUs, then calls of the form:
> 
>            sched_getaffinity(pid, sizeof(cpu_set_t), &mask);
> 
>        will fail with the error EINVAL, the error produced by the
>        underlying system call for the case where the mask size
>        specified in cpusetsize is smaller than the size of the
>        affinity mask used by the kernel.
> 
>        When  working  on  systems  with  more than 1024 CPUs, one must
>        dynamically allocate the mask argument.   Currently,  the  only
>        way  to do this is by probing for the size of the required mask
>        using sched_getaffinity()  calls  with  increasing  mask  sizes
>        (until the call does not fail with the error EINVAL).
> 
> Better?

“more than 1024 CPUs” should be “large [kernel CPU] affinity masks”
throughout.

-- 
Florian Weimer / Red Hat Product Security

