This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: What *is* the API for sched_getaffinity? Should sched_getaffinity always succeed when using cpu_set_t?
- From: Samuel Thibault <samuel dot thibault at ens-lyon dot org>
- To: Carlos O'Donell <carlos at redhat dot com>
- Cc: KOSAKI Motohiro <kosaki dot motohiro at gmail dot com>, libc-alpha <libc-alpha at sourceware dot org>
- Date: Thu, 18 Jul 2013 11:23:07 +0200
- Subject: Re: What *is* the API for sched_getaffinity? Should sched_getaffinity always succeed when using cpu_set_t?
- References: <51E42BFE dot 7000301 at redhat dot com> <51E4A0BB dot 2070802 at gmail dot com> <51E4A123 dot 9070001 at gmail dot com> <51E6F3ED dot 8000502 at redhat dot com> <51E6F956 dot 5050902 at gmail dot com> <51E714DE dot 6060802 at redhat dot com> <CAHGf_=oZW3kNA3V-9u+BZNs3tL3JKCsO2a0Q6f0iJzo=N4Wb8w at mail dot gmail dot com> <51E7B205 dot 3060905 at redhat dot com>
Carlos O'Donell, on Thu 18 Jul 2013 05:14:45 -0400, wrote:
> Either way the question remains:
>
> (1) Should glibc's sched_getaffinity never fail?
>
> or
>
> (2) Should glibc's sched_getaffinity fail with EINVAL when the size
> of the cpu set is smaller than the size of the affinity mask for
> all possible cpus?
>
> I believe that a call to sched_getaffinity should not fail.
>
> What do you think?
Ideally there would never have been a static cpu_set_t type, and then the
answer would have been (2). As it happens, the hwloc library, which is used
by many MPI implementations and batch schedulers and therefore cares a
great deal about systems with large numbers of CPUs, currently depends on
(2), because we had seen that behavior documented by Red Hat. We will
probably migrate to using "possible" instead, but that will take time to
propagate to deployed systems, and not necessarily faster than a newer
glibc that breaks the current behavior.
Samuel