This is the mail archive of the cygwin-developers mailing list for the Cygwin project.

Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: Implement sched_[gs]etaffinity()

On Apr 16 01:19, Mark Geisert wrote:
> On Fri, 12 Apr 2019, Corinna Vinschen wrote:
> > Yeah, right, I missed to notice that.  I'll add a few notes inline
> > over @ cygwin-patches.
> I've updated my code locally to account for your notes on cygwin-patches;
> thanks!  I've also spent some time researching Windows affinities vs Linux
> affinities and have come to some conclusions.  I'm airing these for review
> before I start coding in earnest.  I appreciate all comments from anybody
> interested.
> (1) On Linux, one deals with processor affinities using a huge mask that
> allows to choose from all processors on the system.  On Windows, one deals
> with processor affinities for only the current processor group, max 64
> processors in a group.  This implies conversion between the two "views" when
> getting or setting processor affinities on Cygwin.
> (2) On Linux, sched_get/setaffinity() take a pid_t argument, but it can be
> either a process id or a thread id.  If one selects a process id, the action
> affects just the main thread of that process.  On Windows, selecting the
> process id affects all threads of that process.
> (3) For completeness, Linux's pthread_get/setaffinity_np() should probably
> be supplied by the proposed code too.
> (4) I was looking at Cygwin's function format_proc_cpuinfo().  There's a
> call to __get_cpus_per_group(), which is implemented in .  I haven't
> seen in the MSDN docs whether each
> processor group is guaranteed to have the same number of processors.  I
> might even expect variations on a NUMA system.  Anybody know if one can
> depend on the group membership of the first processor group to apply to all
> groups?

Maybe this helps?

 "If the number of logical processors exceeds the maximum group size,
  Windows creates multiple groups by splitting the node into n groups,
  where the first n-1 groups have capacities that are equal to the group
  size."

We went over that already when creating the code in
format_proc_cpuinfo.  So, IIUC and IIRC, the idea is that the logical
CPUs are split into equal-sized chunks, along NUMA node borders on a
NUMA system, and the last group may (but seldom does) contain fewer
CPUs.  In the end, the important thing is that all groups have equal
size, except possibly the last one.


  WORD cpu_group = cpu_number / num_cpu_per_group;
  /* Note: (KAFFINITY) 1, not 1L -- long is only 32 bits on Windows.  */
  KAFFINITY cpu_mask = (KAFFINITY) 1 << (cpu_number % num_cpu_per_group);

That also means the transposition between the groupless Linux view
and the Windows group/mask view is fairly easy.

> (5) On Windows, a process starts out in a particular processor group.  One
> can then change thread affinities in such a way that some threads run in a
> different processor group than other threads of the same process.  The
> process becomes a "multi-group" process.  This has implications for the
> conversions discussed in (1).

Don't see how.  Care to explain?

> (6) On Linux, processor affinity is inherited across fork() and execve().
> I'll need to ensure Cygwin's implementation of those calls handle affinity
> the same way.

Just passing the INHERIT_PARENT_AFFINITY flag to CreateProcess{AsUser}
should do the trick.
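
Something along these lines (a hypothetical sketch, not the actual
Cygwin spawn code; INHERIT_PARENT_AFFINITY is a documented
CreateProcess creation flag since Windows 7):

```c
/* Windows-only sketch: spawn a child that inherits the parent's
   processor affinity via the INHERIT_PARENT_AFFINITY creation flag. */
#include <windows.h>

static BOOL
spawn_with_inherited_affinity (wchar_t *cmdline)
{
  STARTUPINFOW si = { sizeof si };
  PROCESS_INFORMATION pi;

  if (!CreateProcessW (NULL, cmdline, NULL, NULL, FALSE,
                       INHERIT_PARENT_AFFINITY, NULL, NULL, &si, &pi))
    return FALSE;
  CloseHandle (pi.hThread);
  CloseHandle (pi.hProcess);
  return TRUE;
}
```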


Corinna Vinschen
Cygwin Maintainer

