This is the mail archive of the cygwin-developers mailing list for the Cygwin project.



Re: Implement sched_[gs]etaffinity()


On Tue, 16 Apr 2019, Corinna Vinschen wrote:
On Apr 16 01:19, Mark Geisert wrote:
(4) I was looking at Cygwin's fhandler_proc.cc, function
format_proc_cpuinfo().  There's a call to __get_cpus_per_group() which is
implemented in miscfuncs.cc.  I haven't seen in the MSDN docs whether each
processor group is guaranteed to have the same number of processors.  I
might even expect variations on a NUMA system.  Anybody know if one can
depend on the group membership of the first processor group to apply to all
groups?

Maybe https://go.microsoft.com/fwlink/p/?linkid=147914 helps?

"If the number of logical processors exceeds the maximum group size,
 Windows creates multiple groups by splitting the node into n groups,
 where the first n-1 groups have capacities that are equal to the group
 size."

Great; thanks for that.

We went over that already when creating the code in format_proc_cpuinfo.
So, IIUC and IIRC, the idea is that the logical CPUs are split into
equal chunks, along NUMA node borders on a NUMA system, and the last
group potentially, but seldom, has fewer CPUs.
In the end, the important thing is that all groups have equal size,
except possibly the last one.

Therefore:

 WORD cpu_group = cpu_number / num_cpu_per_group;
 KAFFINITY cpu_mask = (KAFFINITY) 1 << (cpu_number % num_cpu_per_group);

That also means the translation between the groupless Linux scheme
and the Windows groups is fairly easy.

Yes, dealing with an array of unsigned longs vs bitblt ops FTW.

(5) On Windows, a process starts out in a particular processor group.  One
can then change thread affinities in such a way that some threads run in a
different processor group than other threads of the same process.  The
process becomes a "multi-group" process.  This has implications for the
conversions discussed in (1).

Don't see how.  Care to explain?

I was just whinging in advance that a single sched_getaffinity() or
sched_setaffinity() call will result in multiple Windows affinity ops
to gather/scatter among the processor groups the process belongs to.
At least they won't be bitblt ops.

(6) On Linux, processor affinity is inherited across fork() and execve().
I'll need to ensure Cygwin's implementation of those calls handles
affinity the same way.

Just passing the INHERIT_PARENT_AFFINITY flag to CreateProcess{AsUser}
should do the trick.

OK.  Hope so.

(7), to make a prime number: I don't see any need for the Cygwin DLL to keep any affinity info (process or thread) or processor group membership info around, do you? I believe the sched_get/setaffinity functions will do whatever Windows ops they need to do on the fly based on the args passed in. That allows the user to do Windows affinity ops at will outside of Cygwin without screwing up any Cygwin-maintained context.

Thanks again,

..mark

