Re: FOR REVIEW: New x86-64 vsyscall vgetcpu()
- From: "Tony Luck" <tony dot luck at intel dot com>
- To: "Andi Kleen" <ak at suse dot de>
- Cc: discuss at x86-64 dot org, linux-kernel at vger dot kernel dot org, libc-alpha at sourceware dot org, vojtech at suse dot cz
- Date: Thu, 15 Jun 2006 11:44:31 -0700
- Subject: Re: FOR REVIEW: New x86-64 vsyscall vgetcpu()
- References: <email@example.com>
On 6/14/06, Andi Kleen <firstname.lastname@example.org> wrote:
> But at a closer look it really makes sense:
> - The kernel has strong thread affinity and usually keeps a process on the
> same CPU, so switching CPUs is rare. This makes it a useful optimization.
Alternatively, it means that this will almost always do the right thing,
but once in a while it won't: your application will happen to have been
migrated to a different cpu/node at the point it makes the call, and from
then on that instance will behave oddly (running slowly because it
allocates most of its memory on the wrong node). When you try to
reproduce the problem, the application will work normally.
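To make the failure mode concrete, here is a minimal sketch; it assumes a
sched_getcpu()-style libc wrapper around the new vsyscall (the wrapper
name is my assumption, not something the patch itself provides):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	/* Cheap per-cpu query via the vsyscall; no kernel entry. */
	int cpu = sched_getcpu();
	if (cpu < 0) {
		perror("sched_getcpu");
		return 1;
	}
	/* Nothing stops the scheduler migrating us between the call
	 * above and any later use of 'cpu'; an allocator keying
	 * node-local memory off this value would then place memory
	 * on the wrong node for the rest of the process lifetime. */
	printf("running (probably) on cpu %d\n", cpu);
	return 0;
}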
> The alternative is usually to bind the process to a specific CPU - then it
> "knows" where it is - but the problem is that this is nasty to use and
> requires user configuration. The kernel often can make better decisions on
> where to schedule. And doing it automatically makes it just work.
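For reference, binding to a fixed CPU with the existing interface looks
roughly like this (the cpu number is hard-coded, which is precisely the
user-configuration burden you mention):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t mask;

	CPU_ZERO(&mask);
	CPU_SET(2, &mask);	/* "2" pulled out of thin air */
	if (sched_setaffinity(0, sizeof(mask), &mask) < 0) {
		perror("sched_setaffinity");
		return 1;
	}
	/* From here on we only run on cpu 2, wherever that is. */
	return 0;
}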
Another alternative would be to provide a mechanism for a process
to bind itself to the current cpu (whatever cpu that happens to be). Then
the kernel gets to make the smart placement decisions, and processes
that want to be bound somewhere (but don't really care exactly where)
have a way to meet that need. Perhaps an all-zero cpumask passed to
sched_setaffinity() could be overloaded for this?
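A rough sketch of what that buys us, approximated with today's two-call
sequence (again assuming a sched_getcpu()-style wrapper); the window
between the two calls is exactly why a single kernel-side operation
would be cleaner:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t mask;
	int cpu = sched_getcpu();	/* assumed wrapper, as above */

	if (cpu < 0) {
		perror("sched_getcpu");
		return 1;
	}
	/* Race: we may migrate between the query above and the bind
	 * below; an all-zero-cpumask overload in the kernel would
	 * close that window. */
	CPU_ZERO(&mask);
	CPU_SET(cpu, &mask);
	if (sched_setaffinity(0, sizeof(mask), &mask) < 0) {
		perror("sched_setaffinity");
		return 1;
	}
	return 0;
}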
Or we can dig up some of the old virtual cpu/virtual node suggestions (we
will eventually need to do something like this, but most systems today
don't have enough cpus for it to make much sense yet).