This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]

Re: [RFC PATCH] getcpu_cache system call: caching current CPU number (x86)


On Mon, Jul 20, 2015 at 1:50 PM, Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> On Jul 20, 2015 10:41 AM, "Andy Lutomirski" <luto@amacapital.net> wrote:
>>
>> glibc will have to expose a way to turn it off, I guess. (ELF flag?)
>
> Ugh. That just sounds nasty.
>
> I'm on mobile, so can't check right now, but don't we already have a per-cpu
> gdt? We could just make a very simple rule:
>
> - create a single gdt entry with a segment that is per-cpu and points to one
> single read-only page in kernel space that contains the virtual address of
> that segment in vmalloc space (and maybe we can have the CPU number there
> somewhere, and extend it to something else later)

Annoying problem one: the segment base field is only 32 bits in the GDT.

>
> - make the rule be that if you hold that segment in %fs or %gs in your user
> space state, it gets cleared when the thread is scheduled out.

That sounds a bit evil, but okay.

>
> What does this get you?
>
> It basically means that:
>
> - user space can just load the segment selector in %gs
>

IIRC this is very expensive -- 40 cycles or so.  At this point
userspace might as well just use a real lock cmpxchg.

> - user space can load the virtual address of the segment base into a base
> register, and use that to calculate a pointer to regular data structures.
>
> - user space can use that "reverse offset" to access any data it wants, and
> access that data with a gs override.
>
> - if the user space thread is scheduled out, that access will fault with a GP
> fault, because %gs became NULL.

Cute.

>
> So basically you can do any memory access you want, and you'll be guaranteed
> that it will be done "atomically" on the same CPU you did the segment load
> on, or it will fault because you got scheduled away.
>
> And it's very cheap for both kernel and user space. One extra gdt entry (not
> per process or anything like that - it's system global, although different
> cpus all end up with different entries), and for each cpu one virtually
> mapped page. And all user space needs to do is to do a segment load.
>
> No system calls, no nothing.
>
> Would that be useful?
>

Does it solve the Wine problem?  If Wine uses gs for something and
calls a function that does this, Wine still goes boom, right?

Could Wine just save and restore gs on calls into and out of Windows
code?  That would solve all the problems, right?

--Andy

