This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: Support for Intel X1000


On Wed, May 20, 2015 at 04:54:28PM +0200, Florian Weimer wrote:
> On 05/20/2015 03:54 PM, Kinsella, Ray wrote:
> > 
> >> I did a little bit of research and it appears that only lock-prefixed
> >> instructions which can page fault are affected by this bug. So why not
> >> just make the kernel enforce mlockall for all processes on affected
> >> cpus? 
> > 
> > mlockall doesn't help in situations where a page is marked as CoW, you
> > still get the page fault on write (i.e. lock cmpxchg -
> > read/modify/write). 
> > I am seeing this in two places :-
> > 1. when the data section of an elf binary is loaded.
> > 2. memory shared between a child and a parent process after a fork. 
> 
> What kind of bug are we talking about?  Does the CPU hang if the
> conditions are triggered?  Or a GP exception the kernel can handle in
> some way?

The original bug this thread is about is an erratum in the X1000 cpu's
handling of the lock prefix: when a lock-prefixed instruction operates
on an address that takes a page fault, the cpu corrupts its state in
an allegedly unrecoverable way (only for the userspace task; the
kernel is unaffected).

My proposed workaround, rather than building broken binaries with the
lock prefixes stripped out, was to enforce mlockall for all processes
on affected cpu models. But Ray claims (and is probably right) that
Linux's implementation of mlockall does not preclude page faults; a
process running under mlockall can still have shared CoW pages whose
first write faults. I responded by noting that failure to ensure that
the process has its own unique copy of every writable page, so that no
access can fault, is a bug in Linux's mlock[all] implementation.

Rich

