This is the mail archive of the mailing list for the glibc project.

Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: [PATCH] Add Prefer_MAP_32BIT_EXEC for Silvermont

On Fri, Dec 11, 2015 at 3:59 PM, Andi Kleen <> wrote:
> Zack Weinberg <> writes:
>> Just to back up this assertion: 16 bits of base address randomization
>> was brute-forceable in less than five minutes (on average) in 2004,
>> per
>> .  Digging into the kernel a little, it appears that MAP_32BIT (in
>> 4.3) selects a page-aligned starting address at random from within a
>> 32MB window at the bottom of a 1GB (not 2GB) region; that's
>> *thirteen* bits of randomness (2^25 bytes / 2^12 bytes per page =
>> 2^13 possible bases), so we don't even have to have the argument
>> about how many more than 16 bits it would take to be good enough in
>> 2016; clearly *fewer* than 16 bits is unacceptable.
> The patch brings ASLR into a similar ballpark as with 32-bit (plus or
> minus 1 or 2 bits).

For the record, the amount of entropy in each mmap() for 32-bit x86
might be as low as 8 or as high as 20 bits depending on which bit of
the code I decide to believe.  arch_get_unmapped_area() is a real
mess.  However, 20 bits does not strike me as adequate either, so ...
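The entropy figures being thrown around in this thread are easy to check mechanically. Here is a small sketch (plain Python, nothing kernel-specific; the 32MB figure is my reading of the MAP_32BIT randomize_range window in the 4.3 source, and 4kB pages are assumed) that just converts a randomization window into bits of entropy for a page-aligned base:

```python
import math

PAGE = 4096  # assumed page size (4 kB)

def aslr_bits(window_bytes, page=PAGE):
    """Bits of entropy when a page-aligned base address is chosen
    uniformly at random inside a window of the given size."""
    return int(math.log2(window_bytes // page))

# MAP_32BIT in the 4.3 kernel: base randomized within a 32MB window
# (randomize_range over 0x02000000) -> 13 bits.
print(aslr_bits(0x02000000))

# For comparison: a full 1GB window would give 18 bits, and the
# 16 bits brute-forced in 2004 correspond to a 256MB window.
print(aslr_bits(1 << 30))
print(aslr_bits(256 << 20))
```

Either way the numbers land well under anything one would call adequate in 2016.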

> You're basically arguing that running 32bit is unacceptable, and that the
> main motivation for users to use 64bit is security and not performance.

... essentially yes, but I'd put it differently: some subset of 64-bit
users are using it *because* it makes ASLR more effective; kneecapping
ASLR for those users would be an unacceptable regression.

> 32bit is widely used,
> and clearly users find the risk from that acceptable.

I think that reads too much into the lack of data.  I would expect
that the majority of people still using 32-bit user space are doing so
either because the computer is a black box to them, or because they
have a business-critical 32-bit binary that can't be upgraded.

> Also there are no security advisories around that ask users to stop using 32bit.

The paper I cited is a counterexample.

> Also 3% (likely more in other workloads) is a significant performance
> difference, especially when we're talking about something as common as
> function calls.

I saw a claim of 3% *overall* performance increase on an artificial
benchmark and that's all.  I currently suspect this will turn out to
be unmeasurable on realistic workloads.

