[PATCH v2] arc4random: simplify design for better safety
Florian Weimer
fweimer@redhat.com
Tue Jul 26 11:12:28 GMT 2022
* Jason A. Donenfeld:
> Hi Florian,
>
> On Tue, Jul 26, 2022 at 11:55:23AM +0200, Florian Weimer wrote:
>> * Jason A. Donenfeld:
>>
>> > +  pfd.fd = TEMP_FAILURE_RETRY (
>> > +      __open64_nocancel ("/dev/random", O_RDONLY | O_CLOEXEC | O_NOCTTY));
>> > +  if (pfd.fd < 0)
>> > +    arc4random_getrandom_failure ();
>> > +  if (__poll (&pfd, 1, -1) < 0)
>> > +    arc4random_getrandom_failure ();
>> > +  if (__close_nocancel (pfd.fd) < 0)
>> > +    arc4random_getrandom_failure ();
>>
>> What happens if /dev/random is actually /dev/urandom? Will the poll
>> call fail?
>
> Yes. I'm unsure if you're asking this because it'd be a nice
> simplification to only have to open one fd, or because you're worried
> about confusion. I don't think the confusion problem is one we should
> take too seriously, but if you're concerned, we can always fstat and
> check the maj/min. Seems a bit much, though.
Turning /dev/random into /dev/urandom (e.g. with a symbolic link) used
to be the only way to get some applications working because they tried
to read from /dev/random at a higher rate than the system estimated
entropy was coming in.  We may have to do something differently here if
the failing poll causes too much breakage.
>> Running the benchmark, I see 40% of the time spent in chacha_permute in
>> the kernel, that is really quite odd. Why doesn't the system call
>> overhead dominate?
>
> Huh, that is interesting. I guess if you're reading 4 bytes for an
> integer, it winds up computing a whole chacha block each time, with half
> of it doing fast key erasure and half of it being returnable to the
> caller. When we later figure out a safer way to buffer, ostensibly this
> will go away. But for now, we really should not prematurely optimize.
Yeah, I can't really argue against that, given that I said before that I
wasn't too worried about the implementation.
Thanks,
Florian
More information about the Libc-alpha mailing list