shm_open passes an arbitrarily large value derived from strlen to alloca, which can overflow the stack. As there is no interface for supporting "directories" of shared memory, it makes sense to just bound the name length by NAME_MAX and return an error if the input name is longer. Then a safe fixed-size buffer can be used.
shm_open is definitely not performance-critical. You could simply use malloc, or copy the malloca pattern.
Well, despite the standard not requiring it, it may be nice to provide a shm_open which is async-signal-safe. Using malloc would preclude that. Limiting the buffer length to NAME_MAX+sizeof("/dev/shm/") should work just as well.
On Sun, May 05, 2013 at 03:17:54PM +0000, bugdal at aerifal dot cx wrote: > Limiting the buffer length to NAME_MAX+sizeof("/dev/shm/") should work just as well. Then the bug is in not checking the size. You can add a test for whether it exceeds PATH_MAX and set errno to ENAMETOOLONG; alloca will run fine.
Ping. Note that allowing up to PATH_MAX is not useful, since there's no shm_mkdir and thus no way to use "directories" of shm. The limit should simply be NAME_MAX.
Fixed by 5d30d853295a5fe04cad22fdf649c5e0da6ded8c.