There are two problems with fread when size * nmemb > SIZE_MAX:

1) it seems that fread will read only size * nmemb mod (SIZE_MAX + 1) bytes instead of the whole file (the buffer can hardly be larger than SIZE_MAX bytes, hence the file should be smaller than this);

2) fread will report that all nmemb elements were read if, additionally, size * nmemb mod (SIZE_MAX + 1) != 0.

Both behaviors are wrong and can lead to security problems when nmemb (or size) comes from an untrusted source and the program relies on the assumptions that fread cannot read more bytes than the file size and that its return value is accurate.

The following program works with a file of size 12 and outputs this (on a 64-bit platform):

want to read: 4611686018427387905
actually read: 4611686018427387905
Segmentation fault

Tested on Debian jessie (glibc 2.19-18+deb8u1).

#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>

/* victim function trying to be safe */
int process_file(void)
{
  FILE *f = fopen("fread-overflow.dat", "rb");
  if (!f)
    return 1;

  /* get file size */
  if (fseek(f, 0, SEEK_END))
    return 2;
  long size = ftell(f);
  if (size <= 0)
    return 3;
  if (fseek(f, 0, SEEK_SET))
    return 4;
  if (size > 1000)
    {
      /* This could be omitted given that malloc is checked. */
      printf("File too large.\n");
      return 5;
    }

  /* file size should be enough for everything read from the file */
  unsigned *buf = malloc(size);
  if (!buf)
    {
      printf("Not enough memory.\n");
      return 6;
    }

  /* read number of elements from the file */
  size_t nmemb;
  if (fread(&nmemb, sizeof(nmemb), 1, f) != 1)
    return 7;
  printf("want to read: %zu\n", nmemb);

  /* fread cannot read more bytes than the file size */
  size_t n = fread(buf, sizeof(buf[0]), nmemb, f);
  printf("actually read: %zu\n", n);
  if (n != nmemb)
    {
      printf("Truncated file\n");
      return 8;
    }
  fclose(f);

  unsigned sum = 0;
  for (size_t i = 0; i < nmemb; i++)
    sum += buf[i];
  printf("sum = %u\n", sum);
  return 0;
}

/* preparing attack */
void prepare_file(void)
{
  FILE *f = fopen("fread-overflow.dat", "wb");
  /* number of elements */
  size_t nmemb = SIZE_MAX / sizeof(unsigned) + 2;
  fwrite(&nmemb, sizeof(nmemb), 1, f);
  /* some garbage */
  fwrite(&nmemb, sizeof(unsigned), 1, f);
  fclose(f);
}

int main(void)
{
  prepare_file();
  return process_file();
}
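A caller that cannot trust nmemb can avoid the problem entirely by rejecting oversized requests before calling fread, instead of relying on fread's return value. Below is a minimal sketch of that idea; fread_bounded is a hypothetical helper invented for illustration, not a glibc API:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: clamp-check the request against the real buffer
 * capacity before calling fread, so an attacker-controlled nmemb cannot
 * make size * nmemb wrap around SIZE_MAX or exceed the buffer. */
static size_t fread_bounded(void *buf, size_t bufcap,
                            size_t size, size_t nmemb, FILE *f)
{
  if (size == 0)
    return 0;
  /* Rejects both overflow and too-large requests: nmemb * size fits in
   * the buffer iff nmemb <= bufcap / size (which never overflows). */
  if (nmemb > bufcap / size)
    return 0;  /* caller treats this as an error */
  return fread(buf, size, nmemb, f);
}
```

With this wrapper, the victim program above would refuse the attacker's nmemb of SIZE_MAX / sizeof(unsigned) + 2 instead of overrunning the 12-byte allocation.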
Depending on these fread properties for security is not very good practice and probably not very widespread, so I'm filing this publicly.
If your interpretation is that the dest pointer passed in must point to an object of size size*nmemb, this is a non-issue. However, perhaps it's valid to pass a size larger than any possible object if you know the read will hit EOF before overflowing the buffer. In that case fortify almost surely mishandles this, too...
(In reply to Rich Felker from comment #2)
> If your interpretation is that the dest pointer passed in must point to an
> object of size size*nmemb, this is a non-issue.

It could be debated, but it's not my point.

> However perhaps it's valid
> to pass a size larger than any possible object if you know the read will hit
> eof before overflowing the buffer.

Exactly! For example: processing an untrusted file in a trusted location and using its size as the buffer size for all reads from it.
The fread implementation in OpenBSD (and thus Android's Bionic, since stdio and a lot more is taken from there) just does this:

	/*
	 * Extension: Catch integer overflow
	 */
	if ((size >= MUL_NO_OVERFLOW || count >= MUL_NO_OVERFLOW) &&
	    size > 0 && SIZE_MAX / size < count) {
		errno = EOVERFLOW;
		fp->_flags |= __SERR;
		return (0);
	}

So it's a similar assumption to the one made by _FORTIFY_SOURCE: the call is treated as invalid even if there's a guarantee that it won't read that much. It's not possible to make a buffer larger than SIZE_MAX, though, so if it's considered invalid to pass a total size larger than the buffer (which is what _FORTIFY_SOURCE checks for), then the check is kind of pointless (beyond being an optional sanity check for hardening).
The following patch seems like it's both more efficient *and* provides the correct answer:

diff --git a/libio/iofread.c b/libio/iofread.c
index eb69b05..a8ea391 100644
--- a/libio/iofread.c
+++ b/libio/iofread.c
@@ -37,7 +37,7 @@ _IO_fread (void *buf, _IO_size_t size, _IO_size_t count, _IO_FILE *fp)
   _IO_acquire_lock (fp);
   bytes_read = _IO_sgetn (fp, (char *) buf, bytes_requested);
   _IO_release_lock (fp);
-  return bytes_requested == bytes_read ? count : bytes_read / size;
+  return bytes_read / size;
 }
 libc_hidden_def (_IO_fread)
diff --git a/libio/iofread_u.c b/libio/iofread_u.c
index 997b714..28651bf 100644
--- a/libio/iofread_u.c
+++ b/libio/iofread_u.c
@@ -38,7 +38,7 @@ __fread_unlocked (void *buf, _IO_size_t size, _IO_size_t count, _IO_FILE *fp)
   if (bytes_requested == 0)
     return 0;
   bytes_read = _IO_sgetn (fp, (char *) buf, bytes_requested);
-  return bytes_requested == bytes_read ? count : bytes_read / size;
+  return bytes_read / size;
 }
 libc_hidden_def (__fread_unlocked)
 weak_alias (__fread_unlocked, fread_unlocked)

The fwrite case is slightly more complicated by EOF being an indication that "all bytes were written". I'll mail a patch to libc-alpha...
It's not more efficient but much less efficient; it introduces a divide into ALL fread/fwrite calls rather than only the ones which terminate early (the latter should be extremely rare, happening only at EOF or on error).
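To make the trade-off concrete: assuming size * count has already been rejected on overflow (as in the OpenBSD-style check quoted earlier), the existing conditional is both accurate and divide-free on the common full-read path. A sketch of that return-value computation, with fread_retval as a hypothetical name for illustration:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch of fread's return-value logic, assuming the caller
 * has already guaranteed that size * count == bytes_requested with no
 * overflow and that size > 0. */
static size_t fread_retval(size_t count, size_t size,
                           size_t bytes_requested, size_t bytes_read)
{
  /* Fast path: the full request was satisfied, so the answer is exactly
   * count and no division is needed. */
  if (bytes_read == bytes_requested)
    return count;
  /* Slow path (EOF or error only): one division yields the number of
   * complete elements actually read. */
  return bytes_read / size;
}
```

The unconditional `bytes_read / size` variant gives the same results here, but pays the division on every call, which is Rich's objection.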
Some patches and discussion: https://sourceware.org/ml/libc-alpha/2015-10/msg00894.html https://sourceware.org/ml/libc-alpha/2015-12/msg00128.html
This looks more awful than we imagined; see my latest findings on the list: https://sourceware.org/ml/libc-alpha/2016-02/msg00274.html