This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
[ping][PATCH][BZ #16071] Fix file reads for large AF_INET requests from hosts file
- From: Siddhesh Poyarekar <siddhesh at redhat dot com>
- To: libc-alpha at sourceware dot org
- Date: Mon, 28 Oct 2013 12:23:32 +0530
- Subject: [ping][PATCH][BZ #16071] Fix file reads for large AF_INET requests from hosts file
- References: <20131022054744 dot GD11038 at spoyarek dot pnq dot redhat dot com>
Ping!
On Tue, Oct 22, 2013 at 11:17:44AM +0530, Siddhesh Poyarekar wrote:
> Hi,
>
> Currently, for AF_INET lookups from the hosts file, buffer sizes larger
> than INT_MAX silently overflow and may result in access beyond the
> bounds of a buffer. This happens when the number of results for an
> AF_INET lookup in /etc/hosts is very large.
>
> There are two aspects to the problem. First, the size computed from
> the buffer size is stored in an int, which overflows for large sizes.
> Second, even if that size were widened, the function used to read
> content into the buffer (fgets) accepts only int sizes. The fix is
> therefore a wrapper around fgets that calls it multiple times with
> int-sized chunks when necessary.
>
> Tested with the reproducer described in the bug report and also
> verified that it does not regress any test cases (xcheck) on x86_64.
> OK to commit?
>
> Siddhesh
>
> [BZ #16071]
> * nss/nss_files/files-XXX.c (get_contents): New function.
> (internal_getent): Use it.
>
> diff --git a/nss/nss_files/files-XXX.c b/nss/nss_files/files-XXX.c
> index 082d1ea..713b774 100644
> --- a/nss/nss_files/files-XXX.c
> +++ b/nss/nss_files/files-XXX.c
> @@ -179,8 +179,48 @@ CONCAT(_nss_files_end,ENTNAME) (void)
> return NSS_STATUS_SUCCESS;
> }
>
> -/* Parsing the database file into `struct STRUCTURE' data structures. */
>
> +/* Hack around the fact that fgets only accepts integer sizes. */
> +static char *
> +get_contents (char *linebuf, size_t len, FILE *stream)
> +{
> + if (__glibc_unlikely (len > (size_t) INT_MAX))
> + {
> + size_t remaining_len = len;
> + char *curbuf = linebuf;
> +
> + /* fgets copies one less than the input length. Our last iteration is of
> + REMAINING_LEN and once that is done, REMAINING_LEN is decremented by
> + REMAINING_LEN - 1, leaving the result as 1. */
> + while (remaining_len > 1)
> + {
> + int curlen = ((remaining_len > (size_t) INT_MAX) ? INT_MAX
> + : remaining_len);
> + ((unsigned char *) curbuf)[curlen - 1] = 0xff;
> +
> + char *p = fgets_unlocked (curbuf, curlen, stream);
> +
> + /* EOF or read error. */
> + if (p == NULL)
> + return NULL;
> +
> + /* Done reading in the line. */
> + if (((unsigned char *) curbuf)[curlen - 1] == 0xff)
> + return linebuf;
> +
> + /* Drop the terminating '\0'. */
> + remaining_len -= curlen - 1;
> + curbuf += curlen - 1;
> + }
> +
> + /* This means that the current buffer was not large enough. */
> + return linebuf;
> + }
> + else
> + return fgets_unlocked (linebuf, len, stream);
> +}
> +
> +/* Parsing the database file into `struct STRUCTURE' data structures. */
> static enum nss_status
> internal_getent (struct STRUCTURE *result,
> char *buffer, size_t buflen, int *errnop H_ERRNO_PROTO
> @@ -188,7 +228,7 @@ internal_getent (struct STRUCTURE *result,
> {
> char *p;
> struct parser_data *data = (void *) buffer;
> - int linebuflen = buffer + buflen - data->linebuffer;
> + size_t linebuflen = buffer + buflen - data->linebuffer;
> int parse_result;
>
> if (buflen < sizeof *data + 2)
> @@ -202,8 +242,8 @@ internal_getent (struct STRUCTURE *result,
> {
> /* Terminate the line so that we can test for overflow. */
> ((unsigned char *) data->linebuffer)[linebuflen - 1] = '\xff';
> + p = get_contents (data->linebuffer, linebuflen, stream);
>
> - p = fgets_unlocked (data->linebuffer, linebuflen, stream);
> if (p == NULL)
> {
> /* End of file or read error. */