#include <stdio.h>
#include <stdint.h>

int main(void)
{
    char buf[100] = {};
    printf("snprintf: %d \nbuf: %s\n", snprintf(buf, SIZE_MAX, "test"), buf);
}

Compiling this program with:

$ gcc -ggdb3 test.c -o glibc

and then running it with:

$ /tmp/tmp.qoS3ELDYbg/testrun.sh ./glibc

outputs:

snprintf: 4 
buf: tes

I believe this is not conforming behavior: although the provided size argument is larger than the actual buffer, snprintf should only write as many characters as the formatted string requires, so I would not expect the program to be considered to be invoking undefined behavior. As such, writing "tes" (truncating the string for no reason) is not conforming, since the buffer was large enough to contain the formatted string "test".

(PS: It looks like POSIX says this call should fail with EOVERFLOW, which libcs like musl implement, but judging from previous discussion glibc has chosen to disregard that - I'd doubt the intent was to randomly truncate strings when given massive buffer sizes, though.)

Tested on:

$ /tmp/tmp.qoS3ELDYbg/libc.so.6 | head -n1
GNU C Library (GNU libc) development release version 2.37.9000.

(note: I initially found this on my Fedora laptop with glibc 2.37, but was able to reproduce it on recent trunk (specifically commit d6c72f976c61d3c1465699f2bcad77e62bafe61d))
PS: The bug appears to be caused by the fact that glibc internally tries to compute a pointer to the end of the provided buffer, which is pretty much guaranteed to overflow given the provided size. This in turn makes a later check of `if (buf->base.write_ptr < buf->base.write_end)` fail. That check seems to have been meant to detect the case where fewer characters than the buffer can hold have been written (which is the case here, but the check concludes otherwise because of the pointer arithmetic overflow).
This was discussed on libc-alpha in https://inbox.sourceware.org/libc-alpha/CAOOWow1L2ZMXE6S5pd3uKvAeHNQXMPtjew42LbAiQE-Pnd2ULg@mail.gmail.com/t/#u which didn't really reach a strong conclusion, but the participants didn't seem to regard this as something that should be supported.
Quite the interesting discussion, thanks for the link. Regarding the points raised there, I generally agree that passing a value for n larger than the actual buffer size is extremely dubious, but if the glibc project's position is that this is an error in the C standard and the bug is thus invalid, then I think a DR or something like that should be raised with WG14 about this issue. Has anything like this been done?
Not to the best of my knowledge, no. I agree it would be a good next step, but it's not something I am likely to get around to myself, if I'm honest.
I'm currently attempting to do so, if you want to know, although it's somewhat unlikely that I'll make much progress on it anytime soon - so far I haven't even managed to determine whether defect reports are still a thing, and WG14 appears to be currently focused on getting C23 released (though I'm thinking about trying to get this into a ballot comment or something like that).
WG14 stopped using the "defect report" terminology when it turned out that the issues WG14 was using it for did not meet the ISO definition of what defect reports should be used for. At that point, WG14 changed to referring to the issues as clarification requests instead. Maintenance of a CR log then stopped when work on C2x started. https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3002.pdf has a proposal for a new issue tracking process. This has not yet been discussed at a WG14 meeting, but maybe there could be an opportunity for discussion at the proposed October meeting (since a ballot will be running at that time, that meeting won't be discussing any proposed changes to the C standard itself, which should allow more time for such administrative discussions).
*** Bug 31251 has been marked as a duplicate of this bug. ***