Wide character I/O will write past the end of an internal buffer in some cases. In this case it manifests as stomping the __malloc_hook function pointer.

Reproduced with glibc 2.23 on Linux x86-64 (Ubuntu MATE 16.04.1 LTS).

Reduced test case:

```c
#include <stdio.h>
#include <wchar.h>
#include <unistd.h>
#include <stdlib.h>

int main(void) {
    /* Close stderr */
    close(2);

    /* Output long string */
    const int sz = 4096;
    wchar_t *buff = calloc(sz + 1, sizeof *buff);
    for (int i = 0; i < sz; i++)
        buff[i] = L'x';
    fputws(buff, stderr);

    /* Output shorter string */
    for (int i = 0; i < 1024; i++) {
        fputws(L"0123456789ABCDEF", stderr);

        /* Call malloc, which should not crash. However it will if
           malloc's function pointers have been stomped. */
        free(malloc(1));
    }
    return 0;
}
```

Compile and run with `gcc test.c; ./a.out`.

Results: this SIGSEGVs in the call to malloc, because the __malloc_hook function pointer has been overwritten.

Analysis: there is some discussion in https://github.com/fish-shell/fish-shell/issues/3401#issuecomment-249394369

My diagnosis:

1. The initial large write calls into `_IO_wfile_overflow`. This has a bug that leaves the FILE* with `_IO_write_ptr` exceeding `_IO_write_end` by exactly 1.
2. This bug is normally masked by the call to `_IO_do_flush()`; here, however, the flush fails because stderr has been closed.
3. The subsequent shorter writes call into `_IO_wfile_xsputn`. This calculates the available space in the buffer as `_IO_write_end - _IO_write_ptr` (a negative value) and stores the result in an unsigned variable, yielding a huge count. Since it concludes it has enough space, it writes arbitrarily much through `_IO_write_ptr`.

This seems quite exploitable to me: we end up overwriting a function pointer that malloc invokes. If an attacker can invoke the process with stderr closed (easy to do from a shell) and can control what text the process writes to stderr, the attacker can execute arbitrary code.
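To make step 3 concrete, here is a minimal standalone sketch of the same arithmetic. This is only an illustration, not glibc's actual code; the variable names merely mirror the FILE fields:

```c
#include <stdio.h>

int main(void) {
    /* Stand-ins for the FILE buffer fields; buf has 17 bytes so that
       write_end + 1 is still a valid one-past-the-end pointer. */
    char buf[17];
    char *write_end = buf + 16;       /* end of the usable buffer */
    char *write_ptr = write_end + 1;  /* the off-by-one state left by step 1 */

    /* The pointer difference is -1, but storing it in an unsigned type
       wraps it to SIZE_MAX, so the buffer appears to have near-unlimited
       room and subsequent writes run past its end. */
    size_t space = write_end - write_ptr;
    printf("apparent space in a 16-byte buffer: %zu\n", space);
    return 0;
}
```

On x86-64 this prints 18446744073709551615.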
The test program hangs for me using 2.19-0ubuntu6.9:

```
(gdb) bt
#0  __lll_lock_wait_private () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:95
#1  0x00007ffff7a8cc11 in _L_lock_48 () from /lib/x86_64-linux-gnu/libc.so.6
#2  0x00007ffff7a8cb36 in fputws (str=0x4007e8 L"0123456789ABCDEF", fp=0x7ffff7dd41c0 <_IO_2_1_stderr_>) at iofputws.c:38
#3  0x0000000000400731 in main () at t.c:18
```

When using my own build of libc-2.19, I do get a crash when i==21:

```
Program received signal SIGSEGV, Segmentation fault.
0x0000003400000033 in ?? ()
(gdb) bt
#0  0x0000003400000033 in ?? ()
#1  0x000000000040072b in main () at t.c:23
```
It crashes for me (with glibc 2.31) even if I comment out the line with the malloc; it crashes already at one of the short writes. Is it legal at all to write to a stream whose file descriptor you have close(2)'d? Why would anyone do that?
I wonder why writing to stderr involves any buffer handling in the first place. By default, stderr is unbuffered, so any attempt to write a character, or even a byte, to stderr from the stdio level should immediately cause a write system call to file descriptor 2.

From an extended version of the test program, which does more error checking, I learnt that the first fputws returns -1 and sets errno to 9, "Bad file descriptor". That is correct, because file descriptor 2 was closed. Subsequent calls of fputws, however, do go through and return 1, as if the write had succeeded.

For comparison, I ran the same program under FreeBSD 12.2, and there _all_ the calls of fputws were rejected, returning -1 and setting errno to "Bad file descriptor". No crash happens there.
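For completeness, a sketch of such an extended version (the checks are the point; the diagnostics go to stdout, which is still open):

```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <wchar.h>

int main(void) {
    close(2);

    /* Long write, as in the original test case. */
    const int sz = 4096;
    wchar_t *buff = calloc(sz + 1, sizeof *buff);
    for (int i = 0; i < sz; i++)
        buff[i] = L'x';

    errno = 0;
    int r = fputws(buff, stderr);
    printf("long write:  fputws = %d, errno = %d (%s)\n",
           r, errno, strerror(errno));

    /* A few short writes, each checked. */
    for (int i = 0; i < 4; i++) {
        errno = 0;
        r = fputws(L"0123456789ABCDEF", stderr);
        printf("short write: fputws = %d, errno = %d (%s)\n",
               r, errno, strerror(errno));
    }
    return 0;
}
```

On glibc the first line reports -1/EBADF while the short writes report success; on FreeBSD 12.2 every line reports -1/EBADF.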