This is the mail archive of the mailing list for the glibc project.

Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: [PATCH 0/2] nptl: Update struct pthread_unwind_buf

On Sat, Feb 24, 2018 at 7:46 AM, Florian Weimer <> wrote:
> * H. J. Lu:
>> PLEASE take a closer look:
>> Yes, there are
>> void *__pad[4];
>> But the name is misleading.   It isn't real padding.  This is
>> an opaque array:
>> /* Private data in the cleanup buffer.  */
>> union pthread_unwind_buf_data
>> {
>>   /* This is the placeholder of the public version.  */
>>   void *pad[4];
>>   struct
>>   {
>>     /* Pointer to the previous cleanup buffer.  */
>>     struct pthread_unwind_buf *prev;
>>     /* Backward compatibility: state of the old-style cleanup
>>        handler at the time of the previous new-style cleanup handler
>>        installment.  */
>>     struct _pthread_cleanup_buffer *cleanup;
>>     /* Cancellation type before the push call.  */
>>     int canceltype;
>>   } data;
>> };
>> Only the last element in __pad[4] is unused.  There is
> The entire __pad array is unused until the handler is registered,
> which happens *after* the call to __sigsetjmp, in the
> __pthread_register_cancel function.  This means that __sigsetjmp may
> clobber it.

Please check out the hjl/setjmp/pad branch and test it on x86-64.

1. It uses the pad array in struct pthread_unwind_buf to save and restore the
shadow stack register, provided that the size of struct pthread_unwind_buf is
no less than the offset of the shadow stack pointer plus the size of the
shadow stack pointer.

2. It stores (int64_t) -1 as the shadow stack register value in x86-64 setjmp
and reads it back in x86-64 longjmp to verify that it is unchanged.

I got

FAIL: nptl/tst-basic3
FAIL: nptl/tst-cancel-self
FAIL: nptl/tst-cancel-self-cancelstate
FAIL: nptl/tst-cancel-self-canceltype
FAIL: nptl/tst-cancel-self-testcancel
FAIL: nptl/tst-cancel1
FAIL: nptl/tst-cancel10
FAIL: nptl/tst-cancel11
FAIL: nptl/tst-cancel12
FAIL: nptl/tst-cancel13
FAIL: nptl/tst-cancel14
FAIL: nptl/tst-cancel15
FAIL: nptl/tst-cancel16
FAIL: nptl/tst-cancel17
FAIL: nptl/tst-cancel18
FAIL: nptl/tst-cancel20
FAIL: nptl/tst-cancel21
FAIL: nptl/tst-cancel21-static
FAIL: nptl/tst-cancel24
FAIL: nptl/tst-cancel24-static
FAIL: nptl/tst-cancel25
FAIL: nptl/tst-cancel4
FAIL: nptl/tst-cancel4_1
FAIL: nptl/tst-cancel4_2
FAIL: nptl/tst-cancel5
FAIL: nptl/tst-cancel7
FAIL: nptl/tst-cancel9
FAIL: nptl/tst-cancelx13
FAIL: nptl/tst-cancelx15
FAIL: nptl/tst-cancelx21
FAIL: nptl/tst-cancelx7
FAIL: nptl/tst-cleanup0
FAIL: nptl/tst-cleanup0-cmp
FAIL: nptl/tst-cleanup1
FAIL: nptl/tst-cleanup3
FAIL: nptl/tst-cleanup4
FAIL: nptl/tst-cleanupx0
FAIL: nptl/tst-cleanupx4
FAIL: nptl/tst-cond-except
FAIL: nptl/tst-cond22
FAIL: nptl/tst-cond25
FAIL: nptl/tst-cond7
FAIL: nptl/tst-cond8
FAIL: nptl/tst-cond8-static
FAIL: nptl/tst-execstack
FAIL: nptl/tst-exit2
FAIL: nptl/tst-exit3
FAIL: nptl/tst-join1
FAIL: nptl/tst-join5
FAIL: nptl/tst-mutex8
FAIL: nptl/tst-mutex8-static
FAIL: nptl/tst-mutexpi8
FAIL: nptl/tst-mutexpi8-static
FAIL: nptl/tst-once3
FAIL: nptl/tst-once4
FAIL: nptl/tst-oncex3
FAIL: nptl/tst-oncex4
FAIL: nptl/tst-sem11
FAIL: nptl/tst-sem11-static
FAIL: nptl/tst-sem12
FAIL: nptl/tst-sem12-static
FAIL: nptl/tst-tsd5
FAIL: nss/tst-cancel-getpwuid_r
FAIL: rt/tst-mqueue8
FAIL: nptl/tst-setuid2

For nptl/tst-tsd5, it went like this:

1. __libc_start_main calls __sigsetjmp:

  /* Memory for the cancellation buffer.  */
  struct pthread_unwind_buf unwind_buf;

  int not_first_call;
  not_first_call = setjmp ((struct __jmp_buf_tag *) unwind_buf.cancel_jmp_buf);
  if (__glibc_likely (! not_first_call))

__sigsetjmp stores -1 as the shadow stack pointer.

2. After calling __sigsetjmp, __libc_start_main does

      /* Store old info.  */
      unwind_buf.priv.data.prev = THREAD_GETMEM (self, cleanup_jmp_buf);
      unwind_buf.priv.data.cleanup = THREAD_GETMEM (self, cleanup);

which overwrites the shadow stack pointer saved in the pad area.

What have I done wrong?

