[PATCH v7] nptl: Fix Race conditions in pthread cancellation [BZ#12683]

Carlos O'Donell carlos@redhat.com
Mon Aug 28 12:47:41 GMT 2023


On 5/24/23 09:51, Adhemerval Zanella via Libc-alpha wrote:
> The current racy approach is to enable asynchronous cancellation
> before making the syscall and restore the previous cancellation
> type once the syscall returns, and check if cancellation has happen
> during the cancellation entrypoint.

Conceptually I think this patch needs a little more work, not because it is incorrect,
but because you identify some important issues with the use of a vDSO or for
architectures (present or future) that do not have a clear PC range for handling the
syscall. There are some architectures that have "gateway" pages to enter the kernel,
and my concern is that if these become visible then we have a non-contiguous PC range
for the syscall.
	
My opinion is that we should consider a model where the syscall is a range of
instructions and that creates some interesting implementation details which I will
discuss. Without your implementation I would never have thought this necessary, but
your requirement to go back to legacy single instruction syscalls raises a yellow
flag for me here.

If an architecture lacks an identifiable PC range for the syscall operation
(e.g. vDSO, gateway, Library OS), then we might model this in libc with a distinct
state transition.

The *start* and the *end* of the syscall range today would be areas where
we allow a call to a vDSO, LibraryOS, or dispatch table. There is no
contiguous PC range for that region that we can identify, and so the only
solution IMO is to reserve a *bit* in the cancellation handling which is
set with an acquire atomic to indicate we enter, and a release atomic
to indicate we exit, the critical section of the non-contiguous PC region
for the cancellable syscall. I'm even more concerned here because this isn't
pure speculation: hppa32/hppa64 have a not-quite-kernel gateway page for
which we had to take special measures in the kernel to ensure no signals
were delivered to userspace while executing on that page, but I could see
a desire to relax that to reduce signal latency. I could also see other
future architectures smearing the boundary between kernel and userspace
in such a way as to allow signals to see more of the transition to
kernel space. Likewise a clever emulator might insert a whole library
into the syscall and rewrite it to a jump. A model that assumes a contiguous
PC range here is going to limit the design.

Do you think we could change the model slightly? Reserve a bit and use it
to indicate we're in step 3/4? See notes further below.
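
Roughly what I have in mind, as a sketch only (the bit name, the choice of a
spare bit, and the do_indirect_syscall helper are assumptions on my part, not
something the patch defines):

/* Hypothetical: a spare bit in cancelhandling (assuming bit 7 is free;
   the name is made up) that brackets the non-contiguous
   cancellable-syscall region (vDSO, gateway page, dispatch table).  */
#define SYSCALL_CANCEL_INPROGRESS_BITMASK (1 << 7)

static long int
syscall_cancel_noncontiguous (struct pthread *pd, long int nr,
                              long int a1, long int a2, long int a3,
                              long int a4, long int a5, long int a6)
{
  /* Enter regions 3/4: acquire, so the SIGCANCEL handler observes the
     bit before it can observe any effect of the syscall.  */
  __atomic_fetch_or (&pd->cancelhandling,
                     SYSCALL_CANCEL_INPROGRESS_BITMASK, __ATOMIC_ACQUIRE);

  /* Stand-in for whatever non-contiguous entry path the target uses.  */
  long int ret = do_indirect_syscall (nr, a1, a2, a3, a4, a5, a6);

  /* Leave regions 3/4: release; past this point SIGCANCEL must defer
     to the next cancellation point.  */
  __atomic_fetch_and (&pd->cancelhandling,
                      ~SYSCALL_CANCEL_INPROGRESS_BITMASK, __ATOMIC_RELEASE);
  return ret;
}

The invariant I mention just below (the kernel returns to the stored PC or
restarts the syscall) is what makes clearing the bit on exit well defined.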

Imagine you could see the kernel PC values... we'd be in the same scenario,
and even in the future if we have a unikernel where glibc is compiled into
a complete application + kernel, we will see all the PC values. The only
invariant we need here is that the kernel must return to the PC it has stored
or restart the syscall (guarantees we already have). This invariant allows
us to ensure we can set/reset the bit indicating we're in the syscall region.
 
> As described in BZ#12683, this approach shows 2 problems:
> 
>   1. Cancellation can act after the syscall has returned from the
>      kernel, but before userspace saves the return value.  It might
>      result in a resource leak if the syscall allocated a resource or a
>      side effect (partial read/write), and there is no way to program
>      handle it with cancellation handlers.

Correct: in cases where a side-effect occurs and the cancellation arrives
in the insn window after the syscall but before cancellation is checked, we need to
behave *as-if* the cancellation arrived later, and a correct program should handle
that scenario. There can be no observed happens-before because none can be established
against a syscall. You can establish ordering with a semaphore or mutex but only
before or after the syscall.

 
>   2. If a signal is handled while the thread is blocked at a cancellable
>      syscall, the entire signal handler runs with asynchronous
>      cancellation enabled.  This can lead to issues if the signal
>      handler call functions which are async-signal-safe but not
>      async-cancel-safe.

Correct. It means that any other thread attempting a deferred cancellation of the current
thread will actually immediately cancel the current thread, even if such a thread is
executing non-AC-safe code in the signal handler; all of which can lead to inconsistent
state.

> For the cancellation to work correctly, there are 5 points at which the
> cancellation signal could arrive:

Agreed:

1. Before testcancel e.g. [*... testcancel)
2. Between testcancel and syscall start e.g. [testcancel...syscall start)
3. Between syscall start and syscall end e.g. [syscall start...syscall end]
4. Same as 3 but with side-effects having occurred.
5. After syscall end e.g. (syscall end...*]

[... )[ ... )[ .... ]( ... ]
  1      2      34      5

>   1. Before the final "testcancel" and before the syscall is made.

Shouldn't this be:

1. Before the initial "testcancel" ?

The point being that this is the easy case: the testcancel code checks to see if
deferred cancellation is in effect and cancels.

I would mark this as [*... testcancel)

>   2. Between the "testcancel" and the syscall.

This is the interesting case. We would still call __do_cancel() here immediately
if we are sent a cancellation request from another thread.

I would mark this as [testcancel... syscall start)

>   3. While the syscall is blocked and no side effects have yet taken
>      place.

I would mark this as [syscall start... syscall end]

>   4. While the syscall is blocked but with some side effects already
>      having taken place (e.g. a partial read or write).

Still within [syscall start... syscall end]

For the purposes of the algorithm 3 and 4 can be merged, but I'll leave them as-is
because it allows reasoning about the changes. In practice you can't observe or know
when 3 becomes 4, you have to assume 4 is true at all times.

>   5. After the syscall has returned.

I would mark this as (syscall end... *]

> And libc wants to act on cancellation in cases 1, 2, and 3 but not
> in cases 4 or 5.  For the 4 and 5 cases, the cancellation will eventually
> happen in the next cancellable entrypoint without any further external
> event.

Agreed.	

> The proposed solution for each case is:

0. Provide assembly implementations for each architecture that ensure the ordering
   of the 5 regions is preserved.

>   1. Do a conditional branch based on whether the thread has received
>      a cancellation request;
> 
>   2. It can be caught by the signal handler determining that the saved
>      program counter (from the ucontext_t) is in some address range
>      beginning just before the "testcancel" and ending with the
>      syscall instruction.

Suggest:

2. SIGCANCEL can be caught by the signal handler and determine that the saved
   program counter (from the ucontext_t) is in the address range beginning just
   before "testcancel" and ending with the first uninterruptable (via a signal)
   syscall instruction that enters the kernel.

> 
>   3. In this case, except for certain syscalls that ALWAYS fail with
>      EINTR even for non-interrupting signals, the kernel will reset
>      the program counter to point at the syscall instruction during
>      signal handling, so that the syscall is restarted when the signal
>      handler returns.  So, from the signal handler's standpoint, this
>      looks the same as case 2, and thus it's taken care of.

OK.
 
>   4. For syscalls with side-effects, the kernel cannot restart the
>      syscall; when it's interrupted by a signal, the kernel must cause
>      the syscall to return with whatever partial result is obtained
>      (e.g. partial read or write).
> 
>   5. The saved program counter points just after the syscall
>      instruction, so the signal handler won't act on cancellation.
>      This is similar to 4. since the program counter is past the syscall
>      instruction.

OK. We treat this as-if SIGCANCEL arrived *after* the syscall since the user cannot
create a happens-before with the syscall. The reason I care about this is that if the user
can force the SIGCANCEL to happen before the syscall then they might expect the cancel
to happen. The only way to do that would be to have a way for the user themselves to
know that they are between "testcancel" and the syscall and acting on the cancellation,
but they are not allowed to install a SIGCANCEL handler, and cannot otherwise observe
this situation. They could install another signal handler, and could know that they are
*somewhere* in the function call that makes a syscall, but that is the most they could
easily know. That is to say they could know:

* Thread A is calling read() (somewhere in read)
* Thread B calls pthread_cancel()

But the order is indeterminate so they have no guarantee of pthread_cancel() acting,
so deferring to a later cancellation point is OK IMO.

> So The proposed fixes are:
> 
>   1. Remove the enable_asynccancel/disable_asynccancel function usage in
>      cancellable syscall definition and instead make them call a common
>      symbol that will check if cancellation is enabled (__syscall_cancel
>      at nptl/cancellation.c), call the arch-specific cancellable
>      entry-point (__syscall_cancel_arch), and cancel the thread when
>      required.

YES!

>   2. Provide an arch-specific generic system call wrapper function
>      that contains global markers.  These markers will be used in
>      SIGCANCEL signal handler to check if the interruption has been
>      called in a valid syscall and if the syscalls has side-effects.
> 
>      A reference implementation sysdeps/unix/sysv/linux/syscall_cancel.c
>      is provided.  However, the markers may not be set on correct
>      expected places depending on how INTERNAL_SYSCALL_NCS is
>      implemented by the architecture.  It is expected that all
>      architectures add an arch-specific implementation.

Correct.

>   3. Rewrite SIGCANCEL asynchronous handler to check for both canceling
>      type and if current IP from signal handler falls between the global
>      markers and act accordingly.

OK.

>   4. Adjust libc code to replace LIBC_CANCEL_ASYNC/LIBC_CANCEL_RESET to
>      appropriated cancelable syscalls.

s/to appropriated/to use the appropriate/g

>   5. Adjust 'lowlevellock-futex.h' arch-specific implementations to
>      provide cancelable futex calls.

OK.

> Some architectures require specific support on syscall handling:
> 
>   * On i386 the syscall cancel bridge needs to use the old int80
>     instruction because the optimized vDSO symbol the resulting PC value
>     for an interrupted syscall points to an adress outside the expected

s/adress/address/g

Why do we need to do this?

I see two options:

(a) Put the "syscall" label at the point at which we call the vDSO.

(b) Do nothing special.

The consequence of (a) and (b) is that the SIGCANCEL handler cannot detect if it's
in the narrower range, so it defers cancellation until later.

As noted below, Thread B can only create a happens-before outside of the cancellable
function?

For architectures that use a "centralized dispatch" for syscalls, there is no unique
per-syscall uninterruptible syscall instruction.

>     markers in __syscall_cancel_arch.  It has been discussed in LKML [1]
>     on how kernel could help userland to accomplish it, but afaik
>     discussion has stalled.
> 
>     Also, sysenter should not be used directly by libc since its calling
>     convention is set by the kernel depending of the underlying x86 chip
>     (check kernel commit 30bfa7b3488bfb1bb75c9f50a5fcac1832970c60).

We likewise have a problem if there is no uninterruptible syscall instruction, that is,
the syscall is emulated via userspace code, or something else which has a PC that
is not in the expected range.

It is possible for Thread A and Thread B to create a happens-before with the following sequence:

Thread A - lock mutex.
Thread A - increment counter.
Thread A - pthread_cancel (of Thread B).
Thread A - unlock mutex.
Thread B - lock mutex.
Thread B - check counter and call cancellable syscall

Thread B *expects* to get cancelled and Thread A created a happens-before using
the mutex. We must always cancel in that case, and I think we would because we check
the cancellation state as we enter the cancellable syscall.

Could you please add a test case for this? That way we have some more coverage.
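
Something along these lines perhaps, reusing the support/ helpers the patch
already uses elsewhere (only a sketch, untested; the actual test name and
structure are up to you):

#include <pthread.h>
#include <unistd.h>
#include <support/check.h>
#include <support/xthread.h>
#include <support/xunistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int counter;
static int fds[2];

static void *
tf (void *arg)
{
  char c;
  xpthread_mutex_lock (&lock);
  /* The mutex gives us the happens-before: the counter increment and
     the pthread_cancel both happened before we acquired the lock.  */
  TEST_COMPARE (counter, 1);
  xpthread_mutex_unlock (&lock);
  /* Cancellation is already pending, so this cancellation point must
     act; the pipe has no data, so a missed cancellation would block
     here until the test times out.  */
  read (fds[0], &c, 1);
  FAIL_EXIT1 ("read returned instead of acting on pending cancellation");
  return NULL;
}

static int
do_test (void)
{
  xpipe (fds);
  xpthread_mutex_lock (&lock);
  pthread_t td = xpthread_create (NULL, tf, NULL);
  counter = 1;
  TEST_COMPARE (pthread_cancel (td), 0);
  xpthread_mutex_unlock (&lock);
  TEST_VERIFY (xpthread_join (td) == PTHREAD_CANCELED);
  return 0;
}

#include <support/test-driver.c>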

> 
>   * On ia64 the syscall cancel bridge needs uses the old brk 0x10000
>     instruction because by the vDSO gate the resulting PC value for an
>     interrupted syscall points to an address outside the expected markers
>     in __syscall_cancel_arch.
> 
>     Also the __syscall_cancel_arch issues the 'break 0x100000' on its own
>     bundle, and __syscall_cancel_arch_end points to end of the previous
>     one. It requires an arch-specific ucontext_check_pc_boundary to check
>     for the 'ri' value (embedded in the sc_ip by the kernel) to check if
>     the syscall had any side-effects.
> 
>   * mips o32 is the only kABI that requires 7 argument syscall, and to
>     avoid add a requirement on all architectures to support it, mips
>     support is added with extra internal defines.
> 
> Checked on aarch64-linux-gnu, arm-linux-gnueabihf, powerpc-linux-gnu,
> powerpc64-linux-gnu, powerpc64le-linux-gnu, i686-linux-gnu, and
> x86_64-linux-gnu.
> 
> [1] https://lkml.org/lkml/2016/3/8/1105
> ---
> Changes from v6:
> * Fixed i686 syscall_cancel.S that triggered patchworkd buildbot
>   regressions.
> ---
>  elf/Makefile                                  |   5 +-
>  nptl/Makefile                                 |  10 +-
>  nptl/cancellation.c                           | 115 ++++++------
>  nptl/cleanup_defer.c                          |   5 +-
>  nptl/descr-const.sym                          |   6 +
>  nptl/descr.h                                  |  18 ++
>  nptl/libc-cleanup.c                           |   5 +-
>  nptl/pthread_cancel.c                         |  78 +++-----
>  nptl/pthread_exit.c                           |   4 +-
>  nptl/pthread_setcancelstate.c                 |   2 +-
>  nptl/pthread_setcanceltype.c                  |   2 +-
>  nptl/pthread_testcancel.c                     |   5 +-
>  nptl/tst-cancel31.c                           | 100 ++++++++++
>  sysdeps/generic/syscall_types.h               |  25 +++
>  sysdeps/nptl/cancellation-pc-check.h          |  54 ++++++
>  sysdeps/nptl/lowlevellock-futex.h             |  20 +-
>  sysdeps/nptl/pthreadP.h                       |  11 +-
>  sysdeps/powerpc/powerpc32/sysdep.h            |   3 +
>  sysdeps/powerpc/powerpc64/sysdep.h            |  19 ++
>  sysdeps/pthread/tst-cancel2.c                 |   4 +
>  sysdeps/sh/sysdep.h                           |   1 +
>  sysdeps/unix/sysdep.h                         | 173 ++++++++++++++----
>  .../unix/sysv/linux/aarch64/syscall_cancel.S  |  59 ++++++
>  .../unix/sysv/linux/alpha/syscall_cancel.S    |  80 ++++++++
>  sysdeps/unix/sysv/linux/arc/syscall_cancel.S  |  56 ++++++
>  sysdeps/unix/sysv/linux/arm/syscall_cancel.S  |  78 ++++++++
>  sysdeps/unix/sysv/linux/csky/syscall_cancel.S | 114 ++++++++++++
>  sysdeps/unix/sysv/linux/hppa/syscall_cancel.S |  81 ++++++++
>  sysdeps/unix/sysv/linux/i386/syscall_cancel.S | 104 +++++++++++
>  .../sysv/linux/ia64/cancellation-pc-check.h   |  48 +++++
>  sysdeps/unix/sysv/linux/ia64/syscall_cancel.S |  81 ++++++++
>  .../sysv/linux/loongarch/syscall_cancel.S     |  50 +++++
>  sysdeps/unix/sysv/linux/m68k/syscall_cancel.S |  84 +++++++++
>  .../sysv/linux/microblaze/syscall_cancel.S    |  61 ++++++
>  .../sysv/linux/mips/mips32/syscall_cancel.S   | 128 +++++++++++++
>  sysdeps/unix/sysv/linux/mips/mips32/sysdep.h  |   4 +
>  .../linux/mips/mips64/n32/syscall_types.h     |  28 +++
>  .../sysv/linux/mips/mips64/syscall_cancel.S   | 108 +++++++++++
>  sysdeps/unix/sysv/linux/mips/mips64/sysdep.h  |  52 +++---
>  .../unix/sysv/linux/nios2/syscall_cancel.S    |  95 ++++++++++
>  sysdeps/unix/sysv/linux/or1k/syscall_cancel.S |  63 +++++++
>  .../linux/powerpc/cancellation-pc-check.h     |  65 +++++++
>  .../unix/sysv/linux/powerpc/syscall_cancel.S  |  86 +++++++++
>  .../unix/sysv/linux/riscv/syscall_cancel.S    |  67 +++++++
>  .../sysv/linux/s390/s390-32/syscall_cancel.S  |  62 +++++++
>  .../sysv/linux/s390/s390-64/syscall_cancel.S  |  62 +++++++
>  sysdeps/unix/sysv/linux/sh/syscall_cancel.S   | 126 +++++++++++++
>  sysdeps/unix/sysv/linux/socketcall.h          |  35 +++-
>  .../sysv/linux/sparc/sparc32/syscall_cancel.S |  71 +++++++
>  .../sysv/linux/sparc/sparc64/syscall_cancel.S |  74 ++++++++
>  sysdeps/unix/sysv/linux/syscall_cancel.c      |  73 ++++++++
>  sysdeps/unix/sysv/linux/sysdep-cancel.h       |  12 --
>  .../unix/sysv/linux/x86_64/syscall_cancel.S   |  57 ++++++
>  .../sysv/linux/x86_64/x32/syscall_types.h     |  34 ++++
>  sysdeps/x86_64/nptl/tcb-offsets.sym           |   3 -
>  55 files changed, 2638 insertions(+), 228 deletions(-)
>  create mode 100644 nptl/descr-const.sym
>  create mode 100644 nptl/tst-cancel31.c
>  create mode 100644 sysdeps/generic/syscall_types.h
>  create mode 100644 sysdeps/nptl/cancellation-pc-check.h
>  create mode 100644 sysdeps/unix/sysv/linux/aarch64/syscall_cancel.S
>  create mode 100644 sysdeps/unix/sysv/linux/alpha/syscall_cancel.S
>  create mode 100644 sysdeps/unix/sysv/linux/arc/syscall_cancel.S
>  create mode 100644 sysdeps/unix/sysv/linux/arm/syscall_cancel.S
>  create mode 100644 sysdeps/unix/sysv/linux/csky/syscall_cancel.S
>  create mode 100644 sysdeps/unix/sysv/linux/hppa/syscall_cancel.S
>  create mode 100644 sysdeps/unix/sysv/linux/i386/syscall_cancel.S
>  create mode 100644 sysdeps/unix/sysv/linux/ia64/cancellation-pc-check.h
>  create mode 100644 sysdeps/unix/sysv/linux/ia64/syscall_cancel.S
>  create mode 100644 sysdeps/unix/sysv/linux/loongarch/syscall_cancel.S
>  create mode 100644 sysdeps/unix/sysv/linux/m68k/syscall_cancel.S
>  create mode 100644 sysdeps/unix/sysv/linux/microblaze/syscall_cancel.S
>  create mode 100644 sysdeps/unix/sysv/linux/mips/mips32/syscall_cancel.S
>  create mode 100644 sysdeps/unix/sysv/linux/mips/mips64/n32/syscall_types.h
>  create mode 100644 sysdeps/unix/sysv/linux/mips/mips64/syscall_cancel.S
>  create mode 100644 sysdeps/unix/sysv/linux/nios2/syscall_cancel.S
>  create mode 100644 sysdeps/unix/sysv/linux/or1k/syscall_cancel.S
>  create mode 100644 sysdeps/unix/sysv/linux/powerpc/cancellation-pc-check.h
>  create mode 100644 sysdeps/unix/sysv/linux/powerpc/syscall_cancel.S
>  create mode 100644 sysdeps/unix/sysv/linux/riscv/syscall_cancel.S
>  create mode 100644 sysdeps/unix/sysv/linux/s390/s390-32/syscall_cancel.S
>  create mode 100644 sysdeps/unix/sysv/linux/s390/s390-64/syscall_cancel.S
>  create mode 100644 sysdeps/unix/sysv/linux/sh/syscall_cancel.S
>  create mode 100644 sysdeps/unix/sysv/linux/sparc/sparc32/syscall_cancel.S
>  create mode 100644 sysdeps/unix/sysv/linux/sparc/sparc64/syscall_cancel.S
>  create mode 100644 sysdeps/unix/sysv/linux/syscall_cancel.c
>  create mode 100644 sysdeps/unix/sysv/linux/x86_64/syscall_cancel.S
>  create mode 100644 sysdeps/unix/sysv/linux/x86_64/x32/syscall_types.h
> 
> diff --git a/elf/Makefile b/elf/Makefile
> index e262f3e6b1..37bb493f4f 100644
> --- a/elf/Makefile
> +++ b/elf/Makefile
> @@ -1255,11 +1255,8 @@ $(objpfx)dl-allobjs.os: $(all-rtld-routines:%=$(objpfx)%.os)
>  # discovery mechanism is not compatible with the libc implementation
>  # when compiled for libc.
>  rtld-stubbed-symbols = \
> -  __GI___pthread_disable_asynccancel \
> -  __GI___pthread_enable_asynccancel \
> +  __syscall_cancel \
>    __libc_assert_fail \
> -  __pthread_disable_asynccancel \
> -  __pthread_enable_asynccancel \

OK.

>    calloc \
>    free \
>    malloc \
> diff --git a/nptl/Makefile b/nptl/Makefile
> index f8365467d9..993da7993c 100644
> --- a/nptl/Makefile
> +++ b/nptl/Makefile
> @@ -204,6 +204,7 @@ routines = \
>    sem_timedwait \
>    sem_unlink \
>    sem_wait \
> +  syscall_cancel \

OK.

>    tpp \
>    unwind \
>    vars \
> @@ -235,7 +236,8 @@ CFLAGS-pthread_setcanceltype.c += -fexceptions -fasynchronous-unwind-tables
>  
>  # These are internal functions which similar functionality as setcancelstate
>  # and setcanceltype.
> -CFLAGS-cancellation.c += -fasynchronous-unwind-tables
> +CFLAGS-cancellation.c += -fexceptions -fasynchronous-unwind-tables
> +CFLAGS-syscall_cancel.c += -fexceptions -fasynchronous-unwind-tables

OK.

>  
>  # Calling pthread_exit() must cause the registered cancel handlers to
>  # be executed.  Therefore exceptions have to be thrown through this
> @@ -279,6 +281,7 @@ tests = \
>    tst-cancel7 \
>    tst-cancel17 \
>    tst-cancel24 \
> +  tst-cancel31 \

OK.

>    tst-cond26 \
>    tst-context1 \
>    tst-default-attr \
> @@ -403,7 +406,10 @@ xtests += tst-eintr1
>  
>  test-srcs = tst-oddstacklimit
>  
> -gen-as-const-headers = unwindbuf.sym
> +gen-as-const-headers = \
> +  descr-const.sym \
> +  unwindbuf.sym \
> +  # gen-as-const-headers

OK.

>  
>  gen-py-const-headers := nptl_lock_constants.pysym
>  pretty-printers := nptl-printers.py
> diff --git a/nptl/cancellation.c b/nptl/cancellation.c
> index 765511d66d..3162492b80 100644
> --- a/nptl/cancellation.c
> +++ b/nptl/cancellation.c
> @@ -18,74 +18,81 @@
>  #include <setjmp.h>
>  #include <stdlib.h>
>  #include "pthreadP.h"
> -#include <futex-internal.h>
>  
> -
> -/* The next two functions are similar to pthread_setcanceltype() but
> -   more specialized for the use in the cancelable functions like write().
> -   They do not need to check parameters etc.  These functions must be
> -   AS-safe, with the exception of the actual cancellation, because they
> -   are called by wrappers around AS-safe functions like write().*/
> -int
> -__pthread_enable_asynccancel (void)
> +/* Called by the INTERNAL_SYSCALL_CANCEL macro, check for cancellation and
> +   returns the syscall value or its negative error code.  */
> +long int
> +__internal_syscall_cancel (__syscall_arg_t a1, __syscall_arg_t a2,
> +			   __syscall_arg_t a3, __syscall_arg_t a4,
> +			   __syscall_arg_t a5, __syscall_arg_t a6,
> +			   __SYSCALL_CANCEL7_ARG_DEF
> +			   __syscall_arg_t nr)
>  {
> -  struct pthread *self = THREAD_SELF;
> -  int oldval = atomic_load_relaxed (&self->cancelhandling);
> +  long int result;
> +  struct pthread *pd = THREAD_SELF;
>  
> -  while (1)
> +  /* If cancellation is not enabled, call the syscall directly and also
> +     for thread terminatation to avoid call __syscall_do_cancel while
> +     executing cleanup handlers.  */
> +  int ch = atomic_load_relaxed (&pd->cancelhandling);
> +  if (SINGLE_THREAD_P || !cancel_enabled (ch) || cancel_exiting (ch))
>      {
> -      int newval = oldval | CANCELTYPE_BITMASK;
> -
> -      if (newval == oldval)
> -	break;
> +      result = INTERNAL_SYSCALL_NCS_CALL (nr, a1, a2, a3, a4, a5, a6
> +					  __SYSCALL_CANCEL7_ARCH_ARG7);
> +      if (INTERNAL_SYSCALL_ERROR_P (result))
> +	return -INTERNAL_SYSCALL_ERRNO (result);
> +      return result;
> +    }
>  
> -      if (atomic_compare_exchange_weak_acquire (&self->cancelhandling,
> -						&oldval, newval))
> -	{
> -	  if (cancel_enabled_and_canceled_and_async (newval))
> -	    {
> -	      self->result = PTHREAD_CANCELED;
> -	      __do_cancel ();
> -	    }
> +  /* Call the arch-specific entry points that contains the globals markers
> +     to be checked by SIGCANCEL handler.  */
> +  result = __syscall_cancel_arch (&pd->cancelhandling, nr, a1, a2, a3, a4, a5,
> +			          a6 __SYSCALL_CANCEL7_ARCH_ARG7);
>  
> -	  break;
> -	}
> -    }
> +  /* If the cancellable syscall was interrupted by SIGCANCEL and it has not

s/not/no/g

> +     side-effect, cancel the thread if cancellation is enabled.  */
> +  ch = atomic_load_relaxed (&pd->cancelhandling);
> +  if (result == -EINTR && cancel_enabled_and_canceled (ch))
> +    __syscall_do_cancel ();
>  
> -  return oldval;
> +  return result;
>  }
> -libc_hidden_def (__pthread_enable_asynccancel)
>  
> -/* See the comment for __pthread_enable_asynccancel regarding
> -   the AS-safety of this function.  */
> -void
> -__pthread_disable_asynccancel (int oldtype)
> +/* Called by the SYSCALL_CANCEL macro, check for cancellation and return the
> +   syscall expected success value (usually 0) or, in case of failure, -1 and
> +   sets errno to syscall return value.  */
> +long int
> +__syscall_cancel (__syscall_arg_t a1, __syscall_arg_t a2,
> +		  __syscall_arg_t a3, __syscall_arg_t a4,
> +		  __syscall_arg_t a5, __syscall_arg_t a6,
> +		  __SYSCALL_CANCEL7_ARG_DEF __syscall_arg_t nr)
>  {
> -  /* If asynchronous cancellation was enabled before we do not have
> -     anything to do.  */
> -  if (oldtype & CANCELTYPE_BITMASK)
> -    return;
> +  int r = __internal_syscall_cancel (a1, a2, a3, a4, a5, a6,
> +				     __SYSCALL_CANCEL7_ARG nr);
> +  return __glibc_unlikely (INTERNAL_SYSCALL_ERROR_P (r))
> +	 ? SYSCALL_ERROR_LABEL (INTERNAL_SYSCALL_ERRNO (r))
> +	 : r;
> +}
>  
> +/* Called by __syscall_cancel_arch or function above start the thread
> +   cancellation.  */
> +_Noreturn void
> +__syscall_do_cancel (void)
> +{
>    struct pthread *self = THREAD_SELF;
> -  int newval;
> +
> +  /* Disable thread cancellation to avoid cancellable entrypoints to call

s/to call/calling/g

> +     __syscall_do_cancel recursively.  */
>    int oldval = atomic_load_relaxed (&self->cancelhandling);
> -  do
> +  while (1)
>      {
> -      newval = oldval & ~CANCELTYPE_BITMASK;
> +      int newval = oldval | CANCELSTATE_BITMASK;
> +      if (oldval == newval)
> +	break;
> +      if (atomic_compare_exchange_weak_acquire (&self->cancelhandling,
> +						&oldval, newval))
> +	break;
>      }
> -  while (!atomic_compare_exchange_weak_acquire (&self->cancelhandling,
> -						&oldval, newval));
>  
> -  /* We cannot return when we are being canceled.  Upon return the
> -     thread might be things which would have to be undone.  The
> -     following loop should loop until the cancellation signal is
> -     delivered.  */
> -  while (__glibc_unlikely ((newval & (CANCELING_BITMASK | CANCELED_BITMASK))
> -			   == CANCELING_BITMASK))
> -    {
> -      futex_wait_simple ((unsigned int *) &self->cancelhandling, newval,
> -			 FUTEX_PRIVATE);
> -      newval = atomic_load_relaxed (&self->cancelhandling);
> -    }
> +  __do_cancel (PTHREAD_CANCELED);
>  }
> -libc_hidden_def (__pthread_disable_asynccancel)
> diff --git a/nptl/cleanup_defer.c b/nptl/cleanup_defer.c
> index eef87f9a9c..d04227722b 100644
> --- a/nptl/cleanup_defer.c
> +++ b/nptl/cleanup_defer.c
> @@ -82,10 +82,7 @@ ___pthread_unregister_cancel_restore (__pthread_unwind_buf_t *buf)
>  						    &cancelhandling, newval));
>  
>        if (cancel_enabled_and_canceled (cancelhandling))
> -	{
> -	  self->result = PTHREAD_CANCELED;
> -	  __do_cancel ();
> -	}
> +	__do_cancel (PTHREAD_CANCELED);
>      }
>  }
>  versioned_symbol (libc, ___pthread_unregister_cancel_restore,
> diff --git a/nptl/descr-const.sym b/nptl/descr-const.sym
> new file mode 100644
> index 0000000000..8608248354
> --- /dev/null
> +++ b/nptl/descr-const.sym
> @@ -0,0 +1,6 @@
> +#include <tls.h>
> +
> +-- Not strictly offsets, these values are using thread cancellation by arch
> +-- specific cancel entrypoint.
> +TCB_CANCELED_BIT	 CANCELED_BIT
> +TCB_CANCELED_BITMASK	 CANCELED_BITMASK
> diff --git a/nptl/descr.h b/nptl/descr.h
> index f8b5ac7c22..142470f3f3 100644
> --- a/nptl/descr.h
> +++ b/nptl/descr.h
> @@ -415,6 +415,24 @@ struct pthread
>    (sizeof (struct pthread) - offsetof (struct pthread, end_padding))
>  } __attribute ((aligned (TCB_ALIGNMENT)));
>  
> +static inline bool
> +cancel_enabled (int value)
> +{
> +  return (value & CANCELSTATE_BITMASK) == 0;
> +}
> +
> +static inline bool
> +cancel_async_enabled (int value)
> +{
> +  return (value & CANCELTYPE_BITMASK) != 0;
> +}
> +
> +static inline bool
> +cancel_exiting (int value)
> +{
> +  return (value & EXITING_BITMASK) != 0;
> +}
> +
>  static inline bool
>  cancel_enabled_and_canceled (int value)
>  {
> diff --git a/nptl/libc-cleanup.c b/nptl/libc-cleanup.c
> index 4c7bcda302..252006060a 100644
> --- a/nptl/libc-cleanup.c
> +++ b/nptl/libc-cleanup.c
> @@ -69,10 +69,7 @@ __libc_cleanup_pop_restore (struct _pthread_cleanup_buffer *buffer)
>  						    &cancelhandling, newval));
>  
>        if (cancel_enabled_and_canceled (cancelhandling))
> -	{
> -	  self->result = PTHREAD_CANCELED;
> -	  __do_cancel ();
> -	}
> +	__do_cancel (PTHREAD_CANCELED);
>      }
>  }
>  libc_hidden_def (__libc_cleanup_pop_restore)
> diff --git a/nptl/pthread_cancel.c b/nptl/pthread_cancel.c
> index 87c9ef69ad..fc5ca8b3d4 100644
> --- a/nptl/pthread_cancel.c
> +++ b/nptl/pthread_cancel.c
> @@ -23,6 +23,7 @@
>  #include <sysdep.h>
>  #include <unistd.h>
>  #include <unwind-link.h>
> +#include <cancellation-pc-check.h>
>  #include <stdio.h>
>  #include <gnu/lib-names.h>
>  #include <sys/single_threaded.h>
> @@ -40,31 +41,16 @@ sigcancel_handler (int sig, siginfo_t *si, void *ctx)
>        || si->si_code != SI_TKILL)
>      return;
>  
> +  /* Check if asynchronous cancellation mode is set or if interrupted
> +     instruction pointer falls within the cancellable syscall bridge.  For
> +     interruptable syscalls with external side-effects (i.e. partial reads),
> +     the kernel  will set the IP to after __syscall_cancel_arch_end, thus

s/kernel  will/kernel will/g

> +     disabling the cancellation and allowing the process to handle such
> +     conditions.  */
>    struct pthread *self = THREAD_SELF;
> -
>    int oldval = atomic_load_relaxed (&self->cancelhandling);
> -  while (1)
> -    {
> -      /* We are canceled now.  When canceled by another thread this flag
> -	 is already set but if the signal is directly send (internally or
> -	 from another process) is has to be done here.  */
> -      int newval = oldval | CANCELING_BITMASK | CANCELED_BITMASK;
> -
> -      if (oldval == newval || (oldval & EXITING_BITMASK) != 0)
> -	/* Already canceled or exiting.  */
> -	break;
> -
> -      if (atomic_compare_exchange_weak_acquire (&self->cancelhandling,
> -						&oldval, newval))
> -	{
> -	  self->result = PTHREAD_CANCELED;
> -
> -	  /* Make sure asynchronous cancellation is still enabled.  */
> -	  if ((oldval & CANCELTYPE_BITMASK) != 0)
> -	    /* Run the registered destructors and terminate the thread.  */
> -	    __do_cancel ();
> -	}
> -    }
> +  if (cancel_async_enabled (oldval) || cancellation_pc_check (ctx))
> +    __syscall_do_cancel ();
>  }
>  
>  int
> @@ -106,15 +92,13 @@ __pthread_cancel (pthread_t th)
>    /* Some syscalls are never restarted after being interrupted by a signal
>       handler, regardless of the use of SA_RESTART (they always fail with
>       EINTR).  So pthread_cancel cannot send SIGCANCEL unless the cancellation
> -     is enabled and set as asynchronous (in this case the cancellation will
> -     be acted in the cancellation handler instead by the syscall wrapper).
> -     Otherwise the target thread is set as 'cancelling' (CANCELING_BITMASK)
> +     is enabled.
> +     In this case the target thread is set as 'cancelled' (CANCELED_BITMASK)
>       by atomically setting 'cancelhandling' and the cancelation will be acted
>       upon on next cancellation entrypoing in the target thread.
>  
> -     It also requires to atomically check if cancellation is enabled and
> -     asynchronous, so both cancellation state and type are tracked on
> -     'cancelhandling'.  */
> +     It also requires to atomically check if cancellation is enabled, so the
> +     state are also tracked on 'cancelhandling'.  */
>  
>    int result = 0;
>    int oldval = atomic_load_relaxed (&pd->cancelhandling);
> @@ -122,19 +106,17 @@ __pthread_cancel (pthread_t th)
>    do
>      {
>      again:
> -      newval = oldval | CANCELING_BITMASK | CANCELED_BITMASK;
> +      newval = oldval | CANCELED_BITMASK;
>        if (oldval == newval)
>  	break;
>  
> -      /* If the cancellation is handled asynchronously just send a
> -	 signal.  We avoid this if possible since it's more
> -	 expensive.  */
> -      if (cancel_enabled_and_canceled_and_async (newval))
> +      /* Only send the SIGANCEL signal is cancellation is enabled, since some

s/signal is/signal if/g

> +	 syscalls are never restarted even with SA_RESTART.  The signal
> +	 will act iff async cancellation is enabled.  */
> +      if (cancel_enabled (newval))
>  	{
> -	  /* Mark the cancellation as "in progress".  */
> -	  int newval2 = oldval | CANCELING_BITMASK;
>  	  if (!atomic_compare_exchange_weak_acquire (&pd->cancelhandling,
> -						     &oldval, newval2))
> +						     &oldval, newval))
>  	    goto again;
>  
>  	  if (pd == THREAD_SELF)
> @@ -143,9 +125,8 @@ __pthread_cancel (pthread_t th)
>  	       pthread_create, so the signal handler may not have been
>  	       set up for a self-cancel.  */
>  	    {
> -	      pd->result = PTHREAD_CANCELED;
> -	      if ((newval & CANCELTYPE_BITMASK) != 0)
> -		__do_cancel ();
> +	      if (cancel_async_enabled (newval))
> +		__do_cancel (PTHREAD_CANCELED);
>  	    }
>  	  else
>  	    /* The cancellation handler will take care of marking the
> @@ -154,19 +135,18 @@ __pthread_cancel (pthread_t th)
>  
>  	  break;
>  	}
> -
> -	/* A single-threaded process should be able to kill itself, since
> -	   there is nothing in the POSIX specification that says that it
> -	   cannot.  So we set multiple_threads to true so that cancellation
> -	   points get executed.  */
> -	THREAD_SETMEM (THREAD_SELF, header.multiple_threads, 1);
> -#ifndef TLS_MULTIPLE_THREADS_IN_TCB
> -	__libc_single_threaded_internal = 0;
> -#endif
>      }
>    while (!atomic_compare_exchange_weak_acquire (&pd->cancelhandling, &oldval,
>  						newval));
>  
> +  /* A single-threaded process should be able to kill itself, since there is
> +     nothing in the POSIX specification that says that it cannot.  So we set
> +     multiple_threads to true so that cancellation points get executed.  */
> +  THREAD_SETMEM (THREAD_SELF, header.multiple_threads, 1);
> +#ifndef TLS_MULTIPLE_THREADS_IN_TCB
> +  __libc_single_threaded_internal = 0;
> +#endif
> +
>    return result;
>  }
>  versioned_symbol (libc, __pthread_cancel, pthread_cancel, GLIBC_2_34);
> diff --git a/nptl/pthread_exit.c b/nptl/pthread_exit.c
> index 9f48dcc5d0..125f44b78a 100644
> --- a/nptl/pthread_exit.c
> +++ b/nptl/pthread_exit.c
> @@ -31,9 +31,7 @@ __pthread_exit (void *value)
>                      " must be installed for pthread_exit to work\n");
>    }
>  
> -  THREAD_SETMEM (THREAD_SELF, result, value);
> -
> -  __do_cancel ();
> +  __do_cancel (value);
>  }
>  libc_hidden_def (__pthread_exit)
>  weak_alias (__pthread_exit, pthread_exit)
> diff --git a/nptl/pthread_setcancelstate.c b/nptl/pthread_setcancelstate.c
> index 7f81d812dd..ffb482a83d 100644
> --- a/nptl/pthread_setcancelstate.c
> +++ b/nptl/pthread_setcancelstate.c
> @@ -48,7 +48,7 @@ __pthread_setcancelstate (int state, int *oldstate)
>  						&oldval, newval))
>  	{
>  	  if (cancel_enabled_and_canceled_and_async (newval))
> -	    __do_cancel ();
> +	    __do_cancel (PTHREAD_CANCELED);
>  
>  	  break;
>  	}
> diff --git a/nptl/pthread_setcanceltype.c b/nptl/pthread_setcanceltype.c
> index 7dfeee4364..9fe7c0029b 100644
> --- a/nptl/pthread_setcanceltype.c
> +++ b/nptl/pthread_setcanceltype.c
> @@ -48,7 +48,7 @@ __pthread_setcanceltype (int type, int *oldtype)
>  	  if (cancel_enabled_and_canceled_and_async (newval))
>  	    {
>  	      THREAD_SETMEM (self, result, PTHREAD_CANCELED);
> -	      __do_cancel ();
> +	      __do_cancel (PTHREAD_CANCELED);
>  	    }
>  
>  	  break;
> diff --git a/nptl/pthread_testcancel.c b/nptl/pthread_testcancel.c
> index 38b5a2d4bc..b574c0f001 100644
> --- a/nptl/pthread_testcancel.c
> +++ b/nptl/pthread_testcancel.c
> @@ -25,10 +25,7 @@ ___pthread_testcancel (void)
>    struct pthread *self = THREAD_SELF;
>    int cancelhandling = atomic_load_relaxed (&self->cancelhandling);
>    if (cancel_enabled_and_canceled (cancelhandling))
> -    {
> -      self->result = PTHREAD_CANCELED;
> -      __do_cancel ();
> -    }
> +    __do_cancel (PTHREAD_CANCELED);
>  }
>  versioned_symbol (libc, ___pthread_testcancel, pthread_testcancel, GLIBC_2_34);
>  libc_hidden_ver (___pthread_testcancel, __pthread_testcancel)
> diff --git a/nptl/tst-cancel31.c b/nptl/tst-cancel31.c
> new file mode 100644
> index 0000000000..4e93cc5ae1
> --- /dev/null
> +++ b/nptl/tst-cancel31.c
> @@ -0,0 +1,100 @@
> +/* Check side-effect act for cancellable syscalls (BZ #12683).

Suggest:

Verify side-effects of cancellable syscalls (BZ #12683).

> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <https://www.gnu.org/licenses/>.  */
> +
> +/* This testcase checks if there is resource leakage if the syscall has
> +   returned from kernelspace, but before userspace saves the return
> +   value.  The 'leaker' thread should be able to close the file descriptor
> +   if the resource is already allocated, meaning that if the cancellation
> +   signal arrives *after* the open syscal return from kernel, the
> +   side-effect should be visible to application.  */
> +
> +#include <sys/types.h>
> +#include <sys/stat.h>
> +#include <fcntl.h>
> +#include <unistd.h>
> +#include <stdlib.h>
> +
> +#include <support/xunistd.h>
> +#include <support/xthread.h>
> +#include <support/check.h>
> +#include <support/temp_file.h>
> +#include <support/support.h>
> +#include <support/descriptors.h>
> +
> +static void *
> +writeopener (void *arg)
> +{
> +  int fd;
> +  for (;;)
> +    {
> +      fd = open (arg, O_WRONLY);
> +      xclose (fd);
> +    }
> +  return NULL;
> +}
> +
> +static void *
> +leaker (void *arg)
> +{
> +  int fd = open (arg, O_RDONLY);
> +  TEST_VERIFY_EXIT (fd > 0);
> +  pthread_setcancelstate (PTHREAD_CANCEL_DISABLE, 0);

OK. Perfect, after verifying the fd is valid you disable cancellation and
clean up the resource. This is required because close is a cancellation point.

> +  xclose (fd);
> +  return NULL;
> +}
> +
> +static int
> +do_test (void)
> +{
> +  enum {
> +    iter_count = 1000
> +  };
> +
> +  char *dir = support_create_temp_directory ("tst-cancel28");
> +  char *name = xasprintf ("%s/fifo", dir);
> +  TEST_COMPARE (mkfifo (name, 0600), 0);
> +  add_temp_file (name);
> +
> +  struct support_descriptors *descrs = support_descriptors_list ();
> +
> +  srand (1);
> +
> +  xpthread_create (NULL, writeopener, name);
> +  for (int i = 0; i < iter_count; i++)
> +    {
> +      pthread_t td = xpthread_create (NULL, leaker, name);
> +      struct timespec ts =
> +	{ .tv_nsec = rand () % 100000, .tv_sec = 0 };
> +      nanosleep (&ts, NULL);
> +      /* Ignore pthread_cancel result because it might be the
> +	 case when pthread_cancel is called when thread is already
> +	 exited.  */
> +      pthread_cancel (td);
> +      xpthread_join (td);
> +    }
> +
> +  support_descriptors_check (descrs);
> +
> +  support_descriptors_free (descrs);
> +
> +  free (name);
> +
> +  return 0;
> +}
> +
> +#include <support/test-driver.c>
> diff --git a/sysdeps/generic/syscall_types.h b/sysdeps/generic/syscall_types.h
> new file mode 100644
> index 0000000000..2ddeaa2b5f
> --- /dev/null
> +++ b/sysdeps/generic/syscall_types.h
> @@ -0,0 +1,25 @@
> +/* Types and macros used for syscall issuing.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <https://www.gnu.org/licenses/>.  */
> +
> +#ifndef _SYSCALL_TYPES_H
> +#define _SYSCALL_TYPES_H
> +
> +typedef long int __syscall_arg_t;
> +#define __SSC(__x) ((__syscall_arg_t) (__x))
> +
> +#endif
> diff --git a/sysdeps/nptl/cancellation-pc-check.h b/sysdeps/nptl/cancellation-pc-check.h
> new file mode 100644
> index 0000000000..cb38ad6819
> --- /dev/null
> +++ b/sysdeps/nptl/cancellation-pc-check.h
> @@ -0,0 +1,54 @@
> +/* Architecture specific code for pthread cancellation handling.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#ifndef _NPTL_CANCELLATION_PC_CHECK
> +#define _NPTL_CANCELLATION_PC_CHECK
> +
> +#include <sigcontextinfo.h>
> +
> +/* For syscalls with side-effects (e.g read that might return partial read),
> +   the kernel cannot restart the syscall when interrupted by a signal, it must
> +   return from the call with whatever partial result.  In this case, the saved
> +   program counter is set just after the syscall instruction, so the SIGCANCEL
> +   handler should not act on cancellation.
> +
> +   The __syscall_cancel_arch function, used for all cancellable syscalls,
> +   contains two extra markers, __syscall_cancel_arch_start and
> +   __syscall_cancel_arch_end.  The former points to just before the initial
> +   conditional branch that checks if the thread has received a cancellation
> +   request, while former points to the instruction after the one responsible
> +   to issue the syscall.
> +
> +   The function check if the program counter (PC) from ucontext_t CTX is
> +   within the start and then end boundary from the __syscall_cancel_arch
> +   bridge.  Return TRUE if the PC is within the boundary, meaning the
> +   syscall does not have any side effects; or FALSE otherwise.  */
> +
> +static __always_inline bool
> +cancellation_pc_check (void *ctx)
> +{
> +  /* Both are defined in syscall_cancel.S.  */
> +  extern const char __syscall_cancel_arch_start[1];
> +  extern const char __syscall_cancel_arch_end[1];
> +
> +  uintptr_t pc = sigcontext_get_pc (ctx);
> +  return pc >= (uintptr_t) __syscall_cancel_arch_start
> +	 && pc < (uintptr_t) __syscall_cancel_arch_end;

My opinion is that this needs a bit to support discontiguous handling of
the syscall range.

The bit is set upon entry to region 3/4 and cleared upon exit. If we have
to expand it we can set it at region 2 and clear it at region 5. The
interesting question is: What happens if you [sig]longjump out of the
handler in the middle of doing the syscall. Firstly you are always stuck
in asynchronous signal state, so you can only call AS-safe functions.
You don't know the results of the kernel action so you might have leaked
a resource, but that's an application fault because it used [sig]longjmp.
The safe option is to leave the bit set since you are still in the middle
of the syscall and leaked a resources. You can zero the bit on the next
call to disable cancellation.

Also it needs to be thread-specific memory so we don't have performance
issues.
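
To make the fallback concrete, the PC check could grow a branch along these
lines (again only a sketch; SYSCALL_CANCEL_INPROGRESS_BITMASK is the made-up
bit from above, and this deliberately leaves open how region 3 would be told
apart from region 4 on targets without a usable PC range):

static __always_inline bool
cancellation_pc_check (void *ctx)
{
  /* Both are defined in syscall_cancel.S.  */
  extern const char __syscall_cancel_arch_start[1];
  extern const char __syscall_cancel_arch_end[1];

  uintptr_t pc = sigcontext_get_pc (ctx);
  if (pc >= (uintptr_t) __syscall_cancel_arch_start
      && pc < (uintptr_t) __syscall_cancel_arch_end)
    return true;

  /* Non-contiguous entry path (vDSO, gateway page, emulator): consult
     the per-thread bit set/cleared around the indirect syscall.  */
  struct pthread *self = THREAD_SELF;
  int ch = atomic_load_relaxed (&self->cancelhandling);
  return (ch & SYSCALL_CANCEL_INPROGRESS_BITMASK) != 0;
}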

> +}
> +
> +#endif
> diff --git a/sysdeps/nptl/lowlevellock-futex.h b/sysdeps/nptl/lowlevellock-futex.h
> index 0392b5c04f..bd57913b6f 100644
> --- a/sysdeps/nptl/lowlevellock-futex.h
> +++ b/sysdeps/nptl/lowlevellock-futex.h
> @@ -21,7 +21,6 @@
>  
>  #ifndef __ASSEMBLER__
>  # include <sysdep.h>
> -# include <sysdep-cancel.h>
>  # include <kernel-features.h>
>  #endif
>  
> @@ -120,21 +119,10 @@
>  		     nr_wake, nr_move, mutex, val)
>  
>  /* Like lll_futex_wait, but acting as a cancellable entrypoint.  */
> -# define lll_futex_wait_cancel(futexp, val, private) \
> -  ({                                                                   \
> -    int __oldtype = LIBC_CANCEL_ASYNC ();			       \
> -    long int __err = lll_futex_wait (futexp, val, LLL_SHARED);	       \
> -    LIBC_CANCEL_RESET (__oldtype);				       \
> -    __err;							       \
> -  })
> -
> -/* Like lll_futex_timed_wait, but acting as a cancellable entrypoint.  */
> -# define lll_futex_timed_wait_cancel(futexp, val, timeout, private) \
> -  ({									   \
> -    int __oldtype = LIBC_CANCEL_ASYNC ();			       	   \
> -    long int __err = lll_futex_timed_wait (futexp, val, timeout, private); \
> -    LIBC_CANCEL_RESET (__oldtype);					   \
> -    __err;								   \
> +# define lll_futex_wait_cancel(futexp, val, private)			\
> +  ({									\
> +     int __op = __lll_private_flag (FUTEX_WAIT, private);		\
> +     INTERNAL_SYSCALL_CANCEL (futex, futexp, __op, val, NULL);		\
>    })
>  
>  #endif  /* !__ASSEMBLER__  */
> diff --git a/sysdeps/nptl/pthreadP.h b/sysdeps/nptl/pthreadP.h
> index 54f9198681..a9d351b9b8 100644
> --- a/sysdeps/nptl/pthreadP.h
> +++ b/sysdeps/nptl/pthreadP.h
> @@ -261,10 +261,12 @@ libc_hidden_proto (__pthread_unregister_cancel)
>  /* Called when a thread reacts on a cancellation request.  */
>  static inline void
>  __attribute ((noreturn, always_inline))
> -__do_cancel (void)
> +__do_cancel (void *result)
>  {
>    struct pthread *self = THREAD_SELF;
>  
> +  self->result = result;
> +
>    /* Make sure we get no more cancellations.  */
>    atomic_fetch_or_relaxed (&self->cancelhandling, EXITING_BITMASK);
>  
> @@ -272,6 +274,13 @@ __do_cancel (void)
>  		    THREAD_GETMEM (self, cleanup_jmp_buf));
>  }
>  
> +extern long int __syscall_cancel_arch (volatile int *, __syscall_arg_t nr,
> +     __syscall_arg_t arg1, __syscall_arg_t arg2, __syscall_arg_t arg3,
> +     __syscall_arg_t arg4, __syscall_arg_t arg5, __syscall_arg_t arg6
> +     __SYSCALL_CANCEL7_ARCH_ARG_DEF) attribute_hidden;
> +
> +extern _Noreturn void __syscall_do_cancel (void) attribute_hidden;
> +
>  
>  /* Internal prototypes.  */
>  
> diff --git a/sysdeps/powerpc/powerpc32/sysdep.h b/sysdeps/powerpc/powerpc32/sysdep.h
> index 095a726765..df67e3516a 100644
> --- a/sysdeps/powerpc/powerpc32/sysdep.h
> +++ b/sysdeps/powerpc/powerpc32/sysdep.h
> @@ -104,6 +104,9 @@ GOT_LABEL:			;					      \
>  # define JUMPTARGET(name) name
>  #endif
>  
> +#define TAIL_CALL_NO_RETURN(__func) \
> +    b __func@local
> +
>  #if defined SHARED && defined PIC && !defined NO_HIDDEN
>  # undef HIDDEN_JUMPTARGET
>  # define HIDDEN_JUMPTARGET(name) __GI_##name##@local
> diff --git a/sysdeps/powerpc/powerpc64/sysdep.h b/sysdeps/powerpc/powerpc64/sysdep.h
> index ce92d8b3d2..1815131dc2 100644
> --- a/sysdeps/powerpc/powerpc64/sysdep.h
> +++ b/sysdeps/powerpc/powerpc64/sysdep.h
> @@ -352,6 +352,25 @@ LT_LABELSUFFIX(name,_name_end): ; \
>    ENTRY (name);					\
>    DO_CALL (SYS_ify (syscall_name))
>  
> +#ifdef SHARED
> +# define TAIL_CALL_NO_RETURN(__func) \
> +    b JUMPTARGET(__func)
> +#else
> +# define TAIL_CALL_NO_RETURN(__func) \
> +    .ifdef .Local ## __func; \
> +    b .Local ## __func; \
> +    .else; \
> +.Local ## __func: \
> +    mflr 0; \
> +    std 0,FRAME_LR_SAVE(1); \
> +    stdu 1,-FRAME_MIN_SIZE(1); \
> +    cfi_adjust_cfa_offset(FRAME_MIN_SIZE); \
> +    cfi_offset(lr,FRAME_LR_SAVE); \
> +    bl JUMPTARGET(__func); \
> +    nop; \
> +    .endif
> +#endif
> +
>  #ifdef SHARED
>  #define TAIL_CALL_SYSCALL_ERROR \
>      b JUMPTARGET (NOTOC (__syscall_error))
> diff --git a/sysdeps/pthread/tst-cancel2.c b/sysdeps/pthread/tst-cancel2.c
> index 45de68241f..ac77aca4be 100644
> --- a/sysdeps/pthread/tst-cancel2.c
> +++ b/sysdeps/pthread/tst-cancel2.c
> @@ -32,6 +32,10 @@ tf (void *arg)
>    char buf[100000];
>  
>    while (write (fd[1], buf, sizeof (buf)) > 0);
> +  /* The write can return -1/EPIPE if the pipe was closed before the
> +     thread calls write, which signals a side-effect that must be
> +     signaled to the thread.  */
> +  pthread_testcancel ();
>  
>    return (void *) 42l;
>  }
> diff --git a/sysdeps/sh/sysdep.h b/sysdeps/sh/sysdep.h
> index 003b05fa25..60fd06188c 100644
> --- a/sysdeps/sh/sysdep.h
> +++ b/sysdeps/sh/sysdep.h
> @@ -24,6 +24,7 @@
>  
>  #define ALIGNARG(log2) log2
>  #define ASM_SIZE_DIRECTIVE(name) .size name,.-name
> +#define L(label) .L##label
>  
>  #ifdef SHARED
>  #define PLTJMP(_x)	_x##@PLT
> diff --git a/sysdeps/unix/sysdep.h b/sysdeps/unix/sysdep.h
> index 1ba4de99db..1cb1f1d9b7 100644
> --- a/sysdeps/unix/sysdep.h
> +++ b/sysdeps/unix/sysdep.h
> @@ -24,6 +24,9 @@
>  #define	SYSCALL__(name, args)	PSEUDO (__##name, name, args)
>  #define	SYSCALL(name, args)	PSEUDO (name, name, args)
>  
> +#ifndef __ASSEMBLER__
> +# include <errno.h>
> +
>  #define __SYSCALL_CONCAT_X(a,b)     a##b
>  #define __SYSCALL_CONCAT(a,b)       __SYSCALL_CONCAT_X (a, b)
>  
> @@ -108,42 +111,148 @@
>  #define INLINE_SYSCALL_CALL(...) \
>    __INLINE_SYSCALL_DISP (__INLINE_SYSCALL, __VA_ARGS__)
>  
> -#if IS_IN (rtld)
> -/* All cancellation points are compiled out in the dynamic loader.  */
> -# define NO_SYSCALL_CANCEL_CHECKING 1
> +#define __INTERNAL_SYSCALL_NCS0(name) \
> +  INTERNAL_SYSCALL_NCS (name, 0)
> +#define __INTERNAL_SYSCALL_NCS1(name, a1) \
> +  INTERNAL_SYSCALL_NCS (name, 1, a1)
> +#define __INTERNAL_SYSCALL_NCS2(name, a1, a2) \
> +  INTERNAL_SYSCALL_NCS (name, 2, a1, a2)
> +#define __INTERNAL_SYSCALL_NCS3(name, a1, a2, a3) \
> +  INTERNAL_SYSCALL_NCS (name, 3, a1, a2, a3)
> +#define __INTERNAL_SYSCALL_NCS4(name, a1, a2, a3, a4) \
> +  INTERNAL_SYSCALL_NCS (name, 4, a1, a2, a3, a4)
> +#define __INTERNAL_SYSCALL_NCS5(name, a1, a2, a3, a4, a5) \
> +  INTERNAL_SYSCALL_NCS (name, 5, a1, a2, a3, a4, a5)
> +#define __INTERNAL_SYSCALL_NCS6(name, a1, a2, a3, a4, a5, a6) \
> +  INTERNAL_SYSCALL_NCS (name, 6, a1, a2, a3, a4, a5, a6)
> +#define __INTERNAL_SYSCALL_NCS7(name, a1, a2, a3, a4, a5, a6, a7) \
> +  INTERNAL_SYSCALL_NCS (name, 7, a1, a2, a3, a4, a5, a6, a7)
> +
> +/* Issue a syscall defined by syscall number plus any other argument required.
> +   It is similar to INTERNAL_SYSCALL_NCS macro, but without the need to pass
> +   the expected argument number as third parameter.  */
> +#define INTERNAL_SYSCALL_NCS_CALL(...) \
> +  __INTERNAL_SYSCALL_DISP (__INTERNAL_SYSCALL_NCS, __VA_ARGS__)
> +
> +/* Cancellation macros.  */
> +#include <syscall_types.h>
> +
> +/* Adjust both the __syscall_cancel and the SYSCALL_CANCEL macro to support
> +   7 arguments instead of default 6 (curently only mip32).  It avoid add
> +   the requirement to each architecture to support 7 argument macros
> +   {INTERNAL,INLINE}_SYSCALL.  */
> +#ifdef HAVE_CANCELABLE_SYSCALL_WITH_7_ARGS
> +# define __SYSCALL_CANCEL7_ARG_DEF	__syscall_arg_t a7,
> +# define __SYSCALL_CANCEL7_ARCH_ARG_DEF ,__syscall_arg_t a7
> +# define __SYSCALL_CANCEL7_ARG		0,
> +# define __SYSCALL_CANCEL7_ARG7		a7,
> +# define __SYSCALL_CANCEL7_ARCH_ARG7	, a7
>  #else
> -# define NO_SYSCALL_CANCEL_CHECKING SINGLE_THREAD_P
> +# define __SYSCALL_CANCEL7_ARG_DEF
> +# define __SYSCALL_CANCEL7_ARCH_ARG_DEF
> +# define __SYSCALL_CANCEL7_ARG
> +# define __SYSCALL_CANCEL7_ARG7
> +# define __SYSCALL_CANCEL7_ARCH_ARG7
>  #endif
> +long int __internal_syscall_cancel (__syscall_arg_t a1, __syscall_arg_t a2,
> +				    __syscall_arg_t a3, __syscall_arg_t a4,
> +				    __syscall_arg_t a5, __syscall_arg_t a6,
> +				    __SYSCALL_CANCEL7_ARG_DEF
> +				    __syscall_arg_t nr) attribute_hidden;
>  
> -#define SYSCALL_CANCEL(...) \
> -  ({									     \
> -    long int sc_ret;							     \
> -    if (NO_SYSCALL_CANCEL_CHECKING)					     \
> -      sc_ret = INLINE_SYSCALL_CALL (__VA_ARGS__); 			     \
> -    else								     \
> -      {									     \
> -	int sc_cancel_oldtype = LIBC_CANCEL_ASYNC ();			     \
> -	sc_ret = INLINE_SYSCALL_CALL (__VA_ARGS__);			     \
> -        LIBC_CANCEL_RESET (sc_cancel_oldtype);				     \
> -      }									     \
> -    sc_ret;								     \
> -  })
> +long int __syscall_cancel (__syscall_arg_t arg1, __syscall_arg_t arg2,
> +			   __syscall_arg_t arg3, __syscall_arg_t arg4,
> +			   __syscall_arg_t arg5, __syscall_arg_t arg6,
> +			   __SYSCALL_CANCEL7_ARG_DEF
> +			   __syscall_arg_t nr) attribute_hidden;
>  
> -/* Issue a syscall defined by syscall number plus any other argument
> -   required.  Any error will be returned unmodified (including errno).  */
> -#define INTERNAL_SYSCALL_CANCEL(...) \
> -  ({									     \
> -    long int sc_ret;							     \
> -    if (NO_SYSCALL_CANCEL_CHECKING) 					     \
> -      sc_ret = INTERNAL_SYSCALL_CALL (__VA_ARGS__); 			     \
> -    else								     \
> -      {									     \
> -	int sc_cancel_oldtype = LIBC_CANCEL_ASYNC ();			     \
> -	sc_ret = INTERNAL_SYSCALL_CALL (__VA_ARGS__);			     \
> -        LIBC_CANCEL_RESET (sc_cancel_oldtype);				     \
> -      }									     \
> -    sc_ret;								     \
> -  })
> +#define __SYSCALL_CANCEL0(name)						\
> +  __syscall_cancel (0, 0, 0, 0, 0, 0, __SYSCALL_CANCEL7_ARG __NR_##name)
> +#define __SYSCALL_CANCEL1(name, a1)					\
> +  __syscall_cancel (__SSC (a1), 0, 0, 0, 0, 0,				\
> +		    __SYSCALL_CANCEL7_ARG __NR_##name)
> +#define __SYSCALL_CANCEL2(name, a1, a2) \
> +  __syscall_cancel (__SSC (a1), __SSC (a2), 0, 0, 0, 0,			\
> +		    __SYSCALL_CANCEL7_ARG __NR_##name)
> +#define __SYSCALL_CANCEL3(name, a1, a2, a3) \
> +  __syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), 0, 0, 0,	\
> +		    __SYSCALL_CANCEL7_ARG __NR_##name)
> +#define __SYSCALL_CANCEL4(name, a1, a2, a3, a4) \
> +  __syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3),			\
> +		    __SSC(a4), 0, 0, __SYSCALL_CANCEL7_ARG __NR_##name)
> +#define __SYSCALL_CANCEL5(name, a1, a2, a3, a4, a5) \
> +  __syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), __SSC(a4),	\
> +		    __SSC (a5), 0, __SYSCALL_CANCEL7_ARG __NR_##name)
> +#define __SYSCALL_CANCEL6(name, a1, a2, a3, a4, a5, a6) \
> +  __syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), __SSC (a4),	\
> +		    __SSC (a5), __SSC (a6), __SYSCALL_CANCEL7_ARG	\
> +		    __NR_##name)
> +#define __SYSCALL_CANCEL7(name, a1, a2, a3, a4, a5, a6, a7)		\
> +  __syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), __SSC (a4),	\
> +		    __SSC (a5), __SSC (a6), __SSC (a7), __NR_##name)
> +
> +#define __SYSCALL_CANCEL_NARGS_X(a,b,c,d,e,f,g,h,n,...) n
> +#define __SYSCALL_CANCEL_NARGS(...) \
> +  __SYSCALL_CANCEL_NARGS_X (__VA_ARGS__,7,6,5,4,3,2,1,0,)
> +#define __SYSCALL_CANCEL_CONCAT_X(a,b)     a##b
> +#define __SYSCALL_CANCEL_CONCAT(a,b)       __SYSCALL_CANCEL_CONCAT_X (a, b)
> +#define __SYSCALL_CANCEL_DISP(b,...) \
> +  __SYSCALL_CANCEL_CONCAT (b,__SYSCALL_CANCEL_NARGS(__VA_ARGS__))(__VA_ARGS__)
> +
> +/* Issue a cancellable syscall defined first argument plus any other argument
> +   required.  If and error occurs its value, the macro returns -1 and sets
> +   errno accordingly.  */
> +#define __SYSCALL_CANCEL_CALL(...) \
> +  __SYSCALL_CANCEL_DISP (__SYSCALL_CANCEL, __VA_ARGS__)
> +
> +#define __INTERNAL_SYSCALL_CANCEL0(name)				\
> +  __internal_syscall_cancel (0, 0, 0, 0, 0, 0, __SYSCALL_CANCEL7_ARG	\
> +			     __NR_##name)
> +#define __INTERNAL_SYSCALL_CANCEL1(name, a1)				\
> +  __internal_syscall_cancel (__SSC (a1), 0, 0, 0, 0, 0,			\
> +			     __SYSCALL_CANCEL7_ARG __NR_##name)
> +#define __INTERNAL_SYSCALL_CANCEL2(name, a1, a2)			\
> +  __internal_syscall_cancel (__SSC (a1), __SSC (a2), 0, 0, 0, 0,	\
> +			     __SYSCALL_CANCEL7_ARG __NR_##name)
> +#define __INTERNAL_SYSCALL_CANCEL3(name, a1, a2, a3)			\
> +  __internal_syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), 0,	\
> +			     0, 0, __SYSCALL_CANCEL7_ARG __NR_##name)
> +#define __INTERNAL_SYSCALL_CANCEL4(name, a1, a2, a3, a4)		\
> +  __internal_syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3),	\
> +			     __SSC(a4), 0, 0,				\
> +			     __SYSCALL_CANCEL7_ARG __NR_##name)
> +#define __INTERNAL_SYSCALL_CANCEL5(name, a1, a2, a3, a4, a5)		\
> +  __internal_syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3),	\
> +			     __SSC(a4), __SSC (a5), 0,			\
> +			     __SYSCALL_CANCEL7_ARG __NR_##name)
> +#define __INTERNAL_SYSCALL_CANCEL6(name, a1, a2, a3, a4, a5, a6)	\
> +  __internal_syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3),	\
> +			     __SSC (a4), __SSC (a5), __SSC (a6),	\
> +			     __SYSCALL_CANCEL7_ARG __NR_##name)
> +#define __INTERNAL_SYSCALL_CANCEL7(name, a1, a2, a3, a4, a5, a6, a7) \
> +  __internal_syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3),     \
> +			     __SSC (a4), __SSC (a5), __SSC (a6),     \
> +			     __SSC (a7), __NR_##name)
> +
> +/* Issue a cancellable syscall defined by syscall number NAME plus any other
> +   arguments required.  If an error occurs its value is returned as a negative
> +   number unmodified and errno is not set.  */
> +#define __INTERNAL_SYSCALL_CANCEL_CALL(...) \
> +  __SYSCALL_CANCEL_DISP (__INTERNAL_SYSCALL_CANCEL, __VA_ARGS__)
> +
> +#if IS_IN (rtld)
> +/* The loader does not need to handle thread cancellation, so use direct
> +   syscalls instead.  */
> +# define INTERNAL_SYSCALL_CANCEL(...) INTERNAL_SYSCALL_CALL(__VA_ARGS__)
> +# define SYSCALL_CANCEL(...)          INLINE_SYSCALL_CALL (__VA_ARGS__)
> +#else
> +# define INTERNAL_SYSCALL_CANCEL(...) \
> +  __INTERNAL_SYSCALL_CANCEL_CALL (__VA_ARGS__)
> +# define SYSCALL_CANCEL(...) \
> +  __SYSCALL_CANCEL_CALL (__VA_ARGS__)
> +#endif
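For reference, a hedged usage sketch of how I read the two entry points, mirroring
the existing INLINE/INTERNAL split (my own example, assuming the usual
INTERNAL_SYSCALL_ERROR_P / INTERNAL_SYSCALL_ERRNO helpers apply unchanged):

    /* Sets errno and returns -1 on failure, like INLINE_SYSCALL_CALL.  */
    ssize_t ret = SYSCALL_CANCEL (write, fd, buf, len);

    /* Returns errors as negative values with errno untouched, like
       INTERNAL_SYSCALL_CALL.  */
    long int r = INTERNAL_SYSCALL_CANCEL (write, fd, buf, len);
    if (INTERNAL_SYSCALL_ERROR_P (r))
      return INTERNAL_SYSCALL_ERRNO (r);
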
> +
> +#endif /* __ASSEMBLER__  */
>  
>  /* Machine-dependent sysdep.h files are expected to define the macro
>     PSEUDO (function_name, syscall_name) to emit assembly code to define the
> diff --git a/sysdeps/unix/sysv/linux/aarch64/syscall_cancel.S b/sysdeps/unix/sysv/linux/aarch64/syscall_cancel.S
> new file mode 100644
> index 0000000000..e91a431b36
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/aarch64/syscall_cancel.S
> @@ -0,0 +1,59 @@
> +/* Cancellable syscall wrapper.  Linux/AArch64 version.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#include <sysdep.h>
> +#include <descr-const.h>
> +
> +/* long int [x0] __syscall_cancel_arch (int *cancelhandling [x0],
> +					long int nr   [x1],
> +					long int arg1 [x2],
> +					long int arg2 [x3],
> +					long int arg3 [x4],
> +					long int arg4 [x5],
> +					long int arg5 [x6],
> +					long int arg6 [x7])  */
> +
> +ENTRY (__syscall_cancel_arch)
> +
> +	.globl __syscall_cancel_arch_start
> +__syscall_cancel_arch_start:
> +
> +	/* if (*cancelhandling & CANCELED_BITMASK)
> +	     __syscall_do_cancel()  */
> +	ldr	w0, [x0]
> +	tbnz    w0, TCB_CANCELED_BIT, 1f
> +
> +	/* Issue a 6 argument syscall, the nr [x1] being the syscall
> +	   number.  */
> +	mov	x8, x1
> +	mov	x0, x2
> +	mov	x1, x3
> +	mov	x2, x4
> +	mov	x3, x5
> +	mov	x4, x6
> +	mov	x5, x7
> +	svc	0x0
> +
> +	.globl __syscall_cancel_arch_end
> +__syscall_cancel_arch_end:
> +	ret
> +
> +1:
> +	b	__syscall_do_cancel
> +
> +END (__syscall_cancel_arch)
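To keep the per-architecture review manageable, here is a minimal C sketch of the
contract every syscall_cancel.S implements.  The two global labels cannot be placed
precisely from C, which is why each port needs assembly; syscall6 below is a
stand-in name I made up for the raw 6-argument syscall, TCB_CANCELED_BITMASK is the
descr-const.h constant the assembly files also use, and the whole thing is
illustrative only:

    extern long int syscall6 (long int nr, long int a1, long int a2,
                              long int a3, long int a4, long int a5,
                              long int a6);
    extern void __syscall_do_cancel (void) __attribute__ ((__noreturn__));

    long int
    __syscall_cancel_arch_sketch (int *cancelhandling, long int nr,
                                  long int a1, long int a2, long int a3,
                                  long int a4, long int a5, long int a6)
    {
      /* __syscall_cancel_arch_start sits here: a pending cancellation must
         be acted on before the syscall instruction or not at all.  */
      if (*cancelhandling & TCB_CANCELED_BITMASK)
        __syscall_do_cancel ();        /* Never returns.  */
      return syscall6 (nr, a1, a2, a3, a4, a5, a6);
      /* __syscall_cancel_arch_end sits just after the syscall instruction:
         a PC at or past it means the kernel already returned and the call
         may have had side effects.  */
    }

The SIGCANCEL handler then acts on cancellation only when the interrupted PC falls
inside [__syscall_cancel_arch_start, __syscall_cancel_arch_end), which is exactly
what the cancellation-pc-check.h helpers further down test.
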
> diff --git a/sysdeps/unix/sysv/linux/alpha/syscall_cancel.S b/sysdeps/unix/sysv/linux/alpha/syscall_cancel.S
> new file mode 100644
> index 0000000000..377eef48be
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/alpha/syscall_cancel.S
> @@ -0,0 +1,80 @@
> +/* Cancellable syscall wrapper.  Linux/alpha version.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#include <sysdep.h>
> +#include <descr-const.h>
> +
> +/* long int __syscall_cancel_arch (int *ch,
> +				   __syscall_arg_t nr,
> +				   __syscall_arg_t arg1,
> +				   __syscall_arg_t arg2,
> +				   __syscall_arg_t arg3,
> +				   __syscall_arg_t arg4,
> +				   __syscall_arg_t arg5,
> +				   __syscall_arg_t arg6)  */
> +
> +	.set noreorder
> +	.set noat
> +	.set nomacro
> +ENTRY (__syscall_cancel_arch)
> +	.frame	sp, 16, ra, 0
> +	.mask	0x4000000,-16
> +	cfi_startproc
> +	ldah	gp, 0(t12)
> +	lda	gp, 0(gp)
> +	lda	sp, -16(sp)
> +	cfi_def_cfa_offset (16)
> +	mov	a1, v0
> +	stq	ra, 0(sp)
> +	cfi_offset (26, -16)
> +	.prologue 1
> +
> +	.global	__syscall_cancel_arch_start
> +__syscall_cancel_arch_start:
> +	ldl	t0, 0(a0)
> +	addl	zero, t0, t0
> +	/* if (*ch & CANCELED_BITMASK)  */
> +	and	t0, TCB_CANCELED_BITMASK, t0
> +	bne	t0, 1f
> +	mov	a2, a0
> +	mov	a3, a1
> +	mov	a4, a2
> +	ldq	a4, 16(sp)
> +	mov	a5, a3
> +	ldq	a5, 24(sp)
> +	.set	macro
> +	callsys
> +	.set	nomacro
> +
> +	.global __syscall_cancel_arch_end
> +__syscall_cancel_arch_end:
> +	subq	zero, v0, t0
> +	ldq	ra, 0(sp)
> +	cmovne	a3, t0, v0
> +	lda	sp, 16(sp)
> +	cfi_remember_state
> +	cfi_restore (26)
> +	cfi_def_cfa_offset (0)
> +	ret	zero, (ra), 1
> +	.align 4
> +1:
> +	cfi_restore_state
> +	ldq 	t12, __syscall_do_cancel(gp)		!literal!2
> +	jsr 	ra, (t12), __syscall_do_cancel		!lituse_jsr!2
> +	cfi_endproc
> +END (__syscall_cancel_arch)
> diff --git a/sysdeps/unix/sysv/linux/arc/syscall_cancel.S b/sysdeps/unix/sysv/linux/arc/syscall_cancel.S
> new file mode 100644
> index 0000000000..fa02af4163
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/arc/syscall_cancel.S
> @@ -0,0 +1,56 @@
> +/* Cancellable syscall wrapper.  Linux/ARC version.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#include <sysdep.h>
> +#include <descr-const.h>
> +
> +/* long int __syscall_cancel_arch (int *cancelhandling,
> +				   __syscall_arg_t nr,
> +				   __syscall_arg_t arg1,
> +				   __syscall_arg_t arg2,
> +				   __syscall_arg_t arg3,
> +				   __syscall_arg_t arg4,
> +				   __syscall_arg_t arg5,
> +				   __syscall_arg_t arg6)  */
> +
> +ENTRY (__syscall_cancel_arch)
> +
> +	.globl __syscall_cancel_arch_start
> +__syscall_cancel_arch_start:
> +	ld_s	r12,[r0]
> +	bbit1	r12, TCB_CANCELED_BITMASK, 1f
> +	mov_s	r8, r1
> +	mov_s	r0, r2
> +	mov_s	r1, r3
> +	mov_s	r2, r4
> +	mov_s	r3, r5
> +	mov_s	r4, r6
> +	mov_s	r5, r7
> +	trap_s	0
> +
> +	.globl __syscall_cancel_arch_end
> +__syscall_cancel_arch_end:
> +	j_s	[blink]
> +
> +	.align 4
> +1:	push_s	blink
> +	cfi_def_cfa_offset (4)
> +	cfi_offset (31, -4)
> +	bl	@__syscall_do_cancel
> +
> +END (__syscall_cancel_arch)
> diff --git a/sysdeps/unix/sysv/linux/arm/syscall_cancel.S b/sysdeps/unix/sysv/linux/arm/syscall_cancel.S
> new file mode 100644
> index 0000000000..6b899306e3
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/arm/syscall_cancel.S
> @@ -0,0 +1,78 @@
> +/* Cancellable syscall wrapper.  Linux/arm version.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#include <sysdep.h>
> +#include <descr-const.h>
> +
> +/* long int [r0] __syscall_cancel_arch (int *cancelhandling [r0],
> +					long int nr   [r1],
> +					long int arg1 [r2],
> +					long int arg2 [r3],
> +					long int arg3 [SP],
> +					long int arg4 [SP+4],
> +					long int arg5 [SP+8],
> +					long int arg6 [SP+12])  */
> +
> +	.syntax unified
> +
> +ENTRY (__syscall_cancel_arch)
> +	.fnstart
> +	mov	ip, sp
> +	stmfd	sp!, {r4, r5, r6, r7, lr}
> +	.save	{r4, r5, r6, r7, lr}
> +
> +	cfi_adjust_cfa_offset (20)
> +	cfi_rel_offset (r4, 0)
> +	cfi_rel_offset (r5, 4)
> +	cfi_rel_offset (r6, 8)
> +	cfi_rel_offset (r7, 12)
> +	cfi_rel_offset (lr, 16)
> +
> +	.globl __syscall_cancel_arch_start
> +__syscall_cancel_arch_start:
> +
> +	/* if (*cancelhandling & CANCELED_BITMASK)
> +	     __syscall_do_cancel()  */
> +	ldr	r0, [r0]
> +	tst	r0, #TCB_CANCELED_BITMASK
> +	bne	1f
> +
> +	/* Issue a 6 argument syscall, the nr [r1] being the syscall
> +	   number.  */
> +	mov	r7, r1
> +	mov	r0, r2
> +	mov	r1, r3
> +	ldmfd	ip, {r2, r3, r4, r5, r6}
> +	svc	0x0
> +
> +	.globl __syscall_cancel_arch_end
> +__syscall_cancel_arch_end:
> +	ldmfd	sp!, {r4, r5, r6, r7, lr}
> +	cfi_adjust_cfa_offset (-20)
> +        cfi_restore (r4)
> +        cfi_restore (r5)
> +        cfi_restore (r6)
> +        cfi_restore (r7)
> +        cfi_restore (lr)
> +	BX (lr)
> +
> +1:
> +	ldmfd	sp!, {r4, r5, r6, r7, lr}
> +	b	__syscall_do_cancel
> +	.fnend
> +END (__syscall_cancel_arch)
> diff --git a/sysdeps/unix/sysv/linux/csky/syscall_cancel.S b/sysdeps/unix/sysv/linux/csky/syscall_cancel.S
> new file mode 100644
> index 0000000000..2989765f8c
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/csky/syscall_cancel.S
> @@ -0,0 +1,114 @@
> +/* Cancellable syscall wrapper.  Linux/csky version.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#include <sysdep.h>
> +#include <descr-const.h>
> +
> +/* long int __syscall_cancel_arch (int *cancelhandling,
> +				   __syscall_arg_t nr,
> +				   __syscall_arg_t arg1,
> +				   __syscall_arg_t arg2,
> +				   __syscall_arg_t arg3,
> +				   __syscall_arg_t arg4,
> +				   __syscall_arg_t arg5,
> +				   __syscall_arg_t arg6)  */
> +
> +#ifdef SHARED
> +# define STACK_ADJ 4
> +#else
> +# define STACK_ADJ 0
> +#endif
> +
> +ENTRY (__syscall_cancel_arch)
> +	subi	sp, sp, 16 + STACK_ADJ
> +	cfi_def_cfa_offset (16 + STACK_ADJ)
> +#ifdef SHARED
> +	st.w	gb, (sp, 16)
> +	lrw	t1, 1f@GOTPC
> +	cfi_offset (gb, -4)
> +	grs	gb, 1f
> +1:
> +#endif
> +	st.w	lr, (sp, 12)
> +	st.w	l3, (sp, 8)
> +	st.w	l1, (sp, 4)
> +	st.w	l0, (sp, 0)
> +#ifdef SHARED
> +	addu	gb, gb, t1
> +#endif
> +	subi	sp, sp, 16
> +	cfi_def_cfa_offset (32 + STACK_ADJ)
> +	cfi_offset (lr, -( 4 + STACK_ADJ))
> +	cfi_offset (l3, -( 8 + STACK_ADJ))
> +	cfi_offset (l1, -(12 + STACK_ADJ))
> +	cfi_offset (l0, -(16 + STACK_ADJ))
> +
> +	mov	l3, a1
> +	mov	a1, a3
> +	ld.w	a3, (sp, 32 + STACK_ADJ)
> +	st.w	a3, (sp, 0)
> +	ld.w	a3, (sp, 36 + STACK_ADJ)
> +	st.w	a3, (sp, 4)
> +	ld.w	a3, (sp, 40 + STACK_ADJ)
> +	st.w	a3, (sp, 8)
> +	ld.w	a3, (sp, 44 + STACK_ADJ)
> +	st.w	a3, (sp, 12)
> +
> +	.globl __syscall_cancel_arch_start
> +__syscall_cancel_arch_start:
> +	ld.w	t0, (a0, 0)
> +	andi	t0, t0, TCB_CANCELED_BITMASK
> +	jbnez	t0, 2f
> +	mov	a0, a2
> +	ld.w	a3, (sp, 4)
> +	ld.w	a2, (sp, 0)
> +	ld.w	l0, (sp, 8)
> +	ld.w	l1, (sp, 12)
> +	trap	0
> +
> +	.globl __syscall_cancel_arch_end
> +__syscall_cancel_arch_end:
> +	addi	sp, sp, 16
> +	cfi_remember_state
> +	cfi_def_cfa_offset (16 + STACK_ADJ)
> +#ifdef SHARED
> +	ld.w	gb, (sp, 16)
> +	cfi_restore (gb)
> +#endif
> +	ld.w	lr, (sp, 12)
> +	cfi_restore (lr)
> +	ld.w	l3, (sp, 8)
> +	cfi_restore (l3)
> +	ld.w	l1, (sp, 4)
> +	cfi_restore (l1)
> +	ld.w	l0, (sp, 0)
> +	cfi_restore (l0)
> +	addi	sp, sp, 16
> +	cfi_def_cfa_offset (0)
> +	rts
> +
> +2:
> +	cfi_restore_state
> +#ifdef SHARED
> +	lrw	a3, __syscall_do_cancel@GOTOFF
> +	addu	a3, a3, gb
> +	jsr	a3
> +#else
> +	jbsr	__syscall_do_cancel
> +#endif
> +END (__syscall_cancel_arch)
> diff --git a/sysdeps/unix/sysv/linux/hppa/syscall_cancel.S b/sysdeps/unix/sysv/linux/hppa/syscall_cancel.S
> new file mode 100644
> index 0000000000..b9c19747ea
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/hppa/syscall_cancel.S
> @@ -0,0 +1,81 @@
> +/* Cancellable syscall wrapper.  Linux/hppa version.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library.  If not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#include <sysdep.h>
> +#include <descr-const.h>
> +
> +/* long int __syscall_cancel_arch (int *cancelhandling,
> +				   long int nr,
> +				   long int arg1,
> +				   long int arg2,
> +				   long int arg3,
> +				   long int arg4,
> +				   long int arg5,
> +				   long int arg6)  */
> +
> +	.text
> +ENTRY(__syscall_cancel_arch)
> +	stw	%r2,-20(%r30)
> +	ldo	128(%r30),%r30
> +	cfi_def_cfa_offset (-128)
> +	cfi_offset (2, -20)
> +	ldw	-180(%r30),%r28
> +	copy	%r26,%r20
> +	stw	%r28,-108(%r30)
> +	ldw	-184(%r30),%r28
> +	copy	%r24,%r26
> +	stw	%r28,-112(%r30)
> +	ldw	-188(%r30),%r28
> +	stw	%r28,-116(%r30)
> +	ldw	-192(%r30),%r28
> +	stw	%r4,-104(%r30)
> +	stw	%r28,-120(%r30)
> +	copy	%r25,%r28
> +	copy	%r23,%r25
> +#ifdef __PIC__
> +	stw	%r19,-32(%r30)
> +#endif
> +	cfi_offset (4, 24)
> +
> +	.global __syscall_cancel_arch_start
> +__syscall_cancel_arch_start:
> +	ldw	0(%r20),%r20
> +	bb,<	%r20,31-TCB_CANCELED_BIT,1f
> +	ldw	-120(%r30),%r21
> +	ldw	-116(%r30),%r22
> +	ldw	-112(%r30),%r23
> +	ldw	-108(%r30),%r24
> +	copy	%r19, %r4
> +	ble	0x100(%sr2, %r0)
> +
> +	.global __syscall_cancel_arch_end
> +__syscall_cancel_arch_end:
> +
> +	copy	%r28,%r20
> +	copy	%r4,%r19
> +
> +	ldw	-148(%r30),%r2
> +	ldw	-104(%r30),%r4
> +	bv	%r0(%r2)
> +	ldo	-128(%r30),%r30
> +1:
> +	bl	__syscall_do_cancel,%r2
> +	nop
> +	nop
> +
> +END(__syscall_cancel_arch)
> diff --git a/sysdeps/unix/sysv/linux/i386/syscall_cancel.S b/sysdeps/unix/sysv/linux/i386/syscall_cancel.S
> new file mode 100644
> index 0000000000..46fb746da0
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/i386/syscall_cancel.S
> @@ -0,0 +1,104 @@
> +/* Cancellable syscall wrapper.  Linux/i686 version.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#include <sysdep.h>
> +#include <descr-const.h>
> +
> +/* long int [eax] __syscall_cancel_arch (int *cancelhandling [SP],
> +					 long int nr   [SP+4],
> +					 long int arg1 [SP+8],
> +					 long int arg2 [SP+12],
> +					 long int arg3 [SP+16],
> +					 long int arg4 [SP+20],
> +					 long int arg5 [SP+24],
> +					 long int arg6 [SP+28])  */
> +
> +ENTRY (__syscall_cancel_arch)
> +	pushl %ebp
> +	cfi_def_cfa_offset (8)
> +	cfi_offset (ebp, -8)
> +	pushl %edi
> +	cfi_def_cfa_offset (12)
> +	cfi_offset (edi, -12)
> +	pushl %esi
> +	cfi_def_cfa_offset (16)
> +	cfi_offset (esi, -16)
> +	pushl %ebx
> +	cfi_def_cfa_offset (20)
> +	cfi_offset (ebx, -20)
> +
> +	.global __syscall_cancel_arch_start
> +__syscall_cancel_arch_start:
> +
> +	/* if (*cancelhandling & CANCELED_BITMASK)
> +	     __syscall_do_cancel()  */
> +	movl	20(%esp), %eax
> +	testb	$TCB_CANCELED_BITMASK, (%eax)
> +	jne     1f
> +
> +	/* Issue a 6 argument syscall, the nr [%eax] being the syscall
> +	   number.  */
> +	movl    24(%esp), %eax
> +	movl    28(%esp), %ebx
> +	movl    32(%esp), %ecx
> +	movl    36(%esp), %edx
> +	movl    40(%esp), %esi
> +	movl    44(%esp), %edi
> +	movl    48(%esp), %ebp
> +
> +	/* We cannot use the vDSO helper for the syscall (__kernel_vsyscall)
> +	   because the PC returned by the kernel would point into the vDSO
> +	   page instead of the expected __syscall_cancel_arch_{start,end}
> +	   marks.  */
> +	int	$0x80
> +
> +	.global __syscall_cancel_arch_end
> +__syscall_cancel_arch_end:
> +
> +	popl %ebx
> +	cfi_restore (ebx)
> +	cfi_def_cfa_offset (16)
> +	popl %esi
> +	cfi_restore (esi)
> +	cfi_def_cfa_offset (12)
> +	popl %edi
> +	cfi_restore (edi)
> +	cfi_def_cfa_offset (8)
> +	popl %ebp
> +	cfi_restore (ebp)
> +	cfi_def_cfa_offset (4)
> +        ret
> +
> +1:
> +	/* Although __syscall_do_cancel does not return, the stack still needs
> +	   to be set up correctly for the unwinder.  */
> +	popl %ebx
> +	cfi_restore (ebx)
> +	cfi_def_cfa_offset (16)
> +	popl %esi
> +	cfi_restore (esi)
> +	cfi_def_cfa_offset (12)
> +	popl %edi
> +	cfi_restore (edi)
> +	cfi_def_cfa_offset (8)
> +	popl %ebp
> +	cfi_restore (ebp)
> +	cfi_def_cfa_offset (4)
> +	jmp __syscall_do_cancel
> +
> +END (__syscall_cancel_arch)
> diff --git a/sysdeps/unix/sysv/linux/ia64/cancellation-pc-check.h b/sysdeps/unix/sysv/linux/ia64/cancellation-pc-check.h
> new file mode 100644
> index 0000000000..24f05f9759
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/ia64/cancellation-pc-check.h
> @@ -0,0 +1,48 @@
> +/* Architecture specific bits for cancellation handling.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#ifndef _NPTL_CANCELLATION_PC_CHECK
> +#define _NPTL_CANCELLATION_PC_CHECK 1
> +
> +/* Check if the program counter (PC) from ucontext CTX is within the start
> +   and end boundaries of the __syscall_cancel_arch bridge.  Return TRUE if
> +   the PC is within the boundaries, meaning the syscall did not have any side
> +   effects; or FALSE otherwise.  */
> +static bool
> +cancellation_pc_check (void *ctx)
> +{
> +  /* Both are defined in syscall_cancel.S for each architecture.  */
> +  extern const char __syscall_cancel_arch_start[1];
> +  extern const char __syscall_cancel_arch_end[1];
> +
> +  uintptr_t sc_ip = ((struct sigcontext *) (ctx))->sc_ip;
> +  uintptr_t cr_iip = sc_ip & ~0x3ull;
> +  uintptr_t ri = sc_ip & 0x3ull;
> +
> +  /* IA64 __syscall_cancel_arch issues the 'break 0x100000' in its own
> +     bundle, and __syscall_cancel_arch_end points to the end of the previous
> +     bundle.  To check whether the syscall had any side effects we also need
> +     to check the slot number.  */
> +  if (cr_iip == (uintptr_t) __syscall_cancel_arch_end)
> +    return ri == 0;
> +
> +  return cr_iip >= (uintptr_t) __syscall_cancel_arch_start
> +	 && cr_iip < (uintptr_t) __syscall_cancel_arch_end;
> +}
> +
> +#endif
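A worked reading of the slot check with made-up numbers, to spell out the
reasoning:

    /* Suppose the signal arrives with sc_ip = 0xa000000000010002:
         cr_iip = sc_ip & ~0x3 = 0xa000000000010000  (bundle of the break)
         ri     = sc_ip &  0x3 = 2                   (slot number)
       cr_iip == __syscall_cancel_arch_end with ri != 0 means slot 0, the
       break itself, already executed, so the syscall may have had side
       effects and the function returns false; with ri == 0 the break has
       not run yet and it is still safe to act on the cancellation.  */
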
> diff --git a/sysdeps/unix/sysv/linux/ia64/syscall_cancel.S b/sysdeps/unix/sysv/linux/ia64/syscall_cancel.S
> new file mode 100644
> index 0000000000..732bf60185
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/ia64/syscall_cancel.S
> @@ -0,0 +1,81 @@
> +/* Cancellable syscall wrapper.  Linux/IA64 version.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#include <sysdep.h>
> +#include <descr-const.h>
> +#undef ret
> +
> +/* long int __syscall_cancel_arch (int *cancelhandling, long int nr,
> +				   long int arg1, long int arg2, long int arg3,
> +				   long int arg4, long int arg5, long int arg6)
> +*/
> +
> +ENTRY (__syscall_cancel_arch)
> +	.prologue ASM_UNW_PRLG_RP | ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE (8)
> +	.mmi
> +	.save ar.pfs, r41
> +	alloc r41=ar.pfs,8,3,8,0
> +	mov r15=r33
> +	.save rp, r40
> +	mov r40=b0
> +	.body
> +	.mmi
> +	mov r43=r34
> +	mov r44=r35
> +	mov r45=r36
> +	;;
> +	.mmi
> +	mov r46=r37
> +	mov r47=r38
> +	mov r48=r39
> +	;;
> +
> +	.global __syscall_cancel_arch_start
> +__syscall_cancel_arch_start:
> +	;;
> +	.mmi
> +	nop 0
> +	ld4.acq r14=[r32]
> +	nop 0
> +	;;
> +	.mib
> +	nop 0
> +	tbit.z p6, p7=r14, TCB_CANCELED_BIT
> +	.pred.safe_across_calls p1-p63
> +(p7)	br.call.dpnt.many b0 = __syscall_do_cancel#
> +	.pred.safe_across_calls p1-p5,p16-p63
> +	;;
> +
> +	/* Due to the instruction bundling, ia64 has the end marker before the
> +	   syscall instruction.  See the IA64 cancellation_pc_check on how the
> +	   PC is checked.  */
> +	.global __syscall_cancel_arch_end
> +__syscall_cancel_arch_end:
> +	break 0x100000
> +	;;
> +	.mmi
> +	cmp.ne p6, p7=-1, r10
> +	nop 0
> +	mov b0=r40
> +	;;
> +	.mib
> +(p7)	sub r8=r0, r8
> +	mov ar.pfs=r41
> +	br.ret.sptk.many b0
> +
> +END (__syscall_cancel_arch)
> diff --git a/sysdeps/unix/sysv/linux/loongarch/syscall_cancel.S b/sysdeps/unix/sysv/linux/loongarch/syscall_cancel.S
> new file mode 100644
> index 0000000000..edea9632ff
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/loongarch/syscall_cancel.S
> @@ -0,0 +1,50 @@
> +/* Cancellable syscall wrapper.  Linux/loongarch version.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#include <sysdep.h>
> +#include <descr-const.h>
> +
> +ENTRY (__syscall_cancel_arch)
> +
> +	.global __syscall_cancel_arch_start
> +__syscall_cancel_arch_start:
> +
> +	/* if (*cancelhandling & CANCELED_BITMASK)
> +	     __syscall_do_cancel()  */
> +	ld.w	t0, a0, 0
> +	andi	t0, t0, TCB_CANCELED_BITMASK
> +	bnez	t0, 1f
> +
> +	/* Issue a 6 argument syscall.  */
> +	move	t1, a1
> +	move	a0, a2
> +	move	a1, a3
> +	move	a2, a4
> +	move	a3, a5
> +	move	a4, a6
> +	move	a5, a7
> +	move	a7, t1
> +	syscall 0
> +
> +	.global __syscall_cancel_arch_end
> +__syscall_cancel_arch_end:
> +	jr	ra
> +1:
> +	b	__syscall_do_cancel
> +
> +END (__syscall_cancel_arch)
> diff --git a/sysdeps/unix/sysv/linux/m68k/syscall_cancel.S b/sysdeps/unix/sysv/linux/m68k/syscall_cancel.S
> new file mode 100644
> index 0000000000..8923bcc71c
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/m68k/syscall_cancel.S
> @@ -0,0 +1,84 @@
> +/* Cancellable syscall wrapper.  Linux/m68k version.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#include <sysdep.h>
> +#include <descr-const.h>
> +
> +/* long int __syscall_cancel_arch (int *cancelhandling,
> +				   __syscall_arg_t nr,
> +				   __syscall_arg_t arg1,
> +				   __syscall_arg_t arg2,
> +				   __syscall_arg_t arg3,
> +				   __syscall_arg_t arg4,
> +				   __syscall_arg_t arg5,
> +				   __syscall_arg_t arg6)  */
> +
> +
> +ENTRY (__syscall_cancel_arch)
> +#ifdef __mcoldfire__
> +	lea	(-16,%sp),%sp
> +	movem.l	%d2-%d5,(%sp)
> +#else
> +	movem.l	%d2-%d5,-(%sp)
> +#endif
> +	cfi_def_cfa_offset (20)
> +	cfi_offset (2, -20)
> +	cfi_offset (3, -16)
> +	cfi_offset (4, -12)
> +	cfi_offset (5, -8)
> +
> +	.global __syscall_cancel_arch_start
> +__syscall_cancel_arch_start:
> +
> +	move.l	20(%sp),%a0
> +	move.l	(%a0),%d0
> +#ifdef __mcoldfire__
> +	move.w	%d0,%ccr
> +	jeq	1f
> +#else
> +	btst	#TCB_CANCELED_BIT,%d0
> +	jne 	1f
> +#endif
> +
> +	move.l	48(%sp),%a0
> +	move.l	44(%sp),%d5
> +	move.l	40(%sp),%d4
> +	move.l	36(%sp),%d3
> +	move.l	32(%sp),%d2
> +	move.l	28(%sp),%d1
> +	move.l	24(%sp),%d0
> +	trap #0
> +
> +	.global __syscall_cancel_arch_end
> +__syscall_cancel_arch_end:
> +
> +#ifdef __mcoldfire__
> +	movem.l	(%sp),%d2-%d5
> +	lea	(16,%sp),%sp
> +#else
> +	movem.l	(%sp)+,%d2-%d5
> +#endif
> +	rts
> +
> +1:
> +#ifdef PIC
> +	bsr.l __syscall_do_cancel
> +#else
> +	jsr __syscall_do_cancel
> +#endif
> +END (__syscall_cancel_arch)
> diff --git a/sysdeps/unix/sysv/linux/microblaze/syscall_cancel.S b/sysdeps/unix/sysv/linux/microblaze/syscall_cancel.S
> new file mode 100644
> index 0000000000..1f9d202bf5
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/microblaze/syscall_cancel.S
> @@ -0,0 +1,61 @@
> +/* Cancellable syscall wrapper.  Linux/microblaze version.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#include <sysdep.h>
> +#include <descr-const.h>
> +
> +/* long int __syscall_cancel_arch (int *cancelhandling,
> +				   long int nr,
> +				   long int arg1,
> +				   long int arg2,
> +				   long int arg3,
> +				   long int arg4,
> +				   long int arg5,
> +				   long int arg6)  */
> +
> +ENTRY (__syscall_cancel_arch)
> +
> +	.globl __syscall_cancel_arch_start
> +__syscall_cancel_arch_start:
> +
> +	lwi	r3,r5,0
> +	andi	r3,r3,TCB_CANCELED_BITMASK
> +	bneid	r3,1f
> +	addk	r12,r6,r0
> +
> +	addk	r5,r7,r0
> +	addk	r6,r8,r0
> +	addk	r7,r9,r0
> +	addk	r8,r10,r0
> +	lwi	r9,r1,56
> +	lwi	r10,r1,60
> +	brki	r14,8
> +
> +	.globl __syscall_cancel_arch_end
> +__syscall_cancel_arch_end:
> +
> +	nop
> +	lwi	r15,r1,0
> +	rtsd	r15,8
> +	addik	r1,r1,28
> +
> +1:
> +	brlid	r15, __syscall_do_cancel
> +	nop
> +
> +END (__syscall_cancel_arch)
> diff --git a/sysdeps/unix/sysv/linux/mips/mips32/syscall_cancel.S b/sysdeps/unix/sysv/linux/mips/mips32/syscall_cancel.S
> new file mode 100644
> index 0000000000..eb3b2ed005
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/mips/mips32/syscall_cancel.S
> @@ -0,0 +1,128 @@
> +/* Cancellable syscall wrapper.  Linux/mips32 version.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#include <sysdep.h>
> +#include <sys/asm.h>
> +#include <descr-const.h>
> +
> +/* long int __syscall_cancel_arch (int *cancelhandling,
> +				   __syscall_arg_t nr,
> +				   __syscall_arg_t arg1,
> +				   __syscall_arg_t arg2,
> +				   __syscall_arg_t arg3,
> +				   __syscall_arg_t arg4,
> +				   __syscall_arg_t arg5,
> +				   __syscall_arg_t arg6,
> +				   __syscall_arg_t arg7)  */
> +
> +#define FRAME_SIZE 56
> +
> +NESTED (__syscall_cancel_arch, FRAME_SIZE, fp)
> +	.mask	0xc0070000,-SZREG
> +	.fmask	0x00000000,0
> +
> +	PTR_ADDIU sp, -FRAME_SIZE
> +	cfi_def_cfa_offset (FRAME_SIZE)
> +
> +	sw	fp, 48(sp)
> +	sw	ra, 52(sp)
> +	sw	s2, 44(sp)
> +	sw	s1, 40(sp)
> +	sw	s0, 36(sp)
> +#ifdef __PIC__
> +	.cprestore	16
> +#endif
> +	cfi_offset (ra, -4)
> +	cfi_offset (fp, -8)
> +	cfi_offset (s2, -12)
> +	cfi_offset (s1, -16)
> +	cfi_offset (s0, -20)
> +
> +	move	fp ,sp
> +	cfi_def_cfa_register (fp)
> +
> +	.globl __syscall_cancel_arch_start
> +__syscall_cancel_arch_start:
> +
> +	lw	v0, 0(a0)
> +	andi	v0, v0, TCB_CANCELED_BITMASK
> +	bne	v0, zero, 2f
> +
> +	addiu	sp, sp, -16
> +	addiu	v0, sp, 16
> +	sw	v0, 24(fp)
> +
> +	move	s0, a1
> +	move	a0, a2
> +	move	a1, a3
> +	lw	a2, 72(fp)
> +	lw	a3, 76(fp)
> +	lw	v0, 84(fp)
> +	lw	s1, 80(fp)
> +	lw	s2, 88(fp)
> +
> +	.set	noreorder
> +	subu	sp, 32
> +	sw	s1, 16(sp)
> +	sw	v0, 20(sp)
> +	sw	s2, 24(sp)
> +	move	v0, s0
> +	syscall
> +
> +	.globl __syscall_cancel_arch_end
> +__syscall_cancel_arch_end:
> +	addiu	sp, sp, 32
> +	.set	reorder
> +
> +	beq	a3, zero, 1f
> +	subu	v0, zero, v0
> +1:
> +	move	sp, fp
> +	cfi_remember_state
> +	cfi_def_cfa_register (sp)
> +	lw	ra, 52(fp)
> +	lw	fp, 48(sp)
> +	lw	s2, 44(sp)
> +	lw	s1, 40(sp)
> +	lw	s0, 36(sp)
> +
> +	.set	noreorder
> +	.set	nomacro
> +	jr	ra
> +	addiu	sp,sp,FRAME_SIZE
> +
> +	.set	macro
> +	.set	reorder
> +
> +	cfi_def_cfa_offset (0)
> +	cfi_restore (s0)
> +	cfi_restore (s1)
> +	cfi_restore (s2)
> +	cfi_restore (fp)
> +	cfi_restore (ra)
> +
> +2:
> +	cfi_restore_state
> +#ifdef __PIC__
> +	PTR_LA	t9, __syscall_do_cancel
> +	jalr	t9
> +#else
> +	jal	__syscall_do_cancel
> +#endif
> +
> +END (__syscall_cancel_arch)
> diff --git a/sysdeps/unix/sysv/linux/mips/mips32/sysdep.h b/sysdeps/unix/sysv/linux/mips/mips32/sysdep.h
> index 1318083195..3ba5334d66 100644
> --- a/sysdeps/unix/sysv/linux/mips/mips32/sysdep.h
> +++ b/sysdeps/unix/sysv/linux/mips/mips32/sysdep.h
> @@ -18,6 +18,10 @@
>  #ifndef _LINUX_MIPS_MIPS32_SYSDEP_H
>  #define _LINUX_MIPS_MIPS32_SYSDEP_H 1
>  
> +/* mips32 has cancellable syscalls with 7 arguments (currently only
> +   sync_file_range).  */
> +#define HAVE_CANCELABLE_SYSCALL_WITH_7_ARGS	1
> +
>  /* There is some commonality.  */
>  #include <sysdeps/unix/sysv/linux/mips/sysdep.h>
>  #include <sysdeps/unix/sysv/linux/sysdep.h>
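The 7-argument case is worth spelling out for reviewers who do not have the o32 ABI
in their head.  My reconstruction of the only user (the real wrapper is the generic
sysdeps/unix/sysv/linux/sync_file_range.c and the helpers are the existing
SYSCALL_LL64 / __ALIGNMENT_ARG macros, so treat the exact spelling as approximate):

    int
    sync_file_range (int fd, __off64_t offset, __off64_t nbytes,
                     unsigned int flags)
    {
      /* On o32: fd + alignment pad + offset split into a register pair
         + nbytes split into a register pair + flags = 7 arguments.  */
      return SYSCALL_CANCEL (sync_file_range, fd,
                             __ALIGNMENT_ARG SYSCALL_LL64 (offset),
                             SYSCALL_LL64 (nbytes), flags);
    }
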
> diff --git a/sysdeps/unix/sysv/linux/mips/mips64/n32/syscall_types.h b/sysdeps/unix/sysv/linux/mips/mips64/n32/syscall_types.h
> new file mode 100644
> index 0000000000..b3a8b0b634
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/mips/mips64/n32/syscall_types.h
> @@ -0,0 +1,28 @@
> +/* Types and macros used for syscall issuing.  MIPS64n32 version.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <https://www.gnu.org/licenses/>.  */
> +
> +#ifndef _SYSCALL_TYPES_H
> +#define _SYSCALL_TYPES_H
> +
> +typedef long long int __syscall_arg_t;
> +
> +/* Convert X to a long long, without losing any bits if it is one
> +   already or warning if it is a 32-bit pointer.  */
> +#define __SSC(__x) ((__syscall_arg_t) (__typeof__ ((__x) - (__x))) (__x))
> +
> +#endif
> diff --git a/sysdeps/unix/sysv/linux/mips/mips64/syscall_cancel.S b/sysdeps/unix/sysv/linux/mips/mips64/syscall_cancel.S
> new file mode 100644
> index 0000000000..f172041324
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/mips/mips64/syscall_cancel.S
> @@ -0,0 +1,108 @@
> +/* Cancellable syscall wrapper.  Linux/mips64 version.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#include <sysdep.h>
> +#include <sys/asm.h>
> +#include <descr-const.h>
> +
> +/* long int __syscall_cancel_arch (int *cancelhandling,
> +				   __syscall_arg_t nr,
> +				   __syscall_arg_t arg1,
> +				   __syscall_arg_t arg2,
> +				   __syscall_arg_t arg3,
> +				   __syscall_arg_t arg4,
> +				   __syscall_arg_t arg5,
> +				   __syscall_arg_t arg6,
> +				   __syscall_arg_t arg7)  */
> +
> +#define FRAME_SIZE 32
> +
> +	.text
> +NESTED (__syscall_cancel_arch, FRAME_SIZE, ra)
> +	.mask	0x90010000, -SZREG
> +	.fmask	0x00000000, 0
> +	LONG_ADDIU	sp, sp, -FRAME_SIZE
> +	cfi_def_cfa_offset (FRAME_SIZE)
> +	sd		gp, 16(sp)
> +	cfi_offset (gp, -16)
> +	lui		gp, %hi(%neg(%gp_rel(__syscall_cancel_arch)))
> +	LONG_ADDU	gp, gp, t9
> +	sd		ra, 24(sp)
> +	sd		s0, 8(sp)
> +	cfi_offset (ra, -8)
> +	cfi_offset (s0, -24)
> +	LONG_ADDIU	gp, gp, %lo(%neg(%gp_rel(__syscall_cancel_arch)))
> +
> +	.global __syscall_cancel_arch_start
> +__syscall_cancel_arch_start:
> +
> +	lw		v0, 0(a0)
> +	andi		v0, v0, TCB_CANCELED_BITMASK
> +	.set noreorder
> +	.set nomacro
> +	bne		v0, zero, 2f
> +	move		s0, a1
> +	.set macro
> +	.set reorder
> +
> +	move		a0, a2
> +	move		a1, a3
> +	move		a2, a4
> +	move		a3, a5
> +	move		a4, a6
> +	move		a5, a7
> +
> +	.set noreorder
> +	move		v0, s0
> +	syscall
> +	.set reorder
> +
> +	.global __syscall_cancel_arch_end
> +__syscall_cancel_arch_end:
> +
> +	.set noreorder
> +	.set nomacro
> +	bnel	a3, zero, 1f
> +	SUBU	v0, zero, v0
> +	.set macro
> +	.set reorder
> +
> +1:
> +	ld		ra, 24(sp)
> +	ld		gp, 16(sp)
> +	ld		s0, 8(sp)
> +
> +	.set	noreorder
> +	.set	nomacro
> +	jr		ra
> +	LONG_ADDIU	sp, sp, FRAME_SIZE
> +	.set	macro
> +	.set	reorder
> +
> +	cfi_remember_state
> +	cfi_def_cfa_offset (0)
> +	cfi_restore (s0)
> +	cfi_restore (gp)
> +	cfi_restore (ra)
> +	.align	3
> +2:
> +	cfi_restore_state
> +	LONG_L		t9, %got_disp(__syscall_do_cancel)(gp)
> +	.reloc	3f, R_MIPS_JALR, __syscall_do_cancel
> +3:	jalr		t9
> +END (__syscall_cancel_arch)
> diff --git a/sysdeps/unix/sysv/linux/mips/mips64/sysdep.h b/sysdeps/unix/sysv/linux/mips/mips64/sysdep.h
> index d7ae60f596..db27bd9e4d 100644
> --- a/sysdeps/unix/sysv/linux/mips/mips64/sysdep.h
> +++ b/sysdeps/unix/sysv/linux/mips/mips64/sysdep.h
> @@ -44,15 +44,7 @@
>  #undef HAVE_INTERNAL_BRK_ADDR_SYMBOL
>  #define HAVE_INTERNAL_BRK_ADDR_SYMBOL 1
>  
> -#if _MIPS_SIM == _ABIN32
> -/* Convert X to a long long, without losing any bits if it is one
> -   already or warning if it is a 32-bit pointer.  */
> -# define ARGIFY(X) ((long long int) (__typeof__ ((X) - (X))) (X))
> -typedef long long int __syscall_arg_t;
> -#else
> -# define ARGIFY(X) ((long int) (X))
> -typedef long int __syscall_arg_t;
> -#endif
> +#include <syscall_types.h>
>  
>  /* Note that the original Linux syscall restart convention required the
>     instruction immediately preceding SYSCALL to initialize $v0 with the
> @@ -120,7 +112,7 @@ typedef long int __syscall_arg_t;
>  	long int _sys_result;						\
>  									\
>  	{								\
> -	__syscall_arg_t _arg1 = ARGIFY (arg1);				\
> +	__syscall_arg_t _arg1 = __SSC (arg1);				\
>  	register __syscall_arg_t __s0 asm ("$16") __attribute__ ((unused))\
>  	  = (number);							\
>  	register __syscall_arg_t __v0 asm ("$2");			\
> @@ -144,8 +136,8 @@ typedef long int __syscall_arg_t;
>  	long int _sys_result;						\
>  									\
>  	{								\
> -	__syscall_arg_t _arg1 = ARGIFY (arg1);				\
> -	__syscall_arg_t _arg2 = ARGIFY (arg2);				\
> +	__syscall_arg_t _arg1 = __SSC (arg1);				\
> +	__syscall_arg_t _arg2 = __SSC (arg2);				\
>  	register __syscall_arg_t __s0 asm ("$16") __attribute__ ((unused))\
>  	  = (number);							\
>  	register __syscall_arg_t __v0 asm ("$2");			\
> @@ -170,9 +162,9 @@ typedef long int __syscall_arg_t;
>  	long int _sys_result;						\
>  									\
>  	{								\
> -	__syscall_arg_t _arg1 = ARGIFY (arg1);				\
> -	__syscall_arg_t _arg2 = ARGIFY (arg2);				\
> -	__syscall_arg_t _arg3 = ARGIFY (arg3);				\
> +	__syscall_arg_t _arg1 = __SSC (arg1);				\
> +	__syscall_arg_t _arg2 = __SSC (arg2);				\
> +	__syscall_arg_t _arg3 = __SSC (arg3);				\
>  	register __syscall_arg_t __s0 asm ("$16") __attribute__ ((unused))\
>  	  = (number);							\
>  	register __syscall_arg_t __v0 asm ("$2");			\
> @@ -199,10 +191,10 @@ typedef long int __syscall_arg_t;
>  	long int _sys_result;						\
>  									\
>  	{								\
> -	__syscall_arg_t _arg1 = ARGIFY (arg1);				\
> -	__syscall_arg_t _arg2 = ARGIFY (arg2);				\
> -	__syscall_arg_t _arg3 = ARGIFY (arg3);				\
> -	__syscall_arg_t _arg4 = ARGIFY (arg4);				\
> +	__syscall_arg_t _arg1 = __SSC (arg1);				\
> +	__syscall_arg_t _arg2 = __SSC (arg2);				\
> +	__syscall_arg_t _arg3 = __SSC (arg3);				\
> +	__syscall_arg_t _arg4 = __SSC (arg4);				\
>  	register __syscall_arg_t __s0 asm ("$16") __attribute__ ((unused))\
>  	  = (number);							\
>  	register __syscall_arg_t __v0 asm ("$2");			\
> @@ -229,11 +221,11 @@ typedef long int __syscall_arg_t;
>  	long int _sys_result;						\
>  									\
>  	{								\
> -	__syscall_arg_t _arg1 = ARGIFY (arg1);				\
> -	__syscall_arg_t _arg2 = ARGIFY (arg2);				\
> -	__syscall_arg_t _arg3 = ARGIFY (arg3);				\
> -	__syscall_arg_t _arg4 = ARGIFY (arg4);				\
> -	__syscall_arg_t _arg5 = ARGIFY (arg5);				\
> +	__syscall_arg_t _arg1 = __SSC (arg1);				\
> +	__syscall_arg_t _arg2 = __SSC (arg2);				\
> +	__syscall_arg_t _arg3 = __SSC (arg3);				\
> +	__syscall_arg_t _arg4 = __SSC (arg4);				\
> +	__syscall_arg_t _arg5 = __SSC (arg5);				\
>  	register __syscall_arg_t __s0 asm ("$16") __attribute__ ((unused))\
>  	  = (number);							\
>  	register __syscall_arg_t __v0 asm ("$2");			\
> @@ -261,12 +253,12 @@ typedef long int __syscall_arg_t;
>  	long int _sys_result;						\
>  									\
>  	{								\
> -	__syscall_arg_t _arg1 = ARGIFY (arg1);				\
> -	__syscall_arg_t _arg2 = ARGIFY (arg2);				\
> -	__syscall_arg_t _arg3 = ARGIFY (arg3);				\
> -	__syscall_arg_t _arg4 = ARGIFY (arg4);				\
> -	__syscall_arg_t _arg5 = ARGIFY (arg5);				\
> -	__syscall_arg_t _arg6 = ARGIFY (arg6);				\
> +	__syscall_arg_t _arg1 = __SSC (arg1);				\
> +	__syscall_arg_t _arg2 = __SSC (arg2);				\
> +	__syscall_arg_t _arg3 = __SSC (arg3);				\
> +	__syscall_arg_t _arg4 = __SSC (arg4);				\
> +	__syscall_arg_t _arg5 = __SSC (arg5);				\
> +	__syscall_arg_t _arg6 = __SSC (arg6);				\
>  	register __syscall_arg_t __s0 asm ("$16") __attribute__ ((unused))\
>  	  = (number);							\
>  	register __syscall_arg_t __v0 asm ("$2");			\
> diff --git a/sysdeps/unix/sysv/linux/nios2/syscall_cancel.S b/sysdeps/unix/sysv/linux/nios2/syscall_cancel.S
> new file mode 100644
> index 0000000000..19d0795886
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/nios2/syscall_cancel.S
> @@ -0,0 +1,95 @@
> +/* Cancellable syscall wrapper.  Linux/nios2 version.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#include <sysdep.h>
> +#include <descr-const.h>
> +
> +/* long int __syscall_cancel_arch (int *cancelhandling,
> +				   __syscall_arg_t nr,
> +				   __syscall_arg_t arg1,
> +				   __syscall_arg_t arg2,
> +				   __syscall_arg_t arg3,
> +				   __syscall_arg_t arg4,
> +				   __syscall_arg_t arg5,
> +				   __syscall_arg_t arg6)  */
> +
> +ENTRY (__syscall_cancel_arch)
> +#ifdef SHARED
> +	addi	sp, sp, -8
> +	stw	r22, 0(sp)
> +	nextpc	r22
> +1:
> +	movhi	r8, %hiadj(_gp_got - 1b)
> +	addi	r8, r8, %lo(_gp_got - 1b)
> +	stw	ra, 4(sp)
> +	add	r22, r22, r8
> +#else
> +	addi	sp, sp, -4
> +	cfi_def_cfa_offset (4)
> +	stw	ra, 0(sp)
> +	cfi_offset (31, -4)
> +#endif
> +
> +	.globl __syscall_cancel_arch_start
> +__syscall_cancel_arch_start:
> +	ldw	r3, 0(r4)
> +	andi	r3, r3, TCB_CANCELED_BITMASK
> +	bne	r3, zero, 3f
> +	mov	r10, r6
> +	mov	r2, r5
> +#ifdef SHARED
> +# define STACK_ADJ 4
> +#else
> +# define STACK_ADJ 0
> +#endif
> +	ldw	r9, (16 + STACK_ADJ)(sp)
> +	mov	r5, r7
> +	ldw	r8, (12 + STACK_ADJ)(sp)
> +	ldw	r7, (8 + STACK_ADJ)(sp)
> +	ldw	r6, (4 + STACK_ADJ)(sp)
> +	mov	r4, r10
> +	trap
> +
> +	.globl __syscall_cancel_arch_end
> +__syscall_cancel_arch_end:
> +	beq	r7, zero, 2f
> +	sub	r2, zero, r2
> +2:
> +#ifdef SHARED
> +	ldw	ra, 4(sp)
> +	ldw	r22, 0(sp)
> +	addi	sp, sp, 8
> +#else
> +	ldw	ra, (0 + STACK_ADJ)(sp)
> +	cfi_remember_state
> +	cfi_restore (31)
> +	addi	sp, sp, 4
> +	cfi_def_cfa_offset (0)
> +#endif
> +	ret
> +
> +3:
> +#ifdef SHARED
> +	ldw	r2, %call(__syscall_do_cancel)(r22)
> +	callr	r2
> +#else
> +	cfi_restore_state
> +	call	__syscall_do_cancel
> +#endif
> +
> +END (__syscall_cancel_arch)
> diff --git a/sysdeps/unix/sysv/linux/or1k/syscall_cancel.S b/sysdeps/unix/sysv/linux/or1k/syscall_cancel.S
> new file mode 100644
> index 0000000000..876f5e05ab
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/or1k/syscall_cancel.S
> @@ -0,0 +1,63 @@
> +/* Cancellable syscall wrapper.  Linux/or1k version.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#include <sysdep.h>
> +#include <descr-const.h>
> +
> +ENTRY (__syscall_cancel_arch)
> +	l.addi	r1, r1, -4
> +	cfi_def_cfa_offset (4)
> +	l.sw	0(r1), r9
> +	cfi_offset (9, -4)
> +
> +	.global __syscall_cancel_arch_start
> +__syscall_cancel_arch_start:
> +
> +	/* if (*cancelhandling & CANCELED_BITMASK)
> +	     __syscall_do_cancel()  */
> +	l.movhi	r19, hi(0)
> +	l.lwz	r17, 0(r3)
> +	l.andi	r17, r17, 8
> +	l.sfeq	r17, r19
> +	l.bnf	1f
> +
> +	/* Issue a 6 argument syscall.  */
> +	l.or	r11, r4, r4
> +	l.or	r3, r5, r5
> +	l.or	r4, r6, r6
> +	l.or	r5, r7, r7
> +	l.or	r6, r8, r8
> +	l.lwz	r7, 4(r1)
> +	l.lwz	r8, 8(r1)
> +	l.sys	1
> +	 l.nop
> +
> +	.global __syscall_cancel_arch_end
> +__syscall_cancel_arch_end:
> +
> +	l.lwz	r9, 0(r1)
> +	l.jr	r9
> +	l.addi	r1, r1, 4
> +	cfi_remember_state
> +	cfi_def_cfa_offset (0)
> +	cfi_restore (9)
> +1:
> +	cfi_restore_state
> +	l.jal	__syscall_do_cancel
> +	 l.nop
> +END (__syscall_cancel_arch)
> diff --git a/sysdeps/unix/sysv/linux/powerpc/cancellation-pc-check.h b/sysdeps/unix/sysv/linux/powerpc/cancellation-pc-check.h
> new file mode 100644
> index 0000000000..1175e1a070
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/powerpc/cancellation-pc-check.h
> @@ -0,0 +1,65 @@
> +/* Architecture specific code for pthread cancellation handling.
> +   Linux/PowerPC version.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#ifndef _NPTL_CANCELLATION_PC_CHECK
> +#define _NPTL_CANCELLATION_PC_CHECK
> +
> +#include <sigcontextinfo.h>
> +
> +/* For syscalls with side effects (e.g. a read that might return a partial
> +   result), the kernel cannot restart the syscall when it is interrupted by
> +   a signal; it must return from the call with whatever partial result.  In
> +   this case, the saved program counter is set just after the syscall
> +   instruction, so the SIGCANCEL handler should not act on cancellation.
> +
> +   The __syscall_cancel_arch function, used for all cancellable syscalls,
> +   contains two extra markers, __syscall_cancel_arch_start and
> +   __syscall_cancel_arch_end.  The former points to just before the initial
> +   conditional branch that checks if the thread has received a cancellation
> +   request, while the latter points to the instruction after the one
> +   responsible for issuing the syscall.
> +
> +   The function checks if the program counter (PC) from ucontext_t CTX is
> +   within the start and end boundaries of the __syscall_cancel_arch
> +   bridge.  Return TRUE if the PC is within the boundaries, meaning the
> +   syscall did not have any side effects; or FALSE otherwise.  */
> +
> +static __always_inline bool
> +cancellation_pc_check (void *ctx)
> +{
> +  /* Both are defined in syscall_cancel.S.  */
> +  extern const char __syscall_cancel_arch_start[1];
> +  extern const char __syscall_cancel_arch_end_sc[1];
> +#if defined(USE_PPC_SVC) && defined(__powerpc64__)
> +  extern const char __syscall_cancel_arch_end_svc[1];
> +#endif
> +
> +  uintptr_t pc = sigcontext_get_pc (ctx);
> +
> +  return pc >= (uintptr_t) __syscall_cancel_arch_start
> +#if defined(USE_PPC_SVC) && defined(__powerpc64__)
> +	 && ((THREAD_GET_HWCAP () & PPC_FEATURE2_SCV)
> +	     ? pc < (uintptr_t) __syscall_cancel_arch_end_svc
> +	     : pc < (uintptr_t) __syscall_cancel_arch_end_sc);
> +#else
> +	 && pc < (uintptr_t) __syscall_cancel_arch_end_sc;
> +#endif
> +}
> +
> +#endif
> diff --git a/sysdeps/unix/sysv/linux/powerpc/syscall_cancel.S b/sysdeps/unix/sysv/linux/powerpc/syscall_cancel.S
> new file mode 100644
> index 0000000000..1f119d0889
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/powerpc/syscall_cancel.S
> @@ -0,0 +1,86 @@
> +/* Cancellable syscall wrapper.  Linux/powerpc version.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#include <sysdep.h>
> +#include <descr-const.h>
> +
> +/* long int [r3] __syscall_cancel_arch (int *cancelhandling [r3],
> +					long int nr   [r4],
> +					long int arg1 [r5],
> +					long int arg2 [r6],
> +					long int arg3 [r7],
> +					long int arg4 [r8],
> +					long int arg5 [r9],
> +					long int arg6 [r10])  */
> +
> +ENTRY (__syscall_cancel_arch)
> +
> +	.globl __syscall_cancel_arch_start
> +__syscall_cancel_arch_start:
> +
> +	/* if (*cancelhandling & CANCELED_BITMASK)
> +	     __syscall_do_cancel()  */
> +	lwz     r0,0(r3)
> +	andi.   r0,r0,TCB_CANCELED_BITMASK
> +	bne     1f
> +
> +	/* Issue a 6 argument syscall, the nr [r4] being the syscall
> +	   number.  */
> +	mr      r0,r4
> +	mr      r3,r5
> +	mr      r4,r6
> +	mr      r5,r7
> +	mr      r6,r8
> +	mr      r7,r9
> +	mr      r8,r10
> +
> +#if defined(USE_PPC_SVC) && defined(__powerpc64__)
> +	CHECK_SCV_SUPPORT r9 0f
> +
> +	stdu	r1, -SCV_FRAME_SIZE(r1)
> +	cfi_adjust_cfa_offset (SCV_FRAME_SIZE)
> +	.machine "push"
> +	.machine "power9"
> +	scv	0
> +	.machine "pop"
> +	.globl __syscall_cancel_arch_end_svc
> +__syscall_cancel_arch_end_svc:
> +	ld	r9, SCV_FRAME_SIZE + FRAME_LR_SAVE(r1)
> +	mtlr	r9
> +	addi	r1, r1, SCV_FRAME_SIZE
> +	cfi_restore (lr)
> +	li	r9, -4095
> +	cmpld	r3, r9
> +	bnslr+
> +	neg	r3,r3
> +	blr
> +0:
> +#endif
> +	sc
> +	.globl __syscall_cancel_arch_end_sc
> +__syscall_cancel_arch_end_sc:
> +	bnslr+
> +	neg	r3,r3
> +	blr
> +
> +	/* Although __syscall_do_cancel does not return, the stack still needs
> +	   to be set up correctly for the unwinder.  */
> +1:
> +	TAIL_CALL_NO_RETURN (__syscall_do_cancel)
> +
> +END (__syscall_cancel_arch)
> diff --git a/sysdeps/unix/sysv/linux/riscv/syscall_cancel.S b/sysdeps/unix/sysv/linux/riscv/syscall_cancel.S
> new file mode 100644
> index 0000000000..93ff0bd90a
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/riscv/syscall_cancel.S
> @@ -0,0 +1,67 @@
> +/* Cancellable syscall wrapper.  Linux/riscv version.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#include <sysdep.h>
> +#include <descr-const.h>
> +
> +/* long int __syscall_cancel_arch (int *cancelhandling,
> +				   __syscall_arg_t nr,
> +				   __syscall_arg_t arg1,
> +				   __syscall_arg_t arg2,
> +				   __syscall_arg_t arg3,
> +				   __syscall_arg_t arg4,
> +				   __syscall_arg_t arg5,
> +				   __syscall_arg_t arg6)  */
> +
> +#ifdef SHARED
> +	.option pic
> +#else
> +	.option nopic
> +#endif
> +
> +ENTRY (__syscall_cancel_arch)
> +
> +	.globl __syscall_cancel_arch_start
> +__syscall_cancel_arch_start:
> +	lw	t1, 0(a0)
> +	/* if (*ch & CANCELED_BITMASK)  */
> +	andi	t1, t1, TCB_CANCELED_BITMASK
> +	bne	t1, zero, 1f
> +
> +	mv	t3, a1
> +	mv	a0, a2
> +	mv	a1, a3
> +	mv	a2, a4
> +	mv	a3, a5
> +	mv	a4, a6
> +	mv	a5, a7
> +	mv	a7, t3
> +	scall
> +
> +	.globl __syscall_cancel_arch_end
> +__syscall_cancel_arch_end:
> +	ret
> +
> +1:
> +	addi	sp, sp, -16
> +	cfi_def_cfa_offset (16)
> +	REG_S	ra, (16-SZREG)(sp)
> +	cfi_offset (ra, -SZREG)
> +	tail	__syscall_do_cancel
> +
> +END (__syscall_cancel_arch)
> diff --git a/sysdeps/unix/sysv/linux/s390/s390-32/syscall_cancel.S b/sysdeps/unix/sysv/linux/s390/s390-32/syscall_cancel.S
> new file mode 100644
> index 0000000000..9e0ad2a635
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/s390/s390-32/syscall_cancel.S
> @@ -0,0 +1,62 @@
> +/* Cancellable syscall wrapper.  Linux/s390 version.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#include <sysdep.h>
> +#include <descr-const.h>
> +
> +/* long int __syscall_cancel_arch (int *cancelhandling,
> +				   __syscall_arg_t nr,
> +				   __syscall_arg_t arg1,
> +				   __syscall_arg_t arg2,
> +				   __syscall_arg_t arg3,
> +				   __syscall_arg_t arg4,
> +				   __syscall_arg_t arg5,
> +				   __syscall_arg_t arg6)  */
> +
> +ENTRY (__syscall_cancel_arch)
> +	stm	%r6,%r7,24(%r15)
> +	cfi_offset (%r6, -72)
> +	cfi_offset (%r7, -68)
> +
> +	.globl __syscall_cancel_arch_start
> +__syscall_cancel_arch_start:
> +	/* if (*cancelhandling & CANCELED_BITMASK)
> +	     __syscall_do_cancel()  */
> +	tm	3(%r2),TCB_CANCELED_BITMASK
> +	jne	1f
> +
> +	/* Issue a 6 argument syscall, the nr [%r1] being the syscall
> +	   number.  */
> +	lr	%r1,%r3
> +	lr	%r2,%r4
> +	lr	%r3,%r5
> +	lr	%r4,%r6
> +	lm	%r5,%r7,96(%r15)
> +	svc	0
> +
> +	.globl __syscall_cancel_arch_end
> +__syscall_cancel_arch_end:
> +	lm	%r6,%r7,24(%r15)
> +	cfi_remember_state
> +	cfi_restore (%r7)
> +	cfi_restore (%r6)
> +	br	%r14
> +1:
> +	cfi_restore_state
> +	jg	__syscall_do_cancel
> +END (__syscall_cancel_arch)
> diff --git a/sysdeps/unix/sysv/linux/s390/s390-64/syscall_cancel.S b/sysdeps/unix/sysv/linux/s390/s390-64/syscall_cancel.S
> new file mode 100644
> index 0000000000..e1620add6a
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/s390/s390-64/syscall_cancel.S
> @@ -0,0 +1,62 @@
> +/* Cancellable syscall wrapper.  Linux/s390x version.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#include <sysdep.h>
> +#include <descr-const.h>
> +
> +/* long int __syscall_cancel_arch (int *cancelhandling,
> +				   __syscall_arg_t nr,
> +				   __syscall_arg_t arg1,
> +				   __syscall_arg_t arg2,
> +				   __syscall_arg_t arg3,
> +				   __syscall_arg_t arg4,
> +				   __syscall_arg_t arg5,
> +				   __syscall_arg_t arg6)  */
> +
> +ENTRY (__syscall_cancel_arch)
> +	stmg	%r6,%r7,48(%r15)
> +	cfi_offset (%r6, -112)
> +	cfi_offset (%r7, -104)
> +
> +	.globl __syscall_cancel_arch_start
> +__syscall_cancel_arch_start:
> +	/* if (*cancelhandling & CANCELED_BITMASK)
> +	     __syscall_do_cancel()  */
> +	tm	3(%r2),TCB_CANCELED_BITMASK
> +	jne	1f
> +
> +	/* Issue a 6 argument syscall, the nr [%r1] being the syscall
> +	   number.  */
> +	lgr	%r1,%r3
> +	lgr	%r2,%r4
> +	lgr	%r3,%r5
> +	lgr	%r4,%r6
> +	lmg	%r5,%r7,160(%r15)
> +	svc	0
> +
> +	.globl __syscall_cancel_arch_end
> +__syscall_cancel_arch_end:
> +	lmg	%r6,%r7,48(%r15)
> +	cfi_remember_state
> +	cfi_restore (%r7)
> +	cfi_restore (%r6)
> +	br	%r14
> +1:
> +	cfi_restore_state
> +	jg	__syscall_do_cancel
> +END (__syscall_cancel_arch)
> diff --git a/sysdeps/unix/sysv/linux/sh/syscall_cancel.S b/sysdeps/unix/sysv/linux/sh/syscall_cancel.S
> new file mode 100644
> index 0000000000..2afd23928d
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/sh/syscall_cancel.S
> @@ -0,0 +1,126 @@
> +/* Cancellable syscall wrapper.  Linux/sh version.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#include <sysdep.h>
> +#include <descr-const.h>
> +
> +/* long int __syscall_cancel_arch (int *cancelhandling,
> +				   long int nr,
> +				   long int arg1,
> +				   long int arg2,
> +				   long int arg3,
> +				   long int arg4,
> +				   long int arg5,
> +				   long int arg6)  */
> +
> +ENTRY (__syscall_cancel_arch)
> +
> +#ifdef SHARED
> +	mov.l	r12,@-r15
> +	cfi_def_cfa_offset (4)
> +	cfi_offset (12, -4)
> +	mova	L(GT),r0
> +	mov.l	L(GT),r12
> +	sts.l	pr,@-r15
> +	cfi_def_cfa_offset (8)
> +	cfi_offset (17, -8)
> +	add	r0,r12
> +#else
> +	sts.l	pr,@-r15
> +	cfi_def_cfa_offset (4)
> +	cfi_offset (17, -4)
> +#endif
> +
> +	.globl __syscall_cancel_arch_start
> +__syscall_cancel_arch_start:
> +
> +	/* if (*cancelhandling & CANCELED_BITMASK)
> +	     __syscall_do_cancel()  */
> +	mov.l	@r4,r0
> +	tst	#TCB_CANCELED_BITMASK,r0
> +	bf/s	1f
> +
> +	/* Issue a 6 argument syscall.  */
> +	mov	r5,r3
> +	mov	r6,r4
> +	mov	r7,r5
> +#ifdef SHARED
> +	mov.l	@(8,r15),r6
> +	mov.l	@(12,r15),r7
> +	mov.l	@(16,r15),r0
> +	mov.l	@(20,r15),r1
> +#else
> +	mov.l	@(4,r15),r6
> +	mov.l	@(8,r15),r7
> +	mov.l	@(12,r15),r0
> +	mov.l	@(16,r15),r1
> +#endif
> +	trapa	#0x16
> +
> +	.globl __syscall_cancel_arch_end
> +__syscall_cancel_arch_end:
> +
> +	/* The additional "or" instructions are a workaround for a
> +	   hardware issue:
> +	   http://documentation.renesas.com/eng/products/mpumcu/tu/tnsh7456ae.pdf
> +	 */
> +	or	r0,r0
> +	or	r0,r0
> +	or	r0,r0
> +	or	r0,r0
> +	or	r0,r0
> +
> +	lds.l	@r15+,pr
> +	cfi_remember_state
> +	cfi_restore (17)
> +#ifdef SHARED
> +	cfi_def_cfa_offset (4)
> +	rts
> +	mov.l	@r15+,r12
> +	cfi_def_cfa_offset (0)
> +	cfi_restore (12)
> +	.align 1
> +1:
> +	cfi_restore_state
> +	mov.l	L(SC),r1
> +	bsrf	r1
> +L(M):
> +	nop
> +
> +	.align 2
> +L(GT):
> +	.long	_GLOBAL_OFFSET_TABLE_
> +L(SC):
> +	.long	__syscall_do_cancel-(L(M)+2)
> +#else
> +	cfi_def_cfa_offset (0)
> +	rts
> +	nop
> +
> +	.align 1
> +1:
> +	cfi_restore_state
> +	mov.l	2f,r1
> +	jsr	@r1
> +	nop
> +
> +	.align 2
> +2:
> +	.long	__syscall_do_cancel
> +#endif
> +
> +END (__syscall_cancel_arch)
> diff --git a/sysdeps/unix/sysv/linux/socketcall.h b/sysdeps/unix/sysv/linux/socketcall.h
> index d1a173277e..19a6c17a86 100644
> --- a/sysdeps/unix/sysv/linux/socketcall.h
> +++ b/sysdeps/unix/sysv/linux/socketcall.h
> @@ -88,14 +88,33 @@
>      sc_ret;								\
>    })
>  
> -
> -#define SOCKETCALL_CANCEL(name, args...)				\
> -  ({									\
> -    int oldtype = LIBC_CANCEL_ASYNC ();					\
> -    long int sc_ret = __SOCKETCALL (SOCKOP_##name, args);		\
> -    LIBC_CANCEL_RESET (oldtype);					\
> -    sc_ret;								\
> -  })
> +#define __SOCKETCALL_CANCEL1(__name, __a1) \
> +  SYSCALL_CANCEL (socketcall, __name, \
> +     ((long int [1]) { (long int) __a1 }))
> +#define __SOCKETCALL_CANCEL2(__name, __a1, __a2) \
> +  SYSCALL_CANCEL (socketcall, __name, \
> +     ((long int [2]) { (long int) __a1, (long int) __a2 }))
> +#define __SOCKETCALL_CANCEL3(__name, __a1, __a2, __a3) \
> +  SYSCALL_CANCEL (socketcall, __name, \
> +     ((long int [3]) { (long int) __a1, (long int) __a2, (long int) __a3 }))
> +#define __SOCKETCALL_CANCEL4(__name, __a1, __a2, __a3, __a4) \
> +  SYSCALL_CANCEL (socketcall, __name, \
> +     ((long int [4]) { (long int) __a1, (long int) __a2, (long int) __a3, \
> +                       (long int) __a4 }))
> +#define __SOCKETCALL_CANCEL5(__name, __a1, __a2, __a3, __a4, __a5) \
> +  SYSCALL_CANCEL (socketcall, __name, \
> +     ((long int [5]) { (long int) __a1, (long int) __a2, (long int) __a3, \
> +                       (long int) __a4, (long int) __a5 }))
> +#define __SOCKETCALL_CANCEL6(__name, __a1, __a2, __a3, __a4, __a5, __a6) \
> +  SYSCALL_CANCEL (socketcall, __name, \
> +     ((long int [6]) { (long int) __a1, (long int) __a2, (long int) __a3, \
> +                       (long int) __a4, (long int) __a5, (long int) __a6 }))
> +
> +#define __SOCKETCALL_CANCEL(...) __SOCKETCALL_DISP (__SOCKETCALL_CANCEL,\
> +						    __VA_ARGS__)
> +
> +#define SOCKETCALL_CANCEL(name, args...) \
> +   __SOCKETCALL_CANCEL (SOCKOP_##name, args)
>  
>  
>  #endif /* sys/socketcall.h */
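
To make the arity dispatch concrete for other reviewers,
SOCKETCALL_CANCEL (recv, fd, buf, len, flags) -- recv picked purely as an
example -- should expand roughly to:

	SYSCALL_CANCEL (socketcall, SOCKOP_recv,
			((long int [4]) { (long int) fd, (long int) buf,
					  (long int) len, (long int) flags }));

i.e. the compound literal keeps the socketcall argument block on the
caller's stack, and the cancellable syscall machinery only ever sees the
two-argument socketcall form.
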
> diff --git a/sysdeps/unix/sysv/linux/sparc/sparc32/syscall_cancel.S b/sysdeps/unix/sysv/linux/sparc/sparc32/syscall_cancel.S
> new file mode 100644
> index 0000000000..aa5c658ce1
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/sparc/sparc32/syscall_cancel.S
> @@ -0,0 +1,71 @@
> +/* Cancellable syscall wrapper.  Linux/sparc32 version.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#include <sysdep.h>
> +#include <descr-const.h>
> +
> +/* long int __syscall_cancel_arch (int *cancelhandling,
> +				   long int nr,
> +				   long int arg1,
> +				   long int arg2,
> +				   long int arg3,
> +				   long int arg4,
> +				   long int arg5,
> +				   long int arg6)  */
> +
> +ENTRY (__syscall_cancel_arch)
> +	save	%sp, -96, %sp
> +
> +	cfi_window_save
> +	cfi_register (%o7, %i7)
> +	cfi_def_cfa_register (%fp)
> +
> +	.globl __syscall_cancel_arch_start
> +__syscall_cancel_arch_start:
> +
> +	/* if (*cancelhandling & CANCELED_BITMASK)
> +	     __syscall_do_cancel()  */
> +	ld	[%i0], %g2
> +	andcc	%g2, TCB_CANCELED_BITMASK, %g0
> +	bne,pn	%icc, 2f
> +	/* Issue a 6 argument syscall.  */
> +	 mov	%i1, %g1
> +	mov	%i2, %o0
> +	mov	%i3, %o1
> +	mov	%i4, %o2
> +	mov	%i5, %o3
> +	ld	[%fp+92], %o4
> +	ld	[%fp+96], %o5
> +	ta	0x10
> +
> +	.globl __syscall_cancel_arch_end
> +__syscall_cancel_arch_end:
> +	bcc	1f
> +	 nop
> +	sub	%g0, %o0, %o0
> +1:
> +	mov	%o0, %i0
> +	return	%i7+8
> +	 nop
> +
> +2:
> +	call	__syscall_do_cancel, 0
> +	 nop
> +	nop
> +
> +END (__syscall_cancel_arch)
> diff --git a/sysdeps/unix/sysv/linux/sparc/sparc64/syscall_cancel.S b/sysdeps/unix/sysv/linux/sparc/sparc64/syscall_cancel.S
> new file mode 100644
> index 0000000000..21b0728d5a
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/sparc/sparc64/syscall_cancel.S
> @@ -0,0 +1,74 @@
> +/* Cancellable syscall wrapper.  Linux/sparc64 version.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#include <sysdep.h>
> +#include <descr-const.h>
> +
> +	.register       %g2, #scratch
> +
> +/* long int __syscall_cancel_arch (int *cancelhandling,
> +				   long int nr,
> +				   long int arg1,
> +				   long int arg2,
> +				   long int arg3,
> +				   long int arg4,
> +				   long int arg5,
> +				   long int arg6)  */
> +
> +ENTRY (__syscall_cancel_arch)
> +	save	%sp, -176, %sp
> +
> +	cfi_window_save
> +	cfi_register (%o7, %i7)
> +	cfi_def_cfa_register (%fp)
> +
> +	.globl __syscall_cancel_arch_start
> +__syscall_cancel_arch_start:
> +
> +	/* if (*cancelhandling & CANCELED_BITMASK)
> +	     __syscall_do_cancel()  */
> +	lduw	[%i0], %g2
> +	andcc	%g2, TCB_CANCELED_BITMASK, %g0
> +	bne,pn	%xcc, 2f
> +	/* Issue a 6 argument syscall.  */
> +	 mov	%i1, %g1
> +	mov	%i2, %o0
> +	mov	%i3, %o1
> +	mov	%i4, %o2
> +	mov	%i5, %o3
> +	ldx	[%fp + STACK_BIAS + 176], %o4
> +	ldx	[%fp + STACK_BIAS + 184], %o5
> +	ta	0x6d
> +
> +	.global __syscall_cancel_arch_end
> +__syscall_cancel_arch_end:
> +
> +	bcc,pt	%xcc, 1f
> +	 nop
> +	sub	%g0, %o0, %o0
> +1:
> +	mov	%o0, %i0
> +	return	%i7+8
> +	 nop
> +
> +2:
> +	call	__syscall_do_cancel, 0
> +	 nop
> +	nop
> +
> +END (__syscall_cancel_arch)
> diff --git a/sysdeps/unix/sysv/linux/syscall_cancel.c b/sysdeps/unix/sysv/linux/syscall_cancel.c
> new file mode 100644
> index 0000000000..5fa0706486
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/syscall_cancel.c
> @@ -0,0 +1,73 @@
> +/* Pthread cancellation syscall bridge.  Default Linux version.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#include <sysdep.h>
> +#include <pthreadP.h>
> +
> +#warning "This implementation should be used just as a reference or for bootstrapping"
> +
> +/* This is the generic version of the cancellable syscall code, which
> +   adds the label guards (__syscall_cancel_arch_{start,end}) used by the
> +   SIGCANCEL handler to check whether the cancelled syscall has side effects
> +   that need to be returned to the caller.
> +
> +   This implementation should be used as a reference to document the
> +   implementation constraints:
> +
> +     1. __syscall_cancel_arch_start should point just before the test
> +        for whether the thread has already been cancelled,
> +     2. __syscall_cancel_arch_end should point to the instruction
> +        immediately following the syscall instruction.
> +     3. It should return the syscall value, or a negative value if the
> +        syscall has failed, similar to INTERNAL_SYSCALL_CALL.
> +
> +   The __syscall_cancel_arch_end requirement exists because the kernel
> +   signals an interrupted syscall with side effects by setting the signal
> +   frame program counter (in the ucontext_t third argument of an SA_SIGINFO
> +   signal handler) to right after the syscall instruction.
> +
> +   On some architectures, the INTERNAL_SYSCALL_NCS macro uses more
> +   instructions to obtain the error condition from the kernel (as on powerpc
> +   and sparc, which check the condition register), uses an out-of-line
> +   helper (ARM thumb), or uses a kernel helper gate (i686 or ia64).  In
> +   these cases the architecture should either adjust the macro or provide a
> +   custom __syscall_cancel_arch implementation.   */
> +
> +long int
> +__syscall_cancel_arch (volatile int *ch, __syscall_arg_t nr,
> +		       __syscall_arg_t a1, __syscall_arg_t a2,
> +		       __syscall_arg_t a3, __syscall_arg_t a4,
> +		       __syscall_arg_t a5, __syscall_arg_t a6
> +		       __SYSCALL_CANCEL7_ARG_DEF)
> +{
> +#define ADD_LABEL(__label)		\
> +  asm volatile (			\
> +    ".global " __label "\t\n"		\
> +    __label ":\n");
> +
> +  ADD_LABEL ("__syscall_cancel_arch_start");
> +  if (__glibc_unlikely (*ch & CANCELED_BITMASK))
> +    __syscall_do_cancel();
> +
> +  long int result = INTERNAL_SYSCALL_NCS_CALL (nr, a1, a2, a3, a4, a5, a6
> +					       __SYSCALL_CANCEL7_ARG7);
> +  ADD_LABEL ("__syscall_cancel_arch_end");
> +  if (__glibc_unlikely (INTERNAL_SYSCALL_ERROR_P (result)))
> +    return -INTERNAL_SYSCALL_ERRNO (result);
> +  return result;
> +}
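
For anyone reading this generic version first: the two labels are what the
SIGCANCEL handler compares the interrupted PC against.  A minimal sketch of
that check (the helper name and the PC accessor below are placeholders, not
the patch's actual code, which goes through the arch-specific ucontext
helpers):

	static bool
	pc_within_cancellable_syscall (ucontext_t *uc)
	{
	  uintptr_t pc = ucontext_get_pc (uc);	/* placeholder accessor */
	  return pc >= (uintptr_t) __syscall_cancel_arch_start
		 && pc < (uintptr_t) __syscall_cancel_arch_end;
	}

If the PC is still inside the range, the syscall has not yet returned with
side effects and the handler may act on the cancellation; once the PC is at
or past __syscall_cancel_arch_end, the return value has to be delivered to
the caller before cancellation can be honoured.
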
> diff --git a/sysdeps/unix/sysv/linux/sysdep-cancel.h b/sysdeps/unix/sysv/linux/sysdep-cancel.h
> index 102682c5ee..1b686d53a9 100644
> --- a/sysdeps/unix/sysv/linux/sysdep-cancel.h
> +++ b/sysdeps/unix/sysv/linux/sysdep-cancel.h
> @@ -21,17 +21,5 @@
>  #define _SYSDEP_CANCEL_H
>  
>  #include <sysdep.h>
> -#include <tls.h>
> -#include <errno.h>
> -
> -/* Set cancellation mode to asynchronous.  */
> -extern int __pthread_enable_asynccancel (void);
> -libc_hidden_proto (__pthread_enable_asynccancel)
> -#define LIBC_CANCEL_ASYNC() __pthread_enable_asynccancel ()
> -
> -/* Reset to previous cancellation mode.  */
> -extern void __pthread_disable_asynccancel (int oldtype);
> -libc_hidden_proto (__pthread_disable_asynccancel)
> -#define LIBC_CANCEL_RESET(oldtype) __pthread_disable_asynccancel (oldtype)
>  
>  #endif
> diff --git a/sysdeps/unix/sysv/linux/x86_64/syscall_cancel.S b/sysdeps/unix/sysv/linux/x86_64/syscall_cancel.S
> new file mode 100644
> index 0000000000..cda9d20a83
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/x86_64/syscall_cancel.S
> @@ -0,0 +1,57 @@
> +/* Cancellable syscall wrapper.  Linux/x86_64 version.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#include <sysdep.h>
> +#include <descr-const.h>
> +
> +/* long int [rax] __syscall_cancel_arch (volatile int *cancelhandling [%rdi],
> +					 __syscall_arg_t nr   [%rsi],
> +					 __syscall_arg_t arg1 [%rdx],
> +					 __syscall_arg_t arg2 [%rcx],
> +					 __syscall_arg_t arg3 [%r8],
> +					 __syscall_arg_t arg4 [%r9],
> +					 __syscall_arg_t arg5 [SP+8],
> +					 __syscall_arg_t arg6 [SP+16])  */
> +
> +ENTRY (__syscall_cancel_arch)
> +	.globl __syscall_cancel_arch_start
> +__syscall_cancel_arch_start:
> +
> +	/* if (*cancelhandling & CANCELED_BITMASK)
> +	     __syscall_do_cancel()  */
> +	mov    (%rdi),%eax
> +	testb  $TCB_CANCELED_BITMASK, (%rdi)
> +	jne    __syscall_do_cancel
> +
> +	/* Issue a 6 argument syscall, the nr [%rax] being the syscall
> +	   number.  */
> +	mov    %rdi,%r11
> +	mov    %rsi,%rax
> +	mov    %rdx,%rdi
> +	mov    %rcx,%rsi
> +	mov    %r8,%rdx
> +	mov    %r9,%r10
> +	mov    8(%rsp),%r8
> +	mov    16(%rsp),%r9
> +	mov    %r11,8(%rsp)
> +	syscall
> +
> +	.globl __syscall_cancel_arch_end
> +__syscall_cancel_arch_end:
> +	ret
> +END (__syscall_cancel_arch)
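
The register shuffle above is just the difference between the C calling
convention and the kernel entry convention; for reference:

	/* incoming C ABI:  nr=%rsi a1=%rdx a2=%rcx a3=%r8 a4=%r9
			    a5=8(%rsp) a6=16(%rsp)
	   kernel syscall:  nr=%rax a1=%rdi a2=%rsi a3=%rdx a4=%r10
			    a5=%r8  a6=%r9
	   (%rcx and %r11 are clobbered by the syscall instruction).  */

Since the syscall instruction already returns -errno in %rax, no further
error conversion is needed after __syscall_cancel_arch_end, hence the bare
ret.
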
> diff --git a/sysdeps/unix/sysv/linux/x86_64/x32/syscall_types.h b/sysdeps/unix/sysv/linux/x86_64/x32/syscall_types.h
> new file mode 100644
> index 0000000000..ac2019751d
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/x86_64/x32/syscall_types.h
> @@ -0,0 +1,34 @@
> +/* Types and macros used for syscall issuing.  x86_64/x32 version.
> +   Copyright (C) 2023 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <https://www.gnu.org/licenses/>.  */
> +
> +#ifndef _SYSCALL_TYPES_H
> +#define _SYSCALL_TYPES_H
> +
> +#include <libc-diag.h>
> +
> +typedef long long int __syscall_arg_t;
> +
> +/* Syscall arguments for x32 follow the x86_64 ABI; however, pointers are
> +   32 bits and need to be zero-extended.  */
> +#define __SSC(__x) \
> +  ({					\
> +    TYPEFY (__x, __tmp) = ARGIFY (__x);	\
> +    (__syscall_arg_t) __tmp;		\
> +  })
> +
> +#endif
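
A tiny usage sketch for __SSC, assuming the usual TYPEFY/ARGIFY helpers
from the x86_64 sysdep.h (the variable names are mine; this only makes
sense inside glibc's syscall wrappers):

	static char buf[64];

	/* On x32 a pointer is only 32 bits wide; __SSC widens it to the
	   64-bit __syscall_arg_t slot with the upper bits cleared, which
	   is what the 64-bit kernel entry path expects.  */
	__syscall_arg_t a1 = __SSC (buf);
	__syscall_arg_t a2 = __SSC (sizeof buf);
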
> diff --git a/sysdeps/x86_64/nptl/tcb-offsets.sym b/sysdeps/x86_64/nptl/tcb-offsets.sym
> index 2bbd563a6c..988a4b8593 100644
> --- a/sysdeps/x86_64/nptl/tcb-offsets.sym
> +++ b/sysdeps/x86_64/nptl/tcb-offsets.sym
> @@ -13,6 +13,3 @@ MULTIPLE_THREADS_OFFSET	offsetof (tcbhead_t, multiple_threads)
>  POINTER_GUARD		offsetof (tcbhead_t, pointer_guard)
>  FEATURE_1_OFFSET	offsetof (tcbhead_t, feature_1)
>  SSP_BASE_OFFSET		offsetof (tcbhead_t, ssp_base)
> -
> --- Not strictly offsets, but these values are also used in the TCB.
> -TCB_CANCELED_BITMASK	 CANCELED_BITMASK

-- 
Cheers,
Carlos.


