[PATCH 11/12] aarch64: redefine RETURN_ADDRESS to strip PAC

Adhemerval Zanella adhemerval.zanella@linaro.org
Mon May 11 19:15:31 GMT 2020



On 11/05/2020 09:38, Szabolcs Nagy wrote:
> The 05/08/2020 14:44, Adhemerval Zanella via Libc-alpha wrote:
>> On 30/04/2020 14:45, Szabolcs Nagy wrote:
>>> +++ b/sysdeps/aarch64/sysdep.h
>>> @@ -35,6 +35,16 @@
>>>  
>>>  #define PTR_SIZE	(1<<PTR_LOG_SIZE)
>>>  
>>> +/* Strip pointer authentication code from pointer p.  */
>>> +#define XPAC(p) ({					\
>>> +  register void *__ra asm ("x30") = (p);		\
>>> +  asm ("hint 7 // xpaclri" : "+r"(__ra));		\
>>> +  __ra;})
>>> +
>>> +/* This is needed when glibc is built with -mbranch-protection=pac-ret.  */
>>> +#undef RETURN_ADDRESS
>>> +#define RETURN_ADDRESS(n) XPAC(__builtin_return_address(n))
>>> +
>>
>> Maybe use an inline function instead?
> 
> A macro seems more reliable to me than always_inline
> when poking at __builtin_return_address and x30,
> but I'm not against always_inline if that's
> considered better.

I would prefer a static inline unless a macro is really required
(either due to some compiler limitation or bug).

> 
> I'd prefer a separate xpac (since it can be used
> not just with __builtin_return_address, e.g. for
> the stored code address in jmpbuf, which currently
> uses pointer mangling)

Ack.

> 
>>   #ifndef __ASSEMBLER__
>>   # include <sys/cdefs.h>
> 
> what is cdefs.h for?

The __always_inline macro.

> 
>>   /* Strip pointer authentication code from pointer p.  */
>>   static __always_inline void *
>>   return_address (unsigned int n)
>>   { 
>>     register void *ra asm ("x30") = __builtin_return_address (n);
>>     asm ("hint 7 // xpaclri" : "+r" (ra));
>>     return ra;
>>   }
>>
>>   /* This is needed when glibc is built with -mbranch-protection=pac-ret.  */
>>   # undef RETURN_ADDRESS
>>   # define RETURN_ADDRESS(n) return_address (n)
>>   #endif

