[1/9][RFC][DWARF] Reserve three DW_OP numbers in vendor extension space

Jiong Wang jiong.wang@foss.arm.com
Thu Dec 1 11:09:00 GMT 2016

On 01/12/16 10:42, Richard Earnshaw (lists) wrote:
> On 30/11/16 21:43, Cary Coutant wrote:
>> How about if instead of special DW_OP codes, you instead define a new
>> virtual register that contains the mangled return address? If the rule
>> for that virtual register is anything other than DW_CFA_undefined,
>> you'd expect to find the mangled return address using that rule;
>> otherwise, you would use the rule for LR instead and expect an
>> unmangled return address. The earlier example would become (picking an
>> arbitrary value of 120 for the new virtual register number):
>>          .cfi_startproc
>>     0x0  paciasp (this instruction sign return address register LR/X30)
>>          .cfi_val 120, DW_OP_reg30
>>     0x4  stp     x29, x30, [sp, -32]!
>>          .cfi_offset 120, -16
>>          .cfi_offset 29, -32
>>          .cfi_def_cfa_offset 32
>>     0x8  add     x29, sp, 0
>> Just a suggestion...
> What about signing other registers?  And what if the value is then
> copied to another register?  Don't you end up with every possible
> register (including the FP/SIMD registers) needing a shadow copy?

   Another issue is that, compared with the DW_CFA approach, this virtual
   register approach is less efficient in unwind table size and more complex
   to implement.

   .cfi_register takes two ULEB128 register numbers, so it needs 3 bytes
    rather than the DW_CFA approach's 1 byte.  For example, the .debug_frame
    section size increment for the Linux kernel will be ~14%, compared with
    the DW_CFA approach's 5%.

   In the implementation, the prologue then normally will be

     0x0  paciasp (this instruction signs the return address register LR/X30)
          .cfi_val 120, DW_OP_reg30  <-A
     0x4  stp     x29, x30, [sp, -32]!
          .cfi_offset 120, -16       <-B
          .cfi_offset 29, -32
          .cfi_def_cfa_offset 32

     The epilogue normally will be
         ldp     x29, x30, [sp], 32
           .cfi_val 120, DW_OP_reg30  <- C
           .cfi_restore 29
           .cfi_def_cfa 31, 0

         autiasp (this instruction unsigns LR/X30)
           .cfi_restore 30

    For the virtual register approach, GCC needs to track DWARF generation for
    LR/X30 in every place (A/B/C, and possibly some other rare places where LR
    is copied), and rewrite LR to the new virtual register accordingly.  This
    seems easy, but in my experience GCC won't do any DWARF auto-deduction if
    there is one explicit DWARF CFI note attached to an insn (handled_one will
    be true in dwarf2out_frame_debug).  So for instructions like stp/ldp, we
    then need to explicitly generate all three DWARF CFI notes manually.

    While for the DW_CFA approach, they will be:

     0x0  paciasp (this instruction sign return address register LR/X30)
     0x4  stp     x29, x30, [sp, -32]!     \
          .cfi_offset 30, -16              |
          .cfi_offset 29, -32              |
          .cfi_def_cfa_offset 32           |  all dwarf generation between sign and
     ...                                   |  unsign (paciasp/autiasp) is the same
         ldp     x29, x30, [sp], 32        |  as before
           .cfi_restore 30                 |
           .cfi_restore 29                 |
           .cfi_def_cfa 31, 0              |
         autiasp (this instruction unsign LR/X30)

    The DWARF generation implementation in the backend is very simple: nothing
    needs to be updated between the sign and unsign instructions.

  As for the impact on the unwinder, the virtual register approach needs to
  change the implementation of the "save value" rule, which is quite generic
  code.  A target hook might be needed for AArch64 to handle the case where
  the destination register is the special virtual register; that seems a
  little bit hacky to me.

>> -cary
>> On Wed, Nov 16, 2016 at 6:02 AM, Jakub Jelinek <jakub@redhat.com> wrote:
>>> On Wed, Nov 16, 2016 at 02:54:56PM +0100, Mark Wielaard wrote:
>>>> On Wed, 2016-11-16 at 10:00 +0000, Jiong Wang wrote:
>>>>>    The two operations DW_OP_AARCH64_paciasp and DW_OP_AARCH64_paciasp_deref were
>>>>> designed as shortcut operations when LR is signed with A key and using
>>>>> function's CFA as salt.  This is the default behaviour of return address
>>>>> signing so is expected to be used for most of the time.  DW_OP_AARCH64_pauth
>>>>> is designed as a generic operation that allow describing pointer signing on
>>>>> any value using any salt and key in case we can't use the shortcut operations
>>>>> we can use this.
>>>> I admit to not fully understand the salting/keying involved. But given
>>>> that the DW_OP space is really tiny, so we would like to not eat up too
>>>> many of them for new opcodes. And given that introducing any new DW_OPs
>>>> using for CFI unwinding will break any unwinder anyway causing us to
>>>> update them all for this new feature. Have you thought about using a new
>>>> CIE augmentation string character for describing that the return
>>>> address/link register used by a function/frame is salted/keyed?
>>>> This seems a good description of CIE records and augmentation
>>>> characters: http://www.airs.com/blog/archives/460
>>>> It obviously also involves updating all unwinders to understand the new
>>>> augmentation character (and possible arguments). But it might be more
>>>> generic and saves us from using up too many DW_OPs.
>>>  From what I understood, the return address is not always scrambled, so
>>> it doesn't apply to the whole function, just to most of it (except for
>>> an insn in the prologue and some in the epilogue).  So I think one op is
>>> needed.  But can't it be just a toggable flag whether the return address
>>> is scrambled + some arguments to it?
>>> Thus DW_OP_AARCH64_scramble .uleb128 0 would mean that the default
>>> way of scrambling starts here (if not already active) or any kind of
>>> scrambling ends here (if already active), and
>>> DW_OP_AARCH64_scramble .uleb128 non-zero would be whatever encoding you need
>>> to represent details of the less common variants with details what to do.
>>> Then you'd just hook through some MD_* macro in the unwinder the
>>> descrambling operation if the scrambling is active at the insns you unwind
>>> on.
>>>          Jakub
