This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.


Re: PowerPC LE setjmp/longjmp


On Thu, Aug 8, 2013 at 11:51 PM, Alan Modra <amodra@gmail.com> wrote:
> Little-endian fixes for setjmp/longjmp.  When writing these I noticed
> that the setjmp code corrupts the non-volatile VMX registers when using
> an unaligned buffer.  Anton fixed this, and also simplified it quite a
> bit.
>
> The current code uses boilerplate for the case where we want to store
> 16 bytes to an unaligned address.  For that we have to do a
> read/modify/write of two aligned 16-byte quantities.  In our case we
> are storing a bunch of back-to-back data (consecutive VMX registers),
> and only the start and end of the region need the read/modify/write.
>
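[Purely as an aside for anyone following along, here is a rough C sketch
of the scheme Anton describes.  The function name and the memcpy-based
read/modify/write are only illustrative, not what the assembly does (the
real code works with aligned 16-byte VMX stores), but it shows why only
the first and last aligned blocks of the run need a read/modify/write
while every interior block can simply be overwritten.]

#include <stdint.h>
#include <string.h>

/* Sketch: copy LEN bytes (a run of back-to-back registers) to a
   possibly unaligned DST using only aligned 16-byte block writes.
   Only the first and last aligned blocks need a read/modify/write;
   every interior block is fully covered by the run, so it can be
   stored blindly.  */
static void
save_run_unaligned (unsigned char *dst, const unsigned char *src, size_t len)
{
  unsigned char *first = (unsigned char *) ((uintptr_t) dst & ~(uintptr_t) 15);
  unsigned char *last = (unsigned char *) (((uintptr_t) dst + len) & ~(uintptr_t) 15);
  unsigned char block[16];

  if (first == last)
    {
      /* The whole run fits inside one aligned block: one RMW.  */
      memcpy (block, first, 16);
      memcpy (block + (dst - first), src, len);
      memcpy (first, block, 16);
      return;
    }

  /* Bytes of the run that land in the first aligned block.  */
  size_t head = 16 - (size_t) (dst - first);

  /* First block: read/modify/write.  */
  memcpy (block, first, 16);
  memcpy (block + (16 - head), src, head);
  memcpy (first, block, 16);

  /* Interior blocks: fully covered, plain aligned stores.  */
  for (unsigned char *p = first + 16; p != last; p += 16)
    memcpy (p, src + head + (size_t) (p - (first + 16)), 16);

  /* Last block: read/modify/write of the leftover tail, if any.  */
  size_t tail = (size_t) ((dst + len) - last);
  if (tail != 0)
    {
      memcpy (block, last, 16);
      memcpy (block, src + (len - tail), tail);
      memcpy (last, block, 16);
    }
}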
>         2013-07-10  Anton Blanchard <anton@au1.ibm.com>
>                     Alistair Popple <alistair@ozlabs.au.ibm.com>
>                     Alan Modra <amodra@gmail.com>
>
>         PR 15723
>         * sysdeps/powerpc/jmpbuf-offsets.h: Comment fix.
>         * sysdeps/powerpc/powerpc32/fpu/__longjmp-common.S: Correct
>         _dl_hwcap access for little-endian.
>         * sysdeps/powerpc/powerpc32/fpu/setjmp-common.S: Likewise.  Don't
>         destroy vmx regs when saving unaligned.
>         * sysdeps/powerpc/powerpc64/__longjmp-common.S: Correct CR load.
>         * sysdeps/powerpc/powerpc64/setjmp-common.S: Likewise CR save.  Don't
>         destroy vmx regs when saving unaligned.
...
>  L(aligned_save_vmx):
> diff --git a/sysdeps/powerpc/powerpc64/__longjmp-common.S b/sysdeps/powerpc/powerpc64/__longjmp-common.S
> index 70c3704..21ff50f 100644
> --- a/sysdeps/powerpc/powerpc64/__longjmp-common.S
> +++ b/sysdeps/powerpc/powerpc64/__longjmp-common.S
> @@ -153,7 +153,7 @@ L(no_vmx):
>         lfd fp21,((JB_FPRS+7)*8)(r3)
>         ld r22,((JB_GPRS+8)*8)(r3)
>         lfd fp22,((JB_FPRS+8)*8)(r3)
> -       ld r0,(JB_CR*8)(r3)
> +       lwz r0,((JB_CR*8)+4)(r3)

I can see a nameless current maintainer mindlessly seeing this lwz and
asking himself, "Shouldn't this be an ld on powerpc64?"

So a comment would be greatly appreciated.

> diff --git a/sysdeps/powerpc/powerpc64/setjmp-common.S b/sysdeps/powerpc/powerpc64/setjmp-common.S
> index 58ec610..1c8b7cb 100644
> --- a/sysdeps/powerpc/powerpc64/setjmp-common.S
> +++ b/sysdeps/powerpc/powerpc64/setjmp-common.S
> @@ -95,7 +95,7 @@ JUMPTARGET(GLUE(__sigsetjmp,_ent)):
>         mfcr r0
>         std  r16,((JB_GPRS+2)*8)(3)
>         stfd fp16,((JB_FPRS+2)*8)(3)
> -       std  r0,(JB_CR*8)(3)
> +       stw  r0,((JB_CR*8)+4)(3)
>         std  r17,((JB_GPRS+3)*8)(3)
>         stfd fp17,((JB_FPRS+3)*8)(3)
>         std  r18,((JB_GPRS+4)*8)(3)

Likewise, regarding a comment.

The substance of the patch otherwise looks good.

Ryan

