This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: [PATCH x86_64] Update memcpy, mempcpy and memmove selection order for Excavator CPU BZ #19583
- From: "H.J. Lu" <hjl dot tools at gmail dot com>
- To: "Pawar, Amit" <Amit dot Pawar at amd dot com>
- Cc: "libc-alpha at sourceware dot org" <libc-alpha at sourceware dot org>
- Date: Fri, 18 Mar 2016 06:51:32 -0700
- Subject: Re: [PATCH x86_64] Update memcpy, mempcpy and memmove selection order for Excavator CPU BZ #19583
On Fri, Mar 18, 2016 at 6:22 AM, Pawar, Amit <Amit.Pawar@amd.com> wrote:
>>No, it isn't fixed. Avoid_AVX_Fast_Unaligned_Load should disable __memcpy_avx_unaligned and nothing more. Also you need to fix ALL selections.
>
> diff --git a/sysdeps/x86_64/multiarch/memcpy.S b/sysdeps/x86_64/multiarch/memcpy.S
> index 8882590..a5afaf4 100644
> --- a/sysdeps/x86_64/multiarch/memcpy.S
> +++ b/sysdeps/x86_64/multiarch/memcpy.S
> @@ -39,6 +39,8 @@ ENTRY(__new_memcpy)
> ret
> #endif
> 1: lea __memcpy_avx_unaligned(%rip), %RAX_LP
> + HAS_ARCH_FEATURE (Avoid_AVX_Fast_Unaligned_Load)
> + jnz 3f
> HAS_ARCH_FEATURE (AVX_Fast_Unaligned_Load)
> jnz 2f
> lea __memcpy_sse2_unaligned(%rip), %RAX_LP
> @@ -52,6 +54,8 @@ ENTRY(__new_memcpy)
> jnz 2f
> lea __memcpy_ssse3(%rip), %RAX_LP
> 2: ret
> +3: lea __memcpy_ssse3(%rip), %RAX_LP
> + ret
> END(__new_memcpy)
>
> # undef ENTRY
>
> Will update all IFUNCs if this is OK, else please suggest.
>
Better, but not OK: jumping to a label that returns __memcpy_ssse3 does more
than disable the AVX variant; it also skips the Fast_Unaligned_Load check.
Route the Avoid_AVX_Fast_Unaligned_Load case to the __memcpy_sse2_unaligned
path instead. Try something like
diff --git a/sysdeps/x86_64/multiarch/memcpy.S b/sysdeps/x86_64/multiarch/memcpy.S
index ab5998c..2abe2fd 100644
--- a/sysdeps/x86_64/multiarch/memcpy.S
+++ b/sysdeps/x86_64/multiarch/memcpy.S
@@ -42,9 +42,11 @@ ENTRY(__new_memcpy)
ret
#endif
1: lea __memcpy_avx_unaligned(%rip), %RAX_LP
+ HAS_ARCH_FEATURE (Avoid_AVX_Fast_Unaligned_Load)
+ jnz 3f
HAS_ARCH_FEATURE (AVX_Fast_Unaligned_Load)
jnz 2f
- lea __memcpy_sse2_unaligned(%rip), %RAX_LP
+3: lea __memcpy_sse2_unaligned(%rip), %RAX_LP
HAS_ARCH_FEATURE (Fast_Unaligned_Load)
jnz 2f
lea __memcpy_sse2(%rip), %RAX_LP
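
For what it's worth, here is a minimal C sketch of the selection order the
diff above produces. It is not the real resolver: struct features and its
fields are stand-ins for the HAS_ARCH_FEATURE / HAS_CPU_FEATURE checks, the
returned strings stand in for the __memcpy_* entry points, and the SSSE3
tail is inferred from the unchanged part of memcpy.S.

#include <stdio.h>

struct features
{
  int avoid_avx_fast_unaligned_load;	/* Avoid_AVX_Fast_Unaligned_Load */
  int avx_fast_unaligned_load;		/* AVX_Fast_Unaligned_Load */
  int fast_unaligned_load;		/* Fast_Unaligned_Load */
  int fast_copy_backward;		/* Fast_Copy_Backward */
  int ssse3;				/* SSSE3 */
};

static const char *
select_memcpy (const struct features *f)
{
  /* The AVX variant is chosen only when AVX unaligned loads are fast
     and the Avoid_AVX_Fast_Unaligned_Load override is not set.  */
  if (!f->avoid_avx_fast_unaligned_load && f->avx_fast_unaligned_load)
    return "__memcpy_avx_unaligned";

  /* Otherwise fall through to the SSE2 unaligned variant when
     unaligned loads are fast in general.  */
  if (f->fast_unaligned_load)
    return "__memcpy_sse2_unaligned";

  /* Tail of the existing resolver: plain SSE2 unless SSSE3 is
     available, then one of the SSSE3 variants.  */
  if (!f->ssse3)
    return "__memcpy_sse2";
  return f->fast_copy_backward ? "__memcpy_ssse3_back" : "__memcpy_ssse3";
}

int
main (void)
{
  /* An Excavator-like CPU with the Avoid flag set no longer lands on
     the AVX variant but keeps the normal fallback order.  */
  struct features excavator = { 1, 1, 1, 0, 1 };
  printf ("%s\n", select_memcpy (&excavator));	/* __memcpy_sse2_unaligned */
  return 0;
}

Running this with the Excavator-like flags prints __memcpy_sse2_unaligned,
which is the point of routing label 3 to the SSE2 unaligned path rather than
straight to SSSE3.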
--
H.J.