This is the mail archive of the
libc-alpha@sourceware.org
mailing list for the glibc project.
Re: [PATCH x86_64] Update memcpy, mempcpy and memmove selection order for Excavator CPU BZ #19583
- From: "H.J. Lu" <hjl dot tools at gmail dot com>
- To: "Pawar, Amit" <Amit dot Pawar at amd dot com>
- Cc: "libc-alpha at sourceware dot org" <libc-alpha at sourceware dot org>
- Date: Fri, 18 Mar 2016 05:34:23 -0700
- Subject: Re: [PATCH x86_64] Update memcpy, mempcpy and memmove selection order for Excavator CPU BZ #19583
- Authentication-results: sourceware.org; auth=none
- References: <SN1PR12MB073325E2FB320E3CECD22660978B0 at SN1PR12MB0733 dot namprd12 dot prod dot outlook dot com> <CAMe9rOo_pgS7Vh1+JGWiYbHr3yXZmRDpxaLX6Xs9dzHr-TSH1A at mail dot gmail dot com> <SN1PR12MB0733B252EEDF7DE08EE91AF9978B0 at SN1PR12MB0733 dot namprd12 dot prod dot outlook dot com> <CAMe9rOqhAUNhvD0=FZm23MDeVMRaYrnkZ51wWB1O4JRu8o2ywg at mail dot gmail dot com> <SN1PR12MB07332500CE527AA6EAC1C360978C0 at SN1PR12MB0733 dot namprd12 dot prod dot outlook dot com> <CAMe9rOqGKGsWHsM1NO7L46QdtMALoG_Wq3mahg=beWSesAg0jg at mail dot gmail dot com> <SN1PR12MB0733A07FB69B2EC3831FB091978C0 at SN1PR12MB0733 dot namprd12 dot prod dot outlook dot com>
On Fri, Mar 18, 2016 at 5:25 AM, Pawar, Amit <Amit.Pawar@amd.com> wrote:
>>diff --git a/sysdeps/x86_64/multiarch/memcpy.S
>>b/sysdeps/x86_64/multiarch/memcpy.S
>>index 8882590..3c67da8 100644
>>--- a/sysdeps/x86_64/multiarch/memcpy.S
>>+++ b/sysdeps/x86_64/multiarch/memcpy.S
>>@@ -40,7 +40,7 @@ ENTRY(__new_memcpy)
>> #endif
>> 1: lea __memcpy_avx_unaligned(%rip), %RAX_LP
>> HAS_ARCH_FEATURE (AVX_Fast_Unaligned_Load)
>>- jnz 2f
>>+ jnz 3f
>> lea __memcpy_sse2_unaligned(%rip), %RAX_LP
>> HAS_ARCH_FEATURE (Fast_Unaligned_Load)
>> jnz 2f
>>@@ -52,6 +52,10 @@ ENTRY(__new_memcpy)
>> jnz 2f
>> lea __memcpy_ssse3(%rip), %RAX_LP
>> 2: ret
>>+3: HAS_ARCH_FEATURE (Avoid_AVX_Fast_Unaligned_Load)
>>+ jz 2b
>>+ lea __memcpy_ssse3_back(%rip), %RAX_LP
>>+ ret
>> END(__new_memcpy)
>>
>>This is wrong. You should check Avoid_AVX_Fast_Unaligned_Load
>>to disable __memcpy_avx_unaligned, not select
>> __memcpy_ssse3_back. Each selection should be loaded
>>only once.
>
> Now OK?
No, it isn't fixed. Avoid_AVX_Fast_Unaligned_Load should
disable __memcpy_avx_unaligned and nothing more. Also
you need to fix ALL selections.
--
H.J.