Re: [PATCH x86_64] Update memcpy, mempcpy and memmove selection order for Excavator CPU BZ #19583
- From: "H.J. Lu" <hjl dot tools at gmail dot com>
- To: "Pawar, Amit" <Amit dot Pawar at amd dot com>
- Cc: "libc-alpha at sourceware dot org" <libc-alpha at sourceware dot org>
- Date: Thu, 17 Mar 2016 07:45:58 -0700
- Subject: Re: [PATCH x86_64] Update memcpy, mempcpy and memmove selection order for Excavator CPU BZ #19583
- References: <SN1PR12MB073325E2FB320E3CECD22660978B0@SN1PR12MB0733.namprd12.prod.outlook.com> <CAMe9rOo_pgS7Vh1+JGWiYbHr3yXZmRDpxaLX6Xs9dzHr-TSH1A@mail.gmail.com> <SN1PR12MB0733B252EEDF7DE08EE91AF9978B0@SN1PR12MB0733.namprd12.prod.outlook.com>
On Thu, Mar 17, 2016 at 7:16 AM, Pawar, Amit <Amit.Pawar@amd.com> wrote:
>>A few comments:
>>
>>1. Since there is bit_arch_Fast_Copy_Backward already, please add bit_arch_Avoid_AVX_Fast_Unaligned_Load instead.
> Thought I'd ask for one more suggestion: bit_arch_Avoid_AVX_Fast_Unaligned_Load is more readable, but what should be selected when the AVX version is avoided? And it can be used together with any feature bit, not only Fast_Copy_Backward, right?
If bit_arch_Avoid_AVX_Fast_Unaligned_Load is set, the next best
implementation will be selected. See bit_arch_Slow_SSE4_2 for an
example.
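Something like this sketch (illustrative only: HAS_ARCH_FEATURE is
the macro from sysdeps/x86/cpu-features.h, but the selector and the
function names below are made up for the example, not the actual
multiarch code):

    #include <stddef.h>

    typedef void *(*memcpy_fn) (void *, const void *, size_t);

    extern void *memcpy_avx_unaligned (void *, const void *, size_t);
    extern void *memcpy_ssse3_back (void *, const void *, size_t);
    extern void *memcpy_sse2 (void *, const void *, size_t);

    /* Take the AVX version only when unaligned AVX loads are fast
       and the new "avoid" bit is not set; otherwise fall through to
       the next best implementation, the same way bit_arch_Slow_SSE4_2
       demotes the SSE4.2 string functions.  */
    static memcpy_fn
    select_memcpy (void)
    {
      if (HAS_ARCH_FEATURE (AVX_Fast_Unaligned_Load)
          && !HAS_ARCH_FEATURE (Avoid_AVX_Fast_Unaligned_Load))
        return memcpy_avx_unaligned;
      if (HAS_ARCH_FEATURE (Fast_Copy_Backward))
        return memcpy_ssse3_back;
      return memcpy_sse2;
    }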
>>2. Please verify that the index_arch_XXX values are the same, and use one index_arch_XXX to set all the bits. There are examples in sysdeps/x86/cpu-features.c.
> My fault, didn't notice that.
>
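The pattern in sysdeps/x86/cpu-features.c looks roughly like this
(the pairing of bits below is only an illustration; check the actual
indices for the bits your patch sets):

    /* Both bits must live in the same feature word; verify that
       at compile time, then set them with a single store.  */
    #if index_arch_Fast_Unaligned_Load != index_arch_Avoid_AVX_Fast_Unaligned_Load
    # error index_arch_Fast_Unaligned_Load != index_arch_Avoid_AVX_Fast_Unaligned_Load
    #endif
        cpu_features->feature[index_arch_Fast_Unaligned_Load]
          |= (bit_arch_Fast_Unaligned_Load
              | bit_arch_Avoid_AVX_Fast_Unaligned_Load);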
>>3. Please use proper ChangeLog format:
>>
>> * file (name of function, macro, ...): What changed.
>>
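For this patch the entry would look something like this (the file and
symbol names below just illustrate the format; adjust them to match
the final patch):

        [BZ #19583]
        * sysdeps/x86/cpu-features.h
        (bit_arch_Avoid_AVX_Fast_Unaligned_Load): New.
        (index_arch_Avoid_AVX_Fast_Unaligned_Load): Likewise.
        * sysdeps/x86/cpu-features.c (init_cpu_features): Set the
        Avoid_AVX_Fast_Unaligned_Load bit for Excavator family CPUs.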
>
> Thanks,
> Amit Pawar
--
H.J.