This is the mail archive of the
libc-alpha@sourceware.org
mailing list for the glibc project.
RE: [PATCH x86_64] Update memcpy, mempcpy and memmove selection order for Excavator CPU BZ #19583
- From: "Pawar, Amit" <Amit dot Pawar at amd dot com>
- To: "H.J. Lu" <hjl dot tools at gmail dot com>
- Cc: "libc-alpha at sourceware dot org" <libc-alpha at sourceware dot org>
- Date: Thu, 17 Mar 2016 14:16:05 +0000
- Subject: RE: [PATCH x86_64] Update memcpy, mempcpy and memmove selection order for Excavator CPU BZ #19583
- Authentication-results: sourceware.org; auth=none
- Authentication-results: gmail.com; dkim=none (message not signed) header.d=none;gmail.com; dmarc=none action=none header.from=amd.com;
- References: <SN1PR12MB073325E2FB320E3CECD22660978B0 at SN1PR12MB0733 dot namprd12 dot prod dot outlook dot com> <CAMe9rOo_pgS7Vh1+JGWiYbHr3yXZmRDpxaLX6Xs9dzHr-TSH1A at mail dot gmail dot com>
- Spamdiagnosticmetadata: NSPM
- Spamdiagnosticoutput: 1:23
>A few comments:
>
>1. Since there is bit_arch_Fast_Copy_Backward already, please add bit_arch_Avoid_AVX_Fast_Unaligned_Load instead.
Thought to ask for one more suggestion, bit_arch_Avoid_AVX_Fast_Unaligned_Load is more readable but what to select by avoiding it? Can it be used to set any feature bit and not only Fast_Copy_Backward right?
>2. Please verify that index_arch_XXX are the same and use one index_arch_XXX to set all bits. There are examples in sysdeps/x86/cpu-features.c.
My fault, didnât notice that.
>3. Please use proper ChangeLog format:
>
> * file (name of function, macro, ...): What changed.
>
Thanks,
Amit Pawar