[x86] Add a feature bit: Fast_Unaligned_Copy (branch hjl/pr19583)
author     H.J. Lu <hjl.tools@gmail.com>
           Wed, 23 Mar 2016 17:33:19 +0000 (10:33 -0700)
committer  H.J. Lu <hjl.tools@gmail.com>
           Wed, 23 Mar 2016 17:56:38 +0000 (10:56 -0700)
commit     327aadf6348bd41d1fae46ee7780e214c0a493c1
tree       3a1f3550ee36ea010e53e1ad8f4e1ffc450b5c18
parent     7a25d6a84df9fea56963569ceccaaf7c2a88f161
[x86] Add a feature bit: Fast_Unaligned_Copy

On AMD processors, memcpy optimized with unaligned SSE loads is
slower than memcpy optimized with aligned SSSE3 loads, while the
other string functions are faster with unaligned SSE loads.  A
feature bit, Fast_Unaligned_Copy, is added to select the memcpy
optimized with unaligned SSE loads.
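
For context, glibc picks a memcpy implementation at startup based on
these feature bits.  The sketch below reduces that dispatch to C:
HAS_ARCH_FEATURE mirrors glibc's macro of the same name, but the bit
value, the variable holding the bits, and the variant bodies are
hypothetical stand-ins, not glibc's actual code.

    #include <stddef.h>
    #include <string.h>

    /* Hypothetical stand-ins for the feature-bit plumbing glibc keeps
       in sysdeps/x86/cpu-features.h; the bit position is illustrative.  */
    #define bit_arch_Fast_Unaligned_Copy (1u << 0)

    static unsigned int cpu_arch_features; /* filled in by CPU detection */

    #define HAS_ARCH_FEATURE(name) \
      ((cpu_arch_features & bit_arch_##name) != 0)

    /* Placeholder bodies standing in for glibc's multiarch variants.  */
    static void *
    memcpy_sse2_unaligned (void *d, const void *s, size_t n)
    { return memcpy (d, s, n); }

    static void *
    memcpy_ssse3 (void *d, const void *s, size_t n)
    { return memcpy (d, s, n); }

    /* The dispatch memcpy.S performs in assembly, reduced to C: after
       this commit it tests Fast_Unaligned_Copy rather than
       Fast_Unaligned_Load, so AMD keeps the unaligned-load bit for the
       other string functions while memcpy takes the aligned SSSE3 path.  */
    static void *(*select_memcpy (void)) (void *, const void *, size_t)
    {
      if (HAS_ARCH_FEATURE (Fast_Unaligned_Copy))
        return memcpy_sse2_unaligned;
      return memcpy_ssse3;
    }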

[BZ #19583]
* sysdeps/x86/cpu-features.c (init_cpu_features): Set
Fast_Unaligned_Copy with Fast_Unaligned_Load for Intel
processors.  Set Fast_Copy_Backward for AMD Excavator
processors.
* sysdeps/x86/cpu-features.h (bit_arch_Fast_Unaligned_Copy):
New.
(index_arch_Fast_Unaligned_Copy): Likewise.
* sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Check
Fast_Unaligned_Copy instead of Fast_Unaligned_Load.
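
For reference, the cpu-features.c change described in the first
ChangeLog entry amounts to the following.  This is a minimal sketch:
the feature[]/index_arch_*/bit_arch_* shape follows glibc's
conventions, but the bit values and the is_intel flag are simplified
stand-ins (glibc derives the vendor from CPUID).

    /* Set the new bit together with Fast_Unaligned_Load on Intel;
       leave it clear on AMD.  */
    enum { FEATURE_INDEX_1, FEATURE_INDEX_MAX };

    struct cpu_features
    {
      unsigned int feature[FEATURE_INDEX_MAX];
    };

    #define bit_arch_Fast_Unaligned_Load (1u << 0)
    #define bit_arch_Fast_Unaligned_Copy (1u << 1)
    #define index_arch_Fast_Unaligned_Load FEATURE_INDEX_1
    #define index_arch_Fast_Unaligned_Copy FEATURE_INDEX_1

    static void
    init_cpu_features_sketch (struct cpu_features *f, int is_intel)
    {
      if (is_intel)
        /* Unaligned SSE loads are fast for copies too: set both bits.  */
        f->feature[index_arch_Fast_Unaligned_Copy]
          |= bit_arch_Fast_Unaligned_Load | bit_arch_Fast_Unaligned_Copy;
      else
        /* AMD: the other string functions benefit from unaligned loads,
           but memcpy is faster with aligned SSSE3, so
           Fast_Unaligned_Copy stays clear.  */
        f->feature[index_arch_Fast_Unaligned_Load]
          |= bit_arch_Fast_Unaligned_Load;
    }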
Files changed:
sysdeps/x86/cpu-features.c
sysdeps/x86/cpu-features.h
sysdeps/x86_64/multiarch/memcpy.S