[glibc/release/2.31/master] AArch64: Improve backwards memmove performance
Wilco Dijkstra
wilco@sourceware.org
Wed Oct 14 15:31:34 GMT 2020
https://sourceware.org/git/gitweb.cgi?p=glibc.git;h=d0a5b769027b17a7000ebc58e240ddd98ae0d719
commit d0a5b769027b17a7000ebc58e240ddd98ae0d719
Author: Wilco Dijkstra <wdijkstr@arm.com>
Date: Fri Aug 28 17:51:40 2020 +0100
AArch64: Improve backwards memmove performance
On some microarchitectures, performance of the backwards memmove improves if
the stores use STR with decreasing addresses. So change the memmove loop
in memcpy_advsimd.S to use two STRs rather than an STP.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
(cherry picked from commit bd394d131c10c9ec22c6424197b79410042eed99)
Diff:
---
sysdeps/aarch64/multiarch/memcpy_advsimd.S | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/sysdeps/aarch64/multiarch/memcpy_advsimd.S b/sysdeps/aarch64/multiarch/memcpy_advsimd.S
index d4ba747777..48bb6d7ca4 100644
--- a/sysdeps/aarch64/multiarch/memcpy_advsimd.S
+++ b/sysdeps/aarch64/multiarch/memcpy_advsimd.S
@@ -223,12 +223,13 @@ L(copy_long_backwards):
 	b.ls	L(copy64_from_start)
 
 L(loop64_backwards):
-	stp	A_q, B_q, [dstend, -32]
+	str	B_q, [dstend, -16]
+	str	A_q, [dstend, -32]
 	ldp	A_q, B_q, [srcend, -96]
-	stp	C_q, D_q, [dstend, -64]
+	str	D_q, [dstend, -48]
+	str	C_q, [dstend, -64]!
 	ldp	C_q, D_q, [srcend, -128]
 	sub	srcend, srcend, 64
-	sub	dstend, dstend, 64
 	subs	count, count, 64
 	b.hi	L(loop64_backwards)
 