memcpy performance on skylake server
Ji, Cheng
jicheng1017@gmail.com
Tue Jul 6 08:17:14 GMT 2021
Hello,
I found that memcpy is slower on Skylake server CPUs during our
optimization work, and I can't explain the results we are seeing, so I'm
looking for some guidance here.
The problem is that memcpy is noticeably slower than a simple for loop when
copying large chunks of data. This sounds like an amateur mistake in our
test code, but here is what we have tried:
* The test data is large enough: 1 GiB.
* We noticed a change from quite a while ago regarding Skylake and AVX512
(see also the tunables note after this list):
https://patchwork.ozlabs.org/project/glibc/patch/20170418183712.GA22211@intel.com/
* We updated glibc from 2.17 to the latest 2.33; memcpy became about 5%
faster but is still slower than the simple loop.
* We tested on multiple bare-metal machines with different CPUs (Xeon Gold
6132, Gold 6252, Silver 4114) as well as on a virtual machine on Google
Cloud; the result is reproducible on all of them.
* On an older-generation Xeon E5-2630 v3, memcpy is about 50% faster than
the simple loop. On my desktop (i7-7700K), memcpy is also significantly
faster.
* numactl is used to ensure everything is running on a single core.
* The code is compiled with gcc 10.3.
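One thing I have not tried yet, related to the AVX512 change above: if I
read the glibc tunables documentation correctly, the glibc.cpu.hwcaps
tunable can mask CPU features so that the ifunc resolver selects a
different memcpy variant. Something like the following (the exact tunable
name and syntax for 2.33 is my assumption, please correct me if it's
wrong):

GLIBC_TUNABLES=glibc.cpu.hwcaps=-AVX512F numactl --membind 0 --physcpubind 0 -- ./a.out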
The numbers on a Xeon Gold 6132, with glibc 2.33:
simple_memcpy 4.18 seconds, 4.79 GiB/s, 5.02 GB/s
simple_copy 3.68 seconds, 5.44 GiB/s, 5.70 GB/s
simple_memcpy 4.18 seconds, 4.79 GiB/s, 5.02 GB/s
simple_copy 3.68 seconds, 5.44 GiB/s, 5.71 GB/s
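(Throughput is just total data over elapsed time: each timed run copies
the 1 GiB buffer 20 times, so for example 20 GiB / 4.18 s ≈ 4.79 GiB/s.)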
The result is worse with the system-provided glibc 2.17:
simple_memcpy 4.38 seconds, 4.57 GiB/s, 4.79 GB/s
simple_copy 3.68 seconds, 5.43 GiB/s, 5.70 GB/s
simple_memcpy 4.38 seconds, 4.56 GiB/s, 4.78 GB/s
simple_copy 3.68 seconds, 5.44 GiB/s, 5.70 GB/s
The code used to generate these results (compiled with g++ -O2 -g, run
with numactl --membind 0 --physcpubind 0 -- ./a.out):
=====
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <functional>
#include <string>
#include <vector>

class TestCase {
    using clock_t = std::chrono::high_resolution_clock;
    using sec_t = std::chrono::duration<double, std::ratio<1>>;

public:
    static constexpr size_t NUM_VALUES = 128 * (1 << 20); // 128 Mi values * 8 bytes = 1 GiB

    void init() {
        vals_.resize(NUM_VALUES);
        for (size_t i = 0; i < NUM_VALUES; ++i) {
            vals_[i] = i;
        }
        dest_.resize(NUM_VALUES);
    }

    void run(std::string name,
             std::function<void(const int64_t *, int64_t *, size_t)> &&func) {
        // Warm-up: ignore the result of the first run.
        func(vals_.data(), dest_.data(), vals_.size());
        constexpr size_t count = 20;
        auto start = clock_t::now();
        for (size_t i = 0; i < count; ++i) {
            func(vals_.data(), dest_.data(), vals_.size());
        }
        auto end = clock_t::now();
        double duration = std::chrono::duration_cast<sec_t>(end - start).count();
        printf("%s %.2f seconds, %.2f GiB/s, %.2f GB/s\n", name.data(), duration,
               sizeof(int64_t) * NUM_VALUES / double(1 << 30) * count / duration,
               sizeof(int64_t) * NUM_VALUES / double(1e9) * count / duration);
    }

private:
    std::vector<int64_t> vals_;
    std::vector<int64_t> dest_;
};

void simple_memcpy(const int64_t *src, int64_t *dest, size_t n) {
    memcpy(dest, src, n * sizeof(int64_t));
}

void simple_copy(const int64_t *src, int64_t *dest, size_t n) {
    for (size_t i = 0; i < n; ++i) {
        dest[i] = src[i];
    }
}

int main(int, char **) {
    TestCase c;
    c.init();
    c.run("simple_memcpy", simple_memcpy);
    c.run("simple_copy", simple_copy);
    c.run("simple_memcpy", simple_memcpy);
    c.run("simple_copy", simple_copy);
}
=====
The assembly of simple_copy generated by gcc is very simple:
Dump of assembler code for function _Z11simple_copyPKlPlm:
0x0000000000401440 <+0>: mov %rdx,%rcx
0x0000000000401443 <+3>: test %rdx,%rdx
0x0000000000401446 <+6>: je 0x401460 <_Z11simple_copyPKlPlm+32>
0x0000000000401448 <+8>: xor %eax,%eax
0x000000000040144a <+10>: nopw 0x0(%rax,%rax,1)
0x0000000000401450 <+16>: mov (%rdi,%rax,8),%rdx
0x0000000000401454 <+20>: mov %rdx,(%rsi,%rax,8)
0x0000000000401458 <+24>: inc %rax
0x000000000040145b <+27>: cmp %rax,%rcx
0x000000000040145e <+30>: jne 0x401450 <_Z11simple_copyPKlPlm+16>
0x0000000000401460 <+32>: retq
When compiling with -O3, gcc vectorizes the loop using xmm registers; the
simple_copy loop is then around 1% faster.
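For reference, my understanding is that the -O3 vectorized loop is roughly
equivalent to this intrinsics sketch (my own reconstruction, ignoring the
scalar prologue/epilogue gcc actually emits; sse_copy is just a name I made
up):
=====
#include <cstddef>
#include <cstdint>
#include <emmintrin.h> // SSE2

// Copy two int64_t per iteration through an xmm register, like the
// vectorized loop gcc generates. A tail element (odd n) is ignored here;
// gcc emits extra scalar code for it.
void sse_copy(const int64_t *src, int64_t *dest, size_t n) {
    for (size_t i = 0; i + 2 <= n; i += 2) {
        __m128i v = _mm_loadu_si128(reinterpret_cast<const __m128i *>(src + i));
        _mm_storeu_si128(reinterpret_cast<__m128i *>(dest + i), v);
    }
}
=====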
I took a brief look at the glibc source code. I don't know it well enough
to follow the implementation yet, but I'm curious about the underlying
mechanism.
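My current guess at the mechanism, based on what I've read about the
large-copy paths: above some size threshold memcpy may switch to
non-temporal stores that bypass the cache, which would behave quite
differently from the plain stores in simple_copy. In spirit, something
like this sketch (a hypothetical illustration of non-temporal copying, not
glibc's actual code; nt_copy is a made-up name):
=====
#include <cstddef>
#include <cstdint>
#include <immintrin.h>

// Copy using movnti: the stores go (more or less) straight to memory,
// bypassing the cache hierarchy.
void nt_copy(const int64_t *src, int64_t *dest, size_t n) {
    for (size_t i = 0; i < n; ++i) {
        _mm_stream_si64(reinterpret_cast<long long *>(dest + i),
                        static_cast<long long>(src[i]));
    }
    _mm_sfence(); // order the streaming stores before later accesses
}
=====
If something like this is what the selected variant does for 1 GiB copies,
is that a plausible explanation for the simple loop winning on these parts?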
Thanks.
Cheng