This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.


Intel's new rte_memcpy()


Howdy!

I am hoping for some feedback and advice as an application developer.

Intel have recently posted a couple of memcpy() implementations and
suggested that these have significant advantages for networking
applications. There is one for Sandy Bridge and one for Haswell. The
proposal is that networking application developers would statically
link one or both of these into their applications instead of
dynamically linking with glibc. This proposal is part of their Data
Plane Development Kit (dpdk.org).

They explain it much better than I do:
http://dpdk.org/ml/archives/dev/2014-November/008158.html

and their code is here:
https://gist.github.com/lukego/efc82a15bde5ec83cb1b
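
For concreteness, here is roughly the shape I understand this would
take in an application. The compile-time switch USE_RTE_MEMCPY and the
pkt_copy wrapper are names I made up for illustration; rte_memcpy()
itself comes from DPDK's <rte_memcpy.h>:

    /* Pick the copy routine at build time: DPDK's rte_memcpy() when
     * USE_RTE_MEMCPY is defined, otherwise the glibc memcpy().
     * (USE_RTE_MEMCPY and pkt_copy are illustrative names only.) */
    #include <string.h>

    #ifdef USE_RTE_MEMCPY
    #include <rte_memcpy.h>
    #define pkt_copy(dst, src, n) rte_memcpy((dst), (src), (n))
    #else
    #define pkt_copy(dst, src, n) memcpy((dst), (src), (n))
    #endif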

My question to the list is this:

Should networking application developers adopt Intel's custom
implementation if (like me) they depend on good, consistent memcpy
performance across all recent hardware (>= Sandy Bridge) and Linux
distributions? (And, if so, what should be done about memmove?)

I have done some cursory benchmarks with cachebench:
http://dpdk.org/ml/archives/dev/2015-January/011574.html

... with a correction to the rte_memcpy results on Haswell:
http://dpdk.org/ml/archives/dev/2015-January/011691.html
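
For anyone who wants to try something similar without cachebench, the
measurement I have in mind is roughly the following. This is only a
crude sketch, not the actual harness; the buffer size and iteration
count are arbitrary choices of mine:

    /* Crude memcpy throughput measurement (not the cachebench
     * harness): copy a fixed-size buffer repeatedly, report MB/s. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    int main(void)
    {
        size_t len = 1 << 20;            /* 1 MiB working set (arbitrary) */
        int iters = 10000;               /* arbitrary repeat count */
        char *src = malloc(len), *dst = malloc(len);
        struct timespec t0, t1;

        memset(src, 0xa5, len);
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < iters; i++)
            memcpy(dst, src, len);       /* swap in rte_memcpy() to compare */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec)
                    + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        /* print a byte of dst so the copies are not optimised away */
        printf("%.1f MB/s (check: %d)\n",
               (double)len * iters / secs / 1e6, dst[0]);
        free(src);
        free(dst);
        return 0;
    }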

Cheers,
-Luke

