This is the mail archive of the mailing list for the libc-ports project.
Re: [PATCH] sysdeps/arm/armv7/multiarch/memcpy_impl.S: Improve performance.
- From: Ondřej Bílka <neleai at seznam dot cz>
- To: "Ryan S. Arnold" <ryan dot arnold at gmail dot com>
- Cc: Carlos O'Donell <carlos at redhat dot com>, Will Newton <will dot newton at linaro dot org>, "libc-ports at sourceware dot org" <libc-ports at sourceware dot org>, Patch Tracking <patches at linaro dot org>, Siddhesh Poyarekar <siddhesh at redhat dot com>
- Date: Wed, 4 Sep 2013 00:27:15 +0200
- Subject: Re: [PATCH] sysdeps/arm/armv7/multiarch/memcpy_impl.S: Improve performance.
- Authentication-results: sourceware.org; auth=none
- References: <520894D5 dot 7060207 at linaro dot org> <CANu=DmiBHoymFKTvaW_VsdhWZEYwkfViz1tTeRgj7H80f0FntA at mail dot gmail dot com> <5220D30B dot 9080306 at redhat dot com> <CANu=DmiXLL9v1Z1KS0sBOs-pL8csEUGc9YE829_-tidKd-GruQ at mail dot gmail dot com> <5220F1F0 dot 80501 at redhat dot com> <CANu=DmhA9QvSe6RS72Db2P=yyjC72fsE8d4QZKHEcNiwqxNMvw at mail dot gmail dot com> <52260BD0 dot 6090805 at redhat dot com> <CAAKybw99YcSoyU58w2iqHGRTQpajAtKX6JZp=r57bT37fjvQ2Q at mail dot gmail dot com>
On Tue, Sep 03, 2013 at 02:31:14PM -0500, Ryan S. Arnold wrote:
> On Tue, Sep 3, 2013 at 11:18 AM, Carlos O'Donell <email@example.com> wrote:
> > We have one, it's the glibc microbenchmark, and we want to expand it,
> > otherwise when ACME comes with their patch for ARM and breaks performance
> > for targets that Linaro cares about I have no way to reject the patch
> > objectively :-)
> Can you be objective in analyzing performance when two different
> people have differing opinions on what performance preconditions
> should be coded against?
I fear more situations like when Google and Facebook send their
implementations, each of which saves them millions of dollars in energy
bills at the expense of others.
> > You need to statistically analyze the numbers, assign weights to ranges,
> > and come up with some kind of number that evaluates the results based
> > on *some* formula. That is the only way we are going to keep moving
> > performance forward (against some kind of criteria).
> This sounds like establishing preconditions (what types of data will
> be optimized for).
> Unless technology evolves that you can statistically analyze data in
> real time and adjust the implementation based on what you find (an
> implementation with a different set of preconditions) to account for
> this you're going to end up with a lot of in-fighting over
The technology is there, at least for x64; it's a political problem.
You do not need real-time analysis most of the time. When a user accepts
the possibility that performance could be worse by a constant factor, then
profiling becomes easier. We could rely on data from previous runs of the
program. A first step would be to replace ifunc selection by first running
benchmarks on the processor, then creating a hash table recording the
fastest implementation for each case; selection would then be done by this
lookup.
My profiler could (modulo externalities) measure, for each program,
which implementation is fastest on a given processor.
Then, AFAIR, appending data to an ELF binary is legal, so we could append
a hash table with program-specific numbers to the binary.
An ifunc resolver would first look at the end of the binary for an ifunc
table. This can be implemented so that the worst that can happen on a false
positive is selecting a slow implementation. If there is an entry for the
current processor it will be used; otherwise the resolver will look at the
table at the end of libc.so and repeat the lookup, and if nothing is found
there either it will fall back to the default implementation.
This is doable; the problem is motivating somebody to do the profiling. My
approach tries to make it possible for programmers (write the tests, run
them on different machines, and assemble the results), for distributions
(find enough users who agree to occasionally turn profiling on and send
the results), or for end users, who could turn profiling on to learn their
own usage patterns.