This is the mail archive of the libc-ports@sourceware.org
mailing list for the libc-ports project.
Re: [PATCH] sysdeps/arm/armv7/multiarch/memcpy_impl.S: Improve performance.
- From: Siddhesh Poyarekar <siddhesh at redhat dot com>
- To: "Carlos O'Donell" <carlos at redhat dot com>
- Cc: Ondřej Bílka <neleai at seznam dot cz>, Will Newton <will dot newton at linaro dot org>, "libc-ports at sourceware dot org" <libc-ports at sourceware dot org>, Patch Tracking <patches at linaro dot org>
- Date: Wed, 4 Sep 2013 13:00:09 +0530
- Subject: Re: [PATCH] sysdeps/arm/armv7/multiarch/memcpy_impl.S: Improve performance.
- Authentication-results: sourceware.org; auth=none
- References: <CANu=DmiBHoymFKTvaW_VsdhWZEYwkfViz1tTeRgj7H80f0FntA at mail dot gmail dot com> <5220D30B dot 9080306 at redhat dot com> <CANu=DmiXLL9v1Z1KS0sBOs-pL8csEUGc9YE829_-tidKd-GruQ at mail dot gmail dot com> <5220F1F0 dot 80501 at redhat dot com> <CANu=DmhA9QvSe6RS72Db2P=yyjC72fsE8d4QZKHEcNiwqxNMvw at mail dot gmail dot com> <52260BD0 dot 6090805 at redhat dot com> <20130903173710 dot GA2028 at domone dot kolej dot mff dot cuni dot cz> <522621E2 dot 6020903 at redhat dot com> <20130903185721 dot GA3876 at domone dot kolej dot mff dot cuni dot cz> <5226354D dot 8000006 at redhat dot com>
On Tue, Sep 03, 2013 at 03:15:25PM -0400, Carlos O'Donell wrote:
> I agree. The eventual goal of the project is to have some kind of
> whole system benchmarking that allows users to feed in their profiles
> and allow us as developers to see what users are doing with our library.
> Just like CPU designers feed in a whole distribution of applications
> and look at the probability of instruction selection and tweak instruction
> to microcode mappings.
> I am willing to accept a certain error in the process as long as I know
> we are headed in the right direction. If we all disagree about the
> direction we are going in then we should talk about it.
> I see:
> microbenchmarks -> whole system benchmarks -> profile driven optimizations
I've mentioned this before - microbenchmarks are not a path to whole
system benchmarks; they don't replace them. We need to work on both
in parallel because they have different goals.
A microbenchmark would have parameters such as alignment, size and
cache pressure to determine how an implementation scales. These are
generic numbers (i.e. they're not tied to specific high level
workloads) that a developer can use to design their programs.
Whole system benchmarks, however, work at a different level. They would
give an average case number that describes how a specific recipe
impacts the performance of a set of programs. An administrator would
use these to tune the system for the workload.
> I would be happy to accept a patch that does:
> * Shows the benchmark numbers.
> * Explains relevant factors not caught by the benchmark that affect
> performance, what they are, and why the patch should go in.
> My goal is to increase the quality of the written rationales for
> performance related submissions.
Agreed. In fact, this should go in as a large comment in the
implementation itself. Someone had mentioned in the past (was it
Torvald?) that every assembly implementation we write should be as
verbose in comments as it can possibly be so that there is no
ambiguity about the rationale for selection of specific instruction
sequences over others.
> >> If we have N tests and they produce N numbers, for a given target,
> >> for a given device, for a given workload, there is a set of importance
> >> weights on N that should give you some kind of relevance.
> > You are jumping to case when we will have these weights. Problematic
> > part is getting those.
> I agree.
> It's hard to know the weights without having an intuitive understanding
> of the applications you're running on your system and what's relevant
> for their performance.
For memcpy, I'd measure along these lines:
1. Assume aligned input. Nothing should take (any noticeable)
performance away from aligned copies/moves.
2. Scale with size.
3. Provide acceptable performance for unaligned inputs without
penalizing the aligned case.
4. Measure the effect of dcache pressure on function performance.
5. Measure the effect of icache pressure on function performance.
Depending on the actual cost of cache misses on different processors,
the icache/dcache miss cost would either have higher or lower weight
but for 1-3, I'd go in that order of priorities with little concern
for unaligned cases.