This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: benchmark improvements (Was: Re: [PATCH] sysdeps/arm/armv7/multiarch/memcpy_impl.S: Improve performance.)


On 2 September 2013 15:20, Siddhesh Poyarekar <siddhesh@redhat.com> wrote:
> Adding libc-alpha since my comments on this are more suitable there -
> maybe we should just get rid of the ports mailing list altogether.
>
> On Mon, Sep 02, 2013 at 02:58:23PM +0100, Will Newton wrote:
>> The key advantage of the cortex-strings framework is that it allows
>> graphing the results of benchmarks. Often changes to string function
>> performance can only really be analysed graphically; otherwise you
>> end up with a huge soup of numbers, some going up, some going down,
>> and it is very hard to separate the signal from the noise.
>
> We already depend on gd-devel to draw graphs for memusagestat.  Maybe
> we could leverage that in the microbenchmark as well and come up with
> html reports rather than just a bench.out?  Graphs in general are a
> lot more readable and I agree that they would be the next logical
> improvement to the benchmark framework.

gd is quite basic from a graph-plotting point of view; it's more of a
low-level graphics library and requires wrappers to call from
scripting languages. I already have some code I can cannibalize from
cortex-strings that uses Python and Pylab to do the plotting; I'll
submit a patch for review soon.
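To illustrate the kind of plotting meant here, below is a minimal sketch
using matplotlib (the library behind the Pylab interface mentioned above).
It is not the actual cortex-strings code; the input format, the variant
names, and all the numbers are made up for the example - one
(size, throughput) series per implementation variant, plotted on a
log-scale size axis so that small and large copies are both readable.

```python
# Hypothetical sketch of plotting string-function benchmark results.
# The data below is invented sample output, not real bench.out numbers.
import matplotlib
matplotlib.use("Agg")  # headless backend: render to a file, no display needed
import matplotlib.pyplot as plt

# One series per implementation variant: (copy size in bytes, bytes/ns).
results = {
    "memcpy_neon":    [(64, 4.1), (256, 7.9), (1024, 10.2), (4096, 11.0)],
    "memcpy_generic": [(64, 3.0), (256, 5.2), (1024, 6.1), (4096, 6.4)],
}

fig, ax = plt.subplots()
for name, series in sorted(results.items()):
    sizes, rates = zip(*series)
    ax.plot(sizes, rates, marker="o", label=name)
ax.set_xscale("log", base=2)   # copy sizes span orders of magnitude
ax.set_xlabel("copy size (bytes)")
ax.set_ylabel("throughput (bytes/ns)")
ax.set_title("memcpy variants (sample data)")
ax.legend()
fig.savefig("memcpy-bench.png")
```

A trend line per variant makes the crossover points between
implementations obvious in a way a table of numbers never is.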

>> The glibc benchmarks also have some other weaknesses that should
>> really be addressed, hopefully I'll have some time to write patches
>> for some of this work.
>
> I know Ondrej had proposed a few improvements as well.  I'd like to
> see those reposted so that we can look at it and if possible, have
> them merged in.

I already have a patch to do multiple runs of benchmarks - some
factors that can impact a benchmark, such as physical page allocation,
can only be controlled for this way. As I mentioned above, I'd also
like to get graphing capability in there too. Beyond that, it would be
nice to look at the various sizes and alignments used and make sure
there is a reasonably complete set, and to make sure the tests are run
for a useful number of iterations (neither too large nor too small).
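As a rough illustration of the multiple-runs idea (not the actual glibc
patch), the sketch below repeats a whole benchmark several times and keeps
per-run statistics: effects like physical page allocation are fixed within
one run but vary between runs, so the minimum across runs is a better
estimate and the spread flags noisy configurations. The bench_copy
function here is a stand-in timing a plain Python buffer copy.

```python
# Hypothetical sketch: control for per-run effects by repeating the
# whole benchmark and aggregating across runs.
import statistics
import timeit

def bench_copy(size, iterations=1000):
    """Time copying `size` bytes `iterations` times; return ns per copy."""
    src = bytes(size)
    total = timeit.timeit(lambda: bytearray(src), number=iterations)
    return total * 1e9 / iterations

def run_benchmark(runs=5, sizes=(64, 1024, 65536)):
    # One dict per run: {size: ns_per_copy}. Each run re-allocates its
    # buffers, so run-to-run allocation differences show up in the spread.
    per_run = [{s: bench_copy(s) for s in sizes} for _ in range(runs)]
    report = {}
    for s in sizes:
        samples = [r[s] for r in per_run]
        # Minimum is the least-disturbed observation; the standard
        # deviation across runs flags an unstable configuration.
        report[s] = (min(samples), statistics.pstdev(samples))
    return report
```

Choosing the iteration count is the same trade-off the mail describes:
too few iterations and timer noise dominates; too many and the working
set sits warm in cache, hiding the behaviour a cold caller would see.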


-- 
Will Newton
Toolchain Working Group, Linaro
