This is the mail archive of the
libc-alpha@sourceware.org
mailing list for the glibc project.
Re: [PATCH] Support separate benchmark outputs
- From: Siddhesh Poyarekar <siddhesh at redhat dot com>
- To: Ondřej Bílka <neleai at seznam dot cz>
- Cc: libc-alpha at sourceware dot org
- Date: Tue, 16 Apr 2013 19:33:55 +0530
- Subject: Re: [PATCH] Support separate benchmark outputs
- References: <20130416122544 dot GH3063 at spoyarek dot pnq dot redhat dot com> <20130416132838 dot GA29626 at domone dot kolej dot mff dot cuni dot cz>
On Tue, Apr 16, 2013 at 03:28:38PM +0200, Ondřej Bílka wrote:
> I already wrote a systemwide profiler for string functions. It
> integrates the results so you do not have to.
> I also included a unit test there. See kam/WWW/memcpy_profile.tar.bz2
>
> I plan to integrate this into the dryrun framework.
Systemwide profiling has different goals compared to microbenchmarks.
> > + for (i = 0; i < 32; ++i)
> > + {
> > + HP_TIMING_NOW (start);
> > + CALL (impl, dst, src, len);
> > + HP_TIMING_NOW (stop);
> > + HP_TIMING_BEST (best_time, start, stop);
> > + }
> > +
> You simply cannot do measurements this way. They are biased, and the
> result will be about 20 cycles off, because it does not account for
> branch misprediction and a thousand other factors.
And I think that's fine, because I get measurements for exactly what I
have defined. While I agree that systemwide profiling might give a good
overall picture of string function performance, it does not give any
information about performance in specific cases. Also, the key factor
here is the ability to compare function implementations side by side:
more than the absolute numbers, what matters here is the relative
performance.
In other words, it would be more productive to help enhance the data
in the tests to increase coverage.
Siddhesh