This is the mail archive of the mailing list for the glibc project.

Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: [PATCH] Converted benchmark to benchtest.

On 5 June 2014 19:21, Ondřej Bílka <> wrote:
> For benchmark review you should definitely run it to check whether the
> results make sense. It helps prevent oopses like what happened to me
> when I checked pthread_once: it always took only a few cycles, and the
> problem turned out to be that I was not linking with pthread.

If sanity is what you're looking for, i.e. the requisite calls not
being optimized out, then yes, I did verify that and even ran it once
against current master:

  "pthread_rwlock_test": {
   "rwlock": {
    "duration": 2.87958e+09,
    "iterations": 2.7688e+07,
    "max": 172.944,
    "min": 97.516,
    "mean": 104.001
   "rdlock": {
    "duration": 2.88252e+09,
    "iterations": 2.7022e+07,
    "max": 252.66,
    "min": 101.541,
    "mean": 106.673

> Siddhesh sent an alternate benchmark, so somebody needs to stand behind
> it. Who will do it? I won't, as I am convinced by the previous benchmark.

The benchmark you proposed is equivalent in terms of what we're
measuring, and I was only trying to help you get your benchmark into
the standard format so that you don't have to redo the JSON output or
leave that for someone else to do.  If uncontended performance is
important, then this benchmark is useful regardless of whether it shows
that the assembly implementation is better than the C one.  In fact,
you argued in your patch proposal that the benchmark itself is
orthogonal to Andi's patch.  Did you change your mind about that?

> Will one of you stop bikeshedding and say that there is really no
> difference between the assembly and C rwlock implementations, or will
> you keep arguing about the name of the benchmark for the next month?

I like my benchmark name in italics :)

