This is the mail archive of the libc-alpha
mailing list for the glibc project.
Re: libc-alpha Digest 19 Dec 2017 01:07:32 -0000 Issue 6029
- From: "christopher dot aoki at oracle dot com" <christopher dot aoki at oracle dot com>
- To: DJ Delorie <dj at redhat dot com>
- Cc: Chris Aoki <christopher dot aoki at oracle dot com>, libc-alpha at sourceware dot org
- Date: Mon, 18 Dec 2017 18:38:04 -0800
- Subject: Re: libc-alpha Digest 19 Dec 2017 01:07:32 -0000 Issue 6029
- Authentication-results: sourceware.org; auth=none
- References: <email@example.com>
> On Dec 18, 2017, at 6:17 PM, DJ Delorie <firstname.lastname@example.org> wrote:
> "email@example.com" <firstname.lastname@example.org> writes:
>> Just to be clear, by “random benchmarks” do you mean synthetic benchmarks
>> based on workloads generated using a pseudo-random number generator, or
>> something else?
> I meant a benchmark you found on the internet, vs one we've collected as
> part of glibc's benchmark collection.
> So, for example, if you say "Hey folks, I found this benchmark on
> foo.bar.com and it says my patch is wonderful!" then you really haven't
> added much to my confidence. Instead, I'd rather you say "Hey folks, I
> ran all the glibc benchmarks and they say my patch is wonderful!" ;-)
> Of course, this doesn't mean I don't want to know about benchmarks you
> find on the internet! But if a benchmark is relevant to glibc's goals,
> reflects glibc's users and their use cases, and is redistributable (or
> we can trace it), we should include it in our corpus of benchmarks. The
> goal here is that everyone should have access to the same set of
> benchmarks, so that results are reproducible and use cases aren't
> overlooked.
> Specifically, we don't want to do a glibc release and *then* find out
> some major application sees worse performance than before, because we
> didn't benchmark that application and thus didn't notice that a change
> affected it negatively.
Thanks for the clarification.
That seems quite reasonable to me.