This is the mail archive of the
libc-alpha@sourceware.org
mailing list for the glibc project.
Re: [PATCH] Fixes tree-loop-distribute-patterns issues
- From: Torvald Riegel <triegel at redhat dot com>
- To: Ondřej Bílka <neleai at seznam dot cz>
- Cc: Roland McGrath <roland at hack dot frob dot com>, Adhemerval Zanella <azanella at linux dot vnet dot ibm dot com>, "Carlos O'Donell" <carlos at redhat dot com>, "GNU C. Library" <libc-alpha at sourceware dot org>, Siddhesh Poyarekar <siddhesh at redhat dot com>
- Date: Fri, 21 Jun 2013 12:44:03 +0200
- Subject: Re: [PATCH] Fixes tree-loop-distribute-patterns issues
- References: <51C1BFE9 dot 4070805 at linux dot vnet dot ibm dot com> <51C1CEFC dot 9000100 at redhat dot com> <51C1FE4C dot 3020400 at linux dot vnet dot ibm dot com> <20130619221130 dot 7B91A2C10E at topped-with-meat dot com> <51C31177 dot 90303 at linux dot vnet dot ibm dot com> <20130620175832 dot 0E6FA2C133 at topped-with-meat dot com> <20130620213141 dot GA4833 at domone dot kolej dot mff dot cuni dot cz> <20130620205919 dot 9156B2C135 at topped-with-meat dot com> <20130621020055 dot GA4729 at domone dot kolej dot mff dot cuni dot cz> <1371802028 dot 964 dot 3605 dot camel at triegel dot csb> <20130621112409 dot GA7504 at domone dot kolej dot mff dot cuni dot cz>
On Fri, 2013-06-21 at 13:24 +0200, Ondřej Bílka wrote:
> On Fri, Jun 21, 2013 at 10:07:08AM +0200, Torvald Riegel wrote:
> > On Fri, 2013-06-21 at 04:00 +0200, Ondřej Bílka wrote:
> > > I chose -O0 as the lesser evil, compared to having the reference
> > > implementation run twice as fast depending on which compiler you use.
> > >
> > > One solution is to mandate that benchmarks be run with a fixed version
> > > of gcc and fixed flags.
> > >
> > > A second variant could be to keep the assemblies plus a regeneration
> > > script that would be run with a specific gcc.
> >
> > Yes, you can try to find a niche where you hope you can compare stuff.
> > But you can as well just get all the measurements you can from people
> > out there -- with whatever version of gcc is available -- and take this
> > into account when drawing conclusions from the data. That is, you'd
> > set up your machine learning in such a way that it looks at the data and
> > checks whether there is high confidence for a certain conclusion (e.g.,
> > new version of code faster or not). Confidence will be lower if, for
> > example, we see performance vary a lot with different versions of gcc,
> > but remain more or less unchanged when gcc versions don't differ; but if
> > performance varies independently of the gcc version, that's also useful
> > to know because it means we draw our conclusion from a wider set of
> > tests. Likewise for other properties of the test environment such as
> > the CPU etc.
> >
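The confidence check described above can be sketched numerically: group
timing samples by the gcc version used to build them, then compare how much
the per-version means differ (between-version variance) with the noise inside
each version (within-version variance). Everything here is a hypothetical
illustration -- the version strings, the timings, and the 10x threshold are
made up, not real glibc benchmark data:

```python
# Hedged sketch: does benchmark performance depend on the gcc version?
# All numbers and version strings below are illustrative, not measured.
from statistics import mean, pvariance

# timings[gcc_version] = run times (seconds) for one routine variant
timings = {
    "gcc-4.7": [1.02, 1.05, 0.98],
    "gcc-4.8": [0.81, 0.79, 0.83],
}

grand = mean(t for runs in timings.values() for t in runs)

# Between-version variance: how far the per-version means spread out.
between = pvariance([mean(runs) for runs in timings.values()])

# Within-version variance: average run-to-run noise inside each version.
within = mean(pvariance(runs) for runs in timings.values())

# If timings swing with the compiler (between >> within), a win observed
# under one gcc deserves low confidence of generalizing; if they don't,
# the mixed-compiler data set simply gives us more samples to draw on.
compiler_sensitive = between > 10 * within
print(grand, between, within, compiler_sensitive)
```

The same idea extends to any other environment parameter (CPU model,
kernel, flags): partition the samples by that parameter and ask whether
it explains more variance than plain measurement noise does.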
> And what we will do with this data?
>
> You typically use machine learning to learn trivial facts from data sets
> that are too vast to browse manually.
Is your average web search just about trivial facts?
Seriously, if all that machine learning and "big data" gave you were
trivial facts, do you think that people would invest as much into this
as they do?
> It is faster to just browse the results,
> and you will train your intuition on them.
Manual inspection just doesn't scale to the scope we need it to scale
to. We know that there are *lots* of parameters that can influence
performance, we cannot control all of them, and we likely don't even
know all of them.