This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.



Re: [PATCH 1/2] benchtests: Memory walking benchmark for memcpy


On 10/04/2017 04:12 PM, Victor Rodriguez wrote:
> On Wed, Oct 4, 2017 at 5:49 PM, Carlos O'Donell <carlos@redhat.com> wrote:
>> On 10/04/2017 03:45 PM, Victor Rodriguez wrote:
>>> On Wed, Oct 4, 2017 at 5:19 PM, Carlos O'Donell <carlos@redhat.com> wrote:
>>>> On 10/03/2017 11:53 PM, Siddhesh Poyarekar wrote:
>>>>> On Friday 22 September 2017 05:29 AM, Siddhesh Poyarekar wrote:
>>>>>> On Thursday 21 September 2017 11:59 PM, Carlos O'Donell wrote:
>>>>>>> I like the idea, and the point that the other benchmark eventually degrades
>>>>>>> into measuring L1 performance is an interesting insight.
>>>>>>>
>>>>>>> I do not like that it produces total data rate, not time taken per execution.
>>>>>>> Why the change? If time taken per execution was OK before, why not here?
>>>>>>
>>>>>> That is because it seems more natural to express string function
>>>>>> performance by the rate at which it processes data than the time it
>>>>>> takes to execute.  It also makes comparisons across sizes a bit
>>>>>> more interesting, e.g. the data rate for processing 1 MB 32 bytes at a
>>>>>> time vs. 128 bytes at a time.
>>>>>>
>>>>>> The fact that "twice as fast" sounds better than "takes half the time"
>>>>>> is an added bonus :)
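
To make the throughput framing above concrete, here is a minimal,
self-contained sketch.  It assumes a plain clock_gettime timer rather
than the benchtests harness' own timing and reporting machinery, and
all names in it are illustrative: it copies the same 1 MiB region in
32-byte and then 128-byte chunks and reports both as bytes per second,
so the two runs land on the same scale.

#include <stdio.h>
#include <string.h>
#include <time.h>

static double
elapsed_ns (struct timespec start, struct timespec end)
{
  return (end.tv_sec - start.tv_sec) * 1e9 + (end.tv_nsec - start.tv_nsec);
}

int
main (void)
{
  enum { TOTAL = 1 << 20 };                /* Walk 1 MiB of memory.  */
  static char src[TOTAL], dst[TOTAL];
  const size_t chunks[] = { 32, 128 };

  for (size_t i = 0; i < sizeof chunks / sizeof chunks[0]; i++)
    {
      struct timespec start, end;

      clock_gettime (CLOCK_MONOTONIC, &start);
      for (size_t off = 0; off + chunks[i] <= TOTAL; off += chunks[i])
        memcpy (dst + off, src + off, chunks[i]);
      clock_gettime (CLOCK_MONOTONIC, &end);

      /* Bytes per second: the same metric regardless of chunk size,
         which is what makes the cross-size comparison natural.  A real
         benchmark would also defeat dead-code elimination and repeat
         the measurement; both are omitted here for brevity.  */
      double ns = elapsed_ns (start, end);
      printf ("%zu-byte chunks: %.2f MB/s\n",
              chunks[i], TOTAL / ns * 1e9 / 1e6);
    }
  return 0;
}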
>>>>>
>>>>> Carlos, do you think this is a reasonable enough explanation?  I'll fix
>>>>> up the output in a subsequent patch so that it has a 'throughput'
>>>>> property that the post-processing scripts can read without needing the
>>>>> additional argument in 2/2.
>>>>
>>>> As the subsystem maintainer I defer to your choice here. I don't have a
>>>> strong opinion, other than a desire for conformity of measurements to
>>>> avoid confusion. If I could say anything, consider the consumer and make
>>>> sure the data is tagged such that a consumer can determine if it is time
>>>> or throughput.
>>>>
>>>> --
>>>> Cheers,
>>>> Carlos.
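
A minimal sketch of the kind of tagging discussed here, using
hypothetical property names ("metric", "unit", "chunk-size") rather
than whatever the actual patch or benchtests' JSON helpers use: the
point is only that a consumer never has to guess whether the reported
number is a time or a throughput.

#include <stdio.h>
#include <stddef.h>

/* Hypothetical reporting helper; the property names are illustrative,
   not the ones used by benchtests or its post-processing scripts.  */
static void
report_result (const char *function, size_t chunk, double bytes_per_sec)
{
  printf ("{\n");
  printf ("  \"function\": \"%s\",\n", function);
  printf ("  \"chunk-size\": %zu,\n", chunk);
  /* Tag the metric explicitly so a consumer can tell time from
     throughput without out-of-band knowledge.  */
  printf ("  \"metric\": \"throughput\",\n");
  printf ("  \"unit\": \"bytes/sec\",\n");
  printf ("  \"value\": %.2f\n", bytes_per_sec);
  printf ("}\n");
}

int
main (void)
{
  report_result ("memcpy", 32, 1.23e10);
  return 0;
}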
>>>
>>> Quick question: do you think it might be a good idea to add this test
>>> to the Phoronix glibc bench?
>>>
>>> https://openbenchmarking.org/test/pts/glibc-bench
>>> https://openbenchmarking.org/innhold/cac2836cd5dbb8ae279f8a5e7b0896272e82dc76
>>>
>>> If so, let me know so I can work on adding it.
>>
>> As a volunteer, I appreciate any work you may wish to do for the project.
>>
>> Certainly, if you find it valuable to keep the pts/glibc-bench in sync
>> with glibc benchtests/, then it sounds like a good idea to update it
>> regularly based on the glibc changes.
> 
> Sure, happy to help the community.
>>
>> What is your impression of how pts/glibc-bench is being used?
>>
> 
> The "Recent Results With This Test" section shows that it has been used
> to measure things like:
> 
> Linux 4.14-rc1 vs. Linux 4.13 Kernel Benchmarks
> https://openbenchmarking.org/result/1709186-TY-LINUX414R23
> 
> as well as benchmarks of other CPU and system configurations.
> 
> So in my humble opinion, I think it is getting a lot of traction.
> 
> There is still work that needs to be done, but it is good to have a way
> to measure performance with the Phoronix framework.

That sounds great!

-- 
Cheers,
Carlos.

