
Re: Ping: ILP32 on aarch64 patches


Hi Yury,

On Sat, Jan 28, 2017 at 1:53 PM, Yury Norov <ynorov@caviumnetworks.com> wrote:
> Hi Ramana,
>
> On Fri, Jan 27, 2017 at 05:39:10PM +0000, Ramana Radhakrishnan wrote:
>> >
>> >> 4. More testing (LTP, trinity, performance regressions, etc.)
>> >
>> > Performance, LTP, trinity, and the glibc test suites were run, and
>> > some regressions were found compared to lp64, but nothing critical.
>> > For example, LTP shows ~5 extra failures, most of them due to an
>> > odd failure of the mkfs tool, which is called by those tests.
>> > Trinity looks OK to me, and performance on lp64 looks about the
>> > same - some tests are a little faster, some a little slower, all
>> > within a 5% range.
>>
>>
>> I don't grok enough about LTP to know if those extra failures are
>> acceptable and will leave that to Catalin and other kernel folks to
>> comment.
>>
>> This statement about performance variations of +/- 5% on lp64 is a
>> bit too hand-wavy for me and begs a few questions :) Most teams
>> fight hard for performance improvements across the toolchain on
>> aarch64, so a drop in overall performance is something that worries
>> me in some situations.
>
> It doesn't mean that lp64 gets 5% slower at all. It only means that
> some tests are a little (less than 1%) slower while some are faster.
> The biggest difference is about 2.3%, so +/- 2.5% is what I meant.
> In any case, all these performance measurements involve both the
> kernel and the toolchain.
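
Just to keep the ABI comparison concrete for the archive: ILP32 makes
int, long and pointers all 32-bit, while LP64 keeps int at 32 bits and
makes long and pointers 64-bit. A minimal check - assuming an aarch64
toolchain and libraries with ILP32 support, compiled with GCC's
-mabi=lp64 (the default) or -mabi=ilp32 - would be something like:

#include <stdio.h>

/* Print the type widths that distinguish the two data models:
 * ILP32: int, long and void * are all 4 bytes.
 * LP64:  int is 4 bytes; long and void * are 8 bytes.
 */
int main(void)
{
    printf("sizeof(int)    = %zu\n", sizeof(int));
    printf("sizeof(long)   = %zu\n", sizeof(long));
    printf("sizeof(void *) = %zu\n", sizeof(void *));
    return 0;
}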

What benchmarks are these where you see the swings?

>
> We will run all the tests again before the next submission. If you'd
> like to see some specific test in addition to the ones we already
> have, just let me know.

When you post again, could you provide a breakdown of the performance
swings in specific benchmarks and some initial analysis of why you see
them?

I assume that you are running the glibc tests and that they are clean
for ilp32?


>> - I'm not familiar with the trinity benchmark. For posterity in this
>> mailing list archive - can you post a link to it, and do you have (or
>> can you post) the analysis for this swing in performance? I'm
>> surprised that this has an impact on performance on lp64; if so,
>> under what circumstances do you see this performance loss, and for
>> what type of workloads? Is trinity a synthetic benchmark that
>> typically has enough noise in it that a swing of +/-5% is acceptable?
>> - What's the impact on standard lp64 benchmarks? Are we going to see
>> a loss in performance for benchmarks like SPECCPU? I don't see why
>> there should be any performance degradation, but if there is, can
>> you explain why?
>
> Trinity is a syscall fuzzer. The program pushes random data into
> syscalls and checks whether the kernel survives it.
> https://codemonkey.org.uk/projects/trinity/
>
> I'm not too familiar with it either. What I do is run trinity for a
> long period - a weekend or longer - and check how the kernel is doing
> afterwards. As far as I can see, people usually do similar things. If
> there is some obvious error in a syscall, 1000 calls to it with
> different parameters should highlight the problem. I didn't find
> anything like that.
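
So the technique, at its core, is something like the sketch below:
random arguments fed into a syscall, then a check that the kernel is
still answering sanely. The choice of getpriority and the 1000-call
loop here are only illustrative - trinity itself walks the whole
syscall table with much smarter argument generation.

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <errno.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    srand((unsigned)time(NULL));

    for (int i = 0; i < 1000; i++) {
        /* Random arguments for a harmless, read-only syscall. */
        long which = rand() % 4;   /* valid 'which' values are 0..2 */
        long who   = rand();       /* mostly nonexistent ids */

        long ret = syscall(SYS_getpriority, which, who);

        /* Via the raw syscall, -1 always means an error; EINVAL and
         * ESRCH are the expected rejections of our junk arguments. */
        if (ret == -1 && errno != EINVAL && errno != ESRCH)
            printf("iteration %d: unexpected errno %d\n", i, errno);
    }

    puts("kernel still answering syscalls; no crash observed");
    return 0;
}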

Ok, so trinity is a syscall correctness tester rather than a
performance benchmark?

Thanks for the pointers.

Ramana

>
> Yury

