This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: [RFC PATCH] aarch64: improve memset
- From: Richard Henderson <rth at twiddle dot net>
- To: Will Newton <will dot newton at linaro dot org>, Marcus Shawcroft <marcus dot shawcroft at gmail dot com>
- Cc: libc-alpha <libc-alpha at sourceware dot org>
- Date: Thu, 06 Nov 2014 07:55:17 +0100
- Subject: Re: [RFC PATCH] aarch64: improve memset
- Authentication-results: sourceware.org; auth=none
- References: <539BF47F dot 3030907 at twiddle dot net> <CAFqB+Py4Vk2vE2CmD7OGo88akzuOJEyuJ8n4e97-53+HM7FE9g at mail dot gmail dot com> <CANu=DmjSwqt_L=5bKdj3eTwPfxpr+tywJ23BffAaur326xTWSQ at mail dot gmail dot com>
On 11/05/2014 03:35 PM, Will Newton wrote:
> On 30 September 2014 12:03, Marcus Shawcroft <marcus dot shawcroft at gmail dot com> wrote:
>> On 14 June 2014 08:06, Richard Henderson <rth at twiddle dot net> wrote:
>>> The major idea here is to use IFUNC to check the zva line size once, and use
>>> that to select different entry points. This saves 3 branches during startup,
>>> and allows significantly more flexibility.
>>> Also, I've cribbed several of the unaligned store ideas that Ondrej has done
>>> with the x86 versions.
>>> I've done some performance testing using cachebench, which suggests that the
>>> unrolled memset_zva_64 path is 1.5x faster than the current memset at 1024
>>> bytes and above. The non-zva path appears to be largely unchanged.
>> OK Thanks /Marcus
> It looks like this patch has slipped through the cracks. Richard, are
> you happy to apply this or do you think it warrants further review?
Sorry for the radio silence.
Just before I went to apply it, I thought I spotted a bug that would affect
ld.so. I haven't yet had time to confirm it one way or the other.