[PATCH, ARM] memchr for ARM

David Gilbert david.gilbert@linaro.org
Tue Oct 11 16:32:00 GMT 2011


On 11 October 2011 12:46, Greta Yorsh <Greta.Yorsh@arm.com> wrote:
> A couple of small comments below.

Hi Greta,
  Thanks for the comments.

>> -----Original Message-----
>> From: newlib-owner@sourceware.org [mailto:newlib-owner@sourceware.org]
>> On Behalf Of Dr. David Alan Gilbert
>> Sent: 10 October 2011 19:11
>> To: newlib@sourceware.org
>> Cc: nickc@redhat.com; michael.hope@linaro.org; patches@linaro.org
>> Subject: [PATCH, ARM] memchr for ARM
>>
>> Please find below an ARM optimised memchr; it needs Thumb2 and at
>> least ARMv6T2. If Thumb isn't available, or the architecture is
>> earlier, it just includes the generic .c version.
>>
>> It's been tested on the arm-sim target, and the same assembly has
>> been tested in eglibc as well, on real hardware under Linux.
>>
>> Performance testing (on a 1GHz A9) shows it's faster than
>> the plain newlib code in most cases, and than the xscale version
>> in almost all cases, see:
>>
>> https://wiki.linaro.org/WorkingGroups/ToolChain/Benchmarks/InitialMemchr?action=AttachFile&do=view&target=sizes-memchr-08.png
>>
>> ('this' on the graph is the routine attached).
>>
>> I wasn't sure whether the right thing was to include the Makefile.in
>> diff, or just let you regenerate on release - in the end I included it.
>>
>> Thanks in advance,
>>
>> Dave
>>
>> 2011-10-10 Dr David Alan Gilbert <david.gilbert@linaro.org>
>>   * libc/machine/arm/Makefile.am (lib_a_SOURCES): Add memchr.c
>>   * libc/machine/arm/memchr.c: New file.
>>   * libc/machine/arm/Makefile.in: Manually add same change as
>>     Makefile.am
>>
>> .....
>> +
>> +   This memchr routine is optimised on a Cortex-A9 and should work on
>> +   all ARMv7 processors.   It has a fast past for short sizes, and has
>
> typo: past -> path ?

Oops yes.


>> +   an optimised path for large data sets; the worst case is finding
>> +   the match early in a large data set. */
>> +
>> +/* 2011-02-07 david.gilbert@linaro.org
>> +    Extracted from local git a5b438d861
>> +   2011-07-14 david.gilbert@linaro.org
>> +    Import endianness fix from local git ea786f1b
>> +   2011-10-06 david.gilbert@linaro.org
>> +    Import into Newlib from cortex-strings bzr rev 63 */
>> +
>> +#include "arm_asm.h"
>> +
>> +/* The code makes use of cbz/cbnz (in a way that's difficult to ifdef
>> +   around), so it's Thumb2 only, and uses uadd8/sel, which are 6T2
>> +   (Thumb) and 7a */
>> +#if !defined( __thumb2__ ) || !(defined(_ISA_ARM_7) || \
>> +     (defined(__ARM_ARCH_6T2__)))
>> +  /* Fall back to generic C code */
>
> Performance results you have for memchr show that the optimized version is much better than the plain memchr in newlib. Perhaps this version should be enabled in ARM mode as well?

Perhaps; in other libraries I'd had it in a .S file and just marked the
function and the file as being assembled in Thumb; it's a little
trickier to do here when I'm sometimes #including the .c file.
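For reference, the .S arrangement I'd used elsewhere was roughly this
(a sketch, not the actual file): the one function assembles as Thumb
even in an otherwise ARM-mode build, and the bx lr interworks back to
an ARM-mode caller:

        .syntax unified
        .text
        .thumb                  @ assemble this function as Thumb-2
        .thumb_func             @ mark memchr as Thumb for interworking
        .global memchr
        .type   memchr, %function
memchr:
        @ ... same body as the inline asm in the patch ...
        bx      lr
        .size   memchr, . - memchr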

> How much slower would it be if cbz/cbnz are replaced with instructions available in both ARM and Thumb (instead of ifdef)?

It will certainly add an instruction or two to the loops, and also mean
that I end up with the comparison right next to the following branch;
and given that I use uadd8/sel, it's always on a processor that can do
Thumb2 anyway.
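To illustrate with the byte loop at 5: (a rough, untested sketch) - the
loop relies on cbz not touching the flags, so the bne 5b can reuse the
result of the tst; a plain cmp/beq clobbers those flags, so the tests
have to be reordered as well as gaining an instruction:

        5:
             ldrb    r3, [r0], #1
             subs    r2, r2, #1
             cmp     r3, r1
             beq     50f
             cmp     r2, #0          @ replaces "cbz r2, 40f" but clobbers flags
             beq     40f
             tst     r0, #7          @ so the alignment test moves next to its branch
             bne     5b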

>> +#include "../../string/memchr.c"
>> +#else
>> +
>> +
>> +#include <sys/types.h>
>> +
>> +/* this lets us check a flag in a 00/ff byte easily in either
>> +   endianness */
>> +#ifdef __ARMEB__
>> +#define CHARTSTMASK(c) "1<<(31-(" #c "*8))"
>> +#else
>> +#define CHARTSTMASK(c) "1<<(" #c "*8)"
>> +#endif
>> +
>> +/* ---------------------------------------------------------------------- */
>> +void*
>> +__attribute__((naked)) memchr(const void* s, int c, size_t n)
>> +{
>> +  asm(
>> +     /* r0 = start of memory to scan
>> +        r1 = character to look for
>> +        r2 = length
>> +        returns r0 = pointer to character or NULL if not found */
>> +     "and    r1,r1,#0xff\n\t"        /* Don't think we can trust the caller
>> +                                        to actually pass a char */
>> +
>> +     "cmp    r2,#16\n\t"             /* If it's short don't bother with
>> +                                        anything clever */
>> +     "blt    20f\n\t"
>> +
>> +     "tst    r0, #7\n\t"             /* already aligned? skip the next bit */
>> +     "beq    10f\n"
>> +
>> +     /* Work up to an aligned point */
>> +"5:\n\t"
>> +     "ldrb   r3, [r0],#1\n\t"
>> +     "subs   r2, r2, #1\n\t"
>
> The flags are not used: subs -> sub.

Using the S form gives it the 16-bit encoding in Thumb, and there
is no disadvantage as far as I can tell.
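i.e. in unified syntax, with low registers, only the flag-setting form
gets the narrow encoding (outside an IT block):

        subs    r2, r2, #1      @ 16-bit Thumb encoding
        sub     r2, r2, #1      @ forces the 32-bit Thumb2 encoding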

>> +     "cmp    r3, r1\n\t"
>> +     "beq    50f\n\t"                /* If it matches exit found */
>> +     "tst    r0, #7\n\t"
>> +     "cbz    r2, 40f\n\t"            /* If we run off the end, exit
>> !found */
>> +     "bne    5b\n"                   /* If not aligned yet, do next byte */
>> +
>> +"10:\n\t"
>> +     /* At this point, we are aligned, we know we have at least 8
>> +        bytes to work with */
>> +     "push   {r4,r5,r6,r7}\n\t"
>> +     "orr    r1, r1, r1, lsl #8\n\t" /* expand the match word to all
>> bytes */
>> +     "orr    r1, r1, r1, lsl #16\n\t"
> Does this shift work for either endianness?

Yes; the value that's passed in is always in the least significant byte
of the register (it's not coming from memory), so the shifts replicate
it the same way in either endianness.
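e.g. with c == 0xab, r1 is 0x000000ab at this point whichever the
endianness, so:

        orr     r1, r1, r1, lsl #8      @ r1 = 0x0000abab
        orr     r1, r1, r1, lsl #16     @ r1 = 0xabababab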

>
>> +     "bic    r4, r2, #7\n\t"         /* No of double words to work with
>> */
> The comment is a bit confusing: r4 is the number of bytes (of the double words to work with), not the number of words.

Hmm, true; it's the number of double words * 8 - the point being that
it's the count rounded down to whole double words.
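e.g. for n == 27 (r2 == 0x1b):

        bic     r4, r2, #7      @ r4 = 24: three whole double words' worth of bytes
        ...
        and     r2, r2, #7      @ afterwards r2 = 3, the tail done a byte at a time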

>
>> +     "mvns   r7, #0\n\t"             /* all F's */
>> +     "movs   r3, #0\n"
>
> Flags not used: mvns -> mvn, movs -> mov ?

See above; should be shorter with the s.

>
>> +
>> +"15:\n\t"
>> +     "ldmia  r0!,{r5,r6}\n\t"
>
> It is probably better to use LDRD instead of LDM. On Cortex-A9, LDRD should perform similar to LDM of 2 words, but on Cortex-A15 LDRD is expected to be faster.
> There are more restrictions on ARM for registers used with LDRD, but it should be easy to change of roles of the temporary registers such that the same code works in both ARM and Thumb
> modes (e.g., using r6 for the counter and "ldrd r4, r5, [r0], #8").

I'd been told to avoid ldrd on the A9, it supposedly being worse than
ldm, although I did a test that seemed to show little difference.

>> +     "subs   r4, r4, #8\n\t"
>> +     "eor    r5,r5, r1\n\t"  /* Make r5,r6 00's where bytes match
>> target */
>> +     "eor    r6,r6, r1\n\t"
>> +     "uadd8  r5, r5, r7\n\t" /* Par add 0xff, sets GE bits for
>> bytes!=0 */
>> +     "sel    r5, r3, r7\n\t" /* bytes are 00 for none-00 bytes,
>> +                                or ff for 00 bytes - NOTE INVERSION */
>> +     "uadd8  r6, r6, r7\n\t" /* Par add 0xff, sets GE bits for
>> bytes!=0 */
>> +     "sel    r6, r5, r7\n\t" /* chained....bytes are 00 for none-00
>> bytes,
>> +                                or ff for 00 bytes - NOTE INVERSION */
>
> Why is it chained? Wouldn't it also be correct to use r3 again: "sel r6, r3, r7\n\t" ?

No; by chaining it, the r6 that's produced indicates whether there has
been a 00 in either word. r3 is all 0's, so r5 has an ff byte if there
are any 00's in the 1st word, and that gets passed through the chain
wherever there isn't a corresponding 00 in the 2nd word.
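As a worked example, say the match is in byte lane 1 of the 1st word
only; after the eors, r5 has lane 1 == 00 and r6 has no 00 lanes
(lanes listed lane 0 first):

        uadd8   r5, r5, r7      @ GE = {1,0,1,1}: carry out of every non-00 lane
        sel     r5, r3, r7      @ r5 = {00,ff,00,00} - the ff marks the hit
        uadd8   r6, r6, r7      @ GE = {1,1,1,1}: no 00 lanes in the 2nd word
        sel     r6, r5, r7      @ r6 = r5 = {00,ff,00,00}, still non-zero

With "sel r6, r3, r7" instead, that last step would give
r6 = {00,00,00,00}, and the cbnz would miss the hit in the 1st word.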

>> +     "cbnz   r6, 60f\n\t"
>> +     "bne    15b\n\t"        /* (Flags from the subs above) */
>> +
>> +     "pop    {r4,r5,r6,r7}\n\t"
>> +     "and    r1,r1,#0xff\n\t" /* r1 back to a single character from
>> above */
>> +     "and    r2,r2,#7\n"     /* Leave the count remaining as the number
>> +                                after the double words have been done */
>> +
>> +"20:\n\t"
>> +     "cbz    r2, 40f\n\t"    /* 0 length or hit the end already ->
>> !found */
>> +
>> +"21:\n\t"  /* Post aligned section, or just a short call */
>> +     "ldrb   r3,[r0],#1\n\t"
>> +     "subs   r2,r2,#1\n\t"
>> +     "eor    r3,r3,r1\n\t"   /* r3 = 0 if match */
>> +     "cbz    r3, 50f\n\t"
>> +     "bne    21b\n"          /* on r2 flags */
>> +
>> +"40:\n\t"
>> +     "movs   r0,#0\n\t"      /* not found */
>> +     "bx     lr\n"
>> +
>> +"50:\n\t"
>> +     "subs   r0,r0,#1\n\t"   /* found */
>> +     "bx     lr\n"
>> +
>> +"60:\n\t"
>> +     /* We're here because the fast path found a hit - now we have to
>> +        track down exactly which word it was
>> +        r0 points to the start of the double word after the one that
>> +             was tested
>> +        r5 has the 00/ff pattern for the first word, r6 has the
>> +             chained value */
>> +     "cmp    r5, #0\n\t"
>> +     "itte   eq\n\t"
>> +     "moveq  r5, r6\n\t"     /* the end is in the 2nd word */
>> +     "subeq  r0,r0,#3\n\t"   /* Points to 2nd byte of 2nd word
>> */
>> +     "subne  r0,r0,#7\n\t"   /* or 2nd byte of 1st word */
>> +
>> +     /* r0 currently points to the 3rd byte of the word containing the hit */
>
> Do you mean "the 2nd byte of the word containing the hit"?

Hmm, yes, I think you are right.
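To spell out the arithmetic: if the tested double word started at
address A, then at 60: r0 == A+8. A hit in the 1st word takes the
subne, giving A+1, i.e. the 2nd byte of the 1st word; a hit in the 2nd
word gives A+5, the 2nd byte of the 2nd word. The character tests that
follow add 0 to 3 to r0, and the subs r0,r0,#1 at 61: then lands on the
matching byte - e.g. a hit in the 1st character goes straight to 61:
and ends up as (A+1)-1 == A.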

>> +     "tst    r5, #" CHARTSTMASK(0) "\n\t"    /* 1st character */
>> +     "bne    61f\n\t"
>> +     "adds   r0,r0,#1\n\t"
>
> Flags not used: adds -> add.
>
>> +     "tst    r5, #" CHARTSTMASK(1) "\n\t"    /* 2nd character */
>> +     "ittt   eq\n\t"
>> +     "addeq  r0,r0,#1\n\t"
>> +     "tsteq  r5, # (3<<15)\n\t"      /* 2nd & 3rd character */
>
> Does it need to take into account endianness here?

No; 3<<15 is 0x00018000, which is one bit of each of the two middle
characters (they're either 00 or ff by this point, so it doesn't matter
which bit you test) - in either endianness it still gets you a test of
both characters.
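i.e. the layout is:

        0x00018000: bit 15 = one bit of register byte 1 (bits 8-15)
                    bit 16 = one bit of register byte 2 (bits 16-23)

Register bytes 1 and 2 are the middle two characters whichever way
round the endianness puts them, and since each lane of r5 is 00 or ff
after the sel, one bit per lane is a sufficient test.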

>> +     /* If not the 3rd must be the last one */
>> +     "addeq  r0,r0,#1\n"
>> +
>> +"61:\n\t"
>> +     "pop    {r4,r5,r6,r7}\n\t"
>> +     "subs   r0,r0,#1\n\t"
>> +     "bx     lr\n"
>> +  );
>> +}
>> +#endif
>> +

Thanks again for the comments,

Dave


