This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: [PATCH 2/*] Optimize generic strchrnul and strchr
- From: Chris Metcalf <cmetcalf at ezchip dot com>
- To: Ondřej Bílka <neleai at seznam dot cz>, Wilco Dijkstra <wdijkstr at arm dot com>
- Cc: <libc-alpha at sourceware dot org>
- Date: Wed, 27 May 2015 17:07:14 -0400
- Subject: Re: [PATCH 2/*] Optimize generic strchrnul and strchr
- References: <000d01d09879$ae9c2d80$0bd48880$ at com> <20150527160237 dot GA3621 at domone>
On 05/27/2015 12:02 PM, Ondřej Bílka wrote:
>> The other thing is support for big-endian - this is generally tricky as
>> the mask returned by the zero check won't work even if byte-reversed.
> Nice catch, I didn't think about that. The short answer is that you need a
> more complicated expression that doesn't cause carry propagation, like
>
> (((x | 128) - 127) ^ 128) & ~x & 128
>
> Then you could do byte reversal, but it isn't needed, as it would be
> faster to count leading zero bytes directly.
>
> So we will need to add a separate BIG_ENDIAN_EXPRESSION macro to support
> these. Possibly one could squeeze out extra performance by being more
> careful, but I don't care that much.
>
> Then first_nonzero_byte would need some work to support that; you could
> do it directly without reversing.
See sysdeps/tile/tilegx/strchrnul.c, which uses a string-endian.h
header to manage big-endian mode.
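[Editorial note: a minimal sketch of the endianness-selection idea discussed above, assuming the standard <endian.h> __BYTE_ORDER macros; the function name is illustrative, not the actual glibc or tile code.]

```c
#include <endian.h>
#include <stdint.h>

/* Map a per-byte zero mask (0x80 in each lane holding a zero byte) to
   the memory index of the first zero byte.  The branch is chosen at
   compile time: on big endian the first byte in memory is the most
   significant, so count leading zeros; on little endian it is the
   least significant, so count trailing zeros.  */
static inline unsigned int
index_first_zero_byte (uint64_t mask)
{
#if __BYTE_ORDER == __BIG_ENDIAN
  return __builtin_clzll (mask) / 8;
#else
  return __builtin_ctzll (mask) / 8;
#endif
}
```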
--
Chris Metcalf, EZChip Semiconductor
http://www.ezchip.com