Re: [PATCH][AArch64] Inline mempcpy again



On 29/06/2017 13:20, Wilco Dijkstra wrote:
> Recent changes removed the generic mempcpy inline.  Given GCC still
> doesn't optimize mempcpy (PR70140), I am adding it again.  Since
> string/string.h no longer includes an architecture-specific header, do this
> inside include/string.h and for now only on AArch64.

Should we reopen PR70140 then?  Its current RESOLVED/FIXED state
indicates that recent GCC does not show the issue.  I also noted some
discussion on PR81657, which is also set as RESOLVED/FIXED.
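
For reference, my reading of PR70140 is that GCC expands small,
constant-size memcpy calls inline but leaves the equivalent mempcpy
as a library call.  A minimal illustration (the function names are
mine):

#define _GNU_SOURCE
#include <string.h>

/* GCC can expand the memcpy form inline for a small constant size,
   while the mempcpy form has historically stayed a call into the
   generic (and slower) mempcpy.  */
void *
via_mempcpy (void *dest, const void *src)
{
  return mempcpy (dest, src, 16);
}

void *
via_memcpy (void *dest, const void *src)
{
  return (char *) memcpy (dest, src, 16) + 16;
}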

> 
> OK for commit?
> 
> ChangeLog: 
> 2017-06-29  Wilco Dijkstra  <wdijkstr@arm.com>
> 
>         * include/string.h (mempcpy): Redirect to __mempcpy_inline.
>         (__mempcpy): Likewise.
>         (__mempcpy_inline): New inline function.
>         * sysdeps/aarch64/string_private.h: Define _INLINE_mempcpy.

We removed it because the consensus is that we do not want this kind
of optimization to be provided by libc anymore; adding this exception
could potentially take us back to the previous state, where multiple
architectures provided their own string.h/string_private.h hacks.
I see this patch as a step back, and I do not think it is an
optimization that yields a large enough performance improvement to
justify an exception.

If optimizing mempcpy is really required, I think a better option
would be to provide an optimized mempcpy based on the current
memcpy/memmove.  I have created an implementation [1] which provides
the expected optimized mempcpy at the cost of only one extra 'mov'
instruction on both memcpy and memmove (so that they can share the
same memcpy/memmove code).

[1] https://sourceware.org/git/?p=glibc.git;a=shortlog;h=refs/heads/azanella/aarch64-mempcpy
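
To make the idea concrete, here is a rough C model of that approach
(the names and the byte loop are mine; the branch itself is AArch64
assembly): both entry points compute their return value up front and
then fall into shared copy code, so the only cost to memcpy/memmove
is the equivalent of one extra 'mov':

#include <stddef.h>

/* Sketch only: a byte loop stands in for the real optimized copy
   code.  Each entry passes its precomputed return value to the
   shared code, mirroring how the assembly keeps it in a spare
   register.  */
static void *
copy_common (void *dest, const void *src, size_t n, void *result)
{
  char *d = dest;
  const char *s = src;
  while (n-- > 0)
    *d++ = *s++;
  return result;
}

void *
my_memcpy (void *dest, const void *src, size_t n)
{
  return copy_common (dest, src, n, dest);               /* result = dest */
}

void *
my_mempcpy (void *dest, const void *src, size_t n)
{
  return copy_common (dest, src, n, (char *) dest + n);  /* result = dest + n */
}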

> 
> --
> diff --git a/include/string.h b/include/string.h
> index 069efd0b87010e5fdb64c87ced7af1dc4f54f232..46b90b8f346149f075fad026e562dfb27b658969 100644
> --- a/include/string.h
> +++ b/include/string.h
> @@ -197,4 +197,23 @@ extern char *__strncat_chk (char *__restrict __dest,
>  			    size_t __len, size_t __destlen) __THROW;
>  #endif
>  
> +#if defined __USE_GNU && defined __OPTIMIZE__ \
> +    && defined __extern_always_inline && __GNUC_PREREQ (3,2) \
> +    && defined _INLINE_mempcpy
> +
> +#undef mempcpy
> +#undef __mempcpy
> +
> +#define mempcpy(dest, src, n) __mempcpy_inline (dest, src, n)
> +#define __mempcpy(dest, src, n) __mempcpy_inline (dest, src, n)
> +
> +__extern_always_inline void *
> +__mempcpy_inline (void *__restrict __dest,
> +		  const void *__restrict __src, size_t __n)
> +{
> +  return (char *) memcpy (__dest, __src, __n) + __n;
> +}
> +
> +#endif
> +
>  #endif
> diff --git a/sysdeps/aarch64/string_private.h b/sysdeps/aarch64/string_private.h
> index 09dedbf3db40cf06077a44af992b399a6b37b48d..8b8fdddcc17a3f69455e72efe9c3616d2d33abe2 100644
> --- a/sysdeps/aarch64/string_private.h
> +++ b/sysdeps/aarch64/string_private.h
> @@ -18,3 +18,6 @@
>  
>  /* AArch64 implementations support efficient unaligned access.  */
>  #define _STRING_ARCH_unaligned 1
> +
> +/* Inline mempcpy since GCC doesn't optimize it (PR70140).  */
> +#define _INLINE_mempcpy 1
> 
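
For context, the pattern mempcpy targets is chained copies, where
each call continues where the previous one ended.  Since
include/string.h is glibc's internal wrapper header, the inline above
applies to code built within glibc itself; assuming the patch, a
hypothetical caller like this gets each mempcpy expanded to memcpy
plus an offset:

#define _GNU_SOURCE
#include <string.h>

/* Hypothetical caller: with the inline in effect, each mempcpy
   below becomes memcpy (out, ..., n) + n, which GCC can then
   optimize like any other memcpy call.  */
char *
concat2 (char *out, const char *a, size_t na,
         const char *b, size_t nb)
{
  out = mempcpy (out, a, na);
  out = mempcpy (out, b, nb);
  return out;
}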

