This is the mail archive of the glibc-bugs@sourceware.org mailing list for the glibc project.

[Bug string/18858] _HAVE_STRING_ARCH_xxx aren't defined for i386 nor x86_64


https://sourceware.org/bugzilla/show_bug.cgi?id=18858

--- Comment #14 from cvs-commit at gcc dot gnu.org <cvs-commit at gcc dot gnu.org> ---
This is an automated email from the git hooks/post-receive script. It was
generated because a ref change was pushed to the repository containing
the project "GNU C Library master sources".

The branch, hjl/erms/master has been created
        at  0f7985d681251d1a2c4a0e8a1bee97092a7dfccf (commit)

- Log -----------------------------------------------------------------
https://sourceware.org/git/gitweb.cgi?p=glibc.git;h=0f7985d681251d1a2c4a0e8a1bee97092a7dfccf

commit 0f7985d681251d1a2c4a0e8a1bee97092a7dfccf
Author: H.J. Lu <hjl.tools@gmail.com>
Date:   Fri Mar 25 08:20:17 2016 -0700

    Add x86-64 memset with vector unaligned stores and rep stosb

        * sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
        memset-sse2-unaligned-erms, memset-avx2-unaligned-erms and
        memset-avx512-unaligned-erms.
        * sysdeps/x86_64/multiarch/ifunc-impl-list.c
        (__libc_ifunc_impl_list): Test __memset_chk_sse2_unaligned,
        __memset_chk_sse2_unaligned_erms, __memset_chk_avx2_unaligned,
        __memset_chk_avx2_unaligned_erms, __memset_chk_avx512_unaligned,
        __memset_chk_avx512_unaligned_erms, __memset_sse2_unaligned,
        __memset_sse2_unaligned_erms, __memset_erms,
        __memset_avx2_unaligned, __memset_avx2_unaligned_erms,
        __memset_avx512_unaligned_erms and __memset_avx512_unaligned.
        * sysdeps/x86_64/multiarch/memset-avx2-unaligned-erms.S: New
        file.
        * sysdeps/x86_64/multiarch/memset-avx512-unaligned-erms.S:
        Likewise.
        * sysdeps/x86_64/multiarch/memset-sse2-unaligned-erms.S:
        Likewise.
        * sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S:
        Likewise.
        * sysdeps/x86_64/multiarch/memset.S (memset): Support ERMS.
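
At its core, the rep stosb path is a single string instruction: it stores the byte in AL into RCX consecutive bytes starting at RDI, and ERMS-capable processors execute it with fast microcode for large buffers. Below is a minimal C sketch of that core using GCC inline assembly; it is illustrative only, and the actual implementation is hand-written assembly that also covers short lengths with vector stores.

        #include <stddef.h>

        /* Illustrative ERMS memset core: fill N bytes at DEST with the
           byte C via rep stosb.  Not the real glibc code.  */
        static void *
        memset_rep_stosb (void *dest, int c, size_t n)
        {
          void *d = dest;
          __asm__ __volatile__ ("rep stosb"
                                : "+D" (d), "+c" (n)
                                : "a" (c)
                                : "memory");
          return dest;
        }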

https://sourceware.org/git/gitweb.cgi?p=glibc.git;h=70579c5ae8056ff4e0c30a063ace064b7b45f50a

commit 70579c5ae8056ff4e0c30a063ace064b7b45f50a
Author: H.J. Lu <hjl.tools@gmail.com>
Date:   Fri Mar 18 12:36:03 2016 -0700

    Add x86-64 memmove with vector unaligned loads and rep movsb

    1. __mempcpy_avx_unaligned if AVX_Fast_Unaligned_Load bit is set.
    2. __mempcpy_sse2_unaligned if Fast_Unaligned_Load bit is set.
    3. __mempcpy_sse2 if SSSE3 isn't available.
    4. __mempcpy_ssse3_back if Fast_Copy_Backward bit is set.
    5. __mempcpy_ssse3

        [BZ #19776]
        * sysdeps/x86/cpu-features.c (init_cpu_features): Set
        bit_arch_Hybrid_ERMS if ERMS is supported.
        * sysdeps/x86/cpu-features.h (bit_arch_Hybrid_ERMS): New.
        (index_arch_Hybrid_ERMS): Likewise.
        * sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
        memmove-sse2-unaligned-erms, memmove-avx-unaligned-erms and
        memmove-avx512-unaligned-erms.
        * sysdeps/x86_64/multiarch/ifunc-impl-list.c
        (__libc_ifunc_impl_list): Test
        __memmove_chk_avx512_unaligned_2,
        __memmove_chk_avx512_unaligned_erms,
        __memmove_chk_avx_unaligned_2, __memmove_chk_avx_unaligned_erms,
        __memmove_chk_sse2_unaligned_2,
        __memmove_chk_sse2_unaligned_erms, __memmove_avx_unaligned_2,
        __memmove_avx_unaligned_erms, __memmove_avx512_unaligned_2,
        __memmove_avx512_unaligned_erms, __memmove_erms,
        __memmove_sse2_unaligned_2, __memmove_sse2_unaligned_erms,
        __memcpy_chk_avx512_unaligned_2,
        __memcpy_chk_avx512_unaligned_erms,
        __memcpy_chk_avx_unaligned_2, __memcpy_chk_avx_unaligned_erms,
        __memcpy_chk_sse2_unaligned_2,
        __memcpy_chk_sse2_unaligned_erms, __memcpy_avx_unaligned_2,
        __memcpy_avx_unaligned_erms, __memcpy_avx512_unaligned_2,
        __memcpy_avx512_unaligned_erms, __memcpy_sse2_unaligned_2,
        __memcpy_sse2_unaligned_erms, __memcpy_erms,
        __mempcpy_chk_avx512_unaligned_2,
        __mempcpy_chk_avx512_unaligned_erms,
        __mempcpy_chk_avx_unaligned_2, __mempcpy_chk_avx_unaligned_erms,
        __mempcpy_chk_sse2_unaligned_2, __mempcpy_chk_sse2_unaligned_erms,
        __mempcpy_avx512_unaligned_2, __mempcpy_avx512_unaligned_erms,
        __mempcpy_avx_unaligned_2, __mempcpy_avx_unaligned_erms,
        __mempcpy_sse2_unaligned_2, __mempcpy_sse2_unaligned_erms
        and __mempcpy_erms.
        * sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Support ERMS.
        * sysdeps/x86_64/multiarch/memcpy_chk.S (__memcpy_chk): Likewise.
        * sysdeps/x86_64/multiarch/mempcpy.S (__mempcpy): Likewise.
        * sysdeps/x86_64/multiarch/mempcpy_chk.S (__mempcpy_chk):
        Likewise.
        * sysdeps/x86_64/multiarch/memmove-avx-unaligned-erms.S: New
        file.
        * sysdeps/x86_64/multiarch/memmove-avx512-unaligned-erms.S:
        Likewise.
        * sysdeps/x86_64/multiarch/memmove-sse2-unaligned-erms.S:
        Likewise.
        * sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:
        Likewise.
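
The five-way mempcpy selection listed in the commit message above is carried out by an IFUNC resolver when the symbol is first bound. A simplified C sketch of that dispatch order follows; the feature predicates are hypothetical stand-ins for glibc's internal HAS_ARCH_FEATURE/HAS_CPU_FEATURE macros.

        #include <stddef.h>

        /* Variants provided by glibc's multiarch assembly files.  */
        extern void *__mempcpy_avx_unaligned (void *, const void *, size_t);
        extern void *__mempcpy_sse2_unaligned (void *, const void *, size_t);
        extern void *__mempcpy_sse2 (void *, const void *, size_t);
        extern void *__mempcpy_ssse3_back (void *, const void *, size_t);
        extern void *__mempcpy_ssse3 (void *, const void *, size_t);

        /* Hypothetical feature tests standing in for glibc's
           HAS_ARCH_FEATURE/HAS_CPU_FEATURE macros.  */
        extern int has_avx_fast_unaligned_load (void);
        extern int has_fast_unaligned_load (void);
        extern int has_ssse3 (void);
        extern int has_fast_copy_backward (void);

        typedef void *(*mempcpy_fn) (void *, const void *, size_t);

        static mempcpy_fn
        select_mempcpy (void)
        {
          if (has_avx_fast_unaligned_load ())  /* case 1 */
            return __mempcpy_avx_unaligned;
          if (has_fast_unaligned_load ())      /* case 2 */
            return __mempcpy_sse2_unaligned;
          if (!has_ssse3 ())                   /* case 3 */
            return __mempcpy_sse2;
          if (has_fast_copy_backward ())       /* case 4 */
            return __mempcpy_ssse3_back;
          return __mempcpy_ssse3;              /* case 5 */
        }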

https://sourceware.org/git/gitweb.cgi?p=glibc.git;h=81641f1f3aafc33f33f20e02f88e696480578195

commit 81641f1f3aafc33f33f20e02f88e696480578195
Author: H.J. Lu <hjl.tools@gmail.com>
Date:   Thu Sep 15 15:47:01 2011 -0700

    Initial Enhanced REP MOVSB/STOSB (ERMS) support

    Newer Intel processors support Enhanced REP MOVSB/STOSB (ERMS), which is
    indicated by a feature bit in CPUID.  This patch adds the ERMS bit to
    x86 cpu-features.

        * sysdeps/x86/cpu-features.h (bit_cpu_ERMS): New.
        (index_cpu_ERMS): Likewise.
        (reg_ERMS): Likewise.
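
The bit in question is CPUID.(EAX=07H, ECX=0):EBX bit 9. For reference, a standalone check can be written with GCC's <cpuid.h> as sketched below; glibc itself reads the bit once in init_cpu_features and caches it in cpu_features rather than issuing CPUID on each call.

        #include <cpuid.h>
        #include <stdio.h>

        int
        main (void)
        {
          unsigned int eax, ebx, ecx, edx;

          /* ERMS is reported in CPUID leaf 7, sub-leaf 0, EBX bit 9.  */
          if (__get_cpuid_max (0, 0) >= 7)
            {
              __cpuid_count (7, 0, eax, ebx, ecx, edx);
              if (ebx & (1u << 9))
                {
                  puts ("ERMS supported");
                  return 0;
                }
            }
          puts ("ERMS not supported");
          return 0;
        }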

https://sourceware.org/git/gitweb.cgi?p=glibc.git;h=c6a93f91c66e4de17d1e1e25a87ac8fc460a02b0

commit c6a93f91c66e4de17d1e1e25a87ac8fc460a02b0
Author: H.J. Lu <hjl.tools@gmail.com>
Date:   Fri Mar 25 06:32:44 2016 -0700

    Make __memcpy_avx512_no_vzeroupper an alias

    Since x86-64 memcpy-avx512-no-vzeroupper.S implements memmove, we can make
    __memcpy_avx512_no_vzeroupper an alias of __memmove_avx512_no_vzeroupper
    to reduce the code size of libc.so.

        * sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Remove
        memcpy-avx512-no-vzeroupper.
        * sysdeps/x86_64/multiarch/memcpy-avx512-no-vzeroupper.S: Renamed
        to ...
        * sysdeps/x86_64/multiarch/memmove-avx512-no-vzeroupper.S: This.
        (MEMCPY): Don't define.
        (MEMCPY_CHK): Likewise.
        (MEMPCPY): Likewise.
        (MEMPCPY_CHK): Likewise.
        (MEMPCPY_CHK): Renamed to ...
        (__mempcpy_chk_avx512_no_vzeroupper): This.
        (MEMPCPY): Renamed to ...
        (__mempcpy_avx512_no_vzeroupper): This.
        (MEMCPY_CHK): Renamed to ...
        (__memmove_chk_avx512_no_vzeroupper): This.
        (MEMCPY): Renamed to ...
        (__memmove_avx512_no_vzeroupper): This.
        (__memcpy_avx512_no_vzeroupper): New alias.
        (__memcpy_chk_avx512_no_vzeroupper): Likewise.
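
The alias is safe because memmove's contract subsumes memcpy's: a routine that tolerates overlapping buffers is automatically a correct memcpy. The change itself uses assembler-level alias macros; the C-level sketch below, with made-up names, shows the same one-body-two-entry-points idea.

        #include <stddef.h>
        #include <string.h>

        /* The overlap-safe implementation; stands in for
           __memmove_avx512_no_vzeroupper.  */
        void *
        my_memmove (void *dest, const void *src, size_t n)
        {
          return memmove (dest, src, n);
        }

        /* A second entry point sharing the same code, just as
           __memcpy_avx512_no_vzeroupper now shares
           __memmove_avx512_no_vzeroupper's body.  */
        void *my_memcpy (void *, const void *, size_t)
          __attribute__ ((alias ("my_memmove")));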

https://sourceware.org/git/gitweb.cgi?p=glibc.git;h=85f40c56ef1e0a1716a85943bfd2824663b976c0

commit 85f40c56ef1e0a1716a85943bfd2824663b976c0
Author: H.J. Lu <hjl.tools@gmail.com>
Date:   Sun Mar 6 13:37:31 2016 -0800

    Implement x86-64 multiarch mempcpy in memcpy

    Implement x86-64 multiarch mempcpy in memcpy to share most of the code.
    This reduces the code size of libc.so.

        [BZ #18858]
        * sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Remove
        mempcpy-ssse3, mempcpy-ssse3-back, mempcpy-avx-unaligned
        and mempcpy-avx512-no-vzeroupper.
        * sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S (MEMPCPY_CHK):
        New.
        (MEMPCPY): Likewise.
        * sysdeps/x86_64/multiarch/memcpy-avx512-no-vzeroupper.S
        (MEMPCPY_CHK): New.
        (MEMPCPY): Likewise.
        * sysdeps/x86_64/multiarch/memcpy-ssse3-back.S (MEMPCPY_CHK): New.
        (MEMPCPY): Likewise.
        * sysdeps/x86_64/multiarch/memcpy-ssse3.S (MEMPCPY_CHK): New.
        (MEMPCPY): Likewise.
        * sysdeps/x86_64/multiarch/mempcpy-avx-unaligned.S: Removed.
        * sysdeps/x86_64/multiarch/mempcpy-avx512-no-vzeroupper.S:
        Likewise.
        * sysdeps/x86_64/multiarch/mempcpy-ssse3-back.S: Likewise.
        * sysdeps/x86_64/multiarch/mempcpy-ssse3.S: Likewise.
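
mempcpy differs from memcpy only in its return value: memcpy returns dest while mempcpy returns dest + n. That is why each memcpy-*.S file can grow a MEMPCPY entry point at almost no cost. A hedged C sketch of the shared-body idea, with illustrative names:

        #include <stddef.h>
        #include <string.h>

        /* Shared copy body; stands in for the tuned SSSE3/AVX loops.  */
        static void
        copy_body (void *dest, const void *src, size_t n)
        {
          memcpy (dest, src, n);
        }

        void *
        my_memcpy (void *dest, const void *src, size_t n)
        {
          copy_body (dest, src, n);
          return dest;                 /* memcpy returns dest */
        }

        void *
        my_mempcpy (void *dest, const void *src, size_t n)
        {
          copy_body (dest, src, n);
          return (char *) dest + n;    /* mempcpy returns dest + n */
        }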

-----------------------------------------------------------------------

-- 
You are receiving this mail because:
You are on the CC list for the bug.
