Joseph Myers [Mon, 27 Jun 2016 17:25:47 +0000 (17:25 +0000)]
Avoid "inexact" exceptions in i386/x86_64 floor functions (bug 15479).
As discussed in
<https://sourceware.org/ml/libc-alpha/2016-05/msg00577.html>, TS
18661-1 disallows ceil, floor, round and trunc functions from raising
the "inexact" exception, in accordance with general IEEE 754 semantics
for when that exception is raised. Fixing this for x87 floating point
is more complicated than for the other versions of these functions,
because they use the frndint instruction that raises "inexact" and
this can only be avoided by saving and restoring the whole
floating-point environment.
As I noted in
<https://sourceware.org/ml/libc-alpha/2016-06/msg00128.html>, I have
now implemented a GCC option -fno-fp-int-builtin-inexact for GCC 7,
such that GCC will inline these functions on x86, without caring about
"inexact", when the default -ffp-int-builtin-inexact is in effect.
This allows users to get optimized code depending on the options they
pass to the compiler, while making the out-of-line functions follow TS
18661-1 semantics and avoid "inexact".
This patch duly fixes the out-of-line floor function implementations
to avoid "inexact", in the same way as the nearbyint implementations.
I do not know how the performance of implementations such as these
based on saving the environment and changing the rounding mode
temporarily compares to that of the C versions or SSE 4.1 versions (of
course, for 32-bit x86 SSE implementations still need to get the
return value in an x87 register); it's entirely possible other
implementations could be faster in some cases.
Tested for x86_64 and x86.
[BZ #15479]
* sysdeps/i386/fpu/s_floor.S (__floor): Save and restore
floating-point environment rather than just control word.
* sysdeps/i386/fpu/s_floorf.S (__floorf): Likewise.
* sysdeps/i386/fpu/s_floorl.S (__floorl): Save and restore
floating-point environment, with "invalid" exceptions merged in,
rather than just control word.
* sysdeps/x86_64/fpu/s_floorl.S (__floorl): Likewise.
* math/libm-test.inc (floor_test_data): Do not allow spurious
"inexact" exceptions.
Joseph Myers [Mon, 27 Jun 2016 17:23:19 +0000 (17:23 +0000)]
Avoid "inexact" exceptions in i386/x86_64 ceil functions (bug 15479).
As discussed in
<https://sourceware.org/ml/libc-alpha/2016-05/msg00577.html>, TS
18661-1 disallows ceil, floor, round and trunc functions from raising
the "inexact" exception, in accordance with general IEEE 754 semantics
for when that exception is raised. Fixing this for x87 floating point
is more complicated than for the other versions of these functions,
because they use the frndint instruction that raises "inexact" and
this can only be avoided by saving and restoring the whole
floating-point environment.
As I noted in
<https://sourceware.org/ml/libc-alpha/2016-06/msg00128.html>, I have
now implemented a GCC option -fno-fp-int-builtin-inexact for GCC 7,
such that GCC will inline these functions on x86, without caring about
"inexact", when the default -ffp-int-builtin-inexact is in effect.
This allows users to get optimized code depending on the options they
pass to the compiler, while making the out-of-line functions follow TS
18661-1 semantics and avoid "inexact".
This patch duly fixes the out-of-line ceil function implementations to
avoid "inexact", in the same way as the nearbyint implementations.
I do not know how the performance of implementations such as these
based on saving the environment and changing the rounding mode
temporarily compares to that of the C versions or SSE 4.1 versions (of
course, for 32-bit x86 SSE implementations still need to get the
return value in an x87 register); it's entirely possible other
implementations could be faster in some cases.
Tested for x86_64 and x86.
[BZ #15479]
* sysdeps/i386/fpu/s_ceil.S (__ceil): Save and restore
floating-point environment rather than just control word.
* sysdeps/i386/fpu/s_ceilf.S (__ceilf): Likewise.
* sysdeps/i386/fpu/s_ceill.S (__ceill): Save and restore
floating-point environment, with "invalid" exceptions merged in,
rather than just control word.
* sysdeps/x86_64/fpu/s_ceill.S (__ceill): Likewise.
* math/libm-test.inc (ceil_test_data): Do not allow spurious
"inexact" exceptions.
Aurelien Jarno [Tue, 21 Jun 2016 21:59:37 +0000 (23:59 +0200)]
MIPS, SPARC: more fixes to the vfork aliases in libpthread.so
Commit 43c29487 tried to fix the vfork aliases in libpthread.so on MIPS
and SPARC, but failed to do it correctly, introducing an ABI change.
This patch makes the remaining changes needed to align the MIPS and SPARC
vfork implementations with the other architectures. That way the
alpha version of pt-vfork.S works correctly for MIPS and SPARC. The
changes for alpha were done in 82aab97c.
Changelog:
* sysdeps/unix/sysv/linux/mips/vfork.S (__vfork): Rename into
__libc_vfork.
(__vfork) [IS_IN (libc)]: Remove alias.
(__libc_vfork) [IS_IN (libc)]: Define as an alias.
* sysdeps/unix/sysv/linux/sparc/sparc32/vfork.S: Likewise.
* sysdeps/unix/sysv/linux/sparc/sparc64/vfork.S: Likewise.
Torvald Riegel [Tue, 14 Jun 2016 13:12:00 +0000 (15:12 +0200)]
Remove atomic_compare_and_exchange_bool_rel.
atomic_compare_and_exchange_bool_rel and
catomic_compare_and_exchange_bool_rel are removed and replaced with the
new C11-like atomic_compare_exchange_weak_release. The concurrent code
in nscd/cache.c has not been reviewed yet, so this patch does not add
detailed comments.
Joseph Myers [Thu, 23 Jun 2016 22:17:41 +0000 (22:17 +0000)]
Fix i386/x86_64 scalbl with sNaN input (bug 20296).
The x86_64 and i386 versions of scalbl return sNaN for some cases of
sNaN input and are missing "invalid" exceptions for other cases. This
results from overly complicated code that either returns a NaN input,
or discards both inputs when one is NaN and loads a NaN from memory.
This patch fixes this by simplifying the code to add the arguments
when either one is NaN.
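A minimal C sketch of the behaviour this gives (illustrative only; the
real fix is in x87 assembly and the function name is made up):

  #include <math.h>

  static long double
  scalbl_nan_sketch (long double x, long double y)
  {
    /* Adding the arguments quiets any signaling NaN, raises "invalid"
       and yields a qNaN, instead of returning an sNaN or dropping the
       exception.  */
    if (isnan (x) || isnan (y))
      return x + y;
    /* ... non-NaN inputs handled as before (simplified here) ... */
    return scalbnl (x, (int) y);
  }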
Tested for x86_64 and x86.
[BZ #20296]
* sysdeps/i386/fpu/e_scalbl.S (__ieee754_scalbl): Add arguments
when either argument is a NaN.
* sysdeps/x86_64/fpu/e_scalbl.S (__ieee754_scalbl): Likewise.
* math/libm-test.inc (scalb_test_data): Add sNaN tests.
Joseph Myers [Thu, 23 Jun 2016 21:59:34 +0000 (21:59 +0000)]
Add more sNaN tests (most remaining real functions).
This patch adds tests of sNaN inputs to more functions to
libm-test.inc. This covers the remaining real functions except for
scalb, where there's a bug to fix, and hypot, pow, fmin and fmax, where
there are cases where a qNaN input does not result in a qNaN output
and so sNaN support according to TS 18661-1 is more of a new feature.
Avoid attempting runtime checks if all environments are defined
getconf has the capability to do a runtime check for environment
support in cases where there is optional support for an environment
(_POSIX_V7_ILP32_OFF32 on x86_64 for example) and this is indicated by
not defining the _POSIX_V7_ILP32_OFF32 macro, which results in getconf
doing an additional execve of _POSIX_V7_ILP32_OFF32 in the
$GETCONF_DIR.
The default bits/environments.h however does not leave any environment
macros undefined, which means that no such additional execve is
needed. gcc trunk catches this as a build failure since it finds that
the code block inside switch(specs[i].num) is not reachable. Avoid
this error by not bothering about the additional exec (and looking in
specific environments) when all environments are defined.
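A hedged sketch of the kind of preprocessor check this describes
(simplified; the real getconf.c covers more environment macros than the
_POSIX_V7_* ones shown here):

  /* If every optional environment is known at compile time, getconf
     never needs to execve a per-environment binary in $GETCONF_DIR,
     so the runtime-check code can be compiled out.  */
  #if defined _POSIX_V7_ILP32_OFF32 && defined _POSIX_V7_ILP32_OFFBIG \
      && defined _POSIX_V7_LP64_OFF64 && defined _POSIX_V7_LPBIG_OFFBIG
  # define ALL_ENVIRONMENTS_DEFINED 1
  #endif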
Tested on aarch64.
* posix/getconf.c: Define ALL_ENVIRONMENTS_DEFINED if all
environment macros are defined.
(main): Avoid execve if ALL_ENVIRONMENTS_DEFINED is defined.
Florian Weimer [Thu, 23 Jun 2016 18:01:40 +0000 (20:01 +0200)]
libio: Implement vtable verification [BZ #20191]
This commit puts all libio vtables in a dedicated, read-only ELF
section, so that they are consecutive in memory. Before any indirect
jump, the vtable pointer is checked against the section boundaries,
and the process is terminated if the vtable pointer does not fall into
the special ELF section.
To enable backwards compatibility, a special flag variable
(_IO_accept_foreign_vtables), protected by the pointer guard, avoids
process termination if libio stream object constructor functions have
been called earlier. Such constructor functions are called by the GCC
2.95 libstdc++ library, and this mechanism ensures compatibility with
old binaries. Existing callers inside glibc of these functions are
adjusted to call the original functions, not the wrappers which enable
vtable compatibility.
The compatibility mechanism is used to enable passing FILE * objects
across a static dlopen boundary, too.
Florian Weimer [Thu, 23 Jun 2016 12:17:57 +0000 (14:17 +0200)]
test-skeleton.c (xrealloc): Support realloc-as-free
If the requested size is zero, realloc returns NULL, but the
deallocation is still successful, unless the pointer is also
NULL, in which case realloc behaves like malloc (0).
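A rough sketch of the check this behaviour implies (illustrative; not
the exact test-skeleton.c code):

  #include <stdio.h>
  #include <stdlib.h>

  static void *
  xrealloc (void *p, size_t n)
  {
    void *result = realloc (p, n);
    /* realloc (p, 0) may return NULL even though the deallocation
       succeeded, so NULL only signals failure when a non-zero size was
       requested, or when p was NULL (realloc then acts like malloc (0)).  */
    if (result == NULL && (n != 0 || p == NULL))
      {
        printf ("error: out of memory\n");
        exit (1);
      }
    return result;
  }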
Florian Weimer [Thu, 23 Jun 2016 09:01:21 +0000 (11:01 +0200)]
test-skeleton.c: xmalloc, xcalloc, xrealloc are potentially unused
__attribute__ ((used)) means that the function has to be
emitted in assembly because it is referenced in ways the
compiler cannot detect (such as asm statements, or some
post-processing on the generated assembly).
The unused attribute needs to come first, otherwise it is
applied to the return type and not the function definition.
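An illustrative declaration showing the placement this refers to (not
the exact test-skeleton.c change):

  #include <stdlib.h>

  /* The attribute comes first so it applies to the function definition
     rather than to the return type.  */
  __attribute__ ((unused))
  static void *
  xmalloc (size_t n)
  {
    void *p = malloc (n);
    if (p == NULL)
      abort ();
    return p;
  }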
Joseph Myers [Wed, 22 Jun 2016 15:40:30 +0000 (15:40 +0000)]
Simplify x86 nearbyint functions.
The i386 implementations of nearbyint functions, and x86_64
nearbyintl, contain code to mask the "inexact" exception. However,
the fnstenv instruction has the effect of masking all exceptions, so
this masking code has been redundant since fnstenv was added to those
implementations (by commit 846d9a4a3acdb4939ca7bf6aed48f9f6f26911be;
commit 71d1b0166b4ace0d804af2993b3815758b852efc added the test
math/test-nearbyint-except-2.c that verifies these functions do work
when called with "inexact" traps enabled); this patch removes the
redundant code.
Tested for x86_64 and x86.
* sysdeps/i386/fpu/s_nearbyint.S (__nearbyint): Do not mask
"inexact" exceptions after fnstenv.
* sysdeps/i386/fpu/s_nearbyintf.S (__nearbyintf): Likewise.
* sysdeps/i386/fpu/s_nearbyintl.S (__nearbyintl): Likewise.
* sysdeps/x86_64/fpu/s_nearbyintl.S (__nearbyintl): Likewise.
Zack Weinberg [Wed, 22 Jun 2016 12:51:12 +0000 (05:51 -0700)]
Move sysdeps/generic/bits/hwcap.h to top-level bits/
This file was added to sysdeps/generic/bits in 2012. This appears to
have been an oversight, as the entire sysdeps/generic/bits directory was
moved to the top level in 2005. Accordingly the generic bits/hwcap.h
belongs there too.
* sysdeps/generic/bits/hwcap.h: Moved to ...
* bits/hwcap.h: Here.
Florian Weimer [Tue, 21 Jun 2016 19:29:21 +0000 (21:29 +0200)]
malloc: Avoid premature fallback to mmap [BZ #20284]
Before this change, the while loop in reused_arena which avoids
returning a corrupt arena would never execute its body if the selected
arena were not corrupt. As a result, result == begin after the loop,
and the function returns NULL, triggering fallback to mmap.
This patch fixes the p{readv,writev}{64} consolidation implementation
from commits 4e77815 and af5fdf5. Unlike the pread/pwrite
implementation, the preadv/pwritev implementation does not require
__ALIGNMENT_ARG, because the kernel syscall prototypes take the high
and low parts of the off_t directly where that split is needed
(whereas for pread/pwrite the architecture ABI for passing 64-bit
values must be taken into account when passing the arguments).
Wilco Dijkstra [Mon, 20 Jun 2016 16:48:20 +0000 (17:48 +0100)]
Add a simple rawmemchr implementation. Use strlen for rawmemchr(s, '\0') as it
is the fastest way to search for '\0'. Otherwise use memchr with an infinite
size. This is 3x faster on benchtests for large sizes. Passes GLIBC tests.
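A C-level sketch of this strategy (the actual implementation is AArch64
assembly; the function name here is illustrative):

  #include <string.h>

  static void *
  rawmemchr_sketch (const void *s, int c)
  {
    /* Searching for '\0' is exactly what strlen does, and it is the
       fastest way to find the terminator.  */
    if ((unsigned char) c == '\0')
      return (char *) s + strlen (s);
    /* Otherwise search with an effectively unbounded size.  */
    return memchr (s, c, (size_t) -1);
  }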
* sysdeps/aarch64/rawmemchr.S (__rawmemchr): New file.
* sysdeps/aarch64/strlen.S (strlen): Rename to __strlen to avoid PLT.
Wilco Dijkstra [Mon, 20 Jun 2016 16:38:13 +0000 (17:38 +0100)]
This is an optimized memcpy/memmove for AArch64. Copies are split into 3 main
cases: small copies of up to 16 bytes, medium copies of 17..96 bytes which are
fully unrolled. Large copies of more than 96 bytes align the destination and
use an unrolled loop processing 64 bytes per iteration. In order to share code
with memmove, small and medium copies read all data before writing, allowing
any kind of overlap. All memmoves except for the large backwards case fall
into memcpy for optimal performance. On a random copy test memcpy/memmove are
40% faster on Cortex-A57 and 28% on Cortex-A53.
* sysdeps/aarch64/memcpy.S (memcpy):
Rewrite of optimized memcpy and memmove.
* sysdeps/aarch64/memmove.S (memmove): Remove
memmove code (merged into memcpy.S).
Aurelien Jarno [Sat, 18 Jun 2016 17:11:23 +0000 (19:11 +0200)]
MIPS, SPARC: fix wrong vfork aliases in libpthread.so
With recent binutils versions the GNU libc fails to build on at least
MIPS and SPARC, with this kind of error:
/home/aurel32/glibc/glibc-build/nptl/libpthread.so:(*IND*+0x0): multiple definition of `vfork@GLIBC_2.0'
/home/aurel32/glibc/glibc-build/nptl/libpthread.so::(.text+0xee50): first defined here
It appears that on these architectures pt-vfork.S includes vfork.S
(through the alpha version of pt-vfork.S) and that the __vfork aliases
are not conditionalized on IS_IN (libc) like on other architectures.
Therefore the aliases are also wrongly included in libpthread.so.
Fix this by properly conditionalizing the aliases like on other
architectures.
Changelog:
* sysdeps/unix/sysv/linux/mips/vfork.S (__vfork): Conditionalize
hidden_def, weak_alias and strong_alias on [IS_IN (libc)].
* sysdeps/unix/sysv/linux/sparc/sparc32/vfork.S: Likewise.
* sysdeps/unix/sysv/linux/sparc/sparc64/vfork.S: Likewise.
TS 18661 adds nextup and nextdown functions alongside nextafter to provide
equivalent support for float128. This patch adds nextupl, nextup,
nextupf, nextdownl, nextdown and nextdownf to libm ahead of float128 support.
The nextup functions return the next representable value in the direction of
positive infinity and the nextdown functions return the next representable
value in the direction of negative infinity. These are currently enabled
as GNU extensions.
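A small usage example of the new interfaces (assumes a glibc that
already provides them; printed values are for IEEE binary64):

  #define _GNU_SOURCE
  #include <math.h>
  #include <stdio.h>

  int
  main (void)
  {
    /* Next representable double above and below 1.0.  */
    printf ("%a\n", nextup (1.0));    /* 0x1.0000000000001p+0 */
    printf ("%a\n", nextdown (1.0));  /* 0x1.fffffffffffffp-1 */
    printf ("%a\n", nextup (-0.0));   /* smallest positive subnormal */
    return 0;
  }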
Joseph Myers [Tue, 14 Jun 2016 16:41:50 +0000 (16:41 +0000)]
Fix i386 fdim double rounding (bug 20255).
fdim suffers from double rounding on i386 because subtracting two
double values can produce an inexact long double value exactly half
way between two double values. This patch fixes this by creating an
i386-specific version of fdim - C, based on the generic version,
unlike the previous .S version - which sets the x87 precision control
to double precision for the subtraction and then restores it
afterwards. As noted in the comment added, there are no issues of
double rounding for subnormals (a case that setting precision control
does not address) because subtraction cannot produce an inexact result
in the subnormal range.
Tested for x86_64 and x86.
[BZ #20255]
* sysdeps/i386/fpu/s_fdim.c: New file. Based on math/s_fdim.c.
* math/libm-test.inc (fdim_test_data): Add another test.
Joseph Myers [Tue, 14 Jun 2016 16:04:19 +0000 (16:04 +0000)]
Use generic fdim on more architectures (bug 6796, bug 20255, bug 20256).
Some architectures have their own versions of fdim functions, which
are missing errno setting (bug 6796) and may also return sNaN instead
of qNaN for sNaN input, in the case of the x86 / x86_64 long double
versions (bug 20256).
These versions are not actually doing anything that a compiler
couldn't generate, just straightforward comparisons / arithmetic (and,
in the x86 / x86_64 case, testing for NaNs with fxam, which isn't
actually needed once you use an unordered comparison and let the NaNs
pass through the same subtraction as non-NaN inputs). This patch
removes the x86 / x86_64 / powerpc versions, so that those
architectures use the generic C versions, which correctly handle
setting errno and deal properly with sNaN inputs. This seems better
than dealing with setting errno in lots of .S versions.
The i386 versions also return results with excess range and precision,
which is not appropriate for a function exactly defined by reference
to IEEE operations. For errno setting to work correctly on overflow,
it's necessary to remove excess range with math_narrow_eval, which
this patch duly does in the float and double versions so that the
tests can reliably pass on x86. For float, this avoids any double
rounding issues as the long double precision is more than twice that
of float. For double, double rounding issues will need to be
addressed separately, so this patch does not fully fix bug 20255.
Tested for x86_64, x86 and powerpc.
[BZ #6796]
[BZ #20255]
[BZ #20256]
* math/s_fdim.c: Include <math_private.h>.
(__fdim): Use math_narrow_eval on result.
* math/s_fdimf.c: Include <math_private.h>.
(__fdimf): Use math_narrow_eval on result.
* sysdeps/i386/fpu/s_fdim.S: Remove file.
* sysdeps/i386/fpu/s_fdimf.S: Likewise.
* sysdeps/i386/fpu/s_fdiml.S: Likewise.
* sysdeps/i386/i686/fpu/s_fdim.S: Likewise.
* sysdeps/i386/i686/fpu/s_fdimf.S: Likewise.
* sysdeps/i386/i686/fpu/s_fdiml.S: Likewise.
* sysdeps/powerpc/fpu/s_fdim.c: Likewise.
* sysdeps/powerpc/fpu/s_fdimf.c: Likewise.
* sysdeps/powerpc/powerpc32/fpu/s_fdim.c: Likewise.
* sysdeps/powerpc/powerpc64/fpu/s_fdim.c: Likewise.
* sysdeps/x86_64/fpu/s_fdiml.S: Likewise.
* math/libm-test.inc (fdim_test_data): Expect errno setting on
overflow. Add sNaN tests.
Joseph Myers [Tue, 14 Jun 2016 14:56:42 +0000 (14:56 +0000)]
Simplify generic fdim implementations.
The generic fdim implementations have unnecessarily complicated code,
using fpclassify to determine whether the arguments are NaNs,
subtracting NaNs if so and otherwise subtracting the non-NaN arguments
if not (x <= y), then using fpclassify on the result to see if it is
infinite.
This patch simplifies the code. Instead of handling NaNs separately,
it suffices to use an unordered comparison with islessequal (x, y) to
determine whether to return zero, and otherwise NaNs can go through
the same subtraction as non-NaN arguments; no explicit tests for NaN
are needed at all. Then, isinf instead of fpclassify can be used to
determine whether to set errno (in the normal non-overflow case, only
one classification will need to occur, unlike the three in the
previous code, of which two occurred even if returning zero, because
the result will not be infinite in the normal case).
The resulting logic is essentially the same as that in the powerpc
version, except that the powerpc version is missing errno setting and
uses <= not islessequal, so relying on
<https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58684>, the GCC bug that
unordered comparison instructions are wrongly used on powerpc for
ordered comparisons.
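A hedged C sketch of the simplified logic described above (modelled on
the generic math/s_fdim.c but not copied from it; errno is set directly
here rather than through glibc internals):

  #include <errno.h>
  #include <math.h>

  static double
  fdim_sketch (double x, double y)
  {
    /* Unordered comparison: false if either argument is a NaN, so NaNs
       fall through to the subtraction and propagate naturally.  */
    if (islessequal (x, y))
      return 0.0;

    double r = x - y;
    /* Only one classification in the normal case: an infinite result
       from finite arguments means the subtraction overflowed.  */
    if (isinf (r) && !isinf (x) && !isinf (y))
      errno = ERANGE;
    return r;
  }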
The compiled code for fdim and fdimf on x86_64 is less than half the
size of the previous code.
Tested for x86_64.
* math/s_fdim.c (__fdim): Use islessequal and isinf instead of
fpclassify.
* math/s_fdimf.c (__fdimf): Likewise.
* math/s_fdiml.c (__fdiml): Likewise.
raji [Tue, 14 Jun 2016 09:21:16 +0000 (14:51 +0530)]
powerpc: strcasecmp/strncasecmp optimization for power8
This implementation utilizes vectors to improve performance
compared to the current byte-by-byte implementation for POWER7.
The performance improvement is up to 4x. This patch is tested
on powerpc64 and powerpc64le.
Joseph Myers [Mon, 13 Jun 2016 21:43:22 +0000 (21:43 +0000)]
Fix dbl-64 atan2 (sNaN, qNaN) (bug 20252).
The dbl-64 implementation of atan2, passed arguments (sNaN, qNaN),
fails to raise the "invalid" exception. This patch fixes it to add
both arguments, rather than just adding the second argument to itself,
in the case where the second argument is a NaN (which is checked for
before checking for the first argument being a NaN). sNaN tests for
atan2 are added, along with some qNaN tests I noticed were missing but
should have been there by analogy with other tests present.
Tested for x86_64 and x86.
[BZ #20252]
* sysdeps/ieee754/dbl-64/e_atan2.c (__ieee754_atan2): Add both
arguments when second argument is a NaN.
* math/libm-test.inc (atan2_test_data): Add sNaN tests and more
qNaN tests.
Joseph Myers [Mon, 13 Jun 2016 17:27:19 +0000 (17:27 +0000)]
Fix frexp (NaN) (bug 20250).
Various implementations of frexp functions return sNaN for sNaN
input. This patch fixes them to add such arguments to themselves so
that qNaN is returned.
nptl: Add sendmmsg and recvmmsg cancellation tests
This patch adds cancellation tests for both sendmmsg and recvmmsg
syscalls. Since the syscalls are not available on some system
configurations (x86_64/i686 on older kernels and on non-Linux
platforms), the tests are added as two independent tests that report
as unsupported if the syscall is not present.
Both new tests use the existing tst-cancel4.c code, which was moved
to common tst-cancel4-common{.c,h} files.
Tested on x86_64 and i686.
* nptl/Makefile (test): Add tst-cancel4_1 and tst-cancel4_2.
* nptl/tst-cancel4-common.c: New file.
* nptl/tst-cancel4-common.h: Likewise.
* nptl/tst-cancel4.c: Move common definitions to
tst-cancel4-common.{c,h} file.
* nptl/tst-cancel4_1.c: New test.
* nptl/tst-cancel4_2.c: New test.
This patch removes the __ASSUME_FUTEX_LOCK_PI usage and assumes that
the kernel will correctly report whether it supports
futex_atomic_cmpxchg_inatomic.
Current PI mutex code already has runtime support by calling
prio_inherit_missing and returns ENOTSUP if the futex operation fails
at initialization (it issues a FUTEX_UNLOCK_PI futex operation).
Also, current minimum supported kernel (v3.2) will return ENOSYS if
futex_atomic_cmpxchg_inatomic is not supported in the system:
kernel/futex.c:
2628 long do_futex(u32 __user *uaddr, int op, u32 val, ktime_t *timeout,
2629 u32 __user *uaddr2, u32 val2, u32 val3)
2630 {
2631 int ret = -ENOSYS, cmd = op & FUTEX_CMD_MASK;
[...]
2667 case FUTEX_UNLOCK_PI:
2668 if (futex_cmpxchg_enabled)
2669 ret = futex_unlock_pi(uaddr, flags);
[...]
2686 return ret;
2687 }
The futex_cmpxchg_enabled is initialized by calling cmpxchg_futex_value_locked,
which calls futex_atomic_cmpxchg_inatomic.
For ARM, futex_atomic_cmpxchg_inatomic is either defined (if both
CONFIG_CPU_USE_DOMAINS and CONFIG_SMP are not defined) or falls back
to the default generic implementation that returns ENOSYS.
For m68k it uses the default generic implementation.
For mips, futex_atomic_cmpxchg_inatomic will return ENOSYS if the cpu
has no 'cpu_has_llsc' support (defined per supported chip inside the
kernel).
For sparc, the 32-bit kernel will just use the default generic
implementation, while the 64-bit kernel has support.
Florian Weimer [Sat, 11 Jun 2016 10:12:56 +0000 (12:12 +0200)]
nss_db: Fix initialization of iteration position [BZ #20237]
When get*ent is called without a preceding set*ent, we need
to set the initial iteration position in get*ent.
Reproducer: Add “services: db files” to /etc/nsswitch.conf, then run
“perl -e getservent”. It will segfault before this change, and exit
silently after it.
Paras pradhan [Thu, 19 May 2016 22:05:31 +0000 (18:05 -0400)]
localedata: ne_NP: misc updates [BZ #1170]
This locale was originally copied from ne_IN and it shows: many
fields are incorrect for the Nepal territory, and many fields are
missing translations. I've vetted most of these against CLDR as
not all fields are covered by it.
LC_NAME:
name_fmt: %p%t%f%t%g -> %p%t%g%t%m%t%f
name_gen: setting to ज्यू
name_mr: setting to श्रीमान्
name_mrs: setting to श्रीमती
name_miss: setting to सुश्री
Joseph Myers [Fri, 10 Jun 2016 23:16:27 +0000 (23:16 +0000)]
Fix modf (sNaN) (bug 20240).
Various modf implementations return sNaN (both outputs) for sNaN
input. In fact they contain code to convert sNaN to qNaN for both
outputs, but the way this is done is multiplying by 1.0 (for a wider
range of inputs that includes NaNs as well as numbers with exponent
large enough to ensure that they are integers), and that
multiplication by 1.0 is optimized away by GCC in the absence of
-fsignaling-nans, unlike other operations on NaNs used for this
purpose that are not no-ops for non-sNaN input. This patch arranges
for those files to be built with -fsignaling-nans so that this
existing code is effective as intended.
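A tiny illustration of the idiom involved (not the actual modf source):

  /* Multiplying by 1.0 is meant to quiet a signaling NaN (raising
     "invalid" and producing a qNaN), but GCC folds the multiply away
     unless the file is compiled with -fsignaling-nans, which is what
     this change arranges for the modf implementations.  */
  static double
  quiet_via_multiply (double x)
  {
    return x * 1.0;
  }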
Carlos O'Donell [Fri, 10 Jun 2016 18:40:38 +0000 (14:40 -0400)]
Bug 20215: Always undefine __always_inline before defining it.
The Linux kernel defines __always_inline in stddef.h (283d7573),
and it conflicts with the definition in misc/sys/cdefs.h. To fix
this we undefine it first and always use the glibc definition.
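A sketch of the shape of the fix (illustrative; the exact definition
lives in misc/sys/cdefs.h):

  /* Undefine any earlier definition (such as the one the Linux kernel's
     stddef.h provides) so that glibc's definition always wins.  */
  #undef __always_inline
  #define __always_inline __inline __attribute__ ((__always_inline__))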
After some discussion on libc-alpha about this POSIX compliance fix, I see
that GLIBC should indeed revert back to the previous definitions of msghdr
and cmsghdr and the implementations of sendmsg, recvmsg, sendmmsg and
recvmmsg, for the following reasons:
* The possible issue where the syscall wrappers add the compatibility
layer is quite limited in scope and range. The current kernel also
places some limits on the values of the internal msghdr and
cmsghdr fields:
- msghdr::msg_iovlen larger than UIO_MAXIOV (1024) returns
EMSGSIZE.
- msghdr::msg_controllen larger than INT_MAX returns ENOBUFS.
* There is a small but negligible performance hit for
recvmsg/sendmsg/recvmmsg, but it is a big hit for sendmmsg, since
instead of calling the syscall once for the packed structure, GLIBC
now calls sendmsg multiple times. This defeats the very purpose of
the syscall.
* It currently breaks the libsanitizer build on GCC [1] (I fixed it in
compiler-rt). However the fix is incomplete because it does not add
any runtime check, since libsanitizer currently does not have any
facility to intercept symbols with multiple versions [2].
This, along with incorrect dlsym/dlvsym returns for versioned symbols
due to another bug [3], makes it hard to interpose versioned symbols.
Also, the current approach to fixing GCC PR#71445 leads to half-baked
solutions without versioned symbol interposing.
This patch basically reverts commits 2f0dc39029ae08, 222c2d7f4357d66 and
af7f7c7ec8dea1. I decided not to revert abf29edd4a3918 (Adjust
kernel-features.h defaults for recvmsg and sendmsg), mainly because it
does not really address the original POSIX compliance issue and it also
adds some cleanups.
Tested on x86, i386, s390, s390x, aarch64, and powerpc64le.
Florian Weimer [Fri, 10 Jun 2016 08:46:05 +0000 (10:46 +0200)]
malloc: Remove __malloc_initialize_hook from the API [BZ #19564]
__malloc_initialize_hook is interposed by application code, so
the usual approach to define a compatibility symbol does not work.
This commit adds a new mechanism based on #pragma GCC poison in
<stdc-predef.h>.
Joseph Myers [Thu, 9 Jun 2016 18:04:30 +0000 (18:04 +0000)]
Fix i386/x86_64 log2l (sNaN) (bug 20235).
The i386/x86_64 versions of log2l return sNaN for sNaN input. This
patch fixes them to add NaN inputs to themselves so that qNaN is
returned in this case.
Tested for x86_64 and x86.
[BZ #20235]
* sysdeps/i386/fpu/e_log2l.S (__ieee754_log2l): Add NaN input to
itself.
* sysdeps/x86_64/fpu/e_log2l.S (__ieee754_log2l): Likewise.
* math/libm-test.inc (log2_test_data): Add sNaN tests.
Joseph Myers [Thu, 9 Jun 2016 17:25:54 +0000 (17:25 +0000)]
Fix ldbl-128ibm log1pl (sNaN) (bug 20234).
The ldbl-128ibm version of log1pl returns sNaN for sNaN input. This
patch fixes it to add such inputs to themselves so that qNaN is
returned in this case.
Tested for powerpc.
[BZ #20234]
* sysdeps/ieee754/ldbl-128ibm/s_log1pl.c (__log1pl): Add positive
infinity or NaN input to itself.
Joseph Myers [Thu, 9 Jun 2016 17:24:52 +0000 (17:24 +0000)]
Fix ldbl-128ibm expm1l (sNaN) (bug 20233).
The ldbl-128ibm version of expm1l returns sNaN for sNaN input. This
patch fixes it to add such inputs to themselves so that qNaN is
returned in this case.
Tested for powerpc.
[BZ #20233]
* sysdeps/ieee754/ldbl-128ibm/s_expm1l.c (__expm1l): Add NaN input
to itself.
Joseph Myers [Thu, 9 Jun 2016 17:23:51 +0000 (17:23 +0000)]
Fix ldbl-128 expm1l (sNaN) (bug 20232).
The ldbl-128 version of expm1l returns sNaN for sNaN input. This
patch fixes it to add such inputs to themselves so that qNaN is
returned in this case.
Tested for mips64.
[BZ #20232]
* sysdeps/ieee754/ldbl-128/s_expm1l.c (__expm1l): Add NaN input to
itself.
H.J. Lu [Thu, 9 Jun 2016 11:43:16 +0000 (04:43 -0700)]
Always indirect branch to __libc_start_main via GOT
Since __libc_start_main in libc.so is called very early, lazy binding
isn't relevant. Always call __libc_start_main with an indirect branch via
the GOT to avoid an extra branch to the PLT slot. In the case of a static
executable, ld in binutils 2.26 or above can convert the indirect branch
into a direct branch.
H.J. Lu [Thu, 9 Jun 2016 11:38:34 +0000 (04:38 -0700)]
X86-64: Add dummy memcopy.h and wordcopy.c
Since x86-64 no longer uses memory copy functions, add dummy memcopy.h
and wordcopy.c to reduce code size. It reduces the size of libc.so by
about 1 KB.
* sysdeps/x86_64/memcopy.h: New file.
* sysdeps/x86_64/wordcopy.c: Likewise.
Florian Weimer [Thu, 9 Jun 2016 10:02:42 +0000 (12:02 +0200)]
quick_exit tests: Do not use C++ headers
If C++ headers such as <cstdlib> or <thread> are used, GCC 6
will include /usr/include/stdlib.h (instead of stdlib/stdlib.h
in the glibc source directory), and this turns up as a make
dependency. An implicit rule will kick in and make will try to
install stdlib/stdlib.h as /usr/include/stdlib.h because the
target is out of date.
This commit switches to <stdlib.h> and <pthread.h> instead of
<cstdlib> and <thread>.
Andreas Schwab [Wed, 2 Mar 2016 16:58:42 +0000 (17:58 +0100)]
Fix nscd assertion failure in gc (bug 19755)
If a GETxxBYyy request (for passwd or group) is running in parallel to
an INVALIDATE request (for the same database) then in a particular order
of events the garbage collector is not properly marking all used memory
and fails an assertion:
GETGRBYNAME (root)
Haven't found "root" in group cache!
add new entry "root" of type GETGRBYNAME for group to cache (first)
handle_request: request received (Version = 2) from PID 7413
INVALIDATE (group)
pruning group cache; time 9223372036854775807
considering GETGRBYNAME entry "root", timeout 1456763027
add new entry "0" of type GETGRBYGID for group to cache
remove GETGRBYNAME entry "root"
nscd: mem.c:403: gc: Assertion `next_data == &he_data[db->head->nentries]' failed.
Here the first call to cache_add added the GETGRBYNAME entry, which is
immediately marked for collection by prune_cache. Then the GETGRBYGID
entry is added which shares the data packet with the first entry and
therefore is marked as !first, while the marking loop in prune_cache has
already finished. When the garbage collector runs, it only considers
references by entries marked as first, missing the reference by the
secondary entry.
The only way to fix that is to prevent prune_cache from running while the
two related entries are added.
Joseph Myers [Wed, 8 Jun 2016 23:11:42 +0000 (23:11 +0000)]
Fix i386/x86_64 log1pl (sNaN) (bug 20229).
The i386/x86_64 versions of log1pl return sNaN for sNaN input. This
patch fixes them to add a NaN input to itself so that qNaN is returned
in this case.
Tested for x86_64 and x86.
[BZ #20229]
* sysdeps/i386/fpu/s_log1pl.S (__log1pl): Add NaN input to itself.
* sysdeps/x86_64/fpu/s_log1pl.S (__log1pl): Likewise.
* math/libm-test.inc (log1p_test_data): Add sNaN tests.
Joseph Myers [Wed, 8 Jun 2016 22:59:18 +0000 (22:59 +0000)]
Fix i386/x86_64 log10l (sNaN) (bug 20228).
The i386/x86_64 versions of log10l return sNaN for sNaN input. This
patch fixes them to add a NaN input to itself so that qNaN is returned
in this case.
Tested for x86_64 and x86.
[BZ #20228]
* sysdeps/i386/fpu/e_log10l.S (__ieee754_log10l): Add NaN input to
itself.
* sysdeps/x86_64/fpu/e_log10l.S (__ieee754_log10l): Likewise.
* math/libm-test.inc (log10_test_data): Add sNaN tests.
Joseph Myers [Wed, 8 Jun 2016 21:55:06 +0000 (21:55 +0000)]
Fix i386/x86_64 expl, exp10l, expm1l for sNaN input (bug 20226).
The i386 and x86_64 implementations of expl, exp10l and expm1l (code
shared between the functions) return sNaN for sNaN input. This patch
fixes them to add NaN inputs to themselves so that qNaN is returned in
this case.
Joseph Myers [Wed, 8 Jun 2016 21:32:57 +0000 (21:32 +0000)]
Fix ldexp, scalbn, scalbln for sNaN input (bug 20225).
The wrapper implementations of ldexp / scalbn / scalbln
(architecture-independent), and their float / long double variants,
return sNaN for sNaN input. This patch fixes them to add relevant
arguments to themselves so that qNaN is returned in this case.
Joseph Myers [Wed, 8 Jun 2016 21:02:40 +0000 (21:02 +0000)]
Fix i386 cbrtl (sNaN) (bug 20224).
The i386 version of cbrtl returns sNaN (without raising any
exceptions) for sNaN input. This patch fixes it to add non-finite
arguments to themselves (the code path in question is also reached for
zero arguments, for which adding them to themselves is also harmless),
so that "invalid" is raised and qNaN returned.
Tested for x86_64 and x86.
[BZ #20224]
* sysdeps/i386/fpu/s_cbrtl.S (__cbrtl): Add non-finite or zero
argument to itself.
* math/libm-test.inc (cbrt_test_data): Add sNaN tests.
Since the new SSE2/AVX2 memcpy/memmove are faster than the previous ones,
we can remove the previous SSE2/AVX2 memcpy/memmove and replace them with
the new ones.
No change in IFUNC selection if SSE2 and AVX2 memcpy/memmove weren't used
before. If SSE2 or AVX2 memcpy/memmove were used, the new SSE2 or AVX2
memcpy/memmove optimized with Enhanced REP MOVSB will be used for
processors with ERMS. The new AVX512 memcpy/memmove will be used for
processors with AVX512 which prefer vzeroupper.
Since the new SSE2 memcpy/memmove are faster than the previous default
memcpy/memmove used in libc.a and ld.so, we also remove the previous
default memcpy/memmove and make the new SSE2 memcpy/memmove the default,
except that non-temporal stores aren't used in ld.so.
Together, it reduces the size of libc.so by about 6 KB and the size of
ld.so by about 2 KB.
[BZ #19776]
* sysdeps/x86_64/memcpy.S: Make it dummy.
* sysdeps/x86_64/mempcpy.S: Likewise.
* sysdeps/x86_64/memmove.S: New file.
* sysdeps/x86_64/memmove_chk.S: Likewise.
* sysdeps/x86_64/multiarch/memmove.S: Likewise.
* sysdeps/x86_64/multiarch/memmove_chk.S: Likewise.
* sysdeps/x86_64/memmove.c: Removed.
* sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S: Likewise.
* sysdeps/x86_64/multiarch/memcpy-sse2-unaligned.S: Likewise.
* sysdeps/x86_64/multiarch/memmove-avx-unaligned.S: Likewise.
* sysdeps/x86_64/multiarch/memmove-sse2-unaligned-erms.S:
Likewise.
* sysdeps/x86_64/multiarch/memmove.c: Likewise.
* sysdeps/x86_64/multiarch/memmove_chk.c: Likewise.
* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Remove
memcpy-sse2-unaligned, memmove-avx-unaligned,
memcpy-avx-unaligned and memmove-sse2-unaligned-erms.
* sysdeps/x86_64/multiarch/ifunc-impl-list.c
(__libc_ifunc_impl_list): Replace
__memmove_chk_avx512_unaligned_2 with
__memmove_chk_avx512_unaligned. Remove
__memmove_chk_avx_unaligned_2. Replace
__memmove_chk_sse2_unaligned_2 with
__memmove_chk_sse2_unaligned. Remove __memmove_chk_sse2 and
__memmove_avx_unaligned_2. Replace __memmove_avx512_unaligned_2
with __memmove_avx512_unaligned. Replace
__memmove_sse2_unaligned_2 with __memmove_sse2_unaligned.
Remove __memmove_sse2. Replace __memcpy_chk_avx512_unaligned_2
with __memcpy_chk_avx512_unaligned. Remove
__memcpy_chk_avx_unaligned_2. Replace
__memcpy_chk_sse2_unaligned_2 with __memcpy_chk_sse2_unaligned.
Remove __memcpy_chk_sse2. Remove __memcpy_avx_unaligned_2.
Replace __memcpy_avx512_unaligned_2 with
__memcpy_avx512_unaligned. Remove __memcpy_sse2_unaligned_2
and __memcpy_sse2. Replace __mempcpy_chk_avx512_unaligned_2
with __mempcpy_chk_avx512_unaligned. Remove
__mempcpy_chk_avx_unaligned_2. Replace
__mempcpy_chk_sse2_unaligned_2 with
__mempcpy_chk_sse2_unaligned. Remove __mempcpy_chk_sse2.
Replace __mempcpy_avx512_unaligned_2 with
__mempcpy_avx512_unaligned. Remove __mempcpy_avx_unaligned_2.
Replace __mempcpy_sse2_unaligned_2 with
__mempcpy_sse2_unaligned. Remove __mempcpy_sse2.
* sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Support
__memcpy_avx512_unaligned_erms and __memcpy_avx512_unaligned.
Use __memcpy_avx_unaligned_erms and __memcpy_sse2_unaligned_erms
if processor has ERMS. Default to __memcpy_sse2_unaligned.
(ENTRY): Removed.
(END): Likewise.
(ENTRY_CHK): Likewise.
(libc_hidden_builtin_def): Likewise.
Don't include ../memcpy.S.
* sysdeps/x86_64/multiarch/memcpy_chk.S (__memcpy_chk): Support
__memcpy_chk_avx512_unaligned_erms and
__memcpy_chk_avx512_unaligned. Use
__memcpy_chk_avx_unaligned_erms and
__memcpy_chk_sse2_unaligned_erms if processor has ERMS.
Default to __memcpy_chk_sse2_unaligned.
* sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
Change function suffix from unaligned_2 to unaligned.
* sysdeps/x86_64/multiarch/mempcpy.S (__mempcpy): Support
__mempcpy_avx512_unaligned_erms and __mempcpy_avx512_unaligned.
Use __mempcpy_avx_unaligned_erms and __mempcpy_sse2_unaligned_erms
if processor has ERMS. Default to __mempcpy_sse2_unaligned.
(ENTRY): Removed.
(END): Likewise.
(ENTRY_CHK): Likewise.
(libc_hidden_builtin_def): Likewise.
Don't include ../mempcpy.S.
(mempcpy): New. Add a weak alias.
* sysdeps/x86_64/multiarch/mempcpy_chk.S (__mempcpy_chk): Support
__mempcpy_chk_avx512_unaligned_erms and
__mempcpy_chk_avx512_unaligned. Use
__mempcpy_chk_avx_unaligned_erms and
__mempcpy_chk_sse2_unaligned_erms if processor has ERMS.
Default to __mempcpy_chk_sse2_unaligned.
H.J. Lu [Wed, 8 Jun 2016 20:55:45 +0000 (13:55 -0700)]
X86-64: Remove the previous SSE2/AVX2 memsets
Since the new SSE2/AVX2 memsets are faster than the previous ones, we
can remove the previous SSE2/AVX2 memsets and replace them with the
new ones. This reduces the size of libc.so by about 900 bytes.
No change in IFUNC selection if SSE2 and AVX2 memsets weren't used
before. If SSE2 or AVX2 memset was used, the new SSE2 or AVX2 memset
optimized with Enhanced REP STOSB will be used for processors with
ERMS. The new AVX512 memset will be used for processors with AVX512
which prefer vzeroupper.
[BZ #19881]
* sysdeps/x86_64/multiarch/memset-sse2-unaligned-erms.S: Folded
into ...
* sysdeps/x86_64/memset.S: This.
(__bzero): Removed.
(__memset_tail): Likewise.
(__memset_chk): Likewise.
(memset): Likewise.
(MEMSET_CHK_SYMBOL): New. Define only if MEMSET_SYMBOL isn't
defined.
(MEMSET_SYMBOL): Define only if MEMSET_SYMBOL isn't defined.
* sysdeps/x86_64/multiarch/memset-avx2.S: Removed.
(__memset_zero_constant_len_parameter): Check SHARED instead of
PIC.
* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Remove
memset-avx2 and memset-sse2-unaligned-erms.
* sysdeps/x86_64/multiarch/ifunc-impl-list.c
(__libc_ifunc_impl_list): Remove __memset_chk_sse2,
__memset_chk_avx2, __memset_sse2 and __memset_avx2_unaligned.
* sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S
(__bzero): Enabled.
* sysdeps/x86_64/multiarch/memset.S (memset): Replace
__memset_sse2 and __memset_avx2 with __memset_sse2_unaligned
and __memset_avx2_unaligned. Use __memset_sse2_unaligned_erms
or __memset_avx2_unaligned_erms if processor has ERMS. Support
__memset_avx512_unaligned_erms and __memset_avx512_unaligned.
(memset): Removed.
(__memset_chk): Likewise.
(MEMSET_SYMBOL): New.
(libc_hidden_builtin_def): Replace __memset_sse2 with
__memset_sse2_unaligned.
* sysdeps/x86_64/multiarch/memset_chk.S (__memset_chk): Replace
__memset_chk_sse2 and __memset_chk_avx2 with
__memset_chk_sse2_unaligned and __memset_chk_avx2_unaligned_erms.
Use __memset_chk_sse2_unaligned_erms or
__memset_chk_avx2_unaligned_erms if processor has ERMS. Support
__memset_chk_avx512_unaligned_erms and
__memset_chk_avx512_unaligned.
Paul E. Murphy [Wed, 8 Jun 2016 19:56:04 +0000 (14:56 -0500)]
Generate new format names in auto-libm-test-out
This converts the inclusion macro for each test to use
the format specific macro. In addition, the format
specifier is removed as it is applied via the LIT() macro
which is itself applied when converting the auto inputs and
libm-test.inc into libm-test.c.
Paul E. Murphy [Wed, 8 Jun 2016 19:37:15 +0000 (14:37 -0500)]
Remove CHOOSE() macro from libm-tests.inc
Use gen-libm-test.pl to generate a list of macros
mapping to libm-test-ulps.h as this simplifies adding new
types without having to modify a growing number of
static headers each time a type is added.
This also removes the final usage of the TEST_(DOUBLE|FLOAT|LDOUBLE)
macros. Thus, they too are removed.
Paul E. Murphy [Wed, 8 Jun 2016 19:28:07 +0000 (14:28 -0500)]
Apply LIT(x) to floating point literals in libm-test.c
With the exception of the second argument of nexttoward,
any suffixes should be stripped from the test input, and
the macro LIT(x) should be applied to use the correct
suffix for the type being tested.
This adds a new argument type "j" to gen-libm-test.pl
to signify an argument to a test input which does not
require fixup. The test cases of nexttoward have
been updated to use this new feature.
This applies post-processing to all of the test inputs
through gen-libm-test.pl to strip literal suffixes and
apply the LIT(x) macro, with one exception stated above.
This seems a bit cleaner than tossing the macro onto
everything, albeit slightly more obfuscated.
Florian Weimer [Wed, 8 Jun 2016 18:50:21 +0000 (20:50 +0200)]
malloc: Correct size computation in realloc for dumped fake mmapped chunks
For regular mmapped chunks there are two size fields (hence a reduction
by 2 * SIZE_SZ bytes), but for fake chunks, we only have one size field,
so we need to subtract SIZE_SZ bytes.
Joseph Myers [Tue, 7 Jun 2016 22:54:58 +0000 (22:54 +0000)]
Fix i386 asinhl (sNaN) (bug 20218).
The i386 version of asinhl returns sNaN (without raising any
exceptions) for sNaN input. This patch fixes it to add non-finite
arguments to themselves, so that "invalid" is raised and qNaN
returned.
H.J. Lu [Tue, 7 Jun 2016 15:00:21 +0000 (08:00 -0700)]
Check FMA after COMMON_CPUID_INDEX_80000001
Since the FMA4 bit is in COMMON_CPUID_INDEX_80000001 and FMA4 requires
AVX, determine if FMA4 is usable after COMMON_CPUID_INDEX_80000001 is
available and if AVX is usable.
Carlos O'Donell [Tue, 7 Jun 2016 08:46:37 +0000 (04:46 -0400)]
Bug 20214: Fix linux/in6.h and netinet/in.h sync.
In: https://sourceware.org/glibc/wiki/Synchronizing_Headers
we explain how we synchronize our headers with Linux kernel
headers.
In order to synchronize with the Linux linux/in6.h and
linux/ipv6.h headers we checked for their guard macros and
then defined __USE_KERNEL_IPV6_DEFS and conditionalized code
on this macro.
In upstream kernel 56c176c9 the _UAPI prefix was stripped and
this broke our synchronized headers again. We now need to check
for _LINUX_IN6_H and _IPV6_H, and keep checking the old versions
of the header guard checks for maximum backwards compatibility
with older Linux headers (the history is actually a bit muddled
here and it appears the upstream Linux kernel broke this 10 months
*before* our fix was ever applied to glibc, but without glibc
testing we didn't notice and distro kernels have their own
testing to fix this).
This patch fixes synchronization with linux/in6.h and
with netinet/in.h.
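A hedged sketch of the guard-macro check this implies (simplified; the
conditional in the installed headers may differ in detail):

  /* Accept both the old _UAPI-prefixed kernel header guards and the new
     unprefixed ones, so that including linux/in6.h or linux/ipv6.h first
     still selects the kernel's definitions.  */
  #if defined _UAPI_LINUX_IN6_H || defined _LINUX_IN6_H \
      || defined _UAPI_IPV6_H || defined _IPV6_H
  # define __USE_KERNEL_IPV6_DEFS 1
  #endif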
Carlos O'Donell [Mon, 6 Jun 2016 18:20:58 +0000 (14:20 -0400)]
Bug 20198: quick_exit should not call destructors.
In C++11 18.5.12 says "Objects shall not be destroyed as a
result of calling quick_exit." In C11 quick_exit is silent
about thread object destruction. Therefore to make glibc
C++ compliant we do not call any thread local destructors.
A new regression test verifies the fix.
I will note that C++11 18.5.3 makes it clear that C++
defines additional requirements for _Exit() to prevent it
from executing destructors.
Given that the point of _Exit() is to terminate the process
immediately it makes sense the C and C++ should line up
and avoid calling destructors.
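For reference, a small standard-C illustration of quick_exit semantics
(this is not the new regression test, which exercises thread-local
destructors):

  #include <stdio.h>
  #include <stdlib.h>

  static void
  quick_handler (void)
  {
    puts ("at_quick_exit handler runs");
  }

  static void
  exit_handler (void)
  {
    puts ("atexit handler does not run");
  }

  int
  main (void)
  {
    atexit (exit_handler);
    at_quick_exit (quick_handler);
    /* Skips atexit handlers and, with this fix, thread-local
       destructors.  */
    quick_exit (0);
  }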
Joseph Myers [Mon, 6 Jun 2016 22:21:11 +0000 (22:21 +0000)]
Fix dbl-64 asin (sNaN) (bug 20213).
The dbl-64 version of asin returns sNaN for sNaN arguments. This
patch fixes it to add NaN arguments to themselves so that qNaN is
returned in this case.
Tested for x86_64 and x86.
[BZ #20213]
* sysdeps/ieee754/dbl-64/e_asin.c (__ieee754_asin): Add NaN
argument to itself.
* math/libm-test.inc (asin_test_data): Add sNaN tests.
This patch consolidates all the pwritev{64} implementation for Linux
in only one (sysdeps/unix/sysv/linux/pwritev{64}.c). It also removes the
syscall from the auto-generation using assembly macros.
It was based on previous pwrite/pwrite64 consolidation patch. The new macro
SYSCALL_LL{64} is used to handle the offset argument and an alias is
created for __ASSUME_OFF_DIFF_OFF64 in the case of pread64.
Checked on x86_64, i386, aarch64, and powerpc64le.
This patch consolidates all the preadv{64} implementation for Linux
in only one (sysdeps/unix/sysv/linux/preadv{64}.c). It also removes the
syscall from the auto-generation using assembly macros.
It was based on previous pread/pread64 consolidation patch. The new macro
SYSCALL_LL{64} is used to handle the offset argument and an alias is
created for __ASSUME_OFF_DIFF_OFF64 in the case of pread64.
Checked on x86_64, i386, aarch64, and powerpc64le.
Joseph Myers [Mon, 6 Jun 2016 22:10:11 +0000 (22:10 +0000)]
Fix dbl-64 acos (sNaN) (bug 20212).
The dbl-64 version of acos returns sNaN for sNaN arguments. This
patch fixes it to add NaN arguments to themselves so that qNaN is
returned in this case.
Tested for x86_64 and x86.
[BZ #20212]
* sysdeps/ieee754/dbl-64/e_asin.c (__ieee754_acos): Add NaN
argument to itself.
* math/libm-test.inc (acos_test_data): Add sNaN tests.
Stefan Liebler [Mon, 6 Jun 2016 09:03:04 +0000 (11:03 +0200)]
tst-rec-dlopen: Fix build failure due to missing inclusion of string.h
On S390, I get a compile error for dlfcn/tst-rec-dlopen.c:
tst-rec-dlopen.c: In function ‘malloc’:
tst-rec-dlopen.c:101:4: error: implicit declaration of function ‘strlen’ [-Werror=implicit-function-declaration]
(void) write (STDOUT_FILENO, message, strlen (message));
^
tst-rec-dlopen.c:101:42: error: incompatible implicit declaration of built-in function ‘strlen’ [-Werror]
(void) write (STDOUT_FILENO, message, strlen (message));
^
tst-rec-dlopen.c:112:42: error: incompatible implicit declaration of built-in function ‘strlen’ [-Werror]
(void) write (STDOUT_FILENO, message, strlen (message));
^
This patch adds the missing "#include <string.h>" for strlen.