Szabolcs Nagy [Tue, 2 Feb 2021 15:02:09 +0000 (15:02 +0000)]
Remove PR_TAGGED_ADDR_ENABLE from sys/prctl.h
The value of PR_TAGGED_ADDR_ENABLE was incorrect in the installed
headers, and the prctl command macros needed for it to be useful
(PR_SET_TAGGED_ADDR_CTRL) were missing. The Linux headers have had
these definitions since 5.4, so they are widely available and we do
not need to repeat them. The remaining definitions are from
Linux 5.10.
To build glibc with --enable-memory-tagging, Linux 5.4 headers and
binutils 2.33.1 or newer are needed.
linux: sysconf: limit _SC_MAX_ARG to 6 MiB (BZ #25305)
Since Linux 4.13, the kernel limits the maximum command line argument
length to 6 MiB [1]. Normally the limit is still a quarter of the
maximum stack size, but if that value exceeds 6 MiB it is clamped down
to 6 MiB.
glibc's __sysconf implementation for the Linux platform is not aware of
this limitation, and for stack sizes over 24 MiB it returns a higher
ARG_MAX than Linux will actually accept. This can be verified by
running the following application on Linux 4.13 or newer:
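(The original test program is not reproduced in this excerpt; a minimal
sketch that performs the same query might look like this:)

  #include <stdio.h>
  #include <unistd.h>

  int
  main (void)
  {
    /* With a 40 MiB stack limit an unpatched glibc reports 10 MiB here
       (a quarter of the stack limit) even though the kernel caps the
       limit at 6 MiB since Linux 4.13.  */
    printf ("ARG_MAX: %ld\n", sysconf (_SC_ARG_MAX));
    return 0;
  }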
On affected systems the program will report ARG_MAX as 10 MiB, but
despite that, executing /bin/true with a bit over 6 MiB of command line
arguments will fail with an E2BIG error. The expected result is that
ARG_MAX is reported as 6 MiB.
Update the __sysconf function to clamp the ARG_MAX value to 6 MiB if it
would otherwise exceed it. This resolves bug #25305, which was marked
WONTFIX because the suggested solution was to cap ARG_MAX at 128 KiB.
As an aside and point of comparison, bionic (a libc implementation for
Android systems) decided to resolve this issue by always returning 128
KiB, ignoring any potential xargs regressions [2].
On older kernels this results in returning an overly conservative value,
but that is a safer option than being aggressive and returning an
invalid value on recent systems. It is also worth noting that at this
point all supported Linux releases have the 6 MiB barrier, so only
someone running an unsupported kernel version would get an incorrectly
truncated result.
Dan Raymond [Tue, 13 Apr 2021 13:26:12 +0000 (10:26 -0300)]
misc: syslog: Fix calls to openlog() with LOG_KERN facility (BZ #3604)
POSIX states for syslog [1]:
"Values of the priority argument are formed by OR'ing together a
severity-level value and an optional facility value. If no
facility value is specified, the current default facility value is
used."
So the patch fixes an existing violation of the openlog interface contract,
where the facility argument was ignored when its value was zero (which
is the value of LOG_KERN). It allows the use of LOG_KERN by calling
openlog prior to syslog usage.
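A minimal usage sketch of the now-honoured contract (LOG_KERN is 0, so
before the fix openlog treated it as "no facility given" and later
syslog calls used the default facility instead):

  #include <syslog.h>

  int
  main (void)
  {
    /* LOG_KERN == 0: before this fix the zero facility passed to
       openlog was ignored.  */
    openlog ("example", LOG_PID, LOG_KERN);
    syslog (LOG_INFO, "message logged with the kernel facility");
    closelog ();
    return 0;
  }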
Paul Eggert [Mon, 12 Apr 2021 02:06:00 +0000 (19:06 -0700)]
Improve documentation for malloc etc. (BZ#27719)
Cover key corner cases (e.g., whether errno is set) that are well
settled in glibc, fix some examples to avoid integer overflow, and
update some other dated examples (code needed for K&R C, e.g.).
* manual/charset.texi (Non-reentrant String Conversion):
* manual/filesys.texi (Symbolic Links):
* manual/memory.texi (Allocating Cleared Space):
* manual/socket.texi (Host Names):
* manual/string.texi (Concatenating Strings):
* manual/users.texi (Setting Groups):
Use reallocarray instead of realloc, to avoid integer overflow issues.
* manual/filesys.texi (Scanning Directory Content):
* manual/memory.texi (The GNU Allocator, Hooks for Malloc):
* manual/tunables.texi:
Use code font for 'malloc' instead of roman font.
(Symbolic Links): Don't assume readlink return value fits in 'int'.
* manual/memory.texi (Memory Allocation and C, Basic Allocation)
(Malloc Examples, Alloca Example):
* manual/stdio.texi (Formatted Output Functions):
* manual/string.texi (Concatenating Strings, Collation Functions):
Omit pointer casts that are needed only in ancient K&R C.
* manual/memory.texi (Basic Allocation):
Say that malloc sets errno on failure.
Say "convert" rather than "cast", since casts are no longer needed.
* manual/memory.texi (Basic Allocation):
* manual/string.texi (Concatenating Strings):
In examples, use C99 declarations after statements for brevity.
* manual/memory.texi (Malloc Examples): Add portability notes for
malloc (0), errno setting, and PTRDIFF_MAX.
(Changing Block Size): Say that realloc (p, 0) acts like
(p ? (free (p), NULL) : malloc (0)).
Add xreallocarray example, since other examples can use it.
Add portability notes for realloc (0, 0), realloc (p, 0),
PTRDIFF_MAX, and improve notes for reallocating to the same size.
(Allocating Cleared Space): Reword now-confusing discussion
about replacement, and xref "Replacing malloc".
* manual/stdio.texi (Formatted Output Functions):
Don't assume message size fits in 'int'.
* manual/string.texi (Concatenating Strings):
Fix undefined behavior involving arithmetic on a freed pointer.
linux: Normalize and return timeout on select (BZ #27651)
The commit 2433d39b697, which added time64 support to select, changed
the function to use __NR_pselect6 (or __NR_pselect6_time64) on all
architectures. However, on architectures where the symbol was
implemented with __NR_select, the kernel normalizes the passed timeout
instead of returning EINVAL. For instance, the input timeval
{ 0, 5000000 } is interpreted as { 5, 0 }.
As indicated by BZ #27651, this semantic seems to be expected, and
changing it results in some performance issues (most likely the
program does not check the return code and keeps issuing select
with an unnormalized tv_usec argument).
To avoid a semantic that depends on which syscall the architecture
uses, select now always normalizes the timeout input. This is a
slight change for some ABIs (for instance aarch64).
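A sketch of the behaviour in question (an illustrative standalone
program, not the test from the commit):

  #include <stdio.h>
  #include <sys/select.h>

  int
  main (void)
  {
    /* Unnormalized timeout: tv_usec >= 1000000.  The legacy
       __NR_select based implementation let the kernel treat this as
       { 5, 0 }; with the fix glibc normalizes it before issuing
       pselect6, so the call still sleeps about five seconds instead
       of failing with EINVAL.  */
    struct timeval tv = { 0, 5000000 };
    int r = select (0, NULL, NULL, NULL, &tv);
    printf ("select returned %d\n", r);
    return 0;
  }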
Fix SXID_ERASE behavior in setuid programs (BZ #27471)
When parse_tunables tries to erase a tunable marked as SXID_ERASE for
setuid programs, it ends up setting the envvar string iterator
incorrectly, as a result of which it may parse the next tunable
incorrectly. Given that the current implementation allows malformed
and unrecognized tunables to pass through, it may even allow SXID_ERASE
tunables to go through.
This change revamps the SXID_ERASE implementation so that:
- Only valid tunables are written back to the tunestr string, so
children of SXID programs will only inherit a clean list of
identified tunables that are not SXID_ERASE.
- Unrecognized tunables get scrubbed off from the environment and
subsequently from the child environment.
- This has the side effect that a tunable that is not identified by
the setxid binary will not be passed on to a non-setxid child, even
if the child could have identified that tunable. This may break
applications that expect this behaviour, but expecting such tunables
to cross the SXID boundary is wrong.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Instead of passing GLIBC_TUNABLES via the environment, pass the
environment variable from parent to child. This allows us to test
multiple variables to ensure better coverage.
The test list currently only includes the case that's already being
tested. More tests will be added later.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Add a new function support_capture_subprogram_self_sgid that spawns an
sgid child of the running program with its own image and returns the
exit code of the child process. This functionality is used by at
least three tests in the testsuite at the moment, so it makes sense to
consolidate.
There is also a new function support_subprogram_wait, which should
provide simple system()-like functionality that does not set up file
actions. This is useful in cases where only the return code of the
spawned subprocess is interesting.
This patch also ports tst-secure-getenv to this new function. A
subsequent patch will port other tests. This also brings an important
change to tst-secure-getenv behaviour. Now instead of succeeding, the
test fails as UNSUPPORTED if it is unable to spawn a setgid child,
which is how it should have been in the first place.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
i.e. sp at setjmp was outside of the altstack range. Here we know that
longjmp is called from a signal handler on the altstack (SS_ONSTACK),
and that it jumps in the wrong direction (sp decreases), so the check
wants to ensure the jump goes to another stack.
The check is wrong when altstack_sp == setjmp_sp which can happen
when the altstack is a local buffer in the function that calls setjmp,
so the patch allows == too. This fixes bug 27709.
Note that the generic __longjmp_chk check seems to be different
(it checks whether longjmp was on the altstack but does not check setjmp,
so it would not catch incorrect longjmp use within the signal handler).
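An illustrative sketch of the scenario (an assumed reconstruction from
the description above, not the reproducer from bug 27709): the altstack
is a local buffer in the same frame that calls setjmp, so the handler's
stack pointer can equal the setjmp stack pointer.

  #define _GNU_SOURCE
  #include <setjmp.h>
  #include <signal.h>
  #include <stdio.h>
  #include <string.h>

  static sigjmp_buf env;

  static void
  handler (int sig)
  {
    (void) sig;
    /* Runs on the altstack (SS_ONSTACK) and jumps back towards the
       caller's frame, where the altstack buffer itself lives.  */
    siglongjmp (env, 1);
  }

  int
  main (void)
  {
    char buf[64 * 1024];   /* altstack is local to the setjmp frame */
    stack_t ss = { .ss_sp = buf, .ss_flags = 0, .ss_size = sizeof buf };
    sigaltstack (&ss, NULL);

    struct sigaction sa;
    memset (&sa, 0, sizeof sa);
    sa.sa_handler = handler;
    sa.sa_flags = SA_ONSTACK;
    sigaction (SIGUSR1, &sa, NULL);

    if (sigsetjmp (env, 1) == 0)
      raise (SIGUSR1);
    puts ("returned via siglongjmp");
    return 0;
  }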
Paul Zimmermann [Fri, 2 Apr 2021 06:21:06 +0000 (08:21 +0200)]
Improve the accuracy of tgamma (BZ #26983)
With this patch, the maximal known error for tgamma is now reduced to 9 ulps
for dbl-64, for all rounding modes. Since exhaustive testing is not possible
for dbl-64, it might be that there are still cases with an error larger than
9 ulps, but all known cases are fixed (intensive tests were done to find cases
with large errors).
Tested on x86_64 and powerpc (and by Adhemerval Zanella on aarch64, arm,
s390x, sparc, and i686).
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The simplification of the tunable_set interfaces took care of
signed/unsigned conversions while setting values, but comparison with
bounds ended up being incorrect; comparing TUNABLE_SIZE_T values, for
example, will fail because SIZE_MAX is seen as -1.
Add comparison helpers that take tunable types into account and use
them to do the comparison instead.
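A sketch of the underlying pitfall (illustrative only, not the tunables
code itself):

  #include <stdint.h>
  #include <stdio.h>

  int
  main (void)
  {
    size_t val = SIZE_MAX;   /* e.g. a TUNABLE_SIZE_T value */
    int64_t max = 1024;      /* bound kept in a signed container */

    /* Comparing through the signed type: on a typical 64-bit system
       SIZE_MAX becomes -1, so the value wrongly appears to be within
       the bound.  */
    printf ("signed compare:   %d\n", (int64_t) val <= max);

    /* Comparing in the tunable's own unsigned type gives the intended
       result.  */
    printf ("unsigned compare: %d\n", val <= (uint64_t) max);
    return 0;
  }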
Update sv_SE to treat 'W' as a distinct character (Bug 25036)
The 13th edition of Svenska Akademiens ordlista lists 'W' as a
distinct letter that sorts after 'V'. We adjust the sv_SE locale
(and tests) to match this updated and "reformed" language change.
This harmonizes us with CLDR 1.5.0 (2007) for sv_SE sorting of
the letter 'W'.
No regressions on x86_64, and locale sorting tests all pass.
Co-authored-by: Carlos O'Donell <carlos@redhat.com>
Maninder Singh [Wed, 10 Jan 2018 15:17:30 +0000 (15:17 +0000)]
elf: Fix data race in _dl_name_match_p [BZ #21349]
dlopen updates libname_list by writing to lastp->next, but concurrent
reads in _dl_name_match_p were not synchronized when it was called
without holding GL(dl_load_lock), which can happen during lazy symbol
resolution.
This patch fixes the race between _dl_name_match_p reading lastp->next
and add_name_to_object writing to it. This could cause a segfault on
targets with a weak memory order when lastp->next->name is read, which
was observed on an arm system. Fixes bug 21349.
(Code is from Maninder Singh, comments and description are from Szabolcs
Nagy.)
Szabolcs Nagy [Thu, 11 Feb 2021 13:38:10 +0000 (13:38 +0000)]
aarch64: free tlsdesc data on dlclose [BZ #27403]
DL_UNMAP_IS_SPECIAL and DL_UNMAP were not defined. The definitions are
now copied from arm, since the same is needed on aarch64. The cleanup
of tlsdesc data is handled by the custom _dl_unmap.
Required after 9acda61d94acc "Fix the inaccuracy of j0f/j1f/y0f/y1f
[BZ #14469, #14470, #14471, #14472]" and db3f7bb558 "math: Remove
slow paths from asin and acos [BZ #15267]".
Paul Zimmermann [Thu, 1 Apr 2021 06:14:10 +0000 (08:14 +0200)]
Fix the inaccuracy of j0f/j1f/y0f/y1f [BZ #14469, #14470, #14471, #14472]
For j0f/j1f/y0f/y1f, the largest error for all binary32
inputs is reduced to at most 9 ulps for all rounding modes.
The new code is enabled only when there is a cancellation at the very end of
the j0f/j1f/y0f/y1f computation, or for very large inputs, thus should not
give any visible slowdown on average. Two different algorithms are used:
* around the first 64 zeros of j0/j1/y0/y1, approximation polynomials of
degree 3 are used, computed using the Sollya tool (https://www.sollya.org/)
* for large inputs, an asymptotic formula from [1] is used
[1] Fast and Accurate Bessel Function Computation,
John Harrison, Proceedings of Arith 19, 2009.
Inputs yielding the new largest errors are added to auto-libm-test-in,
and ulps are regenerated for various targets (thanks Adhemerval Zanella).
Tested on x86_64 with --disable-multi-arch and on powerpc64le-linux-gnu.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
H.J. Lu [Tue, 16 Mar 2021 14:41:46 +0000 (07:41 -0700)]
x86_64: Correct THREAD_SETMEM/THREAD_SETMEM_NC for movq [BZ #27591]
config/i386/constraints.md in GCC has
(define_constraint "e"
"32-bit signed integer constant, or a symbolic reference known
to fit that range (for immediate operands in sign-extending x86-64
instructions)."
(match_operand 0 "x86_64_immediate_operand"))
Since movq takes a signed 32-bit immediate or a register source operand,
use the "er" constraint, instead of "nr"/"ir", for a 32-bit signed integer
constant or register operand of movq.
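A sketch of the constraint difference (illustrative x86-64 inline asm,
not the actual THREAD_SETMEM definition):

  #include <stdint.h>

  static uint64_t slot;

  static void
  set_slot (uint64_t value)
  {
    /* "er" accepts a 32-bit sign-extended immediate or a register,
       which is exactly what movq can encode; "nr"/"ir" would also
       accept a 64-bit constant that movq cannot take as an
       immediate operand.  */
    __asm__ __volatile__ ("movq %1, %0" : "=m" (slot) : "er" (value));
  }

  int
  main (void)
  {
    set_slot (42);
    return (int) slot;
  }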
Andreas Schwab [Wed, 31 Mar 2021 12:17:24 +0000 (14:17 +0200)]
powerpc64le: Use ifunc for _Float128 functions also in libc
This fixes missing definition of math functions in libc in a static link
that are no longer built for libm after commit 4898d9712b ("Avoid adding
duplicated symbols into static libraries").
Stefan Liebler [Wed, 31 Mar 2021 14:17:01 +0000 (16:17 +0200)]
S390: Allow "v" constraint for long double math_opt_barrier and math_force_eval with GCC 11.
Starting with GCC 11, long double values can also be processed in vector
registers if built with -march >= z14. In that case GCC defines the
__LONG_DOUBLE_VX__ macro.
io: Check at runtime if timestamp supports nanoseconds
Now that the non-LFS stat functions are implemented on top of the LFS
ones, they will use statx when available. This allows checking for
nanosecond timestamp support if the kernel supports __NR_statx.
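An illustrative sketch of such a runtime probe (an assumed example, not
the internal glibc check):

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/stat.h>

  int
  main (void)
  {
    struct statx stx;
    /* statx is available since Linux 4.11 and glibc 2.28; on older
       kernels the call fails with ENOSYS and a caller would fall back
       to fstatat and assume whole-second timestamps.  */
    if (statx (AT_FDCWD, ".", 0, STATX_MTIME, &stx) != 0)
      {
        perror ("statx");
        return 1;
      }
    printf ("mtime: %lld.%09u\n",
            (long long) stx.stx_mtime.tv_sec, stx.stx_mtime.tv_nsec);
    return 0;
  }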
Stefan Liebler [Wed, 31 Mar 2021 08:23:16 +0000 (10:23 +0200)]
Fix conform linknamespace tests due to gnu_dev_makedev
If building on s390 / i686 with -Os, various conformance
tests are failing with e.g.
conform/ISO/assert.h/linknamespace.out:
[initial] __assert_fail -> [libc.a(assert.o)] __dcgettext -> [libc.a(dcgettext.o)] __dcigettext -> [libc.a(dcigettext.o)] __getcwd -> [libc.a(getcwd.o)] __fstatat64 -> [libc.a(fstatat64.o)] gnu_dev_makedev
The usage of gnu_dev_makedev was recently introduced by the use of the
makedev macro in commit 5b980d4809913088729982865188b754939bcd39
"linux: Use statx for MIPSn64".
This patch now links against __gnu_dev_makedev, as was also done in
commit 8b4a118222c7ed41bc653943b542915946dff1dd
"Fix -Os gnu_dev_* linknamespace, localplt issues (bug 15105, bug 19463)".
About a decade ago, I accidentally wrote the GPLv3 license text on the
test case when the rest of the glibc source is LGPL v2.1 or later. As
the original author of the test (and there are no other legally
significant changes to the test), I propose updating the license text
to be consistent with the project.
Avoid adding duplicated symbols into static libraries
Some math functions (such as __isnan*) are built into both libm and
libc because they are needed in libc. The symbol gets exported from
libc.so and not libm.so, because of which dynamic linking works fine;
the symbols are always resolved from libc.so and libm.so uses its
internal copy of the same function if needed.
When linking statically though, the libm variants get used throughout
because the symbols are exported in both archives and libm.a is
searched first.
This patch removes these duplicate objects from the libm.a archive so
that programs always link to libc in both the static and the dynamic
case. The difference this will cause is that libm uses of these
functions will start using the libc versions in the !SHARED case.
This is harmless at the moment because the objects are identical
except for their names.
Some of these duplicates could be removed from libm.so too, but I
avoided that in the interest of retaining an internal reference in
case those functions get used within libm in the future.
Reviewed-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Samuel Thibault [Wed, 24 Mar 2021 20:27:34 +0000 (21:27 +0100)]
fork.h: replace with register-atfork.h
UNREGISTER_ATFORK is now defined for all ports in register-atfork.h, so most
previous includes of fork.h actually only need register-atfork.h now, and
cxa_finalize.c does not need an ifdef UNREGISTER_ATFORK any more.
The nptl-specific fork generation counters can then go to pthreadP.h, and
fork.h be removed.
H.J. Lu [Sun, 7 Mar 2021 17:45:23 +0000 (09:45 -0800)]
x86-64: Use ZMM16-ZMM31 in AVX512 memmove family functions
Update ifunc-memmove.h to select the function optimized with AVX512
instructions using ZMM16-ZMM31 registers to avoid RTM abort with usable
AVX512VL since VZEROUPPER isn't needed at function exit.
H.J. Lu [Sun, 7 Mar 2021 17:44:18 +0000 (09:44 -0800)]
x86-64: Use ZMM16-ZMM31 in AVX512 memset family functions
Update ifunc-memset.h/ifunc-wmemset.h to select the function optimized
with AVX512 instructions using ZMM16-ZMM31 registers to avoid RTM abort
with usable AVX512VL and AVX512BW since VZEROUPPER isn't needed at
function exit.
H.J. Lu [Tue, 23 Feb 2021 14:33:10 +0000 (06:33 -0800)]
x86: Add string/memory function tests in RTM region
At function exit, AVX optimized string/memory functions have VZEROUPPER
which triggers RTM abort. When such functions are called inside a
transactionally executing RTM region, RTM abort causes severe performance
degradation. Add tests to verify that string/memory functions won't
cause RTM abort in RTM region.
H.J. Lu [Fri, 5 Mar 2021 15:26:42 +0000 (07:26 -0800)]
x86-64: Add AVX optimized string/memory functions for RTM
Since VZEROUPPER triggers RTM abort while VZEROALL won't, select AVX
optimized string/memory functions with
xtest
jz 1f
vzeroall
ret
1:
vzeroupper
ret
at function exit on processors with usable RTM, but without 256-bit EVEX
instructions to avoid VZEROUPPER inside a transactionally executing RTM
region.
H.J. Lu [Fri, 5 Mar 2021 15:20:28 +0000 (07:20 -0800)]
x86-64: Add memcmp family functions with 256-bit EVEX
Update ifunc-memcmp.h to select the function optimized with 256-bit EVEX
instructions using YMM16-YMM31 registers to avoid RTM abort with usable
AVX512VL, AVX512BW and MOVBE since VZEROUPPER isn't needed at function
exit.
H.J. Lu [Fri, 5 Mar 2021 15:15:03 +0000 (07:15 -0800)]
x86-64: Add memset family functions with 256-bit EVEX
Update ifunc-memset.h/ifunc-wmemset.h to select the function optimized
with 256-bit EVEX instructions using YMM16-YMM31 registers to avoid RTM
abort with usable AVX512VL and AVX512BW since VZEROUPPER isn't needed at
function exit.
H.J. Lu [Fri, 5 Mar 2021 14:46:08 +0000 (06:46 -0800)]
x86-64: Add memmove family functions with 256-bit EVEX
Update ifunc-memmove.h to select the function optimized with 256-bit EVEX
instructions using YMM16-YMM31 registers to avoid RTM abort with usable
AVX512VL since VZEROUPPER isn't needed at function exit.
H.J. Lu [Fri, 5 Mar 2021 14:36:50 +0000 (06:36 -0800)]
x86-64: Add strcpy family functions with 256-bit EVEX
Update ifunc-strcpy.h to select the function optimized with 256-bit EVEX
instructions using YMM16-YMM31 registers to avoid RTM abort with usable
AVX512VL and AVX512BW since VZEROUPPER isn't needed at function exit.
H.J. Lu [Fri, 5 Mar 2021 14:24:52 +0000 (06:24 -0800)]
x86-64: Add ifunc-avx2.h functions with 256-bit EVEX
Update ifunc-avx2.h, strchr.c, strcmp.c, strncmp.c and wcsnlen.c to
select the function optimized with 256-bit EVEX instructions using
YMM16-YMM31 registers to avoid RTM abort with usable AVX512VL, AVX512BW
and BMI2 since VZEROUPPER isn't needed at function exit.
For strcmp/strncmp, prefer AVX2 strcmp/strncmp if Prefer_AVX2_STRCMP
is set.
H.J. Lu [Fri, 26 Feb 2021 13:36:59 +0000 (05:36 -0800)]
x86: Set Prefer_No_VZEROUPPER and add Prefer_AVX2_STRCMP
1. Set Prefer_No_VZEROUPPER if RTM is usable to avoid RTM abort triggered
by VZEROUPPER inside a transactionally executing RTM region.
2. Since to compare 2 32-byte strings, 256-bit EVEX strcmp requires 2
loads, 3 VPCMPs and 2 KORDs while AVX2 strcmp requires 1 load, 2 VPCMPEQs,
1 VPMINU and 1 VPMOVMSKB, AVX2 strcmp is faster than EVEX strcmp. Add
Prefer_AVX2_STRCMP to prefer AVX2 strcmp family functions.
Paul Zimmermann [Fri, 19 Mar 2021 09:09:20 +0000 (10:09 +0100)]
add workload traces for missing functions (double format)
This patch adds workload traces for all double format functions where such
files are missing. For each function, a set of 1000 input values is
generated at random using SageMath, such that the output values are
meaningful (for example avoiding too large inputs for exp10, where the
output would be +Inf). More details about the generated values are
given at the beginning of each file.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
The tests are refactored to use a common skeleton that handles whether
the underlying filesystem supports 64 bit time, skips 64 bit time
tests when the TU only supports 32 bit time, and also skips 64 bit time
tests with values larger than unsigned 32 bit (y2106) if the system
does not support them (MIPSn64 on kernels without statx support).
Checked on x86_64-linux-gnu and i686-linux-gnu. I also checked
on mips64el-linux-gnu with the 4.1.4 and 5.10.0-4-5kc-malta kernels
to verify that the y2106 tests are indeed skipped.
The MIPSn64 kernel ABI for legacy stat uses an unsigned 32 bit field
for the seconds timestamp, which limits the maximum value to y2106.
This patch makes mips64 use statx as is done for 32-bit architectures.
The __cp_stat64_t64_statx helper is open coded; it is used solely by
fstatat64 and it avoids the need to redefine the name for mips64
(which will call __cp_stat64_statx since it does not use
__stat64_t64 internally).
It makes fstatat use __NR_statx, which fixes the s390 issue with
missing nanosecond support on compat stat syscalls (at least
on recent kernels) and limits the statx call to only one function
(which simplifies the __ASSUME_STATX support).
Checked on i686-linux-gnu and on powerpc-linux-gnu.
H.J. Lu [Fri, 19 Mar 2021 13:15:37 +0000 (06:15 -0700)]
x86: Properly disable XSAVE related features [BZ #27605]
1. Support GLIBC_TUNABLES=glibc.cpu.hwcaps=-XSAVE.
2. Disable all features which depend on XSAVE:
a. If OSXSAVE is disabled by glibc tunables. Or
b. If both XSAVE and XSAVEC aren't usable.
The libc version is identical and built with the same flags. The libc
version is set as the default version.
The libpthread compat symbol needs to be masked when building the
loader object, otherwise ld might complain about a missing
versioned symbol (as for alpha).
The libc version is identical and built with the same flags. Both aarch64
and nios2 also require exporting __send, and this was previously done with
HAVE_INTERNAL_SEND_SYMBOL (which forced the symbol creation).
All __send callers are internal to libc and the original issue that
required the symbol export was due to a missing libc_hidden_def. So
a compat symbol is added for __send and the libc_hidden_def is
defined regardless.
Szabolcs Nagy [Mon, 15 Mar 2021 11:44:32 +0000 (11:44 +0000)]
malloc: Ensure mtag code path in checked_request2size is cold
This is a workaround (hack) for a gcc optimization issue (PR 99551).
Without this the generated code may evaluate the expression in the
cold path which causes performance regression for small allocations
in the memory tagging disabled (common) case.
Szabolcs Nagy [Fri, 12 Mar 2021 14:30:10 +0000 (14:30 +0000)]
malloc: Remove unnecessary tagging around _mid_memalign
The internal _mid_memalign already returns newly tagged memory.
(__libc_memalign and posix_memalign already relied on this, this
patch fixes the other call sites.)
Szabolcs Nagy [Thu, 11 Mar 2021 14:49:45 +0000 (14:49 +0000)]
malloc: Rename chunk2rawmem
The previous patch ensured that all chunk to mem computations use
chunk2rawmem, so now we can rename it to chunk2mem, and in the few
cases where the tag of mem is relevant chunk2mem_tag can be used.
Replaced tag_at (chunk2rawmem (x)) with chunk2mem_tag (x).
Renamed chunk2rawmem to chunk2mem.
Szabolcs Nagy [Tue, 9 Mar 2021 14:04:49 +0000 (14:04 +0000)]
malloc: Use chunk2rawmem throughout
The difference between chunk2mem and chunk2rawmem is that the latter
does not get the memory tag for the returned pointer. It turns out
chunk2rawmem almost always works:
The input of chunk2mem is a chunk pointer that is untagged so it can
access the chunk header. All memory that is not user allocated heap
memory is untagged, which in the current implementation means that it
has the 0 tag, but this patch does not rely on the tag value. The
patch relies on chunk operations being done either on untagged
chunks or without doing memory access to the user-owned part.
So only _int_realloc and functions outside this list need care.
Alignment checks do not need the right tag and tcache works with
untagged memory.
tag_at was kept in realloc after an mremap, which is not strictly
necessary, since the pointer is only used to retag the memory, but this
way the tag is guaranteed to be different from the old tag.
Szabolcs Nagy [Fri, 12 Mar 2021 09:46:15 +0000 (09:46 +0000)]
malloc: Use different tag after mremap
The comment explained why a different tag is used after mremap, but
for that a correctly tagged pointer should be passed to tag_new_usable.
Use chunk2mem to get the tag.
Szabolcs Nagy [Mon, 8 Mar 2021 12:59:05 +0000 (12:59 +0000)]
malloc: Use memsize instead of CHUNK_AVAILABLE_SIZE
This is a pure refactoring change that does not affect behaviour.
The CHUNK_AVAILABLE_SIZE name was unclear, the memsize name tries to
follow the existing convention of mem denoting the allocation that is
handed out to the user, while chunk is its internally used container.
The user owned memory for a given chunk starts at chunk2mem(p) and
the size is memsize(p). It is not valid to use on dumped heap chunks.
Moved the definition next to other chunk and mem related macros.
Szabolcs Nagy [Tue, 9 Feb 2021 17:59:11 +0000 (17:59 +0000)]
aarch64: Optimize __libc_mtag_tag_zero_region
This is a target hook for memory tagging, the original was a naive
implementation. Uses the same algorithm as __libc_mtag_tag_region,
but with instructions that also zero the memory. This was not
benchmarked on a real cpu, but it is expected to be faster than the
naive implementation.
Szabolcs Nagy [Tue, 9 Feb 2021 17:56:02 +0000 (17:56 +0000)]
aarch64: Optimize __libc_mtag_tag_region
This is a target hook for memory tagging, the original was a naive
implementation. The optimized version relies on "dc gva" to tag 64
bytes at a time for large allocations and optimizes small cases without
adding too many branches. This was not benchmarked on a real cpu, but
it is expected to be faster than the naive implementation.
Szabolcs Nagy [Thu, 4 Feb 2021 10:04:07 +0000 (10:04 +0000)]
aarch64: inline __libc_mtag_address_get_tag
This is a common operation when heap tagging is enabled, so inline the
instruction instead of using an extern call.
The .inst directive is used instead of the name of the instruction (or
acle intrinsics) because malloc.c is not compiled for the armv8.5-a+memtag
architecture; runtime cpu support detection is used instead.
Prototypes are removed from the comments as they were not always
correct.
Szabolcs Nagy [Wed, 17 Feb 2021 10:15:18 +0000 (10:15 +0000)]
malloc: Use mtag_enabled instead of USE_MTAG
Use the runtime check where possible: it should not cause a slowdown in
the !USE_MTAG case since mtag_enabled is then constant false, but it
allows compiling the tagging logic so it is less likely to break or
diverge when developers only test the !USE_MTAG case.
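A sketch of the pattern described above (the names are illustrative and
simplified, not the exact glibc internals; without -DUSE_MTAG the flag
is a compile-time constant and the branch folds away):

  #include <stdbool.h>
  #include <stddef.h>

  #ifdef USE_MTAG
  extern bool mtag_enabled;        /* set once at startup from the kernel */
  #else
  enum { mtag_enabled = false };   /* constant false: branch is removed */
  #endif

  /* Stub standing in for the real tagging helper.  */
  static void *
  tag_new_usable (void *p, size_t n)
  {
    (void) n;
    return p;
  }

  static void *
  maybe_tag (void *p, size_t n)
  {
    /* The tagging logic is always compiled, but only executed when
       memory tagging is enabled at run time.  */
    if (__builtin_expect (mtag_enabled, 0))
      p = tag_new_usable (p, n);
    return p;
  }

  int
  main (void)
  {
    char b[16];
    return maybe_tag (b, sizeof b) == b ? 0 : 1;
  }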
Szabolcs Nagy [Mon, 8 Feb 2021 12:39:01 +0000 (12:39 +0000)]
malloc: Use branches instead of mtag_granule_mask
The branches may be better optimized since mtag_enabled is widely used.
A granule size larger than a chunk header is not supported, since then we
cannot have both the chunk header and the user area granule aligned. To
fix that for targets with a large granule, the chunk layout has to change.
So code that attempted to handle the granule mask generally was changed.
This simplified CHUNK_AVAILABLE_SIZE and the logic in malloc_usable_size.