This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: [PATCH] Implement libc_once_retry for atomic initialization with allocation
- From: Torvald Riegel <triegel at redhat dot com>
- To: Florian Weimer <fweimer at redhat dot com>
- Cc: GNU C Library <libc-alpha at sourceware dot org>, "Dmitry V. Levin" <ldv at altlinux dot org>
- Date: Thu, 04 Jan 2018 18:15:44 +0100
- Subject: Re: [PATCH] Implement libc_once_retry for atomic initialization with allocation
- Authentication-results: sourceware.org; auth=none
- References: <6e4b6b22-3f38-f798-f2e0-eb22311ed6b5@redhat.com>
On Thu, 2018-01-04 at 16:09 +0100, Florian Weimer wrote:
> Subject: [PATCH] Implement libc_once_retry for atomic initialization with allocation
Can you find a different name for it?  It has similarities to a generic
libc_once, but the pattern here is more specialized.
Do you need it often enough to make it worth breaking it out?
> diff --git a/include/atomic.h b/include/atomic.h
> index 6af07dba58..91e176b040 100644
> --- a/include/atomic.h
> +++ b/include/atomic.h
I don't think this should go into atomic.h because this is where we
really handle the basics of synchronization. This is a synchronization
helper, and doesn't need to be touched by someone porting glibc to
another arch.
> @@ -826,4 +826,58 @@ void __atomic_link_error (void);
> # error ATOMIC_EXCHANGE_USES_CAS has to be defined.
> #endif
>
> +/* Slow path for libc_once_retry; see below. */
> +void *__libc_once_retry_slow (void **__place,
> +                              void *(*__allocate) (void *__closure),
> +                              void (*__deallocate) (void *__closure,
> +                                                    void *__ptr),
> +                              void *__closure);
> +
> +/* Perform an acquire MO load on *PLACE.  If *PLACE is not NULL,
> +   return *PLACE.  Otherwise, call ALLOCATE (CLOSURE).  If that
> +   returns NULL, return NULL.  Otherwise, atomically replace *PLACE
> +   with PTR, the result of the ALLOCATE call (with acquire-release
That doesn't quite match what you implement. First, you don't replace
it unconditionally, but use a CAS (you can't just throw in an
"atomically" here because the allocations are mixed in and they aren't).
Second, you use the weak CAS but the loop includes alloc/dealloc; see
below.
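The control flow being described can be sketched in standalone C11 atomics (the macros and the name `once_retry_slow_sketch` are illustrative stand-ins, not the patch's actual glibc-internal code):

```c
/* Standalone sketch of the posted slow path, using C11 atomics in
   place of glibc's internal macros.  All names are illustrative.  */
#include <stdatomic.h>
#include <stddef.h>

static void *
once_retry_slow_sketch (void *_Atomic *place,
                        void *(*allocate) (void *closure),
                        void (*deallocate) (void *closure, void *ptr),
                        void *closure)
{
  do
    {
      /* Each loop iteration performs a fresh allocation, so a
         spurious failure of the weak CAS causes an extra
         deallocate/allocate pair even without a concurrent writer.  */
      void *result = allocate (closure);
      if (result == NULL)
        return NULL;

      void *expected = NULL;
      if (atomic_compare_exchange_weak_explicit
            (place, &expected, result,
             memory_order_release, memory_order_acquire))
        return result;

      deallocate (closure, result);
      if (expected != NULL)
        /* Another thread installed a value first; return it.  */
        return expected;
      /* Spurious CAS failure: retry, allocating again.  */
    }
  while (1);
}
```

This makes the objection concrete: the allocate/deallocate calls sit inside the retry loop, so the update is not a single atomic step and may run the callbacks more than once.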
> +   MO).  If *PLACE was updated concurrently, call DEALLOCATE (CLOSURE,
> +   PTR) to undo the effect of allocate, and return the new value of
> +   *PLACE.  If DEALLOCATE is NULL, call the free (PTR) instead.
s/the //
> +
> +   It is expected that callers define an inline helper function
> +   function which adds type safety, like this.
s/function function which/function that/
s/./:/
> +
> +     struct foo { ... };
> +     struct foo *global_foo;
> +     static void *allocate_foo (void *closure);
> +     static void deallocate_foo (void *closure, void *ptr);
> +
> +     static inline struct foo *
> +     get_foo (void)
> +     {
> +       return __libc_once_retry (&global_foo, allocate_foo,
> +                                 deallocate_foo, NULL);
> +     }
> +
> +   Usage of this function looks like this:
> +
> +     struct foo *local_foo = get_foo ();
> +     if (local_foo == NULL)
> +       report_allocation_failure ();
> +
> +   Compare to __libc_once, __libc_once_retry has the advantage that it
Compare_d_
> +   does not need separate space for a control variable, and that it is
> +   safe with regards to cancellation and other forms of exception
> +   handling if the provided callback functions are safe.  */
> +static inline void *
> +libc_once_retry (void **__place, void *(*__allocate) (void *__closure),
> +                 void (*__deallocate) (void *__closure, void *__ptr),
> +                 void *__closure)
> +{
> +  /* Synchronizes with the release-store CAS in
s/release-store/release MO/
> diff --git a/misc/libc_once_retry.c b/misc/libc_once_retry.c
> new file mode 100644
> index 0000000000..ecd352e2a3
> --- /dev/null
> +++ b/misc/libc_once_retry.c
> @@ -0,0 +1,55 @@
> +/* Concurrent initialization of a pointer.
> +   Copyright (C) 2018 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#include <atomic.h>
> +#include <stdlib.h>
> +
> +void *
> +__libc_once_retry_slow (void **place, void *(*allocate) (void *closure),
> +                        void (*deallocate) (void *closure, void *ptr),
> +                        void *closure)
> +{
> +  void *result;
> +
> +  do
> +    {
> +      result = allocate (closure);
> +      if (result == NULL)
> +        return NULL;
> +
> +      /* Synchronizes with the acquire MO load in
> +         __libc_once_retry.  */
> +      void *expected = NULL;
> +      if (atomic_compare_exchange_weak_release (place, &expected, result))
> +        return result;
It seems you're looking for a strong CAS here, so why don't you make a
loop around the weak CAS?  Using an acq_rel CAS, this is simpler.
You'd avoid having more than one allocate/deallocate pair, which might
be unexpected given that your documentation of the function's
semantics doesn't make that clear (and it might perhaps matter for
custom alloc/dealloc functions).
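The suggested restructuring might look roughly like this, again as a standalone C11 sketch with illustrative names rather than glibc code: allocate exactly once, then loop only over the weak CAS so that spurious failures cannot trigger a second allocation:

```c
/* Sketch of the suggested variant: one allocation, and a loop that
   retries only the weak CAS.  All names are illustrative.  */
#include <stdatomic.h>
#include <stddef.h>

static void *
once_retry_strong_sketch (void *_Atomic *place,
                          void *(*allocate) (void *closure),
                          void (*deallocate) (void *closure, void *ptr),
                          void *closure)
{
  void *result = allocate (closure);
  if (result == NULL)
    return NULL;

  void *expected = NULL;
  /* On a spurious failure, expected is still NULL and only the CAS
     is retried; allocate is never called a second time.  */
  while (!atomic_compare_exchange_weak_explicit
           (place, &expected, result,
            memory_order_acq_rel, memory_order_acquire))
    if (expected != NULL)
      {
        /* A concurrent initialization won; undo our allocation.  */
        deallocate (closure, result);
        return expected;
      }
  return result;
}
```

Looping around the weak CAS gives the strong-CAS semantics the documentation implies, and with acq_rel ordering on the success path the separate acquire reasoning for the winning store is unnecessary.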