This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: Consensus: data-race freedom as default for glibc code
- From: Torvald Riegel <triegel at redhat dot com>
- To: Florian Weimer <fweimer at redhat dot com>
- Cc: "Carlos O'Donell" <carlos at redhat dot com>, Roland McGrath <roland at hack dot frob dot com>, "Joseph S. Myers" <joseph at codesourcery dot com>, GLIBC Devel <libc-alpha at sourceware dot org>
- Date: Mon, 24 Nov 2014 16:45:56 +0100
- Subject: Re: Consensus: data-race freedom as default for glibc code
- Authentication-results: sourceware.org; auth=none
- References: <1414797659 dot 10085 dot 406 dot camel at triegel dot csb> <1416508239 dot 1771 dot 61 dot camel at triegel dot csb> <546F0733 dot 70304 at redhat dot com> <1416608824 dot 1771 dot 72 dot camel at triegel dot csb> <547340BD dot 4060306 at redhat dot com> <1416842616 dot 1771 dot 138 dot camel at triegel dot csb> <54734DE0 dot 2020606 at redhat dot com>
On Mon, 2014-11-24 at 16:25 +0100, Florian Weimer wrote:
> On 11/24/2014 04:23 PM, Torvald Riegel wrote:
> >> > * Parallel algorithms implemented in glibc itself will be free from
> >> > data races (as defined by C11 and its memory model) by default.
> >
> > I changed it to:
> > * Concurrent code in glibc is free from data races (as defined by C11
> > and its memory model) by default.
>
> Fine with me as well (although I think technically, this is about
> parallelism, not concurrency :-).
And I disagree, that's why I changed it :)
We don't parallelize anything in glibc. The code that the paragraph
addresses contains (shared-memory) synchronization; that is, code in
which things don't actually run in parallel and separate (as parallel
lines would), but in which the different threads have to coordinate.
The distinction is a little fuzzy of course, because most forms of
parallelism have at least some concurrency/synchronization in them, and
good concurrent code tries to maintain parallelism where possible.
Likewise, when you look at programming abstractions for parallelism,
these try to support splitting one piece of work into independent
parts; in contrast, concurrency abstractions try to handle threads of
execution that run concurrently and are not independent but have to
coordinate.
ISO C++ SG1 also uses this terminology, basically. For example, the
latches and barriers proposals go into the Concurrency TS, whereas
parallel algorithms are in the Parallelism TS.
How would you define both categories?