This is the mail archive of the libc-alpha@sourceware.org
mailing list for the glibc project.
Re: Consensus on MT-, AS- and AC-Safety docs.
- From: Torvald Riegel <triegel at redhat dot com>
- To: Alexandre Oliva <aoliva at redhat dot com>
- Cc: "Carlos O'Donell" <carlos at redhat dot com>, GNU C Library <libc-alpha at sourceware dot org>, "Joseph S. Myers" <joseph at codesourcery dot com>, Rich Felker <dalias at aerifal dot cx>
- Date: Sun, 01 Dec 2013 19:06:05 +0100
- Subject: Re: Consensus on MT-, AS- and AC-Safety docs.
- Authentication-results: sourceware.org; auth=none
- References: <528A7C8F dot 8060805 at redhat dot com> <52991C3B dot 9080701 at redhat dot com> <ord2lhjnku dot fsf at livre dot home>
On Sun, 2013-12-01 at 00:45 -0200, Alexandre Oliva wrote:
> On Nov 29, 2013, "Carlos O'Donell" <firstname.lastname@example.org> wrote:
> > At present POSIX has no memory model,
> It does. In F2F conversation, Torvald retracted that assertion.
Speak for yourself, or at least be precise when summarizing what someone
else said.
> doesn't cover all of the richness of the atomics of recent C and C++
> standards, but a basic memory model in which they can fit in perfectly
> is there: concurrent writes or reads and writes to the same memory
> location, without intervening synchronization operations, invoke
> undefined behavior. This is the memory model that POSIX exposes to
> users of its interfaces.
That's not a complete definition, obviously. That's an attempt at
describing the rough idea behind a memory model.
> It doesn't preclude the use of any of the atomics, or even other
> features of the underlying hardware memory model, in the implementation,
> since the POSIX memory model applies to *users* of the interfaces it
> specifies; it doesn't matter if the implementation uses "magic" to
> implement the specified interfaces, as long as they behave as specified
> whenever users of the interfaces behave within well-defined boundaries
> (those that don't invoke undefined behavior).
> For the most part, the implementation memory model is that of the
> language in which it is implemented. I write for the most part because
> nothing stops the implementation from resorting to "magic" outside the
> implementation language (say asm code) for portions of the
> implementation that benefit from it. At that point, the hardware model
> is the limit.
> So, claiming there's no memory model is a double or even triple mistake:
> there is the memory model POSIX specifies for users of its interfaces,
> there is the implementation-language memory model shared by the
> interface implementation and its users, and there's the underlying
> hardware memory model. And claiming we're missing a memory model, let
> alone to define safety in terms of it, misses not only the existing
> model and the realization that multiple models may be operating at
> different abstraction layers, but also the fact that the portable
> interface is defined precisely so as to abstract away differences
> between the underlying hardware memory models! How could we have
> portably safe interfaces on different hardware with different memory
> models if the definition of safety depended on the underlying hardware
> memory model? If the premise was true, we couldn't.
I don't know who you think suggested making MT-Safe parametrized on
the HW memory model, but I haven't seen any such suggestion. Certainly
not from Carlos.
> > and no strict definition of safe.
> But there is an *exhaustive* list of all interfaces that are not
> MT-Safe, and a rationale for this qualification.
Are you saying that a list of cases that conflict with a certain
criterion (e.g., MT-Safe), including some rationale for why there is deemed
to be a conflict, equates to a strict definition?
If so -- and let's ignore for a while whether this would indeed
amount to a complete and strict definition --, then this would
mean that to understand the MT-Safe definition, users would have to go
through the whole exhaustive list of interfaces and the rationale and
build up their own assumptions about what should be MT-Safe or not.
Trying to define by giving examples typically doesn't work well, in
contrast to *illustrating* by giving examples.
> This, and the various
> other requirements imposed to various functions throughout the standard,
> makes the situations that raise safety issues and what POSIX expects
> implementations to do to avoid them very clear. The end result may not
> be a perfect match for any of the transactional consistency models,
You can't do without atomic entities at some level of the model. In the
worst case, 1-bit-wide accesses to memory will be atomic.
(In the shared-memory synchronization / distributed computing
literature, the low-level state-holding entities are called "registers";
just a pointer if you want to look at some of the background to this...)
> since there are explicit allowances for interactions and interleaving of
> concurrent executions,
If there are explicit allowances in some cases, great. That doesn't
conflict at all with having to reason about the parts that the functions
are broken up into using atomicity and/or sequential specifications.
(But, per Carlos' request, that should be discussed in a separate
thread.)
> but that doesn't make it too weak, too poorly
> defined, or in need of a major revamp. People have been able to make
> perfect sense of the MT-Safety notion for at least 2 decades.
First, I find it interesting that you call it a "notion" and do not
speak of a definition.
Second, define "perfect sense" and prove your claim.
> I recall
> studying so-named properties in Solaris man pages for multi-threaded
> projects I wrote as an undergrad back in 1994, maybe even 1993,
> including a multi-threading layer on top of reverse-engineered SunRPC.
> I recall writing and running some distributed and multi-threaded (toy)
> optimization programs in 1996, on dual- and six-processor SPARC machines
> that were already years old back then. The concept of MT-Safety was
> already well-defined back then;
Then I guess you should have no problem giving us a proper definition.
Or maybe you could give us a list of these other definitions. Or relate
these definitions to the choices I outlined in my other email (about how
to reason about correctness of functions with sequential specs in a
concurrent setting).
Also, why are you mentioning the years? Is that just some story, or do
you want to convey anything substantial? Memory models have been
discussed since at least the 70s, serializability is even older I
believe, and linearizability has been in use since the early 90s. So
what's the point?