This is the mail archive of the mailing list for the glibc project.


Re: ctermid: return string literal, document MT-Safety pitfall

On Fri, 2014-11-21 at 07:30 -0200, Alexandre Oliva wrote:
> On Nov 19, 2014, Torvald Riegel <> wrote:
> > On Tue, 2014-11-18 at 20:23 -0200, Alexandre Oliva wrote:
> >> On Nov 17, 2014, Torvald Riegel <> wrote:
> >> 
> >> > On Fri, 2014-11-14 at 14:53 -0200, Alexandre Oliva wrote:
> >> >> On Nov 14, 2014, Torvald Riegel <> wrote:
> >> >> 
> >> >> > AFAICT memset_s is still a sequentially-specified function.
> >> >> 
> >> >> How can you tell?  It's not like the standard explicitly says so, is it?
> >> I'm asking how do you tell in general.
> > If the function is doing something, for example a store, and the
> > function does not specify what it does internally and which
> > inter-thread happens-before relations this creates, then there's
> > nothing specified that makes this not a data race if you try to look
> > at an intermediate state from another thread.
> Which is why I've resorted to non-threaded means of inspection of
> intermediate states.  I think we differ in whether “the function does
> not specify what it does internally”.

Yes, that seems to be the case.

> If the definition of the function
> said “copy n chars from src[0..n-1] to dest[0..n-1] respectively”,
> besides any pre- and post- conditions, then it *does* specify what it
> does internally.

I think we differ on whether that specifies the details of an
implementation, or what condition will hold after the function returns.

The way I read it, and the way I think this needs to be understood to be
actually generally applicable, is that it effectively says: "When the
function has returned (and thus finished execution), it will have copied
n chars from src[0..n-1] to dest[0..n-1] respectively."

One reason why I think that this is indeed the intended verbose
specification is that it is clear how to inspect the state after the
function has returned: the function is not executing anymore, a copy can
be easily understood, and there's no restriction on how you actually
look at the state (e.g., when you rely on the post-condition, or want to
check whether it holds).

In contrast, while the function is still running, several other things
would have to be specified for this to make sense and to prevent it from
being interpreted differently (which a standard wouldn't want).  For
example, which steps does copying a char take?  It sounds trivial in
this example, but what disallows this from being a bit-wise copy?
If we look at other functions like qsort, which are specified to sort
the array, then there are various ways to do sorting; do you think it
says anything except that the array is sorted when the function has
returned? (Ignoring the comparison callbacks, which likewise don't
reveal which sorting algorithm is used.)
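The bit-wise copy question can be made concrete.  The following is a
hypothetical sketch (not any real implementation): a copy routine that
transfers each byte one *bit* at a time.  On return, the usual memcpy
post-condition holds, so nothing in a purely post-condition reading
distinguishes it from a byte-wise loop; but an observer interrupting the
inner loop would see dest[i] holding a mix of old and new bits, a value
that is neither the previous byte nor the copied one.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical: copies n bytes one bit at a time.  The post-condition
 * (dest equals src) holds on return, but mid-byte, dest[i] holds a
 * partially copied value -- neither the old byte nor the new one. */
void bitwise_copy(unsigned char *dest, const unsigned char *src, size_t n)
{
    for (size_t i = 0; i < n; i++)
        for (unsigned bit = 0; bit < 8; bit++) {
            unsigned char mask = (unsigned char)(1u << bit);
            dest[i] = (unsigned char)((dest[i] & ~mask) | (src[i] & mask));
        }
}
```

If only the state after return is specified, this routine is as
conforming as a byte-wise loop, which is exactly the point at issue.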

Thus, if we consider these more complex functions, and you agree that
they only guarantee the effects that are in place when they have
returned, wouldn't it then make most sense to take this as the default
way to understand the specification?  Why would it be worthwhile to
special-case simpler functions such as memcpy?

> Not the order in which the chars are copied, for sure,
> but still, it says the function should copy each and every one of those
> chars.  It doesn't state how to copy a char, but anything other than
> load from src[i] and store the loaded value in dest[i] is hardly a copy.
> So while this makes room for an interrupted copy to leave dest[i] in an
> unspecified state that could be its earlier value or the newly-copied
> one, it would be hard to argue that anything else complies with the
> behavior specification enclosed in quotes above.

Why couldn't it be a partially copied value?  Where does the standard
disallow a bit-wise copy, or require atomic operations for every access
to dest?

> I can see value in making simplifying assumptions to reason about
> behavior in the presence of multiple threads, and I realize that the
> no-data-race requirements can enable *reasoning* about sequential
> functions in such contexts as if only the pre- and post-conditions
> mattered, but I do not agree that applying similar reasoning to go
> backwards is logically sound.
> I mean, “I perceive this as a sequential function, which enables
> simplifying assumptions about internal behavior in multi-threaded
> contexts, therefore I can disregard the explicit behavior specification
> and only look at explicit or inferred (pre- and?) post-conditions to
> reason in any context whatsoever, or to implement the function however I
> like, even deviating from the specification, as long as it still
> satisfies the post-conditions when given the pre-conditions” doesn't
> hold, because there are issues that arise besides those that come up in
> multi-threaded contexts, to which the simplifying assumptions for
> reasoning about multi-threaded contexts do not apply.

I'm not sure I understand what you're saying.

First, what precisely constitutes the behavioral specification is
something we still disagree about.  From my perspective, guaranteeing
the effects of a (sequential, non-synchronizing, non-volatile, ...)
function when it returns is perfectly in line with the specifications.
So, from my perspective, nothing is disregarded.

Compilers rely on the as-if rule to make optimizations.  For example,
they may store speculative values if they can prove that some value will
be written to a variable in all executions of the function.  The
speculatively stored value might never be the value the abstract machine
actually computes.  Unless I misunderstand you, such a compiler
optimization would be incorrect in your opinion, because it stores a
value that the abstract machine might never store.
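The kind of transformation meant here can be sketched as follows (a
hypothetical example, with made-up function names; real compilers do
this on intermediate representations, not source): if every path through
a function stores to *p, the as-if rule lets the compiler hoist a
speculative store, and the intermediate value it writes may be one the
abstract machine never stores at that location.

```c
#include <assert.h>

/* Source form: the abstract machine stores either a+1 or a-1 to *p,
 * never anything else. */
int f_abstract(int *p, int a)
{
    if (a > 0)
        *p = a + 1;
    else
        *p = a - 1;
    return *p;
}

/* A transformation the as-if rule permits for data-race-free code:
 * store a speculative value first, then overwrite it on one path.
 * Between the two stores, *p holds a - 1 even when a > 0 -- a value
 * the abstract machine never stores there in that execution. */
int f_transformed(int *p, int a)
{
    *p = a - 1;          /* speculative store */
    if (a > 0)
        *p = a + 1;      /* corrected on this path */
    return *p;
}
```

For a single-threaded caller the two versions are indistinguishable;
only an observer of the intermediate state can tell them apart.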

C11 (N1570) includes the following sentences:
"An implementation might define a one-to-one correspondence between
abstract and actual semantics: at every sequence point, the values of
the actual objects would agree with those specified by the abstract
semantics. The keyword volatile would then be redundant.
Alternatively, an implementation might perform various optimizations
within each translation unit, such that the actual semantics would agree
with the abstract semantics only when making function calls across
translation unit boundaries."

Wouldn't the alternative implementation be unable to provide what you
argue the standard requires?

> > Well, the comparison callbacks can't just look at will at every piece
> > of intermediate state.
> Why is that?  I mean, what, if any, part of the relevant standards says
> so?
> > So, I agree that these *specific* memory locations are intermediate
> > states, but the comparison functions are not guaranteed to be able to
> > look at other elements of the arrays and find sensible information in
> > those.
> The important question here IMHO is whether looking at them invokes
> undefined behavior, or just yields unspecified values, possibly narrowed
> to a subset of all values that might be held by the types of the objects
> in those locations, if there can even be valid assumptions about the
> types of those memory locations.

I'm not sure about the comparison functions.  But even if there should
be a stronger requirement for the comparison functions, this wouldn't
imply that accesses from other threads wouldn't be a data race.

> >> I'm just trying to figure out what the
> >> heck you mean by “sequential function”, and by “sequential
> >> specification”.
> > What I mean is that they are not concurrent specifications that make
> > guarantees about states of an unfinished execution as visible to
> > concurrent observers.  They only make guarantees about the state after a
> > function has finished executing.  (Sorry if I'm using shared-memory
> > synchronization terminology here, but given that we want to distinguish
> > between concurrent and non-concurrent, that seems to make sense.)
> Thanks.  That definitely makes sense, when the goal is to reason about
> shared-memory multi-threaded (henceforth SMMT) issues.  But there are
> other issues for which this distinction, or the simplifications in SMMT
> reasoning that follow from it, don't apply, and may even contradict
> other standard-imposed requirements.  So please take the “sequential
> function” claims with a grain of salt, and don't use them to discard
> parts of the specification you don't generally have to worry about when
> you're thinking of SMMT, when the context is not limited to SMMT.

So which issues are you thinking about, and how do they affect this?

> >> I had understood the latter had to do with
> >> specifications limited to pre- and post-conditions, but the standards
> >> we've been talking about do not limit function specifications to that.
> > Why do you think that is the case?
> What does “that” mean?  That I had understood it in a certain way?  Or
> that the standards do not limit specs to pre- and post-conditions?

The latter.

> > The callback, or composition of functions in general, is one thing you
> mentioned, and I hope I was able to convince you that this doesn't give
> > guarantees about the caller (e.g., qsort) to the callee (e.g.,
> > comparison function), except when those guarantees overlap with
> > preconditions for the callee.
> I'm afraid you haven't, but you've helped me understand our differences
> in reasoning, because I won't turn specifications of behavior into pre-
> and post-conditions and label a function as sequential to then pretend
> the original specifications did not exist and did not impose any other
> requirements that are not necessarily relevant for SMMT contexts, but
> that might be in other contexts.
> > "A function that may be safely invoked by an application while the
> > asynchronous form of cancellation is enabled."
> > That doesn't really tell me a lot :)  I can interpret "safely invoked"
> > to at least mean that the mere act of cancellation will not break
> > anything.  But it doesn't tell me which state one can expect after
> > cancellation.
> Yup.  Again, the important question is: is it undefined or unspecified?

Do you think that unspecified would really help you a lot?  What do you
think it means?  Is it all values allowed by a type?  Or something else?

> > One way to define safety would be to say that a cancelled function
> > should either take effect or not, but never partially take effect.  IOW,
> > it's either just the precondition or the postcondition that holds.
> This would be a way to extend the simplifying assumptions of sequential
> functions to some other contexts.  Sequential functions would
> essentially be regarded as, and required to behave as, atomic.

Atomic if cancelled, yes.
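One way an implementation could satisfy "either the precondition or the
postcondition holds" under cancellation is to do all the work on private
storage and publish the result with a single pointer store.  The sketch
below is hypothetical (the function and helper names are made up, not
glibc API): cancellation anywhere before the final assignment leaves the
old state fully intact; after it, the new state is fully in place.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Helper: duplicate a string with malloc (avoids assuming strdup is
 * declared under strict conformance options). */
static char *dup_str(const char *s)
{
    size_t len = strlen(s) + 1;
    char *p = malloc(len);
    if (p != NULL)
        memcpy(p, s, len);
    return p;
}

/* Hypothetical all-or-nothing update: build the new buffer privately,
 * then install it with one pointer assignment.  If the thread were
 * cancelled before the "commit" store, *slot would still point to the
 * old, fully consistent buffer; no caller ever observes a partially
 * copied buffer through *slot. */
int replace_atomically(char **slot, const char *new_contents)
{
    char *fresh = dup_str(new_contents);  /* private work, cancellable */
    if (fresh == NULL)
        return -1;
    char *old = *slot;
    *slot = fresh;                        /* the single commit point */
    free(old);
    return 0;
}
```

A real implementation with concurrent readers would additionally need an
atomic publishing store (e.g., C11 atomic_store); the sketch only
addresses cancellation occurring between steps of the abstract machine.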

> > Another option would be to allow specified intermediate steps to take
> > effect.  For 2), we could say that cancellation happens anywhere between
> > the steps the abstract machine would do, but not within a step.  This
> > would be satisfied under the requirement you assumed for memset and
> > strcpy implementations, I believe.
> Yeah, with the caveat that the order of steps of the abstract machine
> that may be used to carry out the required behavior is not specified.
> So, interrupting memset, you might observe that dest[i+1] is modified
> while dest[i] wasn't yet, or vice-versa.

The problem with that is that it still would need to be specified what
the actual steps are.  This is done explicitly for memset_s, but not for
memset, IMO.  It's not only the order that the standard mentions
additionally for memset_s, though -- it also explicitly requires that
the implementation is strictly equivalent to the abstract machine:
"Unlike memset, any call to the memset_s function shall be evaluated
strictly according to the rules of the abstract machine as described in
(5.1.2.3)."

Why would the standard add these requirements for memset_s if memset
already had them?
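Oliva's earlier point, that an interrupted memset might show dest[i+1]
modified while dest[i] is not, is easy to exhibit: nothing in a
post-condition reading forbids filling backwards (or in vector-sized
chunks).  A hypothetical sketch, with a made-up function name:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical memset-like routine that fills backwards.  The final
 * state is identical to a forward fill, but an interruption mid-loop
 * would find high indices already set while low indices still hold
 * their old values -- the opposite of what a forward loop would show. */
void fill_backwards(unsigned char *dest, unsigned char c, size_t n)
{
    for (size_t i = n; i > 0; i--)
        dest[i - 1] = c;
}
```

Since only the state at return is specified for memset, both fill orders
(and many others) are equally conforming, which is why the intermediate
states are not something a cancelled or interrupted caller can rely on.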

> > For 3), it could be cancellation between any of the atomic steps, unless
> > otherwise specified.  For condvar wait, for example, this could be one
> > of the three parts: lock release, wakeup, lock acquisition.
> Eeek, it would be Really Bad (TM) IMHO if a condvar wait could be
> canceled while the lock is not held: this could mess with enclosing
> cleanup handlers that, among other things, release the lock.

condvar wait was just an example.  (It's a bad one, because it's not a
cancellation point.)

> What states can cancellation cleanup handlers reliably inspect, anyway?
> Are they to be regarded as running in async signal context, so that they
> can't reliably access local state and are very limited in global state?
> Or are they allowed to access local state, plus any global state that
> could be accessed after pthread_join()ing the canceled thread?

I don't know.  I'd focus on what you call the global state first.

> >> Does this mean it is safe to access the variables
> >> that were partially modified by the interrupted memcpy/strcpy/whatever,
> >> and that this provides means to safely inspect intermediate states?
> > For normal memcpy, strcpy, and other functions in group 1), the
> > intermediate states aren't defined
> Again, not defined or not specified?

Again, what's your definition of "unspecified"?
