dlopen and memory order guarantees with threads

edA-qa mort-ora-y eda-qa@disemia.com
Sun Mar 4 07:30:00 GMT 2012

On 03/03/2012 10:43 PM, Carlos O'Donell wrote:
> In general coherency it is an expected guarantee that userspace
> has from the kernel and the underlying hardware.

This is why I ask, however: under the new C++11 and C11 standards we
are told that full and immediate visibility should not be assumed. We
do have some basic coherency guarantees -- such as that we can't have
conflicting writes -- but timely visibility is still an issue.

As you said before, if you were to allocate memory on your own, then
read in the shared library directly, you wouldn't have the same
guarantees on visibility. You could, on some current hardware, actually
expose a pointer to a different thread where the backing memory is not
yet current.

I might have to go digging through the Linux kernel code now to see
what happens. What if the guarantee isn't actually there and we're all
just getting lucky?  For example, on x86 we know we don't have this
problem, as on some other architectures. Also, since dlopen is such
a long series of function calls, it's possible that there just isn't
enough time to get an invalid pointer out. That is, by the time my
dlsym call returns, any modern processor has had more than enough time
to sync the relevant memory.

Also related, I guess, is that if we always create a new mapping
address, no processor would have had that address mapped at all, so
none of them can hold stale data for that space. So again, in this
case there can't really be a visibility issue.

Again, thanks for all the information, it is quite helpful.

edA-qa mort-ora-y
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
Sign: Please digitally sign your emails.
Encrypt: I'm also happy to receive encrypted mail.
