Re: gcc4 and local statics
On Wed, May 18, 2005 at 11:51:55AM -0700, Brian Dessent wrote:
>Corinna Vinschen wrote:
>
>> While this might help to avoid... something, I'm seriously wondering
>> what's wrong with this expression. Why does each new version of gcc
>> add new incompatibilities?
>
>I think I've figured this out. PR/13684 added thread safety to
>initialization of local statics.[1] It does this by wrapping the call
>to the constructor in __cxa_guard_acquire and __cxa_guard_release,
>which are supposed to prevent two threads from both calling the
>constructor at the same time.
>
>The problem is that in Cygwin, these functions are defined as no-ops in
>cxx.cc. That means that GCC calls __cxa_guard_acquire and expects a
>nonzero return value if the guard was acquired, but in our case the
>function always returns zero (or rather, it does nothing and eax
>happened to contain zero before the call), and so gcc never calls the
>constructor.
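
Just to make the mechanism concrete, here is a rough sketch (not GCC
output or Cygwin source; Foo, get_foo and the guard layout details are
only illustrative) of what gcc4 conceptually generates for a
function-local static:

#include <new>

extern "C" int  __cxa_guard_acquire (long long *);
extern "C" void __cxa_guard_release (long long *);

struct Foo
{
  Foo ();                                 // some nontrivial constructor
};

Foo::Foo () { }

Foo *
get_foo ()
{
  static long long guard;                 // one guard word per local static
  static char storage[sizeof (Foo)];      // alignof (Foo) == 1 here

  // Fast path: the first byte of the guard is set once construction is done.
  if (!*reinterpret_cast<char *> (&guard))
    {
      // A nonzero return means this thread must run the constructor.
      if (__cxa_guard_acquire (&guard))
        {
          new (storage) Foo;              // construct exactly once
          __cxa_guard_release (&guard);   // mark done and wake any waiters
        }
    }
  return reinterpret_cast<Foo *> (storage);
}

With the no-op versions in cxx.cc the acquire call happens to return 0,
so the constructor branch is never taken -- which is exactly the
failure described above.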
>
>There seem to be several possible ways to go here:
>
>1. Compile with -fno-threadsafe-statics.
>2. Implement an actual muto in __cxa_guard_*.
>3. Remove Cygwin's no-op __cxa_guard_* and rely on the libstdc++
>provided ones.
>4. Move the variable to file/global scope.
>
>This recently came up on an Apple list[2], apparently in the context of
>a vendor trying to compile their kernel driver against Tiger using
>gcc4. It looks like they're going with either #4 or #1.
>
>I tested #1 and it indeed cures the failing mmap testsuites.
>
>For Cygwin's purposes it seems that we need to decide if two threads
>could ever potentially call this function at the same time. If so, then
>#1 is out. Correct me if I'm wrong but Cygwin does not use anything
>from libstdc++ so #3 is out as well. In this particular case of
>'granularity' it seems too trivial a thing to spend much time
>implementing actual locking for. But then you have to determine
>whether any other local statics will suffer the same fate; if so,
>#2 starts to become reasonable, otherwise I'd say #4.
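
For what it's worth, if #2 ever did look worthwhile, the shape of it
would be roughly as below. This is purely hypothetical code, using a
crude interlocked spin lock as a stand-in for a real Cygwin muto, and
it ignores __cxa_guard_abort and the memory-ordering care a real
implementation would want:

#include <windows.h>

// The guard is a 64-bit word; the compiler's inline fast path only
// tests the first byte, the rest is free for the runtime to use.

extern "C" int
__cxa_guard_acquire (long long *g)
{
  char *done = reinterpret_cast<char *> (g);        // init-complete flag
  LONG *lock = reinterpret_cast<LONG *> (g) + 1;    // second 32-bit word

  if (*done)                                // already initialized
    return 0;
  while (InterlockedExchange (lock, 1))     // crude spin lock
    Sleep (0);                              // yield until the owner finishes
  if (*done)                                // another thread won the race
    {
      InterlockedExchange (lock, 0);
      return 0;
    }
  return 1;                                 // caller must run the constructor
}

extern "C" void
__cxa_guard_release (long long *g)
{
  *reinterpret_cast<char *> (g) = 1;                         // mark initialized
  InterlockedExchange (reinterpret_cast<LONG *> (g) + 1, 0); // drop the lock
}

Whether even that much is worth carrying around is exactly the
trade-off being weighed above.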
Now that we know what's causing this, I guess I'd have to say that #4
is the way to go. The expense of a mutex for this case doesn't seem
worth it.
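
Concretely, #4 amounts to something like the following. It's only a
hypothetical sketch -- compute_granularity and page_size stand in for
whatever the real code does with 'granularity':

#include <stddef.h>

// Hypothetical stand-in for however 'granularity' is really obtained.
static size_t compute_granularity () { return 65536; }

#ifdef LOCAL_STATIC            /* the form that trips over gcc4's guards */
size_t
page_size ()
{
  // gcc4 wraps this initializer in __cxa_guard_acquire/_release.
  static size_t granularity = compute_granularity ();
  return granularity;
}
#else                          /* option #4: move it to file scope */
// Initialized once during startup; no guard functions are emitted for
// namespace-scope statics, so the no-op stubs in cxx.cc no longer matter.
static size_t granularity = compute_granularity ();

size_t
page_size ()
{
  return granularity;
}
#endif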
cgf