glibc defines clock_t as signed long rather than unsigned long. On 32-bit targets where the value can wrap, this makes it impossible to subtract clock_t values to measure intervals (signed overflow results in undefined behavior). This issue is easily fixed by changing the definition of clock_t on 32-bit targets to unsigned long, and doing so should not result in any API or ABI breakage.
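A minimal sketch of the interval-measurement pattern at risk (names and structure are illustrative, not taken from any particular program):

    #include <time.h>

    void measure(void)
    {
        clock_t start = clock();
        /* ... the code being timed ... */
        clock_t end = clock();

        /* If the tick counter wrapped between the two calls, "end" holds a
           large negative value while "start" holds a large positive one, so
           this subtraction overflows a signed 32-bit clock_t, which is
           undefined behavior.  With an unsigned clock_t the same expression
           would be well-defined modular arithmetic and would still yield the
           correct elapsed tick count. */
        clock_t elapsed = end - start;
        (void)elapsed;   /* silence unused-variable warnings in this sketch */
    }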
(In reply to comment #0)
> This issue is easily fixed by changing the definition of clock_t on 32-bit
> targets to unsigned long, and doing so should not result in any API or ABI
> breakage.

Of course it breaks compatibility. All C++ interfaces with clock_t parameters are affected. And there is no reason to fear the underflows; it works fine on all supported platforms.
I haven't worked out an example yet, but I suspect you can construct a case where gcc will optimize out a necessary comparison, based on the assumption that signed arithmetic cannot overflow. I agree it's unfortunate that fixing this bug would break C++ functions using clock_t arguments, but this is a genuine bug and it will probably eventually have visible effects (possibly deadlock or random hour-long sleeps) as optimizers get more and more aggressive. And of course, code using clock_t will *always* break when compiled with trapping overflow mode.
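Here is the shape of code I have in mind (a hypothetical sketch; I haven't verified which gcc versions actually drop the check):

    #include <time.h>

    /* Returns nonzero once "interval" ticks have elapsed since "start".  The
       "deadline < start" test is meant to catch the case where the addition
       wrapped, but with a signed clock_t gcc may assume the addition cannot
       overflow, conclude the test is always false, and remove it entirely,
       which is exactly the kind of dropped comparison described above. */
    int deadline_passed(clock_t start, clock_t interval)
    {
        clock_t deadline = start + interval;

        if (deadline < start)   /* intended overflow check; may be optimized out */
            return 1;

        return clock() >= deadline;
    }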
You can easily cast it yourself. Breaking the ABI is not an option.
Proposing that one "fix" strictly conforming ISO C applications to work around a buggy implementation that won't be fixed, for the sake of bug-compatibility with existing binaries, is a behavior I'd expect from major proprietary OS vendors...

In any case, there's no such portable workaround anyway. To cast to an unsigned type, you'd have to know *which* unsigned type to use. Using one that's too large will break your app due to a large discontinuity at one of the points where the high bit changes. You'd have to guess the corresponding unsigned type, which is fairly easy on mainstream unix-like systems (it's going to be uint32_t or uint64_t; just check the size) but impossible to do in a general, portable way where your code would continue to work on exotic systems (there may not even be an unsigned type with the same width, due to padding bits).

Perhaps this could be fixed with symbol versioning, or by having clock_t conditionally defined correctly (#ifndef __cplusplus)? Obviously the latter would not help C++ apps, but at least the issue would be fixed for C.
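To put concrete numbers on the discontinuity point, here is a self-contained sketch assuming a 32-bit signed clock_t (int32_t/int64_t stand in for clock_t and the too-wide cast):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Tick values immediately before and after a 32-bit signed wrap. */
        int32_t before = INT32_MAX;
        int32_t after  = INT32_MIN;

        /* Casting to the unsigned type of the *same* width gives modular
           arithmetic, so the elapsed count comes out right (1 tick). */
        uint32_t same_width = (uint32_t)after - (uint32_t)before;

        /* Casting to a wider unsigned type sign-extends first, so the
           difference comes out near 2^64 instead of 1: the large
           discontinuity mentioned above. */
        uint64_t too_wide = (uint64_t)after - (uint64_t)before;

        printf("same width: %u\n", same_width);
        printf("too wide:   %llu\n", (unsigned long long)too_wide);
        return 0;
    }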
The comment about strictly conforming code is of course nonsense, because ISO C99 doesn't state that the clock_t type must be unsigned; all it says is that it is an arithmetic type capable of representing times.
C99 specifies for the clock() function: "If the processor time used is not available or its value cannot be represented, the function returns the value (clock_t)(-1)." Footnote 275 adds: "In order to measure the time spent in a program, the clock function should be called at the start of the program and its return value subtracted from the value returned by subsequent calls."

I read the footnote as implying that, except in the case where (clock_t)-1 has been returned, programs can rely on well-defined behavior when they subtract the return values of two different invocations of clock(). Moreover, I believe returning a negative value other than (clock_t)-1 (which will only be negative if clock_t is a signed type) is non-conforming. If clock_t is to remain signed, then the clock() function should detect overflow and return -1 once overflow has occurred.

Since CLOCKS_PER_SEC is required by SUS to be 1000000, overflow will occur very quickly on 32-bit systems (a signed 32-bit clock_t wraps after about 2147 seconds, roughly 36 minutes of CPU time), making the clock() function essentially useless...
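For reference, the usage pattern the footnote describes looks something like this (an illustrative sketch, not text from the standard):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        clock_t start = clock();

        /* ... the work being measured ... */

        clock_t end = clock();

        /* Per C99, (clock_t)-1 means the processor time is unavailable. */
        if (start == (clock_t)-1 || end == (clock_t)-1) {
            fputs("processor time not available\n", stderr);
            return 1;
        }

        /* With CLOCKS_PER_SEC == 1000000 and a signed 32-bit clock_t, this
           subtraction is only meaningful for roughly the first 36 minutes
           of CPU time. */
        printf("CPU time: %f seconds\n",
               (double)(end - start) / CLOCKS_PER_SEC);
        return 0;
    }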