Bug 13080 - clock() is unusable on 32-bit targets due to wrong type for clock_t
Alias: None
Product: glibc
Classification: Unclassified
Component: time
Version: unspecified
Importance: P2 normal
Target Milestone: ---
Assignee: Ulrich Drepper
Depends on:
Reported: 2011-08-11 20:36 UTC by Rich Felker
Modified: 2015-08-27 22:06 UTC

See Also:
Last reconfirmed:
fweimer: security-


Description Rich Felker 2011-08-11 20:36:26 UTC
glibc defines clock_t as signed long rather than unsigned long. On 32-bit targets where the value can wrap, this makes it impossible to subtract clock_t values to measure intervals (signed overflow results in undefined behavior). This issue is easily fixed by changing the definition of clock_t on 32-bit targets to unsigned long, and doing so should not result in any API or ABI breakage.
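The distinction the report draws can be illustrated with a minimal sketch (not glibc code, and `elapsed_ticks` is a hypothetical helper): with a 32-bit unsigned counter, subtraction is reduced modulo 2^32, so an interval measured across a wrap still comes out right, whereas the same subtraction on a signed 32-bit type overflows and is undefined behavior.

```c
#include <stdint.h>

/* Sketch: interval measurement with an unsigned 32-bit tick counter.
 * The subtraction wraps modulo 2^32, so even if `later` has wrapped
 * past zero while `earlier` has not, the result is the true interval.
 * With int32_t the same expression can overflow, which is undefined. */
uint32_t elapsed_ticks(uint32_t later, uint32_t earlier)
{
    return later - earlier;
}
```

For example, with `earlier = 0xFFFFFFF0` and `later = 5` (the counter has wrapped), the difference is still the correct 21 ticks.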
Comment 1 Ulrich Drepper 2011-08-29 18:43:24 UTC
(In reply to comment #0)
> This issue is easily fixed by changing the definition of clock_t on 32-bit
> targets to unsigned long, and doing so should not result in any API or ABI
> breakage.

Of course it breaks compatibility.  All C++ interfaces with clock_t parameters are affected.  And there is no reason to fear overflow, since it works fine on all supported platforms.
Comment 2 Rich Felker 2011-08-30 18:07:34 UTC
I haven't worked out an example yet, but I suspect you can construct a case where gcc will optimize out a necessary comparison due to the fact that signed arithmetic cannot overflow.  I agree it's unfortunate that fixing this bug would break C++ functions using clock_t arguments, but this is a genuine bug and it will probably eventually have visible effects (possibly deadlock or random hour-long sleeps) as optimizers get more and more aggressive. And of course, code using clock_t will *always* break when compiling with trapping overflow mode.
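The kind of case being suspected here can be sketched as follows (an illustrative example, not one from the bug report): a wraparound check that is well-defined for unsigned operands but relies on signed overflow when written with a signed type, where the compiler is entitled to assume the overflow never happens and fold the test to false.

```c
#include <stdint.h>

/* Well-defined: unsigned addition wraps modulo 2^32, so this test
 * reliably detects that t + delta wrapped past zero. */
int wrapped_unsigned(uint32_t t, uint32_t delta)
{
    return t + delta < t;
}

/* Undefined when t + delta overflows: because signed overflow is UB,
 * an optimizer may assume it cannot occur and compile this function
 * to return 0 unconditionally. */
int wrapped_signed(int32_t t, int32_t delta)
{
    return t + delta < t;
}
```

The unsigned version behaves predictably; the signed one is exactly the pattern that breaks under aggressive optimization or `-ftrapv`.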
Comment 3 Andreas Schwab 2011-08-31 06:53:44 UTC
You can easily cast yourself.  Breaking the ABI is not an option.
Comment 4 Rich Felker 2011-08-31 12:25:31 UTC
Proposing that one "fix" strictly conforming ISO C applications to work around a buggy implementation that won't be fixed for the sake of bug-compatibility with existing binaries is a behavior I'd expect from major proprietary OS vendors...

In any case, there's no such portable workaround anyway. To cast to an unsigned type, you'd have to know *which* unsigned type to use. Using one that's too large will break your app due to a large discontinuity at one of the points where the high bit changes. You'd have to guess the corresponding unsigned type, which is fairly easy on mainstream unix-like systems (it's going to be uint32_t or uint64_t; just check the size) but impossible to do in a general, portable way where your code would continue to work on exotic systems (there may not even be an unsigned type with the same width due to padding bits).
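The non-portable workaround being described might be sketched like this (`clock_diff` is a hypothetical helper, and the sketch assumes clock_t is exactly 32 or 64 bits with no padding bits, which ISO C does not guarantee):

```c
#include <stdint.h>
#include <time.h>

/* Sketch of the workaround: cast to the unsigned type whose width
 * matches sizeof(clock_t), so the subtraction wraps at the right
 * modulus.  This works on mainstream Unix-like systems but is not
 * portable: ISO C does not promise such a type exists. */
uint64_t clock_diff(clock_t later, clock_t earlier)
{
    if (sizeof(clock_t) == 4)
        return (uint32_t)later - (uint32_t)earlier;
    else
        return (uint64_t)later - (uint64_t)earlier;
}
```

Casting to a type wider than clock_t would not do: the moment the counter's sign bit flips, the difference jumps by 2^32 instead of wrapping, which is the "large discontinuity" mentioned above.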

Perhaps this could be fixed with symbol versioning or by having clock_t conditionally defined correctly (#ifndef __cplusplus)? Obviously the latter would not help C++ apps but at least the issue would be fixed for C.
Comment 5 Jakub Jelinek 2011-08-31 12:34:55 UTC
The comment about strictly conforming code is of course nonsense, because ISO C99 doesn't state that the clock_t type must be unsigned, all it says is that it is an arithmetic type capable of representing times.
Comment 6 Rich Felker 2013-05-01 22:01:59 UTC
C99 specified for the clock() function:

"If the processor time used is not available or its value cannot be represented, the function returns the value (clock_t)(-1)."

And footnote 275: "In order to measure the time spent in a program, the clock function should be called at the start of the program and its return value subtracted from the value returned by subsequent calls."

I read the footnote as implying that, except in the case where (clock_t)-1 has been returned, programs can rely on well-defined behavior if they subtract the return values of two different invocations of clock(). Moreover, I believe returning a negative value other than (clock_t)-1 (which will only be negative if clock_t is a signed type) is non-conforming. If clock_t is to remain signed, then the clock() function should detect overflow and return -1 once overflow has occurred. Since CLOCKS_PER_SEC is required by SUS to be 1000000, overflow will occur very quickly on 32-bit systems, making the clock() function essentially useless...
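The "very quickly" claim is easy to check with back-of-envelope arithmetic: with CLOCKS_PER_SEC fixed at 1000000 by SUS, a signed 32-bit clock_t can only count up to INT32_MAX ticks before overflowing.

```c
/* How many seconds of CPU time fit in a signed 32-bit clock_t when
 * CLOCKS_PER_SEC is 1000000 (as SUS requires)?
 * INT32_MAX / 1000000 = 2147 seconds, i.e. roughly 36 minutes. */
long clock_overflow_seconds(void)
{
    return 2147483647L / 1000000L;
}
```

So any process accumulating more than about 36 minutes of CPU time on such a target runs clock() into signed overflow.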