Best practices in regard to -D_TIME_BITS=64

Kaz Kylheku kaz@kylheku.com
Tue Jan 4 23:33:36 GMT 2022


Hi all,

What should be the best practice with regard to selecting the width
of time_t, which newer versions of glibc make possible via
-D_TIME_BITS=64?
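
For concreteness, a minimal sketch (mine, not taken from any glibc
documentation) showing the effect on a 32-bit glibc target. Note that
glibc accepts -D_TIME_BITS=64 only in combination with
-D_FILE_OFFSET_BITS=64:

    /* time_bits_demo.c: print the width of time_t.
       On a 32-bit glibc target:
         cc time_bits_demo.c && ./a.out
           -> sizeof(time_t) = 4
         cc -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 time_bits_demo.c \
           && ./a.out
           -> sizeof(time_t) = 8  */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        printf("sizeof(time_t) = %zu\n", sizeof(time_t));
        return 0;
    }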

My main concern is specifically with application developers. Should
individual applications detect that this option is available and use
it?

Or is this best left to the toolchain/distro people? If some 32-bit
platform (like an embedded system) decides to go to 64-bit time, that
can be rolled into the default compiler options.

If the application is some free software that is packaged into a distro,
then likewise the package maintainers can decide. They can put it into
the distro-wide CFLAGS, or whatever.

I have some apprehensions about just pulling the trigger and using it
in an individual application: what if that application ends up on a
system where it is the only component requesting 64-bit time, and the
mismatch causes interoperability issues with other pieces?
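
To make that concern concrete, here is a hypothetical illustration
(the struct and its fields are invented): a structure shared across a
library boundary silently changes layout when only one side is
rebuilt with the new flag.

    /* Hypothetical header shared between two components.  If one
       component is built with -D_TIME_BITS=64 and the other without,
       they disagree about the size of 'stamp', hence about the offset
       of 'flags' and the size of the whole struct; passing it across
       the boundary then reads and writes the wrong bytes.  */
    #include <time.h>
    #include <stdint.h>

    struct log_record {
        time_t   stamp;  /* 4 bytes in one build, 8 in the other */
        uint32_t flags;  /* lands at a different offset in each  */
    };

The same hazard applies to libc types such as struct timespec
embedded in application structures.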

This feels different from _FILE_OFFSET_BITS, because _FILE_OFFSET_BITS
does not solve some impending problem that becomes worse as we approach
a certain date. A program not manipulating large files never needs to
care, even if it manipulates files; whereas there will hardly be such
a thing as a program which manipulates time and which only needs to
handle "small time", such that it is perfectly fine with a 32-bit
time_t and the structures and functions built on it.

One can force downstream people to deal with this. If the program
detects that it is on a 32-bit platform where time_t is still 32 bits
by default, and where -D_TIME_BITS=64 is available and working, the
configure script can fail and make the user explicitly decide what
they want to do: stick with 32-bit time, or go to 64.

/That/ could be a best practice.
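
A sketch of the probe such a configure script might compile (the file
name and the compile-twice scheme are just my assumption of how one
would do it): a translation unit that builds only when time_t is at
least 64 bits wide, tried once with no extra flags and once with
-D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64.

    /* conftest-time64.c: compiles only if time_t is >= 64 bits.
       If the plain compile fails but the compile with
       -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 succeeds, we are on a
       platform where the choice exists, and configure can stop and
       demand an explicit answer from the user.  */
    #include <time.h>

    /* Negative array size = compile error when time_t is narrow. */
    static char time_t_is_64bit[(sizeof(time_t) >= 8) ? 1 : -1];

    int main(void)
    {
        (void) time_t_is_64bit;
        return 0;
    }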

Hmm, any thoughts?


