This is the mail archive of the mailing list for the glibc project.
Re: [PATCHv2 2/4] Cleanup hp-timing defines
On 07/02/2019 12:00, Wilco Dijkstra wrote:
> Hi Adhemerval,
>> I think since we are touching it, we should aim to clean up the hp-timing
>> usage not only for benchtests. Current internal hp-timing usage should be
>> aimed at time profiling, and we can clean up its internal usage since:
> Yes it makes sense to remove all internal use of hp-timing except for
> benchmarking and rtld profiling.
>> - clock_gettime, clock_settime, clock_getres, and clock_nanosleep support
>> for CLOCK_THREAD_CPUTIME_ID and CLOCK_PROCESS_CPUTIME_ID is used
>> neither on Linux nor on Hurd. It also means that, assuming the clock_*
>> syscalls always support such clock ids, pthread_clock_gettime.c
>> and pthread_clock_settime.c are also unused. Which also leads to
>> the note that _dl_cpuclock_offset is set but not used on any of
>> the hp-timing architectures.
> Getting rid of the badly hacked emulation of clock_gettime etc allows for a
> lot of code to be removed. It's clearly not needed for Linux, but what about Hurd?
Hurd currently does not use the i686 hp-timing implementation and thus
does not support CLOCK_THREAD_CPUTIME_ID and CLOCK_PROCESS_CPUTIME_ID
for clock_gettime, clock_settime, clock_getres, and clock_nanosleep.
I am not sure how complex it would be to implement this on Hurd.
Most of the __pthread_clock_gettime/__pthread_clock_settime nptl
implementation is generic, but Hurd will need at least to:
- Add the cpuclock_offset on 'struct __pthread' and ensure it is
correctly initialized on thread creation.
- Implement __get_clockfreq, which mostly requires kernel support
(as a side note, with my proposed cleanup we can remove the Linux
__get_clockfreq support).
- For CLOCK_THREAD_CPUTIME_ID on architectures that support hp-timing,
a thread can 'encode' a tid in the clock_id to set/get the timer for
a different thread (done using __find_thread_by_id). This is not
used anymore on Linux, since it uses a syscall for the clock_id, but
the generic implementation still uses it.
Also, taking into consideration that providing CLOCK_PROCESS_CPUTIME_ID
and CLOCK_THREAD_CPUTIME_ID has multiple issues (with some details
in the man-pages for the referred functions), I think if Hurd eventually
wants to support them it should either implement them in a kernel
facility (or something related, due to its architecture) or in a
system-specific implementation (on hurd/).
Using hp-timing for such clock_ids is just hacky and I see no sense
in using it in a generic interface.
>> - same for sysconf, where assuming CLOCK_THREAD_CPUTIME_ID and
>> CLOCK_PROCESS_CPUTIME_ID support we can get rid of hp-timing usage.
>> - the remaining 3 usages of hp-timing for some sort of randomness
>> at resolv/res_mkquery.c, resolv/res_send.c, and sysdeps/posix/tempname.c
>> could be replaced with clock_gettime (CLOCK_MONOTONIC) instead.
>> If randomness is really required I think we should refactor to
>> use a better source (getrandom/getentropy).
> In all cases we don't need true randomness, just a fast random number to
> reduce collisions. It seems best to abstract these cases into an internal interface.
It seems reasonable; I will refactor my patchset to add this internal
interface.
>> Also, I understand why your refactor removes ld.so profiling for alpha
>> (since its hp-timing support is sketchy at best and it simplifies
>> the benchtests' inclusion of the generic implementation). However, I think
>> we can still maintain its support by enabling the architecture's hp-timing
>> support only for rtld.
> Yes that's feasible.
>> Based on that I created a branch with the above refactor along with
>> your suggestion to add a generic hp-timing based on clock_gettime. What
>> do you think?
> It looks like a good cleanup. Are you planning to post patches for this, either
> subsuming my patches or on top of my cleanup?
I was planning to get your feedback first; if it is ok, I plan to send
the patchset along with some more cleanups (__get_clockfreq removal
and an internal interface for random numbers based on clock_gettime).