This is the mail archive of the mailing list for the glibc project.


Re: [PATCH] Count number of logical processors sharing L2 cache

On 05/24/2016 11:02 AM, H.J. Lu wrote:
> CAT applies to a specific thread/process. Cache sizes in glibc are applied
> to string/memory functions for all threads/processes.  They both try to avoid
> over-using shared cache by a single thread/process.  But they work at
> different levels and have different behaviors.  Glibc also uses the cache size
> to decide when to use non-temporal stores to avoid cache pollution and speed
> up writing a large amount of data.
Don't you mean that CAT applies to a physical core (and all of its logical processors)?

Might it be the case that the Linux kernel could migrate a thread or process
between cores configured with different CAT values, leaving the glibc heuristics
poorly tuned for some of those cores?

As I see it, the values computed by init_cacheinfo() are only average heuristics
for the core.

I agree that Florian has a point: these values may become less useful in the
presence of the dynamically changing L3<->core partitioning that CAT enables.

It would be silly, though, to allow a thread or process to migrate away from the
CAT-tuned core. The design of CAT is such that you want to isolate the tuned
application to one or more cores and use CAT to control the L3 allocation for
those cores.

In the case where you have a process pinned to a core, and CAT is used to limit
that core's L3 allocation, do the glibc heuristics computed in init_cacheinfo()
match the reality of the L3<->core allocation? Or would a lower CAT-tuned L3
value mean that glibc is mis-tuned for that core?

