V6: [PATCH] x86: Install <sys/platform/x86.h> [BZ #26124]

H.J. Lu hjl.tools@gmail.com
Mon Jun 29 16:44:51 GMT 2020

On Mon, Jun 29, 2020 at 9:14 AM Florian Weimer <fweimer@redhat.com> wrote:
> * H. J. Lu via Libc-alpha:
> > Small update:
> >
> > const struct cpu_features *
> > __x86_get_cpu_features (unsigned int max, int cpuid)
> > {
> >   if (cpuid)
> >     {
> >       if (max > COMMON_CPUID_INDEX_MAX)
> >         return NULL;
> >     }
> >   else if (max > USABLE_FEATURE_INDEX_MAX)
> >     return NULL;
> >   return &GLRO(dl_x86_cpu_features);
> > }
> >
> > This avoids returning NULL on a cpuid-array check when
> > COMMON_CPUID_INDEX_MAX is unchanged but USABLE_FEATURE_INDEX_MAX has
> > changed.
> I think these changes address the fundamental technical issues, thanks.
> The patch needs to be rebased on top of
> 4fdd4d41a17dda26c854ed935658154a17d4b906 ("x86: Detect Intel Advanced
> Matrix Extensions").

Yes, I have done that in my local repo.

> One thing I still dislike (sorry) is the asymmetry between the usable
> and feature checks.  For example, why is there a usability check for
> VAES, but not for AES?  I believe the reason is that VAES depends on
> AVX/AVX2, but AES only depends on SSE2.  But even that suggests to me
> that for 32-bit, there should be a usable gate for that (which is false
> if SSE2 support has been masked).

A CPU with AES must have SSE2.  I don't think we need an explicit check
for SSE2.

> I think it would be more consistent to expose the usable/feature
> distinction for all features, and carry that over to the internal ABI,
> too.  This way, we can give accurate reporting in cases where the
> usability turned out to be firmware-dependent in the end (as it happened
> with RDRAND).  That would need additional feature-specific work; by
> default, we would still report such features as unsupported at the CPU
> level.  Having both bits exposed in all cases also protects us against
> cases where we need to change the usability detection logic in a later
> release.

We can flip bits in the usable array via a parallel unusable array
alongside the cpuid array.  We can set the unusable bit if the OS doesn't
support the feature.

> There is a bit of a tension here regarding agility because new usable
> bits will only become set after glibc update.  But I don't see a way to
> avoid this, not without teaching programmers to bypass the usable checks
> (which leads to bugs, of course).
> The interface with the max and cpuid arguments is quite close to the one
> I proposed further up-thread.  I still think it has quite a few
> advantages.  Should I implement it?  I could have something by the end
> of the week, so we should still be able to make the ABI freeze.

Let me rebase and change to the usable array.  We can go from there.
