[PATCH] x86: Correctly optimize EVEX to 128-bit VEX/EVEX
Jan Beulich
JBeulich@suse.com
Mon Mar 18 11:31:00 GMT 2019
>>> On 16.03.19 at 23:48, <hjl.tools@gmail.com> wrote:
> We can optimize 512-bit EVEX to 128-bit EVEX encoding for the upper
> 16 vector registers only when AVX512VL is enabled.  We can't optimize
> EVEX to 128-bit VEX encoding when AVX isn't enabled.
I don't understand the last sentence: AVX is a prerequisite for
anything that's EVEX-encoded, at least as of now. "-march=+noavx"
should really result in all of AVX512 also getting disabled.
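For concreteness, here is a minimal standalone C sketch (not gas code;
the function and names are invented for illustration) of the shrinking
rule described above: a 512-bit EVEX zeroing idiom such as
vpxord %zmmN,%zmmN,%zmmN can drop to 128-bit VEX only for
%xmm0..%xmm15 and only when AVX is usable, while %xmm16..%xmm31 have
no VEX encoding at all and hence need AVX512VL for a 128-bit EVEX form:

/* Standalone illustration only -- not gas internals; names invented.  */
#include <stdbool.h>
#include <stdio.h>

enum encoding { KEEP_EVEX512, TO_VEX128, TO_EVEX128 };

/* Decide how a 512-bit EVEX zeroing idiom on %zmm<reg> may be shrunk,
   given which extensions are usable.  */
static enum encoding
shrink_encoding (unsigned int reg, bool have_avx, bool have_avx512vl)
{
  if (reg < 16 && have_avx)
    return TO_VEX128;      /* e.g. vpxord %zmm1 -> VEX vpxor %xmm1 */
  if (have_avx512vl)
    return TO_EVEX128;     /* e.g. vpxord %zmm16 -> EVEX.128 vpxord %xmm16 */
  return KEEP_EVEX512;     /* no smaller legal encoding */
}

int
main (void)
{
  printf ("%d %d %d\n",
	  shrink_encoding (1, true, false),    /* TO_VEX128 */
	  shrink_encoding (16, true, false),   /* KEEP_EVEX512: no VEX for %xmm16 */
	  shrink_encoding (16, true, true));   /* TO_EVEX128 */
  return 0;
}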
> --- a/gas/config/tc-i386.c
> +++ b/gas/config/tc-i386.c
> @@ -3975,10 +3975,13 @@ optimize_encoding (void)
>      && !i.rounding
>      && is_evex_encoding (&i.tm)
>      && (i.vec_encoding != vex_encoding_evex
> +        || cpu_arch_flags.bitfield.cpuavx
> +        || cpu_arch_isa_flags.bitfield.cpuavx
> +        || cpu_arch_flags.bitfield.cpuavx512vl
> +        || cpu_arch_isa_flags.bitfield.cpuavx512vl
cpu_arch_flags starts out with (almost) all bits set. It was for that
reason that ...
>          || i.tm.cpu_flags.bitfield.cpuavx512vl
>          || (i.tm.operand_types[2].bitfield.zmmword
> -            && i.types[2].bitfield.ymmword)
> -        || cpu_arch_isa_flags.bitfield.cpuavx512vl)))
> +            && i.types[2].bitfield.ymmword))))
... originally only cpu_arch_isa_flags got checked here.
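As a toy model (not the real gas data structures; two booleans stand
in for the bitfields, and the defaults are assumed per the note
above), a set that starts out with almost everything enabled answers
"yes" even when the user never asked for the feature, so it is not a
useful gate here:

/* Toy model only -- field names merely mirror the ones quoted above.  */
#include <stdbool.h>
#include <stdio.h>

struct flags { bool cpuavx; bool cpuavx512vl; };

int
main (void)
{
  /* Assumption for illustration: one set defaults to "potentially
     available" (like cpu_arch_flags starting with almost all bits set),
     the other only records what was explicitly selected.  */
  struct flags arch_flags = { true, true };
  struct flags isa_flags = { false, false };

  /* A test on arch_flags.cpuavx512vl succeeds regardless of what the
     user selected; only the isa_flags test carries information.  */
  printf ("arch: %d, isa: %d\n",
	  arch_flags.cpuavx512vl, isa_flags.cpuavx512vl);
  return 0;
}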
Jan