This is the mail archive of the binutils@sourceware.org mailing list for the binutils project.
Re: [PATCH] x86: Correctly optimize EVEX to 128-bit VEX/EVEX
- From: "Jan Beulich" <JBeulich at suse dot com>
- To: "H.J. Lu" <hjl dot tools at gmail dot com>
- Cc: <binutils at sourceware dot org>
- Date: Mon, 18 Mar 2019 05:31:18 -0600
- Subject: Re: [PATCH] x86: Correctly optimize EVEX to 128-bit VEX/EVEX
- References: <20190316224846.7527-1-hjl.tools@gmail.com>
>>> On 16.03.19 at 23:48, <hjl.tools@gmail.com> wrote:
> We can optimize 512-bit EVEX to 128-bit EVEX encoding for upper 16
> vector registers only when AVX512VL is enabled. We can't optimize
> EVEX to 128-bit VEX encoding when AVX isn't enabled.
I don't understand the last sentence: AVX is a prerequisite for anything
that's EVEX-encoded, at least as of now. "-march=+noavx" should
really result in all of AVX512 also getting disabled.
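[For readers of the archive: the encoding choice under discussion can be sketched as a toy model. This is not gas's actual code; the function name, parameters, and defaults are invented for illustration. The idea: a 512-bit EVEX zeroing idiom such as "vpxord %zmmN, %zmmN, %zmmN" can be shrunk to a 128-bit form, but zmm16-31 have no VEX encoding, so for them only a 128-bit EVEX form (which requires AVX512VL) is legal.]

```python
def optimized_encoding(reg, avx=True, avx512vl=True):
    """Toy model: pick the best 128-bit encoding for a 512-bit
    zeroing idiom on register number `reg` (0-31), or None if no
    shrink is legal under the given ISA flags."""
    if reg >= 16:
        # zmm16-31 cannot be expressed in VEX at all; shrinking to a
        # 128-bit EVEX form needs AVX512VL.
        return "EVEX128" if avx512vl else None
    # zmm0-15: prefer the shorter VEX encoding when AVX is available.
    if avx:
        return "VEX128"
    if avx512vl:
        return "EVEX128"
    return None
```

[The `avx=False` branch models the commit message's "can't optimize EVEX to 128-bit VEX when AVX isn't enabled"; Jan's objection above is precisely that this state should not arise, since disabling AVX should disable all of AVX512 too.]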
> --- a/gas/config/tc-i386.c
> +++ b/gas/config/tc-i386.c
> @@ -3975,10 +3975,13 @@ optimize_encoding (void)
> && !i.rounding
> && is_evex_encoding (&i.tm)
> && (i.vec_encoding != vex_encoding_evex
> + || cpu_arch_flags.bitfield.cpuavx
> + || cpu_arch_isa_flags.bitfield.cpuavx
> + || cpu_arch_flags.bitfield.cpuavx512vl
> + || cpu_arch_isa_flags.bitfield.cpuavx512vl
cpu_arch_flags starts out with (almost) all bits set. It was for that
reason that ...
> || i.tm.cpu_flags.bitfield.cpuavx512vl
> || (i.tm.operand_types[2].bitfield.zmmword
> - && i.types[2].bitfield.ymmword)
> - || cpu_arch_isa_flags.bitfield.cpuavx512vl)))
> + && i.types[2].bitfield.ymmword))))
... originally only cpu_arch_isa_flags got checked here.
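[For readers of the archive: a minimal sketch of Jan's objection, with assumed semantics and hypothetical names mirroring gas's. If cpu_arch_flags defaults to (almost) all bits set, then gating the optimization on it is effectively always true, so the added check does not restrict anything unless the user explicitly disabled AVX.]

```python
# Hypothetical defaults: cpu_arch_flags starts out all-set, while
# cpu_arch_isa_flags only accumulates what the user actually enabled.
CPU_ARCH_FLAGS_DEFAULT = {"avx": True, "avx512vl": True}
CPU_ARCH_ISA_FLAGS_DEFAULT = {"avx": False, "avx512vl": False}

def gate_passes(cpu_arch_flags, cpu_arch_isa_flags):
    # Models the patched condition: with default-all-set cpu_arch_flags,
    # the "avx" test succeeds even though no directive ever enabled AVX.
    return cpu_arch_flags["avx"] or cpu_arch_isa_flags["avx"]
```

[With the defaults above, `gate_passes` is true unconditionally, which is why the original code checked only cpu_arch_isa_flags.]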
Jan