[PATCH 08/12] x86: template-ize vector packed dword/qword integer insns

H.J. Lu hjl.tools@gmail.com
Tue Aug 16 16:32:08 GMT 2022


On Tue, Aug 16, 2022 at 9:20 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 16.08.2022 17:53, H.J. Lu wrote:
> > On Tue, Aug 16, 2022 at 12:37 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>
> >> On 11.08.2022 19:23, H.J. Lu wrote:
> >>> On Fri, Aug 5, 2022 at 5:26 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>>>
> >>>> Many of the vector integer insns come in dword/qword element pairs. Most
> >>>> of these pairs follow certain encoding patterns. Introduce a "dq"
> >>>> template to reduce redundancy.
> >>>>
> >>>> Note that in the course of the conversion
> >>>> - a few otherwise untouched templates are moved, so they end up next to
> >>>>   their siblings,
> >>>> - drop an unhelpful Cpu64 from the GPR form of VPBROADCASTQ, matching
> >>>>   what we already have for KMOVQ - the diagnostic is better this way for
> >>>>   insns with multiple forms (i.e. the same Cpu64 attributes on {,V}MOVQ,
> >>>>   {,V}PEXTRQ, and {,V}PINSRQ are useful to keep),
> >>>> - this adds benign/meaningless IgnoreSize attributes to the GPR forms of
> >>>>   KMOVD and VPBROADCASTD; it didn't seem worth avoiding this.
> >>>> ---
> >>>> For VPCOMPRESS{D,Q} and VPEXPAND{D,Q} the conversion could only be done
> >>>> if we allowed Dword/Qword on the memory operands. Imo permitting this
> >>>> makes sense anyway (as the memory operands aren't full [XYZ]mmword
> >>>> ones), but such a functional change should probably be a separate patch.
> >>
> >> Do you have any view on this and the similar remarks in two other of the
> >> patches in this series?
> >
> > Since these instructions don't take Dword/Qword memory operands, please
> > leave them alone.
>
> But they also don't really take [XYZ]mmword operands. They're rather similar
> to the S/G insns, don't you agree?

The S/G insns are special cases; I don't think the same treatment applies here.
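
For readers following along, a rough gas (AT&T syntax) sketch of the kind of
dword/qword siblings the "dq" template folds together, plus the compress-store
and gather memory forms being compared above; register and addressing choices
are arbitrary illustrations, not taken from the patch:

    # dword/qword siblings: same opcode, EVEX.W selects the element size
    vpandd        %zmm0, %zmm1, %zmm2       # 16 dword elements
    vpandq        %zmm0, %zmm1, %zmm2       # 8 qword elements

    # GPR-source broadcasts touched by the patch (qword form needs 64-bit mode)
    vpbroadcastd  %eax, %zmm0
    vpbroadcastq  %rax, %zmm0

    # compress store: only the elements selected by %k1 are written,
    # so less than a full zmmword may be accessed
    vpcompressd   %zmm1, (%rax){%k1}
    vpcompressq   %zmm1, (%rax){%k1}

    # gather: per-element accesses through a vector (VSIB) index
    vpgatherdd    (%rax,%zmm2,4), %zmm3{%k1}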

-- 
H.J.

