[AArch64][SVE 27/32] Add SVE integer immediate operands

Richard Earnshaw (lists) Richard.Earnshaw@arm.com
Thu Aug 25 14:51:00 GMT 2016


On 23/08/16 10:24, Richard Sandiford wrote:
> This patch adds the new SVE integer immediate operands.  There are
> three kinds:
> 
> - simple signed and unsigned ranges, but with new widths and positions.
> 
> - 13-bit logical immediates.  These have the same form as in base AArch64,
>   but at a different bit position.
> 
>   In the case of the "MOV Zn.<T>, #<limm>" alias of DUPM, the logical
>   immediate <limm> is not allowed to be a valid DUP immediate, since DUP
>   is preferred over DUPM for constants that both instructions can handle.
> 
> - a new 9-bit arithmetic immediate, of the form "<imm8>{, LSL #8}".
>   In some contexts the operand is signed and in others it's unsigned.
>   As an extension, we allow shifted immediates to be written as a single
>   integer, e.g. "#256" is equivalent to "#1, LSL #8".  We also use the
>   shiftless form as the preferred disassembly, except for the special
>   case of "#0, LSL #8" (a redundant encoding of 0).
> 
> OK to install?
> 
> Thanks,
> Richard
> 
> 
> include/opcode/
> 	* aarch64.h (AARCH64_OPND_SIMM5): New aarch64_opnd.
> 	(AARCH64_OPND_SVE_AIMM, AARCH64_OPND_SVE_ASIMM)
> 	(AARCH64_OPND_SVE_INV_LIMM, AARCH64_OPND_SVE_LIMM)
> 	(AARCH64_OPND_SVE_LIMM_MOV, AARCH64_OPND_SVE_SHLIMM_PRED)
> 	(AARCH64_OPND_SVE_SHLIMM_UNPRED, AARCH64_OPND_SVE_SHRIMM_PRED)
> 	(AARCH64_OPND_SVE_SHRIMM_UNPRED, AARCH64_OPND_SVE_SIMM5)
> 	(AARCH64_OPND_SVE_SIMM5B, AARCH64_OPND_SVE_SIMM6)
> 	(AARCH64_OPND_SVE_SIMM8, AARCH64_OPND_SVE_UIMM3)
> 	(AARCH64_OPND_SVE_UIMM7, AARCH64_OPND_SVE_UIMM8)
> 	(AARCH64_OPND_SVE_UIMM8_53): Likewise.
> 	(aarch64_sve_dupm_mov_immediate_p): Declare.
> 
> opcodes/
> 	* aarch64-tbl.h (AARCH64_OPERANDS): Add entries for the new SVE
> 	integer immediate operands.
> 	* aarch64-opc.h (FLD_SVE_N, FLD_SVE_imm3, FLD_SVE_imm5)
> 	(FLD_SVE_imm5b, FLD_SVE_imm7, FLD_SVE_imm8, FLD_SVE_imm9)
> 	(FLD_SVE_immr, FLD_SVE_imms, FLD_SVE_tszh): New aarch64_field_kinds.
> 	* aarch64-opc.c (fields): Add corresponding entries.
> 	(operand_general_constraint_met_p): Handle the new SVE integer
> 	immediate operands.
> 	(aarch64_print_operand): Likewise.
> 	(aarch64_sve_dupm_mov_immediate_p): New function.
> 	* aarch64-opc-2.c: Regenerate.
> 	* aarch64-asm.h (ins_inv_limm, ins_sve_aimm, ins_sve_asimm)
> 	(ins_sve_limm_mov, ins_sve_shlimm, ins_sve_shrimm): New inserters.
> 	* aarch64-asm.c (aarch64_ins_limm_1): New function, split out from...
> 	(aarch64_ins_limm): ...here.
> 	(aarch64_ins_inv_limm): New function.
> 	(aarch64_ins_sve_aimm): Likewise.
> 	(aarch64_ins_sve_asimm): Likewise.
> 	(aarch64_ins_sve_limm_mov): Likewise.
> 	(aarch64_ins_sve_shlimm): Likewise.
> 	(aarch64_ins_sve_shrimm): Likewise.
> 	* aarch64-asm-2.c: Regenerate.
> 	* aarch64-dis.h (ext_inv_limm, ext_sve_aimm, ext_sve_asimm)
> 	(ext_sve_limm_mov, ext_sve_shlimm, ext_sve_shrimm): New extractors.
> 	* aarch64-dis.c (decode_limm): New function, split out from...
> 	(aarch64_ext_limm): ...here.
> 	(aarch64_ext_inv_limm): New function.
> 	(decode_sve_aimm): Likewise.
> 	(aarch64_ext_sve_aimm): Likewise.
> 	(aarch64_ext_sve_asimm): Likewise.
> 	(aarch64_ext_sve_limm_mov): Likewise.
> 	(get_top_bit): Likewise.
> 	(aarch64_ext_sve_shlimm): Likewise.
> 	(aarch64_ext_sve_shrimm): Likewise.
> 	* aarch64-dis-2.c: Regenerate.
> 
> gas/
> 	* config/tc-aarch64.c (parse_operands): Handle the new SVE integer
> 	immediate operands.

+		  set_other_error (mismatch_detail, idx,
+				   _("shift amount should be 0 or 8"));

I think the error message should use 'must' rather than 'should'.
'Should' implies a degree of optionality that just doesn't apply here.

OK with that change.

R.

> 
> diff --git a/gas/config/tc-aarch64.c b/gas/config/tc-aarch64.c
> index 37fce5b..cb39cf8 100644
> --- a/gas/config/tc-aarch64.c
> +++ b/gas/config/tc-aarch64.c
> @@ -5553,6 +5553,7 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	  break;
>  
>  	case AARCH64_OPND_CCMP_IMM:
> +	case AARCH64_OPND_SIMM5:
>  	case AARCH64_OPND_FBITS:
>  	case AARCH64_OPND_UIMM4:
>  	case AARCH64_OPND_UIMM3_OP1:
> @@ -5560,10 +5561,36 @@ parse_operands (char *str, const aarch64_opcode *opcode)
>  	case AARCH64_OPND_IMM_VLSL:
>  	case AARCH64_OPND_IMM:
>  	case AARCH64_OPND_WIDTH:
> +	case AARCH64_OPND_SVE_INV_LIMM:
> +	case AARCH64_OPND_SVE_LIMM:
> +	case AARCH64_OPND_SVE_LIMM_MOV:
> +	case AARCH64_OPND_SVE_SHLIMM_PRED:
> +	case AARCH64_OPND_SVE_SHLIMM_UNPRED:
> +	case AARCH64_OPND_SVE_SHRIMM_PRED:
> +	case AARCH64_OPND_SVE_SHRIMM_UNPRED:
> +	case AARCH64_OPND_SVE_SIMM5:
> +	case AARCH64_OPND_SVE_SIMM5B:
> +	case AARCH64_OPND_SVE_SIMM6:
> +	case AARCH64_OPND_SVE_SIMM8:
> +	case AARCH64_OPND_SVE_UIMM3:
> +	case AARCH64_OPND_SVE_UIMM7:
> +	case AARCH64_OPND_SVE_UIMM8:
> +	case AARCH64_OPND_SVE_UIMM8_53:
>  	  po_imm_nc_or_fail ();
>  	  info->imm.value = val;
>  	  break;
>  
> +	case AARCH64_OPND_SVE_AIMM:
> +	case AARCH64_OPND_SVE_ASIMM:
> +	  po_imm_nc_or_fail ();
> +	  info->imm.value = val;
> +	  skip_whitespace (str);
> +	  if (skip_past_comma (&str))
> +	    po_misc_or_fail (parse_shift (&str, info, SHIFTED_LSL));
> +	  else
> +	    inst.base.operands[i].shifter.kind = AARCH64_MOD_LSL;
> +	  break;
> +
>  	case AARCH64_OPND_SVE_PATTERN:
>  	  po_enum_or_fail (aarch64_sve_pattern_array);
>  	  info->imm.value = val;
> diff --git a/include/opcode/aarch64.h b/include/opcode/aarch64.h
> index 837d6bd..36e95b4 100644
> --- a/include/opcode/aarch64.h
> +++ b/include/opcode/aarch64.h
> @@ -200,6 +200,7 @@ enum aarch64_opnd
>    AARCH64_OPND_BIT_NUM,	/* Immediate.  */
>    AARCH64_OPND_EXCEPTION,/* imm16 operand in exception instructions.  */
>    AARCH64_OPND_CCMP_IMM,/* Immediate in conditional compare instructions.  */
> +  AARCH64_OPND_SIMM5,	/* 5-bit signed immediate in the imm5 field.  */
>    AARCH64_OPND_NZCV,	/* Flag bit specifier giving an alternative value for
>  			   each condition flag.  */
>  
> @@ -289,6 +290,11 @@ enum aarch64_opnd
>    AARCH64_OPND_SVE_ADDR_ZZ_LSL,     /* SVE [Zn.<T>, Zm.<T>, LSL #<msz>].  */
>    AARCH64_OPND_SVE_ADDR_ZZ_SXTW,    /* SVE [Zn.<T>, Zm.<T>, SXTW #<msz>].  */
>    AARCH64_OPND_SVE_ADDR_ZZ_UXTW,    /* SVE [Zn.<T>, Zm.<T>, UXTW #<msz>].  */
> +  AARCH64_OPND_SVE_AIMM,	/* SVE unsigned arithmetic immediate.  */
> +  AARCH64_OPND_SVE_ASIMM,	/* SVE signed arithmetic immediate.  */
> +  AARCH64_OPND_SVE_INV_LIMM,	/* SVE inverted logical immediate.  */
> +  AARCH64_OPND_SVE_LIMM,	/* SVE logical immediate.  */
> +  AARCH64_OPND_SVE_LIMM_MOV,	/* SVE logical immediate for MOV.  */
>    AARCH64_OPND_SVE_PATTERN,	/* SVE vector pattern enumeration.  */
>    AARCH64_OPND_SVE_PATTERN_SCALED, /* Likewise, with additional MUL factor.  */
>    AARCH64_OPND_SVE_PRFOP,	/* SVE prefetch operation.  */
> @@ -300,6 +306,18 @@ enum aarch64_opnd
>    AARCH64_OPND_SVE_Pm,		/* SVE p0-p15 in Pm.  */
>    AARCH64_OPND_SVE_Pn,		/* SVE p0-p15 in Pn.  */
>    AARCH64_OPND_SVE_Pt,		/* SVE p0-p15 in Pt.  */
> +  AARCH64_OPND_SVE_SHLIMM_PRED,	  /* SVE shift left amount (predicated).  */
> +  AARCH64_OPND_SVE_SHLIMM_UNPRED, /* SVE shift left amount (unpredicated).  */
> +  AARCH64_OPND_SVE_SHRIMM_PRED,	  /* SVE shift right amount (predicated).  */
> +  AARCH64_OPND_SVE_SHRIMM_UNPRED, /* SVE shift right amount (unpredicated).  */
> +  AARCH64_OPND_SVE_SIMM5,	/* SVE signed 5-bit immediate.  */
> +  AARCH64_OPND_SVE_SIMM5B,	/* SVE secondary signed 5-bit immediate.  */
> +  AARCH64_OPND_SVE_SIMM6,	/* SVE signed 6-bit immediate.  */
> +  AARCH64_OPND_SVE_SIMM8,	/* SVE signed 8-bit immediate.  */
> +  AARCH64_OPND_SVE_UIMM3,	/* SVE unsigned 3-bit immediate.  */
> +  AARCH64_OPND_SVE_UIMM7,	/* SVE unsigned 7-bit immediate.  */
> +  AARCH64_OPND_SVE_UIMM8,	/* SVE unsigned 8-bit immediate.  */
> +  AARCH64_OPND_SVE_UIMM8_53,	/* SVE split unsigned 8-bit immediate.  */
>    AARCH64_OPND_SVE_Za_5,	/* SVE vector register in Za, bits [9,5].  */
>    AARCH64_OPND_SVE_Za_16,	/* SVE vector register in Za, bits [20,16].  */
>    AARCH64_OPND_SVE_Zd,		/* SVE vector register in Zd.  */
> @@ -1065,6 +1083,9 @@ aarch64_get_operand_name (enum aarch64_opnd);
>  extern const char *
>  aarch64_get_operand_desc (enum aarch64_opnd);
>  
> +extern bfd_boolean
> +aarch64_sve_dupm_mov_immediate_p (uint64_t, int);
> +
>  #ifdef DEBUG_AARCH64
>  extern int debug_dump;
>  
> diff --git a/opcodes/aarch64-asm-2.c b/opcodes/aarch64-asm-2.c
> index da590ca..491ea53 100644
> --- a/opcodes/aarch64-asm-2.c
> +++ b/opcodes/aarch64-asm-2.c
> @@ -480,12 +480,6 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 27:
>      case 35:
>      case 36:
> -    case 129:
> -    case 130:
> -    case 131:
> -    case 132:
> -    case 133:
> -    case 134:
>      case 135:
>      case 136:
>      case 137:
> @@ -494,7 +488,13 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 140:
>      case 141:
>      case 142:
> -    case 145:
> +    case 155:
> +    case 156:
> +    case 157:
> +    case 158:
> +    case 159:
> +    case 160:
> +    case 163:
>        return aarch64_ins_regno (self, info, code, inst);
>      case 12:
>        return aarch64_ins_reg_extended (self, info, code, inst);
> @@ -527,12 +527,21 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 56:
>      case 57:
>      case 58:
> -    case 67:
> +    case 59:
>      case 68:
>      case 69:
>      case 70:
> -    case 126:
> -    case 128:
> +    case 71:
> +    case 132:
> +    case 134:
> +    case 147:
> +    case 148:
> +    case 149:
> +    case 150:
> +    case 151:
> +    case 152:
> +    case 153:
> +    case 154:
>        return aarch64_ins_imm (self, info, code, inst);
>      case 38:
>      case 39:
> @@ -543,61 +552,61 @@ aarch64_insert_operand (const aarch64_operand *self,
>        return aarch64_ins_advsimd_imm_modified (self, info, code, inst);
>      case 46:
>        return aarch64_ins_fpimm (self, info, code, inst);
> -    case 59:
> -      return aarch64_ins_limm (self, info, code, inst);
>      case 60:
> -      return aarch64_ins_aimm (self, info, code, inst);
> +    case 130:
> +      return aarch64_ins_limm (self, info, code, inst);
>      case 61:
> -      return aarch64_ins_imm_half (self, info, code, inst);
> +      return aarch64_ins_aimm (self, info, code, inst);
>      case 62:
> +      return aarch64_ins_imm_half (self, info, code, inst);
> +    case 63:
>        return aarch64_ins_fbits (self, info, code, inst);
> -    case 64:
>      case 65:
> +    case 66:
>        return aarch64_ins_cond (self, info, code, inst);
> -    case 71:
> -    case 77:
> -      return aarch64_ins_addr_simple (self, info, code, inst);
>      case 72:
> -      return aarch64_ins_addr_regoff (self, info, code, inst);
> +    case 78:
> +      return aarch64_ins_addr_simple (self, info, code, inst);
>      case 73:
> +      return aarch64_ins_addr_regoff (self, info, code, inst);
>      case 74:
>      case 75:
> -      return aarch64_ins_addr_simm (self, info, code, inst);
>      case 76:
> +      return aarch64_ins_addr_simm (self, info, code, inst);
> +    case 77:
>        return aarch64_ins_addr_uimm12 (self, info, code, inst);
> -    case 78:
> -      return aarch64_ins_simd_addr_post (self, info, code, inst);
>      case 79:
> -      return aarch64_ins_sysreg (self, info, code, inst);
> +      return aarch64_ins_simd_addr_post (self, info, code, inst);
>      case 80:
> -      return aarch64_ins_pstatefield (self, info, code, inst);
> +      return aarch64_ins_sysreg (self, info, code, inst);
>      case 81:
> +      return aarch64_ins_pstatefield (self, info, code, inst);
>      case 82:
>      case 83:
>      case 84:
> -      return aarch64_ins_sysins_op (self, info, code, inst);
>      case 85:
> +      return aarch64_ins_sysins_op (self, info, code, inst);
>      case 86:
> -      return aarch64_ins_barrier (self, info, code, inst);
>      case 87:
> -      return aarch64_ins_prfop (self, info, code, inst);
> +      return aarch64_ins_barrier (self, info, code, inst);
>      case 88:
> -      return aarch64_ins_hint (self, info, code, inst);
> +      return aarch64_ins_prfop (self, info, code, inst);
>      case 89:
> +      return aarch64_ins_hint (self, info, code, inst);
>      case 90:
>      case 91:
>      case 92:
> -      return aarch64_ins_sve_addr_ri_s4xvl (self, info, code, inst);
>      case 93:
> -      return aarch64_ins_sve_addr_ri_s6xvl (self, info, code, inst);
> +      return aarch64_ins_sve_addr_ri_s4xvl (self, info, code, inst);
>      case 94:
> -      return aarch64_ins_sve_addr_ri_s9xvl (self, info, code, inst);
> +      return aarch64_ins_sve_addr_ri_s6xvl (self, info, code, inst);
>      case 95:
> +      return aarch64_ins_sve_addr_ri_s9xvl (self, info, code, inst);
>      case 96:
>      case 97:
>      case 98:
> -      return aarch64_ins_sve_addr_ri_u6 (self, info, code, inst);
>      case 99:
> +      return aarch64_ins_sve_addr_ri_u6 (self, info, code, inst);
>      case 100:
>      case 101:
>      case 102:
> @@ -609,8 +618,8 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 108:
>      case 109:
>      case 110:
> -      return aarch64_ins_sve_addr_rr_lsl (self, info, code, inst);
>      case 111:
> +      return aarch64_ins_sve_addr_rr_lsl (self, info, code, inst);
>      case 112:
>      case 113:
>      case 114:
> @@ -618,24 +627,39 @@ aarch64_insert_operand (const aarch64_operand *self,
>      case 116:
>      case 117:
>      case 118:
> -      return aarch64_ins_sve_addr_rz_xtw (self, info, code, inst);
>      case 119:
> +      return aarch64_ins_sve_addr_rz_xtw (self, info, code, inst);
>      case 120:
>      case 121:
>      case 122:
> -      return aarch64_ins_sve_addr_zi_u5 (self, info, code, inst);
>      case 123:
> -      return aarch64_ins_sve_addr_zz_lsl (self, info, code, inst);
> +      return aarch64_ins_sve_addr_zi_u5 (self, info, code, inst);
>      case 124:
> -      return aarch64_ins_sve_addr_zz_sxtw (self, info, code, inst);
> +      return aarch64_ins_sve_addr_zz_lsl (self, info, code, inst);
>      case 125:
> +      return aarch64_ins_sve_addr_zz_sxtw (self, info, code, inst);
> +    case 126:
>        return aarch64_ins_sve_addr_zz_uxtw (self, info, code, inst);
>      case 127:
> +      return aarch64_ins_sve_aimm (self, info, code, inst);
> +    case 128:
> +      return aarch64_ins_sve_asimm (self, info, code, inst);
> +    case 129:
> +      return aarch64_ins_inv_limm (self, info, code, inst);
> +    case 131:
> +      return aarch64_ins_sve_limm_mov (self, info, code, inst);
> +    case 133:
>        return aarch64_ins_sve_scale (self, info, code, inst);
>      case 143:
> -      return aarch64_ins_sve_index (self, info, code, inst);
>      case 144:
> +      return aarch64_ins_sve_shlimm (self, info, code, inst);
> +    case 145:
>      case 146:
> +      return aarch64_ins_sve_shrimm (self, info, code, inst);
> +    case 161:
> +      return aarch64_ins_sve_index (self, info, code, inst);
> +    case 162:
> +    case 164:
>        return aarch64_ins_sve_reglist (self, info, code, inst);
>      default: assert (0); abort ();
>      }
> diff --git a/opcodes/aarch64-asm.c b/opcodes/aarch64-asm.c
> index 944a9eb..61d0d95 100644
> --- a/opcodes/aarch64-asm.c
> +++ b/opcodes/aarch64-asm.c
> @@ -452,17 +452,18 @@ aarch64_ins_aimm (const aarch64_operand *self, const aarch64_opnd_info *info,
>    return NULL;
>  }
>  
> -/* Insert logical/bitmask immediate for e.g. the last operand in
> -     ORR <Wd|WSP>, <Wn>, #<imm>.  */
> -const char *
> -aarch64_ins_limm (const aarch64_operand *self, const aarch64_opnd_info *info,
> -		  aarch64_insn *code, const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +/* Common routine shared by aarch64_ins{,_inv}_limm.  INVERT_P says whether
> +   the operand should be inverted before encoding.  */
> +static const char *
> +aarch64_ins_limm_1 (const aarch64_operand *self,
> +		    const aarch64_opnd_info *info, aarch64_insn *code,
> +		    const aarch64_inst *inst, bfd_boolean invert_p)
>  {
>    aarch64_insn value;
>    uint64_t imm = info->imm.value;
>    int esize = aarch64_get_qualifier_esize (inst->operands[0].qualifier);
>  
> -  if (inst->opcode->op == OP_BIC)
> +  if (invert_p)
>      imm = ~imm;
>    if (aarch64_logical_immediate_p (imm, esize, &value) == FALSE)
>      /* The constraint check should have guaranteed this wouldn't happen.  */
> @@ -473,6 +474,25 @@ aarch64_ins_limm (const aarch64_operand *self, const aarch64_opnd_info *info,
>    return NULL;
>  }
>  
> +/* Insert logical/bitmask immediate for e.g. the last operand in
> +     ORR <Wd|WSP>, <Wn>, #<imm>.  */
> +const char *
> +aarch64_ins_limm (const aarch64_operand *self, const aarch64_opnd_info *info,
> +		  aarch64_insn *code, const aarch64_inst *inst)
> +{
> +  return aarch64_ins_limm_1 (self, info, code, inst,
> +			     inst->opcode->op == OP_BIC);
> +}
> +
> +/* Insert a logical/bitmask immediate for the BIC alias of AND (etc.).  */
> +const char *
> +aarch64_ins_inv_limm (const aarch64_operand *self,
> +		      const aarch64_opnd_info *info, aarch64_insn *code,
> +		      const aarch64_inst *inst)
> +{
> +  return aarch64_ins_limm_1 (self, info, code, inst, TRUE);
> +}
> +
>  /* Encode Ft for e.g. STR <Qt>, [<Xn|SP>, <R><m>{, <extend> {<amount>}}]
>     or LDP <Qt1>, <Qt2>, [<Xn|SP>], #<imm>.  */
>  const char *
> @@ -903,6 +923,30 @@ aarch64_ins_sve_addr_zz_uxtw (const aarch64_operand *self,
>    return aarch64_ext_sve_addr_zz (self, info, code);
>  }
>  
> +/* Encode an SVE ADD/SUB immediate.  */
> +const char *
> +aarch64_ins_sve_aimm (const aarch64_operand *self,
> +		      const aarch64_opnd_info *info, aarch64_insn *code,
> +		      const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +{
> +  if (info->shifter.amount == 8)
> +    insert_all_fields (self, code, (info->imm.value & 0xff) | 256);
> +  else if (info->imm.value != 0 && (info->imm.value & 0xff) == 0)
> +    insert_all_fields (self, code, ((info->imm.value / 256) & 0xff) | 256);
> +  else
> +    insert_all_fields (self, code, info->imm.value & 0xff);
> +  return NULL;
> +}
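
So the 9-bit field is effectively flag:imm8, with bit 8 meaning "low
byte shifted left by 8".  A quick check of the three cases above (a
sketch mirroring the encoder's case analysis, not patch code):

  #include <assert.h>

  /* Mirror of the three cases in aarch64_ins_sve_aimm.  */
  static unsigned
  encode_aimm_field (long value, int shift)
  {
    if (shift == 8)
      return (value & 0xff) | 256;           /* explicit "#imm, LSL #8" */
    else if (value != 0 && (value & 0xff) == 0)
      return ((value / 256) & 0xff) | 256;   /* shiftless multiple of 256 */
    else
      return value & 0xff;                   /* plain 8-bit immediate */
  }

  int
  main (void)
  {
    assert (encode_aimm_field (1, 8) == 0x101);    /* #1, LSL #8 */
    assert (encode_aimm_field (256, 0) == 0x101);  /* #256 */
    assert (encode_aimm_field (0, 8) == 0x100);    /* #0, LSL #8 */
    return 0;
  }
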
> +
> +/* Encode an SVE CPY/DUP immediate.  */
> +const char *
> +aarch64_ins_sve_asimm (const aarch64_operand *self,
> +		       const aarch64_opnd_info *info, aarch64_insn *code,
> +		       const aarch64_inst *inst)
> +{
> +  return aarch64_ins_sve_aimm (self, info, code, inst);
> +}
> +
>  /* Encode Zn[MM], where MM has a 7-bit triangular encoding.  The fields
>     array specifies which field to use for Zn.  MM is encoded in the
>     concatenation of imm5 and SVE_tszh, with imm5 being the less
> @@ -919,6 +963,15 @@ aarch64_ins_sve_index (const aarch64_operand *self,
>    return NULL;
>  }
>  
> +/* Encode a logical/bitmask immediate for the MOV alias of SVE DUPM.  */
> +const char *
> +aarch64_ins_sve_limm_mov (const aarch64_operand *self,
> +			  const aarch64_opnd_info *info, aarch64_insn *code,
> +			  const aarch64_inst *inst)
> +{
> +  return aarch64_ins_limm (self, info, code, inst);
> +}
> +
>  /* Encode {Zn.<T> - Zm.<T>}.  The fields array specifies which field
>     to use for Zn.  */
>  const char *
> @@ -943,6 +996,38 @@ aarch64_ins_sve_scale (const aarch64_operand *self,
>    return NULL;
>  }
>  
> +/* Encode an SVE shift left immediate.  */
> +const char *
> +aarch64_ins_sve_shlimm (const aarch64_operand *self,
> +			const aarch64_opnd_info *info, aarch64_insn *code,
> +			const aarch64_inst *inst)
> +{
> +  const aarch64_opnd_info *prev_operand;
> +  unsigned int esize;
> +
> +  assert (info->idx > 0);
> +  prev_operand = &inst->operands[info->idx - 1];
> +  esize = aarch64_get_qualifier_esize (prev_operand->qualifier);
> +  insert_all_fields (self, code, 8 * esize + info->imm.value);
> +  return NULL;
> +}
> +
> +/* Encode an SVE shift right immediate.  */
> +const char *
> +aarch64_ins_sve_shrimm (const aarch64_operand *self,
> +			const aarch64_opnd_info *info, aarch64_insn *code,
> +			const aarch64_inst *inst)
> +{
> +  const aarch64_opnd_info *prev_operand;
> +  unsigned int esize;
> +
> +  assert (info->idx > 0);
> +  prev_operand = &inst->operands[info->idx - 1];
> +  esize = aarch64_get_qualifier_esize (prev_operand->qualifier);
> +  insert_all_fields (self, code, 16 * esize - info->imm.value);
> +  return NULL;
> +}
> +
>  /* Miscellaneous encoding functions.  */
>  
>  /* Encode size[0], i.e. bit 22, for
> diff --git a/opcodes/aarch64-asm.h b/opcodes/aarch64-asm.h
> index 5e13de0..bbd320e 100644
> --- a/opcodes/aarch64-asm.h
> +++ b/opcodes/aarch64-asm.h
> @@ -54,6 +54,7 @@ AARCH64_DECL_OPD_INSERTER (ins_fpimm);
>  AARCH64_DECL_OPD_INSERTER (ins_fbits);
>  AARCH64_DECL_OPD_INSERTER (ins_aimm);
>  AARCH64_DECL_OPD_INSERTER (ins_limm);
> +AARCH64_DECL_OPD_INSERTER (ins_inv_limm);
>  AARCH64_DECL_OPD_INSERTER (ins_ft);
>  AARCH64_DECL_OPD_INSERTER (ins_addr_simple);
>  AARCH64_DECL_OPD_INSERTER (ins_addr_regoff);
> @@ -79,9 +80,14 @@ AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zi_u5);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zz_lsl);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zz_sxtw);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_addr_zz_uxtw);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_aimm);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_asimm);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_index);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_limm_mov);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_reglist);
>  AARCH64_DECL_OPD_INSERTER (ins_sve_scale);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_shlimm);
> +AARCH64_DECL_OPD_INSERTER (ins_sve_shrimm);
>  
>  #undef AARCH64_DECL_OPD_INSERTER
>  
> diff --git a/opcodes/aarch64-dis-2.c b/opcodes/aarch64-dis-2.c
> index 48d6ce7..4527456 100644
> --- a/opcodes/aarch64-dis-2.c
> +++ b/opcodes/aarch64-dis-2.c
> @@ -10426,12 +10426,6 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 27:
>      case 35:
>      case 36:
> -    case 129:
> -    case 130:
> -    case 131:
> -    case 132:
> -    case 133:
> -    case 134:
>      case 135:
>      case 136:
>      case 137:
> @@ -10440,7 +10434,13 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 140:
>      case 141:
>      case 142:
> -    case 145:
> +    case 155:
> +    case 156:
> +    case 157:
> +    case 158:
> +    case 159:
> +    case 160:
> +    case 163:
>        return aarch64_ext_regno (self, info, code, inst);
>      case 8:
>        return aarch64_ext_regrt_sysins (self, info, code, inst);
> @@ -10477,13 +10477,22 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 56:
>      case 57:
>      case 58:
> -    case 66:
> +    case 59:
>      case 67:
>      case 68:
>      case 69:
>      case 70:
> -    case 126:
> -    case 128:
> +    case 71:
> +    case 132:
> +    case 134:
> +    case 147:
> +    case 148:
> +    case 149:
> +    case 150:
> +    case 151:
> +    case 152:
> +    case 153:
> +    case 154:
>        return aarch64_ext_imm (self, info, code, inst);
>      case 38:
>      case 39:
> @@ -10496,61 +10505,61 @@ aarch64_extract_operand (const aarch64_operand *self,
>        return aarch64_ext_shll_imm (self, info, code, inst);
>      case 46:
>        return aarch64_ext_fpimm (self, info, code, inst);
> -    case 59:
> -      return aarch64_ext_limm (self, info, code, inst);
>      case 60:
> -      return aarch64_ext_aimm (self, info, code, inst);
> +    case 130:
> +      return aarch64_ext_limm (self, info, code, inst);
>      case 61:
> -      return aarch64_ext_imm_half (self, info, code, inst);
> +      return aarch64_ext_aimm (self, info, code, inst);
>      case 62:
> +      return aarch64_ext_imm_half (self, info, code, inst);
> +    case 63:
>        return aarch64_ext_fbits (self, info, code, inst);
> -    case 64:
>      case 65:
> +    case 66:
>        return aarch64_ext_cond (self, info, code, inst);
> -    case 71:
> -    case 77:
> -      return aarch64_ext_addr_simple (self, info, code, inst);
>      case 72:
> -      return aarch64_ext_addr_regoff (self, info, code, inst);
> +    case 78:
> +      return aarch64_ext_addr_simple (self, info, code, inst);
>      case 73:
> +      return aarch64_ext_addr_regoff (self, info, code, inst);
>      case 74:
>      case 75:
> -      return aarch64_ext_addr_simm (self, info, code, inst);
>      case 76:
> +      return aarch64_ext_addr_simm (self, info, code, inst);
> +    case 77:
>        return aarch64_ext_addr_uimm12 (self, info, code, inst);
> -    case 78:
> -      return aarch64_ext_simd_addr_post (self, info, code, inst);
>      case 79:
> -      return aarch64_ext_sysreg (self, info, code, inst);
> +      return aarch64_ext_simd_addr_post (self, info, code, inst);
>      case 80:
> -      return aarch64_ext_pstatefield (self, info, code, inst);
> +      return aarch64_ext_sysreg (self, info, code, inst);
>      case 81:
> +      return aarch64_ext_pstatefield (self, info, code, inst);
>      case 82:
>      case 83:
>      case 84:
> -      return aarch64_ext_sysins_op (self, info, code, inst);
>      case 85:
> +      return aarch64_ext_sysins_op (self, info, code, inst);
>      case 86:
> -      return aarch64_ext_barrier (self, info, code, inst);
>      case 87:
> -      return aarch64_ext_prfop (self, info, code, inst);
> +      return aarch64_ext_barrier (self, info, code, inst);
>      case 88:
> -      return aarch64_ext_hint (self, info, code, inst);
> +      return aarch64_ext_prfop (self, info, code, inst);
>      case 89:
> +      return aarch64_ext_hint (self, info, code, inst);
>      case 90:
>      case 91:
>      case 92:
> -      return aarch64_ext_sve_addr_ri_s4xvl (self, info, code, inst);
>      case 93:
> -      return aarch64_ext_sve_addr_ri_s6xvl (self, info, code, inst);
> +      return aarch64_ext_sve_addr_ri_s4xvl (self, info, code, inst);
>      case 94:
> -      return aarch64_ext_sve_addr_ri_s9xvl (self, info, code, inst);
> +      return aarch64_ext_sve_addr_ri_s6xvl (self, info, code, inst);
>      case 95:
> +      return aarch64_ext_sve_addr_ri_s9xvl (self, info, code, inst);
>      case 96:
>      case 97:
>      case 98:
> -      return aarch64_ext_sve_addr_ri_u6 (self, info, code, inst);
>      case 99:
> +      return aarch64_ext_sve_addr_ri_u6 (self, info, code, inst);
>      case 100:
>      case 101:
>      case 102:
> @@ -10562,8 +10571,8 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 108:
>      case 109:
>      case 110:
> -      return aarch64_ext_sve_addr_rr_lsl (self, info, code, inst);
>      case 111:
> +      return aarch64_ext_sve_addr_rr_lsl (self, info, code, inst);
>      case 112:
>      case 113:
>      case 114:
> @@ -10571,24 +10580,39 @@ aarch64_extract_operand (const aarch64_operand *self,
>      case 116:
>      case 117:
>      case 118:
> -      return aarch64_ext_sve_addr_rz_xtw (self, info, code, inst);
>      case 119:
> +      return aarch64_ext_sve_addr_rz_xtw (self, info, code, inst);
>      case 120:
>      case 121:
>      case 122:
> -      return aarch64_ext_sve_addr_zi_u5 (self, info, code, inst);
>      case 123:
> -      return aarch64_ext_sve_addr_zz_lsl (self, info, code, inst);
> +      return aarch64_ext_sve_addr_zi_u5 (self, info, code, inst);
>      case 124:
> -      return aarch64_ext_sve_addr_zz_sxtw (self, info, code, inst);
> +      return aarch64_ext_sve_addr_zz_lsl (self, info, code, inst);
>      case 125:
> +      return aarch64_ext_sve_addr_zz_sxtw (self, info, code, inst);
> +    case 126:
>        return aarch64_ext_sve_addr_zz_uxtw (self, info, code, inst);
>      case 127:
> +      return aarch64_ext_sve_aimm (self, info, code, inst);
> +    case 128:
> +      return aarch64_ext_sve_asimm (self, info, code, inst);
> +    case 129:
> +      return aarch64_ext_inv_limm (self, info, code, inst);
> +    case 131:
> +      return aarch64_ext_sve_limm_mov (self, info, code, inst);
> +    case 133:
>        return aarch64_ext_sve_scale (self, info, code, inst);
>      case 143:
> -      return aarch64_ext_sve_index (self, info, code, inst);
>      case 144:
> +      return aarch64_ext_sve_shlimm (self, info, code, inst);
> +    case 145:
>      case 146:
> +      return aarch64_ext_sve_shrimm (self, info, code, inst);
> +    case 161:
> +      return aarch64_ext_sve_index (self, info, code, inst);
> +    case 162:
> +    case 164:
>        return aarch64_ext_sve_reglist (self, info, code, inst);
>      default: assert (0); abort ();
>      }
> diff --git a/opcodes/aarch64-dis.c b/opcodes/aarch64-dis.c
> index ba6befd..ed050cd 100644
> --- a/opcodes/aarch64-dis.c
> +++ b/opcodes/aarch64-dis.c
> @@ -734,32 +734,21 @@ aarch64_ext_aimm (const aarch64_operand *self ATTRIBUTE_UNUSED,
>    return 1;
>  }
>  
> -/* Decode logical immediate for e.g. ORR <Wd|WSP>, <Wn>, #<imm>.  */
> -
> -int
> -aarch64_ext_limm (const aarch64_operand *self ATTRIBUTE_UNUSED,
> -		  aarch64_opnd_info *info, const aarch64_insn code,
> -		  const aarch64_inst *inst ATTRIBUTE_UNUSED)
> +/* Return true if VALUE is a valid logical immediate encoding, storing the
> +   decoded value in *RESULT if so.  ESIZE is the number of bytes in the
> +   decoded immediate.  */
> +static int
> +decode_limm (uint32_t esize, aarch64_insn value, int64_t *result)
>  {
>    uint64_t imm, mask;
> -  uint32_t sf;
>    uint32_t N, R, S;
>    unsigned simd_size;
> -  aarch64_insn value;
> -
> -  value = extract_fields (code, 0, 3, FLD_N, FLD_immr, FLD_imms);
> -  assert (inst->operands[0].qualifier == AARCH64_OPND_QLF_W
> -	  || inst->operands[0].qualifier == AARCH64_OPND_QLF_X);
> -  sf = aarch64_get_qualifier_esize (inst->operands[0].qualifier) != 4;
>  
>    /* value is N:immr:imms.  */
>    S = value & 0x3f;
>    R = (value >> 6) & 0x3f;
>    N = (value >> 12) & 0x1;
>  
> -  if (sf == 0 && N == 1)
> -    return 0;
> -
>    /* The immediate value is S+1 bits to 1, left rotated by SIMDsize - R
>       (in other words, right rotated by R), then replicated.  */
>    if (N != 0)
> @@ -782,6 +771,10 @@ aarch64_ext_limm (const aarch64_operand *self ATTRIBUTE_UNUSED,
>        /* Top bits are IGNORED.  */
>        R &= simd_size - 1;
>      }
> +
> +  if (simd_size > esize * 8)
> +    return 0;
> +
>    /* NOTE: if S = simd_size - 1 we get 0xf..f which is rejected.  */
>    if (S == simd_size - 1)
>      return 0;
> @@ -803,8 +796,35 @@ aarch64_ext_limm (const aarch64_operand *self ATTRIBUTE_UNUSED,
>      default: assert (0); return 0;
>      }
>  
> -  info->imm.value = sf ? imm : imm & 0xffffffff;
> +  *result = imm & ~((uint64_t) -1 << (esize * 4) << (esize * 4));
> +
> +  return 1;
> +}
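
A worked instance of the N:immr:imms scheme (my own example, checked by
hand, not from the patch): N=0, immr=0, imms=0b111100.  The leading
11110 in imms selects a 2-bit element, S=0 gives a single set bit, R=0
means no rotation, and replication yields 0x5555555555555555:

  #include <assert.h>
  #include <stdint.h>

  int
  main (void)
  {
    uint64_t imm = 1;        /* S + 1 = 1 bit set in the element */
    unsigned width = 2;      /* element size chosen by imms = 111100 */
    while (width < 64)       /* replicate the element to 64 bits */
      {
        imm |= imm << width;
        width *= 2;
      }
    assert (imm == 0x5555555555555555ULL);
    return 0;
  }
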
> +
> +/* Decode a logical immediate for e.g. ORR <Wd|WSP>, <Wn>, #<imm>.  */
> +int
> +aarch64_ext_limm (const aarch64_operand *self,
> +		  aarch64_opnd_info *info, const aarch64_insn code,
> +		  const aarch64_inst *inst)
> +{
> +  uint32_t esize;
> +  aarch64_insn value;
> +
> +  value = extract_fields (code, 0, 3, self->fields[0], self->fields[1],
> +			  self->fields[2]);
> +  esize = aarch64_get_qualifier_esize (inst->operands[0].qualifier);
> +  return decode_limm (esize, value, &info->imm.value);
> +}
>  
> +/* Decode a logical immediate for the BIC alias of AND (etc.).  */
> +int
> +aarch64_ext_inv_limm (const aarch64_operand *self,
> +		      aarch64_opnd_info *info, const aarch64_insn code,
> +		      const aarch64_inst *inst)
> +{
> +  if (!aarch64_ext_limm (self, info, code, inst))
> +    return 0;
> +  info->imm.value = ~info->imm.value;
>    return 1;
>  }
>  
> @@ -1404,6 +1424,47 @@ aarch64_ext_sve_addr_zz_uxtw (const aarch64_operand *self,
>    return aarch64_ext_sve_addr_zz (self, info, code, AARCH64_MOD_UXTW);
>  }
>  
> +/* Finish decoding an SVE arithmetic immediate, given that INFO already
> +   has the raw field value and that the low 8 bits decode to VALUE.  */
> +static int
> +decode_sve_aimm (aarch64_opnd_info *info, int64_t value)
> +{
> +  info->shifter.kind = AARCH64_MOD_LSL;
> +  info->shifter.amount = 0;
> +  if (info->imm.value & 0x100)
> +    {
> +      if (value == 0)
> +	/* Decode 0x100 as #0, LSL #8.  */
> +	info->shifter.amount = 8;
> +      else
> +	value *= 256;
> +    }
> +  info->shifter.operator_present = (info->shifter.amount != 0);
> +  info->shifter.amount_present = (info->shifter.amount != 0);
> +  info->imm.value = value;
> +  return 1;
> +}
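
Concretely, for the unsigned ADD/SUB form: a raw field of 0x101 decodes
to #256 with no printed shift, while 0x100 keeps the explicit
"#0, LSL #8" spelling — matching the preferred-disassembly rule from
the cover note.  A sketch of that case analysis (not patch code):

  #include <assert.h>
  #include <stdint.h>

  /* RAW is the 9-bit field value; unsigned (ADD/SUB) form only.  */
  static void
  decode_aimm_field (unsigned raw, int64_t *value, int *shift)
  {
    *value = raw & 0xff;
    *shift = 0;
    if (raw & 0x100)
      {
        if (*value == 0)
          *shift = 8;        /* keep the redundant "#0, LSL #8" */
        else
          *value *= 256;     /* fold the shift into the value */
      }
  }

  int
  main (void)
  {
    int64_t value; int shift;
    decode_aimm_field (0x101, &value, &shift);
    assert (value == 256 && shift == 0);   /* printed as "#256" */
    decode_aimm_field (0x100, &value, &shift);
    assert (value == 0 && shift == 8);     /* printed as "#0, LSL #8" */
    return 0;
  }
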
> +
> +/* Decode an SVE ADD/SUB immediate.  */
> +int
> +aarch64_ext_sve_aimm (const aarch64_operand *self,
> +		      aarch64_opnd_info *info, const aarch64_insn code,
> +		      const aarch64_inst *inst)
> +{
> +  return (aarch64_ext_imm (self, info, code, inst)
> +	  && decode_sve_aimm (info, (uint8_t) info->imm.value));
> +}
> +
> +/* Decode an SVE CPY/DUP immediate.  */
> +int
> +aarch64_ext_sve_asimm (const aarch64_operand *self,
> +		       aarch64_opnd_info *info, const aarch64_insn code,
> +		       const aarch64_inst *inst)
> +{
> +  return (aarch64_ext_imm (self, info, code, inst)
> +	  && decode_sve_aimm (info, (int8_t) info->imm.value));
> +}
> +
>  /* Decode Zn[MM], where MM has a 7-bit triangular encoding.  The fields
>     array specifies which field to use for Zn.  MM is encoded in the
>     concatenation of imm5 and SVE_tszh, with imm5 being the less
> @@ -1425,6 +1486,17 @@ aarch64_ext_sve_index (const aarch64_operand *self,
>    return 1;
>  }
>  
> +/* Decode a logical immediate for the MOV alias of SVE DUPM.  */
> +int
> +aarch64_ext_sve_limm_mov (const aarch64_operand *self,
> +			  aarch64_opnd_info *info, const aarch64_insn code,
> +			  const aarch64_inst *inst)
> +{
> +  int esize = aarch64_get_qualifier_esize (inst->operands[0].qualifier);
> +  return (aarch64_ext_limm (self, info, code, inst)
> +	  && aarch64_sve_dupm_mov_immediate_p (info->imm.value, esize));
> +}
> +
>  /* Decode {Zn.<T> - Zm.<T>}.  The fields array specifies which field
>     to use for Zn.  The opcode-dependent value specifies the number
>     of registers in the list.  */
> @@ -1457,6 +1529,44 @@ aarch64_ext_sve_scale (const aarch64_operand *self,
>    info->shifter.amount_present = (val != 0);
>    return 1;
>  }
> +
> +/* Return the top set bit in VALUE, which is expected to be relatively
> +   small.  */
> +static uint64_t
> +get_top_bit (uint64_t value)
> +{
> +  while ((value & -value) != value)
> +    value -= value & -value;
> +  return value;
> +}
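
This relies on the two's-complement identity that value & -value
isolates the lowest set bit, so the loop strips low bits until only the
highest remains — e.g. 0b101100 -> 0b100000.  A standalone check
(sketch of the same technique):

  #include <assert.h>
  #include <stdint.h>

  static uint64_t
  top_bit (uint64_t value)
  {
    while ((value & -value) != value)  /* more than one bit set? */
      value -= value & -value;         /* clear the lowest set bit */
    return value;
  }

  int
  main (void)
  {
    assert (top_bit (0x2c) == 0x20);   /* 0b101100 -> 0b100000 */
    assert (top_bit (1) == 1);
    return 0;
  }
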
> +
> +/* Decode an SVE shift-left immediate.  */
> +int
> +aarch64_ext_sve_shlimm (const aarch64_operand *self,
> +			aarch64_opnd_info *info, const aarch64_insn code,
> +			const aarch64_inst *inst)
> +{
> +  if (!aarch64_ext_imm (self, info, code, inst)
> +      || info->imm.value == 0)
> +    return 0;
> +
> +  info->imm.value -= get_top_bit (info->imm.value);
> +  return 1;
> +}
> +
> +/* Decode an SVE shift-right immediate.  */
> +int
> +aarch64_ext_sve_shrimm (const aarch64_operand *self,
> +			aarch64_opnd_info *info, const aarch64_insn code,
> +			const aarch64_inst *inst)
> +{
> +  if (!aarch64_ext_imm (self, info, code, inst)
> +      || info->imm.value == 0)
> +    return 0;
> +
> +  info->imm.value = get_top_bit (info->imm.value) * 2 - info->imm.value;
> +  return 1;
> +}
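
To see how the tsz:imm5 shift encodings round-trip (my own worked
example, for .S elements with esize 4): LSL #3 is inserted as
8*4 + 3 = 35 and LSR #3 as 16*4 - 3 = 61; on extraction the top set bit
(32 in both cases) recovers the element size, giving 35 - 32 = 3 and
2*32 - 61 = 3 respectively:

  #include <assert.h>

  int
  main (void)
  {
    unsigned esize = 4;                /* .S elements */
    unsigned shl = 8 * esize + 3;      /* LSL #3 -> 35 = 0b100011 */
    unsigned shr = 16 * esize - 3;     /* LSR #3 -> 61 = 0b111101 */
    unsigned top = 32;                 /* top set bit of both encodings */

    assert (shl - top == 3);           /* shlimm extraction */
    assert (2 * top - shr == 3);       /* shrimm extraction */
    return 0;
  }
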
>  
>  /* Bitfields that are commonly used to encode certain operands' information
>     may be partially used as part of the base opcode in some instructions.
> diff --git a/opcodes/aarch64-dis.h b/opcodes/aarch64-dis.h
> index 5619877..10983d1 100644
> --- a/opcodes/aarch64-dis.h
> +++ b/opcodes/aarch64-dis.h
> @@ -76,6 +76,7 @@ AARCH64_DECL_OPD_EXTRACTOR (ext_fpimm);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_fbits);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_aimm);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_limm);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_inv_limm);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_ft);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_addr_simple);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_addr_regoff);
> @@ -101,9 +102,14 @@ AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zi_u5);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zz_lsl);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zz_sxtw);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_addr_zz_uxtw);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_aimm);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_asimm);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_index);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_limm_mov);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_reglist);
>  AARCH64_DECL_OPD_EXTRACTOR (ext_sve_scale);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_shlimm);
> +AARCH64_DECL_OPD_EXTRACTOR (ext_sve_shrimm);
>  
>  #undef AARCH64_DECL_OPD_EXTRACTOR
>  
> diff --git a/opcodes/aarch64-opc-2.c b/opcodes/aarch64-opc-2.c
> index a72f577..d86e7dc 100644
> --- a/opcodes/aarch64-opc-2.c
> +++ b/opcodes/aarch64-opc-2.c
> @@ -82,6 +82,7 @@ const struct aarch64_operand aarch64_operands[] =
>    {AARCH64_OPND_CLASS_IMMEDIATE, "BIT_NUM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_b5, FLD_b40}, "the bit number to be tested"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "EXCEPTION", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_imm16}, "a 16-bit unsigned immediate"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "CCMP_IMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_imm5}, "a 5-bit unsigned immediate"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SIMM5", OPD_F_SEXT | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_imm5}, "a 5-bit signed immediate"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "NZCV", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_nzcv}, "a flag bit specifier giving an alternative value for each flag"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "LIMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_N,FLD_immr,FLD_imms}, "Logical immediate"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "AIMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_shift,FLD_imm12}, "a 12-bit unsigned immediate with optional left shift of 12 bits"},
> @@ -150,6 +151,11 @@ const struct aarch64_operand aarch64_operands[] =
>    {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZZ_LSL", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
>    {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZZ_SXTW", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
>    {AARCH64_OPND_CLASS_ADDRESS, "SVE_ADDR_ZZ_UXTW", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zn,FLD_SVE_Zm_16}, "an address with a vector register offset"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_AIMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm9}, "a 9-bit unsigned arithmetic operand"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_ASIMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm9}, "a 9-bit signed arithmetic operand"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_INV_LIMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_N,FLD_SVE_immr,FLD_SVE_imms}, "an inverted 13-bit logical immediate"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_LIMM", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_N,FLD_SVE_immr,FLD_SVE_imms}, "a 13-bit logical immediate"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_LIMM_MOV", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_N,FLD_SVE_immr,FLD_SVE_imms}, "a 13-bit logical move immediate"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PATTERN", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_pattern}, "an enumeration value such as POW2"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PATTERN_SCALED", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_pattern}, "an enumeration value such as POW2"},
>    {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_PRFOP", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_prfop}, "an enumeration value such as PLDL1KEEP"},
> @@ -161,6 +167,18 @@ const struct aarch64_operand aarch64_operands[] =
>    {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pm", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pm}, "an SVE predicate register"},
>    {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pn", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pn}, "an SVE predicate register"},
>    {AARCH64_OPND_CLASS_PRED_REG, "SVE_Pt", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Pt}, "an SVE predicate register"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SHLIMM_PRED", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_tszh,FLD_SVE_imm5}, "a shift-left immediate operand"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SHLIMM_UNPRED", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_tszh,FLD_imm5}, "a shift-left immediate operand"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SHRIMM_PRED", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_tszh,FLD_SVE_imm5}, "a shift-right immediate operand"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SHRIMM_UNPRED", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_tszh,FLD_imm5}, "a shift-right immediate operand"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SIMM5", OPD_F_SEXT | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm5}, "a 5-bit signed immediate"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SIMM5B", OPD_F_SEXT | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm5b}, "a 5-bit signed immediate"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SIMM6", OPD_F_SEXT | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imms}, "a 6-bit signed immediate"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_SIMM8", OPD_F_SEXT | OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm8}, "an 8-bit signed immediate"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_UIMM3", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm3}, "a 3-bit unsigned immediate"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_UIMM7", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm7}, "a 7-bit unsigned immediate"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_UIMM8", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_imm8}, "an 8-bit unsigned immediate"},
> +  {AARCH64_OPND_CLASS_IMMEDIATE, "SVE_UIMM8_53", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_imm5,FLD_imm3}, "an 8-bit unsigned immediate"},
>    {AARCH64_OPND_CLASS_SVE_REG, "SVE_Za_5", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Za_5}, "an SVE vector register"},
>    {AARCH64_OPND_CLASS_SVE_REG, "SVE_Za_16", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Za_16}, "an SVE vector register"},
>    {AARCH64_OPND_CLASS_SVE_REG, "SVE_Zd", OPD_F_HAS_INSERTER | OPD_F_HAS_EXTRACTOR, {FLD_SVE_Zd}, "an SVE vector register"},
> diff --git a/opcodes/aarch64-opc.c b/opcodes/aarch64-opc.c
> index d0959b5..dec7e06 100644
> --- a/opcodes/aarch64-opc.c
> +++ b/opcodes/aarch64-opc.c
> @@ -264,6 +264,7 @@ const aarch64_field fields[] =
>      { 31,  1 },	/* b5: in the test bit and branch instructions.  */
>      { 19,  5 },	/* b40: in the test bit and branch instructions.  */
>      { 10,  6 },	/* scale: in the fixed-point scalar to fp converting inst.  */
> +    { 17,  1 }, /* SVE_N: SVE equivalent of N.  */
>      {  0,  4 }, /* SVE_Pd: p0-p15, bits [3,0].  */
>      { 10,  3 }, /* SVE_Pg3: p0-p7, bits [12,10].  */
>      {  5,  4 }, /* SVE_Pg4_5: p0-p15, bits [8,5].  */
> @@ -279,8 +280,16 @@ const aarch64_field fields[] =
>      { 16,  5 }, /* SVE_Zm_16: SVE vector register, bits [20,16]. */
>      {  5,  5 }, /* SVE_Zn: SVE vector register, bits [9,5].  */
>      {  0,  5 }, /* SVE_Zt: SVE vector register, bits [4,0].  */
> +    { 16,  3 }, /* SVE_imm3: 3-bit immediate field.  */
>      { 16,  4 }, /* SVE_imm4: 4-bit immediate field.  */
> +    {  5,  5 }, /* SVE_imm5: 5-bit immediate field.  */
> +    { 16,  5 }, /* SVE_imm5b: secondary 5-bit immediate field.  */
>      { 16,  6 }, /* SVE_imm6: 6-bit immediate field.  */
> +    { 14,  7 }, /* SVE_imm7: 7-bit immediate field.  */
> +    {  5,  8 }, /* SVE_imm8: 8-bit immediate field.  */
> +    {  5,  9 }, /* SVE_imm9: 9-bit immediate field.  */
> +    { 11,  6 }, /* SVE_immr: SVE equivalent of immr.  */
> +    {  5,  6 }, /* SVE_imms: SVE equivalent of imms.  */
>      { 10,  2 }, /* SVE_msz: 2-bit shift amount for ADR.  */
>      {  5,  5 }, /* SVE_pattern: vector pattern enumeration.  */
>      {  0,  4 }, /* SVE_prfop: prefetch operation for SVE PRF[BHWD].  */
> @@ -1374,9 +1383,10 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
>  				  const aarch64_opcode *opcode,
>  				  aarch64_operand_error *mismatch_detail)
>  {
> -  unsigned num, modifiers;
> +  unsigned num, modifiers, shift;
>    unsigned char size;
>    int64_t imm, min_value, max_value;
> +  uint64_t uvalue, mask;
>    const aarch64_opnd_info *opnd = opnds + idx;
>    aarch64_opnd_qualifier_t qualifier = opnd->qualifier;
>  
> @@ -1977,6 +1987,10 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
>  	case AARCH64_OPND_UIMM7:
>  	case AARCH64_OPND_UIMM3_OP1:
>  	case AARCH64_OPND_UIMM3_OP2:
> +	case AARCH64_OPND_SVE_UIMM3:
> +	case AARCH64_OPND_SVE_UIMM7:
> +	case AARCH64_OPND_SVE_UIMM8:
> +	case AARCH64_OPND_SVE_UIMM8_53:
>  	  size = get_operand_fields_width (get_operand_from_code (type));
>  	  assert (size < 32);
>  	  if (!value_fit_unsigned_field_p (opnd->imm.value, size))
> @@ -1987,6 +2001,22 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
>  	    }
>  	  break;
>  
> +	case AARCH64_OPND_SIMM5:
> +	case AARCH64_OPND_SVE_SIMM5:
> +	case AARCH64_OPND_SVE_SIMM5B:
> +	case AARCH64_OPND_SVE_SIMM6:
> +	case AARCH64_OPND_SVE_SIMM8:
> +	  size = get_operand_fields_width (get_operand_from_code (type));
> +	  assert (size < 32);
> +	  if (!value_fit_signed_field_p (opnd->imm.value, size))
> +	    {
> +	      set_imm_out_of_range_error (mismatch_detail, idx,
> +					  -(1 << (size - 1)),
> +					  (1 << (size - 1)) - 1);
> +	      return 0;
> +	    }
> +	  break;
> +
>  	case AARCH64_OPND_WIDTH:
>  	  assert (idx > 1 && opnds[idx-1].type == AARCH64_OPND_IMM
>  		  && opnds[0].type == AARCH64_OPND_Rd);
> @@ -2001,6 +2031,7 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
>  	  break;
>  
>  	case AARCH64_OPND_LIMM:
> +	case AARCH64_OPND_SVE_LIMM:
>  	  {
>  	    int esize = aarch64_get_qualifier_esize (opnds[0].qualifier);
>  	    uint64_t uimm = opnd->imm.value;
> @@ -2171,6 +2202,90 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
>  	    }
>  	  break;
>  
> +	case AARCH64_OPND_SVE_AIMM:
> +	  min_value = 0;
> +	sve_aimm:
> +	  assert (opnd->shifter.kind == AARCH64_MOD_LSL);
> +	  size = aarch64_get_qualifier_esize (opnds[0].qualifier);
> +	  mask = ~((uint64_t) -1 << (size * 4) << (size * 4));
> +	  uvalue = opnd->imm.value;
> +	  shift = opnd->shifter.amount;
> +	  if (size == 1)
> +	    {
> +	      if (shift != 0)
> +		{
> +		  set_other_error (mismatch_detail, idx,
> +				   _("no shift amount allowed for"
> +				     " 8-bit constants"));
> +		  return 0;
> +		}
> +	    }
> +	  else
> +	    {
> +	      if (shift != 0 && shift != 8)
> +		{
> +		  set_other_error (mismatch_detail, idx,
> +				   _("shift amount should be 0 or 8"));
> +		  return 0;
> +		}
> +	      if (shift == 0 && (uvalue & 0xff) == 0)
> +		{
> +		  shift = 8;
> +		  uvalue = (int64_t) uvalue / 256;
> +		}
> +	    }
> +	  mask >>= shift;
> +	  if ((uvalue & mask) != uvalue && (uvalue | ~mask) != uvalue)
> +	    {
> +	      set_other_error (mismatch_detail, idx,
> +			       _("immediate too big for element size"));
> +	      return 0;
> +	    }
> +	  uvalue = (uvalue - min_value) & mask;
> +	  if (uvalue > 0xff)
> +	    {
> +	      set_other_error (mismatch_detail, idx,
> +			       _("invalid arithmetic immediate"));
> +	      return 0;
> +	    }
> +	  break;
> +
> +	case AARCH64_OPND_SVE_ASIMM:
> +	  min_value = -128;
> +	  goto sve_aimm;
> +
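
The shared check above normalises a shiftless multiple of 256 to the
LSL #8 form and then biases by min_value, so a single 0..255 range test
serves both operands: [0, 255] (scaled by the shift) for the unsigned
SVE_AIMM and [-128, 127] for the signed SVE_ASIMM.  The bias trick in
isolation (a sketch; the real code also masks to the element width):

  #include <assert.h>
  #include <stdint.h>

  static int
  fits_after_bias (int64_t value, int64_t min_value)
  {
    return (uint64_t) (value - min_value) <= 0xff;
  }

  int
  main (void)
  {
    assert (fits_after_bias (255, 0));       /* SVE_AIMM: 0..255 */
    assert (!fits_after_bias (256, 0));
    assert (fits_after_bias (-128, -128));   /* SVE_ASIMM: -128..127 */
    assert (!fits_after_bias (128, -128));
    return 0;
  }
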
> +	case AARCH64_OPND_SVE_INV_LIMM:
> +	  {
> +	    int esize = aarch64_get_qualifier_esize (opnds[0].qualifier);
> +	    uint64_t uimm = ~opnd->imm.value;
> +	    if (!aarch64_logical_immediate_p (uimm, esize, NULL))
> +	      {
> +		set_other_error (mismatch_detail, idx,
> +				 _("immediate out of range"));
> +		return 0;
> +	      }
> +	  }
> +	  break;
> +
> +	case AARCH64_OPND_SVE_LIMM_MOV:
> +	  {
> +	    int esize = aarch64_get_qualifier_esize (opnds[0].qualifier);
> +	    uint64_t uimm = opnd->imm.value;
> +	    if (!aarch64_logical_immediate_p (uimm, esize, NULL))
> +	      {
> +		set_other_error (mismatch_detail, idx,
> +				 _("immediate out of range"));
> +		return 0;
> +	      }
> +	    if (!aarch64_sve_dupm_mov_immediate_p (uimm, esize))
> +	      {
> +		set_other_error (mismatch_detail, idx,
> +				 _("invalid replicated MOV immediate"));
> +		return 0;
> +	      }
> +	  }
> +	  break;
> +
>  	case AARCH64_OPND_SVE_PATTERN_SCALED:
>  	  assert (opnd->shifter.kind == AARCH64_MOD_MUL);
>  	  if (!value_in_range_p (opnd->shifter.amount, 1, 16))
> @@ -2180,6 +2295,27 @@ operand_general_constraint_met_p (const aarch64_opnd_info *opnds, int idx,
>  	    }
>  	  break;
>  
> +	case AARCH64_OPND_SVE_SHLIMM_PRED:
> +	case AARCH64_OPND_SVE_SHLIMM_UNPRED:
> +	  size = aarch64_get_qualifier_esize (opnds[idx - 1].qualifier);
> +	  if (!value_in_range_p (opnd->imm.value, 0, 8 * size - 1))
> +	    {
> +	      set_imm_out_of_range_error (mismatch_detail, idx,
> +					  0, 8 * size - 1);
> +	      return 0;
> +	    }
> +	  break;
> +
> +	case AARCH64_OPND_SVE_SHRIMM_PRED:
> +	case AARCH64_OPND_SVE_SHRIMM_UNPRED:
> +	  size = aarch64_get_qualifier_esize (opnds[idx - 1].qualifier);
> +	  if (!value_in_range_p (opnd->imm.value, 1, 8 * size))
> +	    {
> +	      set_imm_out_of_range_error (mismatch_detail, idx, 1, 8 * size);
> +	      return 0;
> +	    }
> +	  break;
> +
>  	default:
>  	  break;
>  	}
> @@ -2953,6 +3089,19 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
>      case AARCH64_OPND_IMMR:
>      case AARCH64_OPND_IMMS:
>      case AARCH64_OPND_FBITS:
> +    case AARCH64_OPND_SIMM5:
> +    case AARCH64_OPND_SVE_SHLIMM_PRED:
> +    case AARCH64_OPND_SVE_SHLIMM_UNPRED:
> +    case AARCH64_OPND_SVE_SHRIMM_PRED:
> +    case AARCH64_OPND_SVE_SHRIMM_UNPRED:
> +    case AARCH64_OPND_SVE_SIMM5:
> +    case AARCH64_OPND_SVE_SIMM5B:
> +    case AARCH64_OPND_SVE_SIMM6:
> +    case AARCH64_OPND_SVE_SIMM8:
> +    case AARCH64_OPND_SVE_UIMM3:
> +    case AARCH64_OPND_SVE_UIMM7:
> +    case AARCH64_OPND_SVE_UIMM8:
> +    case AARCH64_OPND_SVE_UIMM8_53:
>        snprintf (buf, size, "#%" PRIi64, opnd->imm.value);
>        break;
>  
> @@ -3021,6 +3170,9 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
>      case AARCH64_OPND_LIMM:
>      case AARCH64_OPND_AIMM:
>      case AARCH64_OPND_HALF:
> +    case AARCH64_OPND_SVE_INV_LIMM:
> +    case AARCH64_OPND_SVE_LIMM:
> +    case AARCH64_OPND_SVE_LIMM_MOV:
>        if (opnd->shifter.amount)
>  	snprintf (buf, size, "#0x%" PRIx64 ", lsl #%" PRIi64, opnd->imm.value,
>  		  opnd->shifter.amount);
> @@ -3039,6 +3191,15 @@ aarch64_print_operand (char *buf, size_t size, bfd_vma pc,
>  		  opnd->shifter.amount);
>        break;
>  
> +    case AARCH64_OPND_SVE_AIMM:
> +    case AARCH64_OPND_SVE_ASIMM:
> +      if (opnd->shifter.amount)
> +	snprintf (buf, size, "#%" PRIi64 ", lsl #%" PRIi64, opnd->imm.value,
> +		  opnd->shifter.amount);
> +      else
> +	snprintf (buf, size, "#%" PRIi64, opnd->imm.value);
> +      break;
> +
>      case AARCH64_OPND_FPIMM:
>      case AARCH64_OPND_SIMD_FPIMM:
>        switch (aarch64_get_qualifier_esize (opnds[0].qualifier))
> @@ -3967,6 +4128,33 @@ verify_ldpsw (const struct aarch64_opcode * opcode ATTRIBUTE_UNUSED,
>    return TRUE;
>  }
>  
> +/* Return true if VALUE cannot be moved into an SVE register using DUP
> +   (with any element size, not just ESIZE) and if using DUPM would
> +   therefore be OK.  ESIZE is the number of bytes in the immediate.  */
> +
> +bfd_boolean
> +aarch64_sve_dupm_mov_immediate_p (uint64_t uvalue, int esize)
> +{
> +  int64_t svalue = uvalue;
> +  uint64_t upper = (uint64_t) -1 << (esize * 4) << (esize * 4);
> +
> +  if ((uvalue & ~upper) != uvalue && (uvalue | upper) != uvalue)
> +    return FALSE;
> +  if (esize <= 4 || (uint32_t) uvalue == (uint32_t) (uvalue >> 32))
> +    {
> +      svalue = (int32_t) uvalue;
> +      if (esize <= 2 || (uint16_t) uvalue == (uint16_t) (uvalue >> 16))
> +	{
> +	  svalue = (int16_t) uvalue;
> +	  if (esize == 1 || (uint8_t) uvalue == (uint8_t) (uvalue >> 8))
> +	    return FALSE;
> +	}
> +    }
> +  if ((svalue & 0xff) == 0)
> +    svalue /= 256;
> +  return svalue < -128 || svalue >= 128;
> +}
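
Two hand-checked examples of how this drives the MOV alias for .H
elements (esize 2): 0x0f0f repeats the byte 0x0f, so DUP Zd.B, #15
covers it and the function returns false; 0x03f0 is a valid logical
immediate (a rotated run of six ones) that no DUP encoding reaches, so
it returns true and MOV maps to DUPM.  A usage sketch (assuming the
function is linked in from libopcodes; bfd_boolean shown as int):

  #include <assert.h>
  #include <stdint.h>

  extern int aarch64_sve_dupm_mov_immediate_p (uint64_t, int);

  int
  main (void)
  {
    assert (!aarch64_sve_dupm_mov_immediate_p (0x0f0f, 2)); /* DUP wins */
    assert (aarch64_sve_dupm_mov_immediate_p (0x03f0, 2));  /* DUPM only */
    return 0;
  }
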
> +
>  /* Include the opcode description table as well as the operand description
>     table.  */
>  #define VERIFIER(x) verify_##x
> diff --git a/opcodes/aarch64-opc.h b/opcodes/aarch64-opc.h
> index e823146..087376e 100644
> --- a/opcodes/aarch64-opc.h
> +++ b/opcodes/aarch64-opc.h
> @@ -91,6 +91,7 @@ enum aarch64_field_kind
>    FLD_b5,
>    FLD_b40,
>    FLD_scale,
> +  FLD_SVE_N,
>    FLD_SVE_Pd,
>    FLD_SVE_Pg3,
>    FLD_SVE_Pg4_5,
> @@ -106,8 +107,16 @@ enum aarch64_field_kind
>    FLD_SVE_Zm_16,
>    FLD_SVE_Zn,
>    FLD_SVE_Zt,
> +  FLD_SVE_imm3,
>    FLD_SVE_imm4,
> +  FLD_SVE_imm5,
> +  FLD_SVE_imm5b,
>    FLD_SVE_imm6,
> +  FLD_SVE_imm7,
> +  FLD_SVE_imm8,
> +  FLD_SVE_imm9,
> +  FLD_SVE_immr,
> +  FLD_SVE_imms,
>    FLD_SVE_msz,
>    FLD_SVE_pattern,
>    FLD_SVE_prfop,
> diff --git a/opcodes/aarch64-tbl.h b/opcodes/aarch64-tbl.h
> index ac7ccf0..d743e3b 100644
> --- a/opcodes/aarch64-tbl.h
> +++ b/opcodes/aarch64-tbl.h
> @@ -2761,6 +2761,8 @@ struct aarch64_opcode aarch64_opcode_table[] =
>        "a 16-bit unsigned immediate")					\
>      Y(IMMEDIATE, imm, "CCMP_IMM", 0, F(FLD_imm5),			\
>        "a 5-bit unsigned immediate")					\
> +    Y(IMMEDIATE, imm, "SIMM5", OPD_F_SEXT, F(FLD_imm5),			\
> +      "a 5-bit signed immediate")					\
>      Y(IMMEDIATE, imm, "NZCV", 0, F(FLD_nzcv),				\
>        "a flag bit specifier giving an alternative value for each flag")	\
>      Y(IMMEDIATE, limm, "LIMM", 0, F(FLD_N,FLD_immr,FLD_imms),		\
> @@ -2925,6 +2927,19 @@ struct aarch64_opcode aarch64_opcode_table[] =
>      Y(ADDRESS, sve_addr_zz_uxtw, "SVE_ADDR_ZZ_UXTW", 0,			\
>        F(FLD_SVE_Zn,FLD_SVE_Zm_16),					\
>        "an address with a vector register offset")			\
> +    Y(IMMEDIATE, sve_aimm, "SVE_AIMM", 0, F(FLD_SVE_imm9),		\
> +      "a 9-bit unsigned arithmetic operand")				\
> +    Y(IMMEDIATE, sve_asimm, "SVE_ASIMM", 0, F(FLD_SVE_imm9),		\
> +      "a 9-bit signed arithmetic operand")				\
> +    Y(IMMEDIATE, inv_limm, "SVE_INV_LIMM", 0,				\
> +      F(FLD_SVE_N,FLD_SVE_immr,FLD_SVE_imms),				\
> +      "an inverted 13-bit logical immediate")				\
> +    Y(IMMEDIATE, limm, "SVE_LIMM", 0,					\
> +      F(FLD_SVE_N,FLD_SVE_immr,FLD_SVE_imms),				\
> +      "a 13-bit logical immediate")					\
> +    Y(IMMEDIATE, sve_limm_mov, "SVE_LIMM_MOV", 0,			\
> +      F(FLD_SVE_N,FLD_SVE_immr,FLD_SVE_imms),				\
> +      "a 13-bit logical move immediate")				\
>      Y(IMMEDIATE, imm, "SVE_PATTERN", 0, F(FLD_SVE_pattern),		\
>        "an enumeration value such as POW2")				\
>      Y(IMMEDIATE, sve_scale, "SVE_PATTERN_SCALED", 0,			\
> @@ -2947,6 +2962,30 @@ struct aarch64_opcode aarch64_opcode_table[] =
>        "an SVE predicate register")					\
>      Y(PRED_REG, regno, "SVE_Pt", 0, F(FLD_SVE_Pt),			\
>        "an SVE predicate register")					\
> +    Y(IMMEDIATE, sve_shlimm, "SVE_SHLIMM_PRED", 0,			\
> +      F(FLD_SVE_tszh,FLD_SVE_imm5), "a shift-left immediate operand")	\
> +    Y(IMMEDIATE, sve_shlimm, "SVE_SHLIMM_UNPRED", 0,			\
> +      F(FLD_SVE_tszh,FLD_imm5), "a shift-left immediate operand")	\
> +    Y(IMMEDIATE, sve_shrimm, "SVE_SHRIMM_PRED", 0,			\
> +      F(FLD_SVE_tszh,FLD_SVE_imm5), "a shift-right immediate operand")	\
> +    Y(IMMEDIATE, sve_shrimm, "SVE_SHRIMM_UNPRED", 0,			\
> +      F(FLD_SVE_tszh,FLD_imm5), "a shift-right immediate operand")	\
> +    Y(IMMEDIATE, imm, "SVE_SIMM5", OPD_F_SEXT, F(FLD_SVE_imm5),		\
> +      "a 5-bit signed immediate")					\
> +    Y(IMMEDIATE, imm, "SVE_SIMM5B", OPD_F_SEXT, F(FLD_SVE_imm5b),	\
> +      "a 5-bit signed immediate")					\
> +    Y(IMMEDIATE, imm, "SVE_SIMM6", OPD_F_SEXT, F(FLD_SVE_imms),		\
> +      "a 6-bit signed immediate")					\
> +    Y(IMMEDIATE, imm, "SVE_SIMM8", OPD_F_SEXT, F(FLD_SVE_imm8),		\
> +      "an 8-bit signed immediate")					\
> +    Y(IMMEDIATE, imm, "SVE_UIMM3", 0, F(FLD_SVE_imm3),			\
> +      "a 3-bit unsigned immediate")					\
> +    Y(IMMEDIATE, imm, "SVE_UIMM7", 0, F(FLD_SVE_imm7),			\
> +      "a 7-bit unsigned immediate")					\
> +    Y(IMMEDIATE, imm, "SVE_UIMM8", 0, F(FLD_SVE_imm8),			\
> +      "an 8-bit unsigned immediate")					\
> +    Y(IMMEDIATE, imm, "SVE_UIMM8_53", 0, F(FLD_imm5,FLD_imm3),		\
> +      "an 8-bit unsigned immediate")					\
>      Y(SVE_REG, regno, "SVE_Za_5", 0, F(FLD_SVE_Za_5),			\
>        "an SVE vector register")						\
>      Y(SVE_REG, regno, "SVE_Za_16", 0, F(FLD_SVE_Za_16),			\
> 


