This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.



Re: [try 2nd 5/8] Displaced stepping for Thumb 32-bit insns


On 07/16/2011 02:47 AM, Ulrich Weigand wrote:
> Yao Qi wrote:
> 
>> On 05/18/2011 01:14 AM, Ulrich Weigand wrote:
>>> - However, you cannot just transform a PLD/PLI "literal" (i.e. PC + immediate)
>>>   into an "immediate" (i.e. register + immediate) version, since in Thumb
>>>   mode the "literal" version supports a 12-bit immediate, while the immediate
>>>   version only supports an 8-bit immediate.
>>>
>>>   I guess you could either add the immediate to the PC during preparation
>>>   stage and then use an "immediate" instruction with immediate zero, or
>>>   else load the immediate into a second register and use a "register"
>>>   version of the instruction.
>>>
>>
>> The former may not be correct.  PC should be set at the address of `copy
>> area' in displaced stepping, instead of any other arbitrary values.  The
>> alternative to the former approach is to compute the new immediate value
>> according to the new PC value we will set (new PC value is
>> dsc->scratch_base).  However, in this way, we have to worry about the
>> overflow of new computed 12-bit immediate.
>>
>> The latter one sounds better, because we don't have to worry about
>> overflow problem, and cleanup_preload can be still used as cleanup
>> routine in this case.
> 
> OK, this looks good to me now.
> 
>>> This doesn't look right: you're replacing the RN register if it is anything
>>> *but* 15 -- but those cases do not need to be replaced!
>>>
>>
>> Oh, sorry, it is a logic error.  The code should be like
>>
>> if (rn != ARM_PC_REGNUM)
>>   return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "copro
>> load/store", dsc);
> 
> Hmm, it's still the wrong way in this patch?
> 

Sorry, fixed.

> 
>>>> +    case 2: /* op1 = 2 */
>>>> +      if (op) /* Branch and misc control.  */
>>>> +	{
>>>> +	  if (bit (insn2, 14)) /* BLX/BL */
>>>> +	    err = thumb2_copy_b_bl_blx (gdbarch, insn1, insn2, regs, dsc);
>>>> +	  else if (!bits (insn2, 12, 14) && bits (insn1, 8, 10) != 0x7)
>>> I don't understand this condition, but it looks wrong to me ...
>>>
>>
>> This condition is about "Conditional Branch".  The 2nd half of condition
>> should be "bits (insn1, 7, 9) != 0x7", corresponding to the first line
>> of table A6-13 "op1 = 0x0, op is not x111xxx".
> 
> But "!bits (insn2, 12, 14)" doesn't say "op1 = 0x0" either ...  Since we
> already know bit 14 is 0, this should probably just check for bit 12.

OK, the condition checking is changed from "!bits (insn2, 12, 14)" to
"!bit (insn2, 12)".

> Some more comments on the latest patch.  There's a couple of issues I
> had overlooked in the previous review, in particular handling of the
> load/store instructions.  Most of the rest is just minor things ...
> 
> 
>> +static int
>> +thumb2_copy_copro_load_store (struct gdbarch *gdbarch, uint16_t insn1,
>> +			      uint16_t insn2, struct regcache *regs,
>> +			      struct displaced_step_closure *dsc)
>> +{
>> +  unsigned int rn = bits (insn1, 0, 3);
>> +
>> +  if (rn == ARM_PC_REGNUM)
>> +    return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
>> +					"copro load/store", dsc);
> 
> This still needs to be rn != ARM_PC_REGNUM
> 

Oh, sorry for missing this one in the last patch.  Fixed.

>> +  if (debug_displaced)
>> +    fprintf_unfiltered (gdb_stdlog, "displaced: copying coprocessor "
>> +			"load/store insn %.4x%.4x\n", insn1, insn2);
>> +
>> +  dsc->modinsn[0] = insn1 & 0xfff0;
>> +  dsc->modinsn[1] = insn2;
>> +  dsc->numinsns = 2;
>> +
>> +  install_copro_load_store (gdbarch, regs, dsc, bit (insn1, 9), rn);
> 
> Why bit 9?  Isn't the writeback bit bit 5 here?  But anyway, those
> instructions we support here in Thumb mode (LDC/LDC2, VLDR) don't
> support writeback anyway.  It's probably best to just pass 0.
> 

Right, let us pass 0 for writeback in this function.

>> +static int
>> +thumb2_copy_b_bl_blx (struct gdbarch *gdbarch, uint16_t insn1,
>> +		      uint16_t insn2, struct regcache *regs,
>> +		      struct displaced_step_closure *dsc)
>> +{
>> +  int link = bit (insn2, 14);
>> +  int exchange = link && !bit (insn2, 12);
>> +  int cond = INST_AL;
>> +  long offset =0;
> Space after =
> 

Fixed.

> 
>> +thumb2_copy_load_store (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2,
>> +			struct regcache *regs,
>> +			struct displaced_step_closure *dsc, int load, int size,
>> +			int usermode, int writeback)
> 
> Looking at the store instructions (STR/STRB/STRH[T]), it would appear that none
> of them may use PC in Thumb mode.  Therefore, it seems that this routine should
> just handle loads (i.e. rename to thumb2_copy_load and remove the load argument).
> 
> There is another fundamental problem: The LDR "literal" Thumb encodings provide
> a "long-form" 12-bit immediate *and* an U bit.  However, the non-PC-relative
> "immediate" Thumb encodings only provide a U bit with the short 8-bit immediates;
> the 12-bit immediate form does not have a U bit (instead, bit 7 encodes whether
> the 12-bit or 8-bit form is in use).
> 
> This means that you cannot simply translate LDR literal into LDR immediate
> forms, but probably need to handle by loading the immediate into a register,
> similar to the preload case.
> 

OK, thumb2_copy_load_store is renamed to thumb2_copy_load_reg_imm, which
handles the non-literal cases, and a new function, thumb2_copy_load_literal,
handles the LDR "literal" instruction.

>> +{
>> +  int immed = !bit (insn1, 9);
> 
> This check looks incorrect.  E.g. LDR (register) also has bit 9 equals zero.
> There needs to be a more complex decoding step somewhere, either in the caller,
> or directly in here.  (Note that decoding immediate, usermode, and writeback
> flags are closely coupled, so this should probably be done in the same location.)
> 
>> +  unsigned int rt = bits (insn2, 12, 15);
>> +  unsigned int rn = bits (insn1, 0, 3);
>> +  unsigned int rm = bits (insn2, 0, 3);  /* Only valid if !immed.  */
>> +
>> +  if (rt != ARM_PC_REGNUM && rn != ARM_PC_REGNUM
>> +      && (immed || rm != ARM_PC_REGNUM))
>> +    return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "load/store",
>> +					dsc);
>> +
>> +  if (debug_displaced)
>> +    fprintf_unfiltered (gdb_stdlog,
>> +			"displaced: copying %s%s r%d [r%d] insn %.4x%.4x\n",
>> +			load ? (size == 1 ? "ldrb" : (size == 2 ? "ldrh" : "ldr"))
>> +			: (size == 1 ? "strb" : (size == 2 ? "strh" : "str")),
>> +			usermode ? "t" : "",
>> +			rt, rn, insn1, insn2);
>> +
>> +  install_load_store (gdbarch, regs, dsc, load, immed, writeback, size,
>> +		      usermode, rt, rm, rn);
>> +
>> +  if (load || rt != ARM_PC_REGNUM)
>> +    {
>> +      dsc->u.ldst.restore_r4 = 0;
>> +
>> +      if (immed)
>> +	/* {ldr,str}[b]<cond> rt, [rn, #imm], etc.
>> +	   ->
>> +	   {ldr,str}[b]<cond> r0, [r2, #imm].  */
>> +	{
>> +	  dsc->modinsn[0] = (insn1 & 0xfff0) | 0x2;
>> +	  dsc->modinsn[1] = insn2 & 0x0fff;
>> +	}
>> +      else
>> +	/* {ldr,str}[b]<cond> rt, [rn, rm], etc.
>> +	   ->
>> +	   {ldr,str}[b]<cond> r0, [r2, r3].  */
>> +	{
>> +	  dsc->modinsn[0] = (insn1 & 0xfff0) | 0x2;
>> +	  dsc->modinsn[1] = (insn2 & 0x0ff0) | 0x3;
>> +	}
>> +
>> +      dsc->numinsns = 2;
>> +    }
>> +  else
>> +    {
>> +      /* In Thumb-32 instructions, the behavior is unpredictable when Rt is
>> +	 PC, while the behavior is undefined when Rn is PC.  Shortly, neither
>> +	 Rt nor Rn can be PC.  */
>> +
>> +      gdb_assert (0);
>> +    }
> 
> See above, this should only be used for loads.
> 

This is removed.

> 
>> +static int
>> +thumb2_copy_block_xfer (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2,
>> +			struct regcache *regs,
>> +			struct displaced_step_closure *dsc)
>> +{
>> +  int rn = bits (insn1, 0, 3);
>> +  int load = bit (insn1, 4);
>> +  int writeback = bit (insn1, 5);
>> +
>> +  /* Block transfers which don't mention PC can be run directly
>> +     out-of-line.  */
>> +  if (rn != ARM_PC_REGNUM && (insn2 & 0x8000) == 0)
>> +    return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "ldm/stm", dsc);
>> +
>> +  if (rn == ARM_PC_REGNUM)
>> +    {
>> +      warning (_("displaced: Unpredictable LDM or STM with "
>> +		 "base register r15"));
>> +      return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
>> +					  "unpredictable ldm/stm", dsc);
>> +    }
>> +
>> +  if (debug_displaced)
>> +    fprintf_unfiltered (gdb_stdlog, "displaced: copying block transfer insn "
>> +			"%.4x%.4x\n", insn1, insn2);
>> +
>> +  /* Clear bit 13, since it should be always zero.  */
>> +  dsc->u.block.regmask = (insn2 & 0xdfff);
>> +  dsc->u.block.rn = rn;
>> +
>> +  dsc->u.block.load = bit (insn1, 4);
> We've already read that bit into "load".
> 

Fixed.

>> +  dsc->u.block.user = bit (insn1, 6);
> This must always be 0 -- we're never called otherwise.
> 

Fixed.

>> +static int
>> +thumb2_decode_dp_shift_reg (struct gdbarch *gdbarch, uint16_t insn1,
>> +			    uint16_t insn2,  struct regcache *regs,
>> +			    struct displaced_step_closure *dsc)
>> +{
>> +  /* Data processing (shift register) instructions can be grouped according to
>> +     their encondings:
> Typo.
>> +
>> +     1. Insn X Rn :inst1,3-0 Rd: insn2,8-11, Rm: insn2,3-0. Rd=15 & S=1, Insn Y.
>> +     Rn != PC, Rm ! = PC.
>> +     X: AND, Y: TST (REG)
>> +     X: EOR, Y: TEQ (REG)
>> +     X: ADD, Y: CMN (REG)
>> +     X: SUB, Y: CMP (REG)
>> +
>> +     2. Insn X Rn : ins1,3-0, Rm: insn2, 3-0; Rm! = PC, Rn != PC
>> +     Insn X: TST, TEQ, PKH, CMN, and CMP.
>> +
>> +     3. Insn X Rn:inst1,3-0 Rd:insn2,8-11, Rm:insn2, 3-0. Rn != PC, Rd != PC,
>> +     Rm != PC.
>> +     X: BIC, ADC, SBC, and RSB.
>> +
>> +     4. Insn X Rn:inst1,3-0 Rd:insn2,8-11, Rm:insn2,3-0.  Rd = 15, Insn Y.
>> +     X: ORR, Y: MOV (REG).
>> +     X: ORN, Y: MVN (REG).
>> +
>> +     5.  Insn X Rd: insn2, 8-11, Rm: insn2, 3-0.
>> +     X: MVN, Rd != PC, Rm != PC
>> +     X: MOV: Rd/Rm can be PC.
>> +
>> +     PC is only allowed to be used in instruction MOV.
>> +*/
> 
> Do we need this comment at all (except for the last sentence)?
> 
> 

I had left this comment there to remind myself which instructions were
still missing.  It becomes somewhat redundant once the work is done, so
I've removed everything except the last sentence.

>>  static int
>> +thumb_copy_pc_relative_32bit (struct gdbarch *gdbarch, uint16_t insn1,
>> +			      uint16_t insn2, struct regcache *regs,
>> +			      struct displaced_step_closure *dsc)
>> +{
>> +  unsigned int rd = bits (insn2, 8, 11);
>> +  /* Since immeidate has the same encoding in both ADR and ADD, so we simply
> Typo: immediate

Fixed.

>> +     extract raw immediate encoding rather than computing immediate.  When
>> +     generating ADD instruction, we can simply perform OR operation to set
>> +     immediate into ADD.  */
>> +  unsigned int imm_3_8 = insn2 & 0x70ff;
>> +  unsigned int imm_i = insn1 & 0x0400; /* Clear all bits except bit 10.  */
>> +
>> +  if (debug_displaced)
>> +    fprintf_unfiltered (gdb_stdlog,
>> +			"displaced: copying thumb adr r%d, #%d:%d insn %.4x%.4x\n",
>> +			rd, imm_i, imm_3_8, insn1, insn2);
>> +
>> +  /* Encoding T3: ADD Rd, Rd, #imm */
>> +  dsc->modinsn[0] = (0xf100 | rd | imm_i);
>> +  dsc->modinsn[1] = ((rd << 8) | imm_3_8);
> 
> Hmm.  So this handles the T3 encoding of ADR correctly.  However, in the T2
> encoding, we need to *subtract* the immediate from PC, so we really need to
> generate a SUB instead of an ADD as replacement ...
> 
> 

In the new patch we generate SUB (immediate) Encoding T3 for ADR Encoding T2.

>> +/* Copy Table Brach Byte/Halfword */
> Typo: Branch
> 
> 

Fixed.

>> +static int
>> +decode_thumb_32bit_ld_mem_hints (struct gdbarch *gdbarch,
>> +				 uint16_t insn1, uint16_t insn2,
>> +				 struct regcache *regs,
>> +				 struct displaced_step_closure *dsc)
>> +{
>> +  int rt = bits (insn2, 12, 15);
>> +  int rn = bits (insn1, 0, 3);
>> +  int op1 = bits (insn1, 7, 8);
>> +  int user_mode = (bits (insn2, 8, 11) == 0xe);
> 
> This is too simplistic.  The "long immediate" forms may just accidentally
> have 0xe in those bits -- they're part of the immediate there.   See above
> for the comments about computing immediate/writeback/usermode flags at
> the same location.
> 

This part is re-written to decode immediate/writeback/usermode bits.

>> +  int err = 0;
>> +  int writeback = 0;
>> +
>> +  switch (bits (insn1, 5, 6))
>> +    {
>> +    case 0: /* Load byte and memory hints */
>> +      if (rt == 0xf) /* PLD/PLI */
>> +	{
>> +	  if (rn == 0xf)
>> +	    {
>> +	      /* PLD literal or Encoding T3 of PLI(immediate, literal).  */
>> +	      return thumb2_copy_preload (gdbarch, insn1, insn2, regs, dsc);
>> +	    }
> 
>> +	  else
>> +	    {
>> +	      switch (op1)
>> +		{
>> +		case 0: case 2:
>> +		  if (bits (insn2, 8, 11) == 0x1110
>> +		      || (bits (insn2, 8, 11) & 0x6) == 0x9)
>> +		    return thumb_32bit_copy_unpred (gdbarch, insn1, insn2, dsc);
>> +		  else
>> +		    /* PLI/PLD (reigster, immediate) doesn't use PC.  */
>> +		    return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
>> +							"pli/pld", dsc);
>> +		  break;
>> +		case 1: /* PLD/PLDW (immediate) */
>> +		case 3: /* PLI (immediate, literal) */
>> +		  return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
>> +						      "pli/pld", dsc);
>> +		  break;
>> +
>> +		}
> 
> I'd just make the whole block use copy_unmodified ...  That's some complexity
> here for no real gain.
> 

OK.  Replaced them all with a single call to the copy_unmodified routine.

>> +	    }
>> +	}
> 
> 
>> +      else
>> +	{
>> +	  if ((op1 == 0 || op1 == 2) && bit (insn2, 11))
>> +	    writeback = bit (insn2, 8);
>> +
>> +	  return thumb2_copy_load_store (gdbarch, insn1, insn2, regs, dsc, 1, 1,
>> +					 user_mode, writeback);
> 
> As discussed above, we'll have to distinguish the "literal" forms from
> the immediate forms.
> 

Fixed.

>> +	}
> 
> 
>> +    case 1: /* Load halfword and memory hints.  */
>> +      if (rt == 0xf) /* PLD{W} and Unalloc memory hint.  */
>> +	{
>> +	  if (rn == 0xf)
>> +	    {
>> +	      if (op1 == 0 || op1 == 1)
>> +		return thumb_32bit_copy_unpred (gdbarch, insn1, insn2, dsc);
>> +	      else
>> +		return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
>> +						    "unalloc memhint", dsc);
>> +	    }
>> +	  else
>> +	    {
>> +	      if ((op1 == 0 || op1 == 2)
>> +		  && (bits (insn2, 8, 11) == 0xe
>> +		      || ((bits (insn2, 8, 11) & 0x9) == 0x9)))
>> +		return thumb_32bit_copy_unpred (gdbarch, insn1, insn2, dsc);
>> +	      else thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
>> +						"pld/unalloc memhint", dsc);
>> +	    }
> 
> See above, it's probably not worth making such fine-grained distinctions
> when the result is effectively the same anyway.
> 

OK.  They are combined into a single call to the copy_unmodified routine.

>> +	}
>> +      else
>> +	{
>> +	  int op1 = bits (insn1, 7, 8);
>> +
>> +	  if ((op1 == 0 || op1 == 2) && bit (insn2, 11))
>> +	    writeback = bit (insn2, 8);
>> +	  return thumb2_copy_load_store (gdbarch, insn1, insn2, regs, dsc, 1,
>> +					 2, user_mode, writeback);
> 
> See above for literal forms; computation of writeback etc. flags.
> 
>> +	}
>> +      break;
>> +    case 2: /* Load word */
>> +      {
>> +	int op1 = bits (insn1, 7, 8);
>> +
>> +	  if ((op1 == 0 || op1 == 2) && bit (insn2, 11))
>> +	    writeback = bit (insn2, 8);
>> +
>> +	return thumb2_copy_load_store (gdbarch, insn1, insn2, regs, dsc, 1, 4,
>> +				       user_mode, writeback);
>> +	break;
> 
> Likewise.
> 
> 
>> +static int
>> +decode_thumb_32bit_store_single_data_item (struct gdbarch *gdbarch,
>> +					   uint16_t insn1, uint16_t insn2,
>> +					   struct regcache *regs,
>> +					   struct displaced_step_closure *dsc)
>> +{
>> +  int user_mode = (bits (insn2, 8, 11) == 0xe);
>> +  int size = 0;
>> +  int writeback = 0;
>> +  int op1 = bits (insn1, 5, 7);
>> +
>> +  switch (op1)
>> +    {
>> +    case 0: case 4: size = 1; break;
>> +    case 1: case 5: size = 2; break;
>> +    case 2: case 6: size = 4; break;
>> +    }
>> +  if (bits (insn1, 5, 7) < 3 && bit (insn2, 11))
>> +    writeback = bit (insn2, 8);
>> +
>> +  return thumb2_copy_load_store (gdbarch, insn1, insn2, regs,
>> +				 dsc, 0, size, user_mode,
>> +				 writeback);
>> +
>> +}
> 
> As per the discussion above, this function is probably unnecessary,
> since stores cannot use the PC in Thumb mode.
> 
> 

Yes, it is removed.

>>  static void
>>  thumb_process_displaced_32bit_insn (struct gdbarch *gdbarch, uint16_t insn1,
>>  				    uint16_t insn2, struct regcache *regs,
>>  				    struct displaced_step_closure *dsc)
>>  {
> 
>> +    case 2: /* op1 = 2 */
>> +      if (op) /* Branch and misc control.  */
>> +	{
>> +	  if (bit (insn2, 14)) /* BLX/BL */
>> +	    err = thumb2_copy_b_bl_blx (gdbarch, insn1, insn2, regs, dsc);
>> +	  else if (!bits (insn2, 12, 14) && bits (insn1, 7, 9) != 0x7)
>> +	    /* Conditional Branch */
>> +	    err = thumb2_copy_b_bl_blx (gdbarch, insn1, insn2, regs, dsc);
> 
> See above for the problems with this condition.  Also, you're missing
> (some) *unconditional* branch instructions (B) here; those have bit 12
> equal to 1.
> 
> Maybe the checks should be combined into:
>   if (bit (insn2, 14)			/* BLX/BL */
>       || bit (insn2, 12)		/* Unconditional branch */
>       || bits (insn1, 7, 9) != 0x7))	/* Conditional branch */
>     err = thumb2_copy_b_bl_blx (gdbarch, insn1, insn2, regs, dsc);
> 

Yeah, it looks right.

>> +    case 3: /* op1 = 3 */
>> +      switch (bits (insn1, 9, 10))
>> +	{
>> +	case 0:
>> +	  if ((bits (insn1, 4, 6) & 0x5) == 0x1)
>> +	    err = decode_thumb_32bit_ld_mem_hints (gdbarch, insn1, insn2,
>> +						   regs, dsc);
> 
> This check misses the "Load word" instructions.  It should probably
> just be "if (bit (insn1, 4))" at this point.
> 

Fixed.

>> +	  else
>> +	    {
>> +	      if (bit (insn1, 8)) /* NEON Load/Store */
>> +		err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
>> +						   "neon elt/struct load/store",
>> +						   dsc);
>> +	      else /* Store single data item */
>> +		err = decode_thumb_32bit_store_single_data_item (gdbarch,
>> +								 insn1, insn2,
>> +								 regs, dsc);
> 
> As discussed above, I think those can all be copied unmodified.
> 

Done.

-- 
Yao (Qi)
         Support displaced stepping for Thumb 32-bit insns.

         * arm-tdep.c (thumb_copy_unmodified_32bit): New.
         (thumb2_copy_preload): New.
         (thumb2_copy_copro_load_store): New.
         (thumb2_copy_b_bl_blx): New.
         (thumb2_copy_alu_imm): New.
         (thumb2_copy_load_reg_imm): New.
         (thumb2_copy_load_literal): New.
         (thumb2_copy_block_xfer): New.
         (thumb_32bit_copy_undef): New.
         (thumb_32bit_copy_unpred): New.
         (thumb2_decode_ext_reg_ld_st): New.
         (thumb2_decode_svc_copro): New.
         (decode_thumb_32bit_store_single_data_item): New.
         (thumb_copy_pc_relative_32bit): New.
         (thumb_decode_pc_relative_32bit): New.
         (decode_thumb_32bit_ld_mem_hints): New.
         (thumb2_copy_table_branch): New.
         (thumb_process_displaced_32bit_insn): Process Thumb 32-bit
         instructions.
---
 gdb/arm-tdep.c |  805 +++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 804 insertions(+), 1 deletions(-)

diff --git a/gdb/arm-tdep.c b/gdb/arm-tdep.c
index b0074bd..58c7c72 100644
--- a/gdb/arm-tdep.c
+++ b/gdb/arm-tdep.c
@@ -5341,6 +5341,23 @@ arm_copy_unmodified (struct gdbarch *gdbarch, uint32_t insn,
   return 0;
 }
 
+static int
+thumb_copy_unmodified_32bit (struct gdbarch *gdbarch, uint16_t insn1,
+			     uint16_t insn2, const char *iname,
+			     struct displaced_step_closure *dsc)
+{
+  if (debug_displaced)
+    fprintf_unfiltered (gdb_stdlog, "displaced: copying insn %.4x %.4x, "
+			"opcode/class '%s' unmodified\n", insn1, insn2,
+			iname);
+
+  dsc->modinsn[0] = insn1;
+  dsc->modinsn[1] = insn2;
+  dsc->numinsns = 2;
+
+  return 0;
+}
+
 /* Copy 16-bit Thumb(Thumb and 16-bit Thumb-2) instruction without any
    modification.  */
 static int
@@ -5408,6 +5425,54 @@ arm_copy_preload (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs,
   return 0;
 }
 
+static int
+thumb2_copy_preload (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2,
+		     struct regcache *regs, struct displaced_step_closure *dsc)
+{
+  unsigned int rn = bits (insn1, 0, 3);
+  unsigned int u_bit = bit (insn1, 7);
+  int imm12 = bits (insn2, 0, 11);
+  ULONGEST pc_val;
+
+  if (rn != ARM_PC_REGNUM)
+    return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "preload", dsc);
+
+  /* PC is only allowed to be used in PLI (immediate, literal) Encoding T3,
+     and PLD (literal) Encoding T1.  */
+  if (debug_displaced)
+    fprintf_unfiltered (gdb_stdlog,
+			"displaced: copying pld/pli pc (0x%x) %c imm12 %.4x\n",
+			(unsigned int) dsc->insn_addr, u_bit ? '+' : '-',
+			imm12);
+
+  if (!u_bit)
+    imm12 = -1 * imm12;
+
+  /* Rewrite instruction {pli/pld} PC imm12 into:
+     Prepare: tmp[0] <- r0, tmp[1] <- r1, r0 <- pc, r1 <- imm12
+
+     {pli/pld} [r0, r1]
+
+     Cleanup: r0 <- tmp[0], r1 <- tmp[1].  */
+
+  dsc->tmp[0] = displaced_read_reg (regs, dsc, 0);
+  dsc->tmp[1] = displaced_read_reg (regs, dsc, 1);
+
+  pc_val = displaced_read_reg (regs, dsc, ARM_PC_REGNUM);
+
+  displaced_write_reg (regs, dsc, 0, pc_val, CANNOT_WRITE_PC);
+  displaced_write_reg (regs, dsc, 1, imm12, CANNOT_WRITE_PC);
+  dsc->u.preload.immed = 0;
+
+  /* {pli/pld} [r0, r1] */
+  dsc->modinsn[0] = insn1 & 0xff00;
+  dsc->modinsn[1] = 0xf001;
+  dsc->numinsns = 2;
+
+  dsc->cleanup = &cleanup_preload;
+  return 0;
+}
+
 /* Preload instructions with register offset.  */
 
 static void
@@ -5517,6 +5582,32 @@ arm_copy_copro_load_store (struct gdbarch *gdbarch, uint32_t insn,
   return 0;
 }
 
+static int
+thumb2_copy_copro_load_store (struct gdbarch *gdbarch, uint16_t insn1,
+			      uint16_t insn2, struct regcache *regs,
+			      struct displaced_step_closure *dsc)
+{
+  unsigned int rn = bits (insn1, 0, 3);
+
+  if (rn != ARM_PC_REGNUM)
+    return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
+					"copro load/store", dsc);
+
+  if (debug_displaced)
+    fprintf_unfiltered (gdb_stdlog, "displaced: copying coprocessor "
+			"load/store insn %.4x%.4x\n", insn1, insn2);
+
+  dsc->modinsn[0] = insn1 & 0xfff0;
+  dsc->modinsn[1] = insn2;
+  dsc->numinsns = 2;
+
+  /* This function is called for copying instruction LDC/LDC2/VLDR, which
+     doesn't support writeback, so pass 0.  */
+  install_copro_load_store (gdbarch, regs, dsc, 0, rn);
+
+  return 0;
+}
+
 /* Clean up branch instructions (actually perform the branch, by setting
    PC).  */
 
@@ -5604,6 +5695,61 @@ arm_copy_b_bl_blx (struct gdbarch *gdbarch, uint32_t insn,
   return 0;
 }
 
+static int
+thumb2_copy_b_bl_blx (struct gdbarch *gdbarch, uint16_t insn1,
+		      uint16_t insn2, struct regcache *regs,
+		      struct displaced_step_closure *dsc)
+{
+  int link = bit (insn2, 14);
+  int exchange = link && !bit (insn2, 12);
+  int cond = INST_AL;
+  long offset = 0;
+  int j1 = bit (insn2, 13);
+  int j2 = bit (insn2, 11);
+  int s = sbits (insn1, 10, 10);
+  int i1 = !(j1 ^ bit (insn1, 10));
+  int i2 = !(j2 ^ bit (insn1, 10));
+
+  if (!link && !exchange) /* B */
+    {
+      offset = (bits (insn2, 0, 10) << 1);
+      if (bit (insn2, 12)) /* Encoding T4 */
+	{
+	  offset |= (bits (insn1, 0, 9) << 12)
+	    | (i2 << 22)
+	    | (i1 << 23)
+	    | (s << 24);
+	  cond = INST_AL;
+	}
+      else /* Encoding T3 */
+	{
+	  offset |= (bits (insn1, 0, 5) << 12)
+	    | (j1 << 18)
+	    | (j2 << 19)
+	    | (s << 20);
+	  cond = bits (insn1, 6, 9);
+	}
+    }
+  else
+    {
+      offset = (bits (insn1, 0, 9) << 12);
+      offset |= ((i2 << 22) | (i1 << 23) | (s << 24));
+      offset |= exchange ?
+	(bits (insn2, 1, 10) << 2) : (bits (insn2, 0, 10) << 1);
+    }
+
+  if (debug_displaced)
+    fprintf_unfiltered (gdb_stdlog, "displaced: copying %s insn "
+			"%.4x %.4x with offset %.8lx\n",
+			link ? (exchange) ? "blx" : "bl" : "b",
+			insn1, insn2, offset);
+
+  dsc->modinsn[0] = THUMB_NOP;
+
+  install_b_bl_blx (gdbarch, regs, dsc, cond, exchange, link, offset);
+  return 0;
+}
+
 /* Copy B Thumb instructions.  */
 static int
 thumb_copy_b (struct gdbarch *gdbarch, unsigned short insn,
@@ -5767,6 +5913,58 @@ arm_copy_alu_imm (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs,
   return 0;
 }
 
+static int
+thumb2_copy_alu_imm (struct gdbarch *gdbarch, uint16_t insn1,
+		     uint16_t insn2, struct regcache *regs,
+		     struct displaced_step_closure *dsc)
+{
+  unsigned int op = bits (insn1, 5, 8);
+  unsigned int rn, rm, rd;
+  ULONGEST rd_val, rn_val;
+
+  rn = bits (insn1, 0, 3); /* Rn */
+  rm = bits (insn2, 0, 3); /* Rm */
+  rd = bits (insn2, 8, 11); /* Rd */
+
+  /* This routine is only called for instruction MOV.  */
+  gdb_assert (op == 0x2 && rn == 0xf);
+
+  if (rm != ARM_PC_REGNUM && rd != ARM_PC_REGNUM)
+    return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "ALU imm", dsc);
+
+  if (debug_displaced)
+    fprintf_unfiltered (gdb_stdlog, "displaced: copying reg %s insn %.4x%.4x\n",
+			"ALU", insn1, insn2);
+
+  /* Instruction is of form:
+
+     <op><cond> rd, [rn,] #imm
+
+     Rewrite as:
+
+     Preparation: tmp1, tmp2 <- r0, r1;
+		  r0, r1 <- rd, rn
+     Insn: <op><cond> r0, r1, #imm
+     Cleanup: rd <- r0; r0 <- tmp1; r1 <- tmp2
+  */
+
+  dsc->tmp[0] = displaced_read_reg (regs, dsc, 0);
+  dsc->tmp[1] = displaced_read_reg (regs, dsc, 1);
+  rn_val = displaced_read_reg (regs, dsc, rn);
+  rd_val = displaced_read_reg (regs, dsc, rd);
+  displaced_write_reg (regs, dsc, 0, rd_val, CANNOT_WRITE_PC);
+  displaced_write_reg (regs, dsc, 1, rn_val, CANNOT_WRITE_PC);
+  dsc->rd = rd;
+
+  dsc->modinsn[0] = insn1;
+  dsc->modinsn[1] = ((insn2 & 0xf0f0) | 0x1);
+  dsc->numinsns = 2;
+
+  dsc->cleanup = &cleanup_alu_imm;
+
+  return 0;
+}
+
 /* Copy/cleanup arithmetic/logic insns with register RHS.  */
 
 static void
@@ -6134,6 +6332,113 @@ install_load_store (struct gdbarch *gdbarch, struct regcache *regs,
   dsc->cleanup = load ? &cleanup_load : &cleanup_store;
 }
 
+
+static int
+thumb2_copy_load_literal (struct gdbarch *gdbarch, uint16_t insn1,
+			  uint16_t insn2, struct regcache *regs,
+			  struct displaced_step_closure *dsc)
+{
+  unsigned int u_bit = bit (insn1, 7);
+  unsigned int rt = bits (insn2, 12, 15);
+  int imm12 = bits (insn2, 0, 11);
+  ULONGEST pc_val;
+
+  if (debug_displaced)
+    fprintf_unfiltered (gdb_stdlog,
+			"displaced: copying ldr pc (0x%x) R%d %c imm12 %.4x\n",
+			(unsigned int) dsc->insn_addr, rt, u_bit ? '+' : '-',
+			imm12);
+
+  if (!u_bit)
+    imm12 = -1 * imm12;
+
+  /* Rewrite instruction LDR Rt imm12 into:
+
+     Prepare: tmp[0] <- r0, tmp[1] <- r1, tmp[2] <- r2, r1 <- pc, r2 <- imm12
+
+     LDR R0, [R1, R2]
+
+     Cleanup: rt <- r0, r0 <- tmp[0], r1 <- tmp[1], r2 <- tmp[2].  */
+
+
+  dsc->tmp[0] = displaced_read_reg (regs, dsc, 0);
+  dsc->tmp[1] = displaced_read_reg (regs, dsc, 1);
+  dsc->tmp[2] = displaced_read_reg (regs, dsc, 2);
+
+  pc_val = displaced_read_reg (regs, dsc, ARM_PC_REGNUM);
+
+  displaced_write_reg (regs, dsc, 1, pc_val, CANNOT_WRITE_PC);
+  displaced_write_reg (regs, dsc, 2, imm12, CANNOT_WRITE_PC);
+
+  dsc->rd = rt;
+
+  dsc->u.ldst.xfersize = 4;
+  dsc->u.ldst.immed = 0;
+  dsc->u.ldst.writeback = 0;
+  dsc->u.ldst.restore_r4 = 0;
+
+  /* LDR R0, [R1, R2] */
+  dsc->modinsn[0] = 0xf851;
+  dsc->modinsn[1] = 0x2;
+  dsc->numinsns = 2;
+
+  dsc->cleanup = &cleanup_load;
+
+  return 0;
+}
+
+
+static int
+thumb2_copy_load_reg_imm (struct gdbarch *gdbarch, uint16_t insn1,
+			  uint16_t insn2, struct regcache *regs,
+			  struct displaced_step_closure *dsc,
+			  int size, int usermode, int writeback, int immed)
+{
+  unsigned int rt = bits (insn2, 12, 15);
+  unsigned int rn = bits (insn1, 0, 3);
+  unsigned int rm = bits (insn2, 0, 3);  /* Only valid if !immed.  */
+  /* In LDR (register), there is also a register Rm, which is not allowed to
+     be PC, so we don't have to check it.  */
+
+  if (rt != ARM_PC_REGNUM && rn != ARM_PC_REGNUM)
+    return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "load",
+					dsc);
+
+  if (debug_displaced)
+    fprintf_unfiltered (gdb_stdlog,
+			"displaced: copying %s%s r%d [r%d] insn %.4x%.4x\n",
+			(size == 1 ? "ldrb" : (size == 2 ? "ldrh" : "ldr")),
+			usermode ? "t" : "",
+			rt, rn, insn1, insn2);
+
+  install_load_store (gdbarch, regs, dsc, 1, immed, writeback, size,
+		      usermode, rt, rm, rn);
+
+  dsc->u.ldst.restore_r4 = 0;
+
+  if (immed)
+    /* ldr[b]<cond> rt, [rn, #imm], etc.
+       ->
+       ldr[b]<cond> r0, [r2, #imm].  */
+    {
+      dsc->modinsn[0] = (insn1 & 0xfff0) | 0x2;
+      dsc->modinsn[1] = insn2 & 0x0fff;
+    }
+  else
+    /* ldr[b]<cond> rt, [rn, rm], etc.
+       ->
+       ldr[b]<cond> r0, [r2, r3].  */
+    {
+      dsc->modinsn[0] = (insn1 & 0xfff0) | 0x2;
+      dsc->modinsn[1] = (insn2 & 0x0ff0) | 0x3;
+    }
+
+  dsc->numinsns = 2;
+
+  return 0;
+}
+
+
 static int
 arm_copy_ldr_str_ldrb_strb (struct gdbarch *gdbarch, uint32_t insn,
 			    struct regcache *regs,
@@ -6524,6 +6829,87 @@ arm_copy_block_xfer (struct gdbarch *gdbarch, uint32_t insn,
   return 0;
 }
 
+static int
+thumb2_copy_block_xfer (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2,
+			struct regcache *regs,
+			struct displaced_step_closure *dsc)
+{
+  int rn = bits (insn1, 0, 3);
+  int load = bit (insn1, 4);
+  int writeback = bit (insn1, 5);
+
+  /* Block transfers which don't mention PC can be run directly
+     out-of-line.  */
+  if (rn != ARM_PC_REGNUM && (insn2 & 0x8000) == 0)
+    return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "ldm/stm", dsc);
+
+  if (rn == ARM_PC_REGNUM)
+    {
+      warning (_("displaced: Unpredictable LDM or STM with "
+		 "base register r15"));
+      return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
+					  "unpredictable ldm/stm", dsc);
+    }
+
+  if (debug_displaced)
+    fprintf_unfiltered (gdb_stdlog, "displaced: copying block transfer insn "
+			"%.4x%.4x\n", insn1, insn2);
+
+  /* Clear bit 13, since it should always be zero.  */
+  dsc->u.block.regmask = (insn2 & 0xdfff);
+  dsc->u.block.rn = rn;
+
+  dsc->u.block.load = load;
+  dsc->u.block.user = 0;
+  dsc->u.block.increment = bit (insn1, 7);
+  dsc->u.block.before = bit (insn1, 8);
+  dsc->u.block.writeback = writeback;
+  dsc->u.block.cond = INST_AL;
+
+  if (load)
+    {
+      if (dsc->u.block.regmask == 0xffff)
+	{
+	  /* This cannot happen, since bit 13 was cleared above.  */
+	  gdb_assert (0);
+	}
+      else
+	{
+	  unsigned int regmask = dsc->u.block.regmask;
+	  unsigned int num_in_list = bitcount (regmask), new_regmask, bit = 1;
+	  unsigned int to = 0, from = 0, i, new_rn;
+
+	  for (i = 0; i < num_in_list; i++)
+	    dsc->tmp[i] = displaced_read_reg (regs, dsc, i);
+
+	  if (writeback)
+	    insn1 &= ~(1 << 5);
+
+	  new_regmask = (1 << num_in_list) - 1;
+
+	  if (debug_displaced)
+	    fprintf_unfiltered (gdb_stdlog, _("displaced: LDM r%d%s, "
+				"{..., pc}: original reg list %.4x, modified "
+				"list %.4x\n"), rn, writeback ? "!" : "",
+				(int) dsc->u.block.regmask, new_regmask);
+
+	  dsc->modinsn[0] = insn1;
+	  dsc->modinsn[1] = (new_regmask & 0xffff);
+	  dsc->numinsns = 2;
+
+	  dsc->cleanup = &cleanup_block_load_pc;
+	}
+    }
+  else
+    {
+      dsc->modinsn[0] = insn1;
+      dsc->modinsn[1] = insn2;
+      dsc->numinsns = 2;
+      dsc->cleanup = &cleanup_block_store_pc;
+    }
+  return 0;
+}
+
 /* Cleanup/copy SVC (SWI) instructions.  These two functions are overridden
    for Linux, where some SVC instructions must be treated specially.  */
 
@@ -6609,6 +6995,23 @@ arm_copy_undef (struct gdbarch *gdbarch, uint32_t insn,
   return 0;
 }
 
+static int
+thumb_32bit_copy_undef (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2,
+                       struct displaced_step_closure *dsc)
+{
+  if (debug_displaced)
+    fprintf_unfiltered (gdb_stdlog, "displaced: copying undefined insn "
+                       "%.4x %.4x\n", (unsigned short) insn1,
+                       (unsigned short) insn2);
+
+  dsc->modinsn[0] = insn1;
+  dsc->modinsn[1] = insn2;
+  dsc->numinsns = 2;
+
+  return 0;
+}
+
 /* Copy unpredictable instructions.  */
 
 static int
@@ -7005,6 +7408,65 @@ arm_decode_ext_reg_ld_st (struct gdbarch *gdbarch, uint32_t insn,
   return 1;
 }
 
+/* Decode shifted register instructions.  */
+
+static int
+thumb2_decode_dp_shift_reg (struct gdbarch *gdbarch, uint16_t insn1,
+			    uint16_t insn2,  struct regcache *regs,
+			    struct displaced_step_closure *dsc)
+{
+  /* PC is only allowed to be used in the MOV instruction.  */
+
+  unsigned int op = bits (insn1, 5, 8);
+  unsigned int rn = bits (insn1, 0, 3);
+
+  if (op == 0x2 && rn == 0xf) /* MOV */
+    return thumb2_copy_alu_imm (gdbarch, insn1, insn2, regs, dsc);
+  else
+    return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
+					"dp (shift reg)", dsc);
+}
+
+/* Decode extension register load/store.  Exactly the same as
+   arm_decode_ext_reg_ld_st.  */
+
+static int
+thumb2_decode_ext_reg_ld_st (struct gdbarch *gdbarch, uint16_t insn1,
+			     uint16_t insn2,  struct regcache *regs,
+			     struct displaced_step_closure *dsc)
+{
+  unsigned int opcode = bits (insn1, 4, 8);
+
+  switch (opcode)
+    {
+    case 0x04: case 0x05:
+      return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
+					  "vfp/neon vmov", dsc);
+
+    case 0x08: case 0x0c: /* 01x00 */
+    case 0x0a: case 0x0e: /* 01x10 */
+    case 0x12: case 0x16: /* 10x10 */
+      return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
+					  "vfp/neon vstm/vpush", dsc);
+
+    case 0x09: case 0x0d: /* 01x01 */
+    case 0x0b: case 0x0f: /* 01x11 */
+    case 0x13: case 0x17: /* 10x11 */
+      return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
+					  "vfp/neon vldm/vpop", dsc);
+
+    case 0x10: case 0x14: case 0x18: case 0x1c:  /* vstr.  */
+      return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
+					  "vstr", dsc);
+    case 0x11: case 0x15: case 0x19: case 0x1d:  /* vldr.  */
+      return thumb2_copy_copro_load_store (gdbarch, insn1, insn2, regs, dsc);
+    }
+
+  /* Should be unreachable.  */
+  return 1;
+}
+
 static int
 arm_decode_svc_copro (struct gdbarch *gdbarch, uint32_t insn, CORE_ADDR to,
 		      struct regcache *regs, struct displaced_step_closure *dsc)
@@ -7051,6 +7513,49 @@ arm_decode_svc_copro (struct gdbarch *gdbarch, uint32_t insn, CORE_ADDR to,
     return arm_copy_undef (gdbarch, insn, dsc);  /* Possibly unreachable.  */
 }
 
+static int
+thumb2_decode_svc_copro (struct gdbarch *gdbarch, uint16_t insn1,
+			 uint16_t insn2, struct regcache *regs,
+			 struct displaced_step_closure *dsc)
+{
+  unsigned int coproc = bits (insn2, 8, 11);
+  unsigned int bit_5_8 = bits (insn1, 5, 8);
+  unsigned int bit_9 = bit (insn1, 9);
+  unsigned int bit_4 = bit (insn1, 4);
+
+  if (bit_9 == 0)
+    {
+      if (bit_5_8 == 2)
+	return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
+					    "neon 64bit xfer/mrrc/mrrc2/mcrr/mcrr2",
+					    dsc);
+      else if (bit_5_8 == 0) /* UNDEFINED.  */
+	return thumb_32bit_copy_undef (gdbarch, insn1, insn2, dsc);
+      else
+	{
+	  /* coproc is 101x: SIMD/VFP extension registers load/store.  */
+	  if ((coproc & 0xe) == 0xa)
+	    return thumb2_decode_ext_reg_ld_st (gdbarch, insn1, insn2, regs,
+						dsc);
+	  else /* coproc is not 101x.  */
+	    {
+	      if (bit_4 == 0) /* STC/STC2.  */
+		return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
+						    "stc/stc2", dsc);
+	      else /* LDC/LDC2 {literal, immediate}.  */
+		return thumb2_copy_copro_load_store (gdbarch, insn1, insn2,
+						     regs, dsc);
+	    }
+	}
+    }
+  else
+    return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "coproc", dsc);
+
+  return 0;
+}
+
 static void
 install_pc_relative (struct gdbarch *gdbarch, struct regcache *regs,
 		     struct displaced_step_closure *dsc, int rd)
@@ -7100,6 +7605,43 @@ thumb_decode_pc_relative_16bit (struct gdbarch *gdbarch, uint16_t insn,
 }
 
 static int
+thumb_copy_pc_relative_32bit (struct gdbarch *gdbarch, uint16_t insn1,
+			      uint16_t insn2, struct regcache *regs,
+			      struct displaced_step_closure *dsc)
+{
+  unsigned int rd = bits (insn2, 8, 11);
+  /* The immediate fields have the same encoding in ADR, ADD and SUB,
+     so extract the raw i:imm3:imm8 fields rather than decoding the
+     immediate value.  When generating the ADD or SUB instruction, the
+     fields can simply be ORed back in.  */
+  unsigned int imm_3_8 = insn2 & 0x70ff;
+  unsigned int imm_i = insn1 & 0x0400; /* Keep only bit 10 (the 'i' bit).  */
+
+  if (debug_displaced)
+    fprintf_unfiltered (gdb_stdlog,
+			"displaced: copying thumb adr r%d, #%d:%d insn %.4x%.4x\n",
+			rd, imm_i, imm_3_8, insn1, insn2);
+
+  if (bit (insn1, 7)) /* ADR encoding T2 (subtract).  */
+    {
+      /* Generate SUB (immediate) encoding T3: SUB Rd, Rd, #imm.  */
+      dsc->modinsn[0] = (0xf1a0 | rd | imm_i);
+      dsc->modinsn[1] = ((rd << 8) | imm_3_8);
+    }
+  else /* ADR encoding T3 (add).  */
+    {
+      /* Generate ADD (immediate) encoding T3: ADD Rd, Rd, #imm.  */
+      dsc->modinsn[0] = (0xf100 | rd | imm_i);
+      dsc->modinsn[1] = ((rd << 8) | imm_3_8);
+    }
+  dsc->numinsns = 2;
+
+  install_pc_relative (gdbarch, regs, dsc, rd);
+
+  return 0;
+}
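The halfword reassembly above can be exercised standalone.  This sketch (hypothetical helper name, not GDB code) mirrors how the routine keeps the raw i:imm3:imm8 fields from the original ADR.W and ORs them into a SUB or ADD (immediate) encoding T3:

```c
#include <assert.h>
#include <stdint.h>

/* Rebuild the substitute ADD/SUB Rd, Rd, #imm halfwords for a
   displaced Thumb-2 ADR, mirroring thumb_copy_pc_relative_32bit.  */
static void
adr_to_add_sub (uint16_t insn1, uint16_t insn2, uint16_t out[2])
{
  uint16_t rd = (insn2 >> 8) & 0xf;
  uint16_t imm_3_8 = insn2 & 0x70ff;   /* Raw imm3 and imm8 fields.  */
  uint16_t imm_i = insn1 & 0x0400;     /* Raw 'i' bit.  */

  if (insn1 & (1 << 7))
    out[0] = 0xf1a0 | rd | imm_i;      /* SUB (immediate), encoding T3.  */
  else
    out[0] = 0xf100 | rd | imm_i;      /* ADD (immediate), encoding T3.  */
  out[1] = (uint16_t) ((rd << 8) | imm_3_8);
}
```

For ADR.W r1 with i=0, imm3=1, imm8=4 (insn1 = 0xf20f, insn2 = 0x1104), this yields the ADD pair 0xf101 / 0x1104.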
+
+static int
 thumb_copy_16bit_ldr_literal (struct gdbarch *gdbarch, unsigned short insn1,
 			      struct regcache *regs,
 			      struct displaced_step_closure *dsc)
@@ -7181,6 +7723,51 @@ thumb_copy_cbnz_cbz (struct gdbarch *gdbarch, uint16_t insn1,
   return 0;
 }
 
+/* Copy Table Branch Byte/Halfword.  */
+static int
+thumb2_copy_table_branch (struct gdbarch *gdbarch, uint16_t insn1,
+			  uint16_t insn2, struct regcache *regs,
+			  struct displaced_step_closure *dsc)
+{
+  ULONGEST rn_val, rm_val;
+  int is_tbh = bit (insn2, 4);
+  CORE_ADDR halfwords = 0;
+  enum bfd_endian byte_order = gdbarch_byte_order (gdbarch);
+
+  rn_val = displaced_read_reg (regs, dsc, bits (insn1, 0, 3));
+  rm_val = displaced_read_reg (regs, dsc, bits (insn2, 0, 3));
+
+  if (is_tbh)
+    {
+      gdb_byte buf[2];
+
+      target_read_memory (rn_val + 2 * rm_val, buf, 2);
+      halfwords = extract_unsigned_integer (buf, 2, byte_order);
+    }
+  else
+    {
+      gdb_byte buf[1];
+
+      target_read_memory (rn_val + rm_val, buf, 1);
+      halfwords = extract_unsigned_integer (buf, 1, byte_order);
+    }
+
+  if (debug_displaced)
+    fprintf_unfiltered (gdb_stdlog, "displaced: %s base 0x%x index 0x%x"
+			" offset 0x%x\n", is_tbh ? "tbh" : "tbb",
+			(unsigned int) rn_val, (unsigned int) rm_val,
+			(unsigned int) halfwords);
+
+  dsc->u.branch.cond = INST_AL;
+  dsc->u.branch.link = 0;
+  dsc->u.branch.exchange = 0;
+  dsc->u.branch.dest = dsc->insn_addr + 4 + 2 * halfwords;
+
+  dsc->cleanup = &cleanup_branch;
+
+  return 0;
+}
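The branch destination recorded above follows the architectural rule: the byte or halfword table entry is a count of halfwords added to the PC value of the TBB/TBH itself, which in Thumb state is the instruction address plus 4.  A trivial standalone sketch (hypothetical helper name):

```c
#include <assert.h>
#include <stdint.h>

/* Destination of a TBB/TBH, given the address of the instruction and
   the table entry fetched for the selected index.  */
static uint32_t
table_branch_dest (uint32_t insn_addr, uint32_t table_entry)
{
  return insn_addr + 4 + 2 * table_entry;  /* PC + 2 * entry.  */
}
```

For a TBB at 0x8000 whose selected table entry is 5, the destination is 0x8000 + 4 + 10 = 0x800e.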
+
 static void
 cleanup_pop_pc_16bit_all (struct gdbarch *gdbarch, struct regcache *regs,
 			  struct displaced_step_closure *dsc)
@@ -7374,12 +7961,228 @@ thumb_process_displaced_16bit_insn (struct gdbarch *gdbarch, uint16_t insn1,
 		    _("thumb_process_displaced_16bit_insn: Instruction decode error"));
 }
 
+static int
+decode_thumb_32bit_ld_mem_hints (struct gdbarch *gdbarch,
+				 uint16_t insn1, uint16_t insn2,
+				 struct regcache *regs,
+				 struct displaced_step_closure *dsc)
+{
+  int rt = bits (insn2, 12, 15);
+  int rn = bits (insn1, 0, 3);
+  int op1 = bits (insn1, 7, 8);
+
+  switch (bits (insn1, 5, 6))
+    {
+    case 0: /* Load byte and memory hints */
+      if (rt == 0xf) /* PLD/PLI */
+	{
+	  if (rn == 0xf)
+	    /* PLD (literal) or PLI (immediate, literal) encoding T3.  */
+	    return thumb2_copy_preload (gdbarch, insn1, insn2, regs, dsc);
+	  else
+	    return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
+						"pli/pld", dsc);
+	}
+      else
+	return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
+					    "ldrb{reg, immediate}/ldrbt",
+					    dsc);
+
+      break;
+    case 1: /* Load halfword and memory hints.  */
+      if (rt == 0xf) /* PLD{W} and Unalloc memory hint.  */
+	return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
+					    "pld/unalloc memhint", dsc);
+      else
+	{
+	  int insn2_bit_8_11 = bits (insn2, 8, 11);
+
+	  if (rn == 0xf)
+	    return thumb2_copy_load_literal (gdbarch, insn1, insn2, regs, dsc);
+	  else
+	    {
+	      if (op1 == 0x1 || op1 == 0x3)
+		/* LDRH/LDRSH (immediate), in which bit 7 of insn1 is 1,
+		   PC is not used.  */
+		return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
+						    "ldrh/ldrht", dsc);
+	      else if (insn2_bit_8_11 == 0xc
+		       || (insn2_bit_8_11 & 0x9) == 0x9)
+		/* LDRH/LDRSH (immediate), in which bit 7 of insn1 is 0,
+		   PC can be used.  */
+		return thumb2_copy_load_reg_imm (gdbarch, insn1, insn2, regs,
+						 dsc, 2, 0, bit (insn2, 8), 1);
+	      else /* PC is not allowed in LDRH (register) and LDRHT.  */
+		return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
+						    "ldrh/ldrht", dsc);
+	    }
+	}
+      break;
+    case 2: /* Load word */
+      {
+	int insn2_bit_8_11 = bits (insn2, 8, 11);
+
+	if (rn == 0xf)
+	  return thumb2_copy_load_literal (gdbarch, insn1, insn2, regs, dsc);
+	else if (op1 == 0x1) /* Encoding T3 */
+	  return thumb2_copy_load_reg_imm (gdbarch, insn1, insn2, regs,
+					   dsc, 4, 0, 0, 1);
+	else /* op1 == 0x0 */
+	  {
+	    if (insn2_bit_8_11 == 0xc || (insn2_bit_8_11 & 0x9) == 0x9)
+	      /* LDR (immediate) */
+	      return thumb2_copy_load_reg_imm (gdbarch, insn1, insn2, regs,
+					       dsc, 4, 0, bit (insn2, 8), 1);
+	    else
+	      /* LDRT and LDR (register) */
+	      return thumb2_copy_load_reg_imm (gdbarch, insn1, insn2, regs,
+					       dsc, 4, insn2_bit_8_11 == 0xe,
+					       0, 0);
+	  }
+	break;
+      }
+    default:
+      return thumb_32bit_copy_undef (gdbarch, insn1, insn2, dsc);
+      break;
+    }
+  return 0;
+}
+
 static void
 thumb_process_displaced_32bit_insn (struct gdbarch *gdbarch, uint16_t insn1,
 				    uint16_t insn2, struct regcache *regs,
 				    struct displaced_step_closure *dsc)
 {
-  error (_("Displaced stepping is only supported in ARM mode and Thumb 16bit instructions"));
+  int err = 0;
+  unsigned short op = bit (insn2, 15);
+  unsigned int op1 = bits (insn1, 11, 12);
+
+  switch (op1)
+    {
+    case 1:
+      {
+	switch (bits (insn1, 9, 10))
+	  {
+	  case 0:
+	    if (bit (insn1, 6))
+	      {
+		/* Load/store {dual, exclusive}, table branch.  */
+		if (bits (insn1, 7, 8) == 1 && bits (insn1, 4, 5) == 1
+		    && bits (insn2, 5, 7) == 0)
+		  err = thumb2_copy_table_branch (gdbarch, insn1, insn2, regs,
+						  dsc);
+		else
+		  /* PC is not allowed in load/store {dual, exclusive}
+		     instructions.  */
+		  err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
+						     "load/store dual/ex", dsc);
+	      }
+	    else /* load/store multiple */
+	      {
+		switch (bits (insn1, 7, 8))
+		  {
+		  case 0: case 3: /* SRS, RFE */
+		    err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
+						       "srs/rfe", dsc);
+		    break;
+		  case 1: case 2: /* LDM/STM/PUSH/POP */
+		    err = thumb2_copy_block_xfer (gdbarch, insn1, insn2, regs, dsc);
+		    break;
+		  }
+	      }
+	    break;
+
+	  case 1:
+	    /* Data-processing (shift register).  */
+	    err = thumb2_decode_dp_shift_reg (gdbarch, insn1, insn2, regs,
+					      dsc);
+	    break;
+	  default: /* Coprocessor instructions.  */
+	    /* Thumb 32bit coprocessor instructions have the same encoding
+	       as ARM's.  */
+	    err = thumb2_decode_svc_copro (gdbarch, insn1, insn2, regs, dsc);
+	    break;
+	  }
+	break;
+      }
+    case 2: /* op1 = 2 */
+      if (op) /* Branch and misc control.  */
+	{
+	  if (bit (insn2, 14)  /* BLX/BL */
+	      || bit (insn2, 12) /* Unconditional branch */
+	      || (bits (insn1, 7, 9) != 0x7)) /* Conditional branch */
+	    err = thumb2_copy_b_bl_blx (gdbarch, insn1, insn2, regs, dsc);
+	  else
+	    err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
+					       "misc ctrl", dsc);
+	}
+      else
+	{
+	  if (bit (insn1, 9)) /* Data processing (plain binary imm).  */
+	    {
+	      int op = bits (insn1, 4, 8);
+	      int rn = bits (insn1, 0, 3);
+	      if ((op == 0 || op == 0xa) && rn == 0xf)
+		err = thumb_copy_pc_relative_32bit (gdbarch, insn1, insn2,
+						    regs, dsc);
+	      else
+		err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
+						   "dp/pb", dsc);
+	    }
+	  else /* Data processing (modified immediate) */
+	    err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
+					       "dp/mi", dsc);
+	}
+      break;
+    case 3: /* op1 = 3 */
+      switch (bits (insn1, 9, 10))
+	{
+	case 0:
+	  if (bit (insn1, 4))
+	    err = decode_thumb_32bit_ld_mem_hints (gdbarch, insn1, insn2,
+						   regs, dsc);
+	  else /* NEON Load/Store and Store single data item */
+	    err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
+					       "neon elt/struct load/store",
+					       dsc);
+	  break;
+	case 1: /* op1 = 3, bits (9, 10) == 1 */
+	  switch (bits (insn1, 7, 8))
+	    {
+	    case 0: case 1: /* Data processing (register) */
+	      err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
+						 "dp(reg)", dsc);
+	      break;
+	    case 2: /* Multiply and absolute difference */
+	      err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
+						 "mul/mua/diff", dsc);
+	      break;
+	    case 3: /* Long multiply and divide */
+	      err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2,
+						 "lmul/lmua", dsc);
+	      break;
+	    }
+	  break;
+	default: /* Coprocessor instructions */
+	  err = thumb2_decode_svc_copro (gdbarch, insn1, insn2, regs, dsc);
+	  break;
+	}
+      break;
+    default:
+      err = 1;
+    }
+
+  if (err)
+    internal_error (__FILE__, __LINE__,
+		    _("thumb_process_displaced_32bit_insn: Instruction decode error"));
 }
 
 static void
-- 
1.7.0.4

