This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.



Re: [try 2nd 5/8] Displaced stepping for Thumb 32-bit insns


On 08/20/2011 12:39 AM, Ulrich Weigand wrote:
> Since this (together with the previous patches that are not yet committed)
> is a significant change, I'm wondering a bit what additional testing we could
> do to catch any possibly remaining issues ...
>
> Did you try a testsuite run with a GDB build that forces displaced-stepping
> on by default?  (I.e. change the initializer of can_use_displaced_stepping
> in infrun.c to can_use_displaced_stepping_on.)  That would exercise the new
> code a lot.

Yes, I ran the GDB testsuite with can_use_displaced_stepping set to can_use_displaced_stepping_on, and it does expose more problems in the current patches. Three patches are attached here to address the problems found so far. I haven't combined them into one patch because they belong to different groups (Thumb 16-bit, Thumb 32-bit).


After applying these three patches there are still some failures, for the reasons listed below, so these three patches can be regarded as WIP patches.

1. Failures in gdb.arch/thumb2-it.exp and gdb.base/gdb1555.exp. These failures are caused by missing IT support in Thumb displaced stepping.

2. Failures in gdb.base/break-interp.exp and gdb.base/nostdlib.exp. They appear on i686-pc-linux-gnu as well.

3. Failures (timeouts) in gdb.base/sigstep.exp. IIUC, it is incorrect to displaced-step instructions in a signal handler, so these failures are expected.

4. Failures in gdb.base/watch-vfork.exp. Displaced stepping does not complete because of a VFORK event. The current displaced-stepping infrastructure and infrun logic don't consider the case where executing the instruction in the scratch area can be "interrupted". When displaced-stepping a vfork syscall, the VFORK event arrives before the TRAP event, and GDB gets confused.

5. Timeout failures in gdb.threads/*.exp. Similar to #4: when executing instructions in the scratch area, a thread context switch may happen, and GDB gets confused as well. #4 and #5 are not ARM-specific problems.

6. Failures in gdb.base/watchpoint-solib.exp and gdb.mi/mi-simplerun.exp. They are caused by displaced-stepping the instruction `mov r12, #imm`. This instruction should be copied unmodified to the scratch area and executed there, but experiments show that doesn't work. I have a local patch that can control displaced stepping at the instruction level; once I turn it on for `mov r12, #imm`, these tests fail. The reason is still unknown to me.

7. Accessing high addresses. Some instructions (alu_imm) may set the PC to a high address, such as 0xffffxxxx, and displaced stepping of this kind of instruction should be handled differently.

If my analysis above makes sense and is correct, we still have to fix at least #1 to make displaced stepping really work. On the other hand, if the current patches can be approved, I am happy as well, and can carry fewer local patches as I move on. :)

--
Yao (齐尧)
	gdb/
	* arm-tdep.c (install_copro_load_store): Align the PC value to 4 bytes.
	(install_b_bl_blx): Likewise.
---
 gdb/arm-tdep.c |   11 +++++++++--
 1 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/gdb/arm-tdep.c b/gdb/arm-tdep.c
index 7df9958..67d41d2 100644
--- a/gdb/arm-tdep.c
+++ b/gdb/arm-tdep.c
@@ -5558,6 +5558,8 @@ install_copro_load_store (struct gdbarch *gdbarch, struct regcache *regs,
 
   dsc->tmp[0] = displaced_read_reg (regs, dsc, 0);
   rn_val = displaced_read_reg (regs, dsc, rn);
+  /* PC should be 4-byte aligned.  */
+  rn_val = rn_val & 0xfffffffc;
   displaced_write_reg (regs, dsc, 0, rn_val, CANNOT_WRITE_PC);
 
   dsc->u.ldst.writeback = writeback;
@@ -5664,10 +5666,15 @@ install_b_bl_blx (struct gdbarch *gdbarch, struct regcache *regs,
   dsc->u.branch.link = link;
   dsc->u.branch.exchange = exchange;
 
+  dsc->u.branch.dest = dsc->insn_addr;
+  if (link && exchange)
+    /* For BLX, offset is computed from the Align (PC, 4).  */
+    dsc->u.branch.dest = dsc->u.branch.dest & 0xfffffffc;
+
   if (dsc->is_thumb)
-    dsc->u.branch.dest = dsc->insn_addr + 4 + offset;
+    dsc->u.branch.dest += 4 + offset;
   else
-    dsc->u.branch.dest = dsc->insn_addr + 8 + offset;
+    dsc->u.branch.dest += 8 + offset;
 
   dsc->cleanup = &cleanup_branch;
 }
-- 
1.7.0.4
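
For reference, the rule the two hunks above implement is the architectural
one: a Thumb BLX <imm> (and a PC-relative coprocessor load/store) applies
its offset to Align (PC, 4), where the PC read by the instruction is its own
address + 4 in Thumb state (+ 8 in ARM state, where the PC is already word
aligned).  Masking insn_addr first and adding 4 afterwards, as the patch
does, gives the same result, since 4 is a multiple of the alignment.  A
minimal sketch of that rule (illustrative only, not GDB code):

#include <stdint.h>

/* Destination of a Thumb "BLX <label>": the offset is applied to
   Align (PC, 4), not to the raw PC.  */
static uint32_t
thumb_blx_dest (uint32_t insn_addr, int32_t offset)
{
  uint32_t pc = insn_addr + 4;   /* PC as read in Thumb state.  */
  return (pc & ~3u) + offset;    /* Align (PC, 4) + offset.  */
}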

	gdb/
	* arm-tdep.c (thumb_copy_b): Extract correct offset.
	(thumb_copy_16bit_ldr_literal): Extract correct values for rt and imm8.
	Align the PC value to 4 bytes.
	(thumb_copy_cbnz_cbz): Set the branch destination address correctly.
---
 gdb/arm-tdep.c |   26 +++++++++++++++-----------
 1 files changed, 15 insertions(+), 11 deletions(-)

diff --git a/gdb/arm-tdep.c b/gdb/arm-tdep.c
index 8f13b72..7df9958 100644
--- a/gdb/arm-tdep.c
+++ b/gdb/arm-tdep.c
@@ -5767,13 +5767,14 @@ thumb_copy_b (struct gdbarch *gdbarch, unsigned short insn,
 
   if (bit_12_15 == 0xd)
     {
-      offset = sbits (insn, 0, 7);
+      /* offset = SignExtend (imm8:0, 32) */
+      offset = sbits ((insn << 1), 0, 8);
       cond = bits (insn, 8, 11);
     }
   else if (bit_12_15 == 0xe) /* Encoding T2 */
     {
       offset = sbits ((insn << 1), 0, 11);
-       cond = INST_AL;
+      cond = INST_AL;
     }
 
   if (debug_displaced)
@@ -7648,29 +7649,32 @@ thumb_copy_16bit_ldr_literal (struct gdbarch *gdbarch, unsigned short insn1,
 			      struct regcache *regs,
 			      struct displaced_step_closure *dsc)
 {
-  unsigned int rt = bits (insn1, 8, 7);
+  unsigned int rt = bits (insn1, 8, 10);
   unsigned int pc;
-  int imm8 = sbits (insn1, 0, 7);
+  int imm8 = (bits (insn1, 0, 7) << 2);
   CORE_ADDR from = dsc->insn_addr;
 
   /* LDR Rd, #imm8
 
      Rwrite as:
 
-     Preparation: tmp2 <- R2, tmp3 <- R3, R2 <- PC, R3 <- #imm8;
-                  if (Rd is not R0) tmp0 <- R0;
+     Preparation: tmp0 <- R0, tmp2 <- R2, tmp3 <- R3, R2 <- PC, R3 <- #imm8;
+
      Insn: LDR R0, [R2, R3];
-     Cleanup: R2 <- tmp2, R3 <- tmp3,
-              if (Rd is not R0) Rd <- R0, R0 <- tmp0 */
+     Cleanup: R2 <- tmp2, R3 <- tmp3, Rd <- R0, R0 <- tmp0 */
 
   if (debug_displaced)
-    fprintf_unfiltered (gdb_stdlog, "displaced: copying thumb ldr literal "
-			"insn %.4x\n", insn1);
+    fprintf_unfiltered (gdb_stdlog,
+			"displaced: copying thumb ldr r%d [pc #%d]\n"
+			, rt, imm8);
 
   dsc->tmp[0] = displaced_read_reg (regs, dsc, 0);
   dsc->tmp[2] = displaced_read_reg (regs, dsc, 2);
   dsc->tmp[3] = displaced_read_reg (regs, dsc, 3);
   pc = displaced_read_reg (regs, dsc, ARM_PC_REGNUM);
+  /* The assembler calculates the required value of the offset from the
+     Align(PC,4) value of this instruction to the label.  */
+  pc = pc & 0xfffffffc;
 
   displaced_write_reg (regs, dsc, 2, pc, CANNOT_WRITE_PC);
   displaced_write_reg (regs, dsc, 3, imm8, CANNOT_WRITE_PC);
@@ -7712,7 +7716,7 @@ thumb_copy_cbnz_cbz (struct gdbarch *gdbarch, uint16_t insn1,
   dsc->u.branch.link = 0;
   dsc->u.branch.exchange = 0;
 
-  dsc->u.branch.dest = from + 2 + imm5;
+  dsc->u.branch.dest = from + 4 + imm5;
 
   if (debug_displaced)
     fprintf_unfiltered (gdb_stdlog, "displaced: copying %s [r%d = 0x%x]"
-- 
1.7.0.4
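
To double-check the new field extraction above, here is a small
self-contained sketch (illustrative only; bits/sbits below are minimal
stand-ins with the same semantics as the arm-tdep.c helpers).  For B<c>
encoding T1 the branch offset is SignExtend (imm8:'0', 32), so shifting the
insn left by one appends the implicit zero bit before sign-extending; for
LDR (literal) encoding T1, Rt lives in bits 8-10 and the offset is imm8 << 2
from Align (PC, 4):

#include <stdint.h>
#include <stdio.h>

static uint32_t
bits (uint32_t val, int start, int end)
{
  return (val >> start) & ((1u << (end - start + 1)) - 1);
}

static int32_t
sbits (uint32_t val, int start, int end)
{
  int32_t r = bits (val, start, end);
  int32_t sign = 1 << (end - start);
  return (r ^ sign) - sign;   /* Sign-extend from bit END.  */
}

int
main (void)
{
  uint16_t b_insn = 0xd1fa;   /* bne .-8 (cond = NE, imm8 = 0xfa).  */
  uint16_t ldr_insn = 0x4904; /* ldr r1, [pc, #16].  */

  /* Prints -12: dest = insn_addr + 4 + (-12) = insn_addr - 8.  */
  printf ("b offset = %d\n", (int) sbits ((uint32_t) b_insn << 1, 0, 8));

  /* Prints rt = 1, imm = 16.  */
  printf ("rt = %u, imm = %u\n",
          (unsigned) bits (ldr_insn, 8, 10),
          (unsigned) (bits (ldr_insn, 0, 7) << 2));
  return 0;
}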

	gdb/
	* arm-tdep.c (thumb2_copy_load_literal): Use registers r2 and r3.
	Align the PC value to 4 bytes.
	(thumb2_copy_block_xfer): Set dsc->u.block.xfer_addr.
	(thumb_process_displaced_32bit_insn): Extract correct value for rn.
---
 gdb/arm-tdep.c |   23 +++++++++++++----------
 1 files changed, 13 insertions(+), 10 deletions(-)

diff --git a/gdb/arm-tdep.c b/gdb/arm-tdep.c
index 67d41d2..6d76999 100644
--- a/gdb/arm-tdep.c
+++ b/gdb/arm-tdep.c
@@ -6367,21 +6367,23 @@ thumb2_copy_load_literal (struct gdbarch *gdbarch, uint16_t insn1,
 
   /* Rewrite instruction LDR Rt imm12 into:
 
-     Prepare: tmp[0] <- r0, tmp[1] <- r1, tmp[2] <- r2, r1 <- pc, r2 <- imm12
+     Prepare: tmp[0] <- r0, tmp[1] <- r2, tmp[2] <- r3, r2 <- pc, r3 <- imm12
 
-     LDR R0, R1, R2,
+     LDR R0, R2, R3,
 
-     Cleanup: rt <- r0, r0 <- tmp[0], r1 <- tmp[1], r2 <- tmp[2].  */
+     Cleanup: rt <- r0, r0 <- tmp[0], r2 <- tmp[1], r3 <- tmp[2].  */
 
 
   dsc->tmp[0] = displaced_read_reg (regs, dsc, 0);
-  dsc->tmp[1] = displaced_read_reg (regs, dsc, 1);
   dsc->tmp[2] = displaced_read_reg (regs, dsc, 2);
+  dsc->tmp[3] = displaced_read_reg (regs, dsc, 3);
 
   pc_val = displaced_read_reg (regs, dsc, ARM_PC_REGNUM);
 
-  displaced_write_reg (regs, dsc, 1, pc_val, CANNOT_WRITE_PC);
-  displaced_write_reg (regs, dsc, 2, imm12, CANNOT_WRITE_PC);
+  pc_val = pc_val & 0xfffffffc;
+
+  displaced_write_reg (regs, dsc, 2, pc_val, CANNOT_WRITE_PC);
+  displaced_write_reg (regs, dsc, 3, imm12, CANNOT_WRITE_PC);
 
   dsc->rd = rt;
 
@@ -6390,9 +6392,9 @@ thumb2_copy_load_literal (struct gdbarch *gdbarch, uint16_t insn1,
   dsc->u.ldst.writeback = 0;
   dsc->u.ldst.restore_r4 = 0;
 
-  /* LDR R0, R1, R2 */
-  dsc->modinsn[0] = 0xf851;
-  dsc->modinsn[1] = 0x2;
+  /* LDR R0, R2, R3 */
+  dsc->modinsn[0] = 0xf852;
+  dsc->modinsn[1] = 0x3;
   dsc->numinsns = 2;
 
   dsc->cleanup = &cleanup_load;
@@ -6875,6 +6877,7 @@ thumb2_copy_block_xfer (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2,
   dsc->u.block.before = bit (insn1, 8);
   dsc->u.block.writeback = writeback;
   dsc->u.block.cond = INST_AL;
+  dsc->u.block.xfer_addr = displaced_read_reg (regs, dsc, rn);
 
   if (load)
     {
@@ -8126,7 +8129,7 @@ thumb_process_displaced_32bit_insn (struct gdbarch *gdbarch, uint16_t insn1,
 	  if (bit (insn1, 9)) /* Data processing (plain binary imm).  */
 	    {
 	      int op = bits (insn1, 4, 8);
-	      int rn = bits (insn1, 0, 4);
+	      int rn = bits (insn1, 0, 3);
 	      if ((op == 0 || op == 0xa) && rn == 0xf)
 		err = thumb_copy_pc_relative_32bit (gdbarch, insn1, insn2,
 						    regs, dsc);
-- 
1.7.0.4
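
In case it helps review, the new modinsn pair above can be cross-checked
against the Thumb-2 LDR (register) encoding T2: the first halfword is
0xf850 | Rn and the second is Rt << 12 | imm2 << 4 | Rm, so rewriting
"LDR.W Rt, [PC, #imm12]" as "LDR.W r0, [r2, r3]" (with r2 = Align (PC, 4)
and r3 = imm12) gives exactly 0xf852 / 0x0003.  A small sketch
(illustrative only, not GDB code):

#include <stdint.h>
#include <stdio.h>

/* Build the two halfwords of a Thumb-2 "LDR.W Rt, [Rn, Rm]"; the LSL #imm2
   shift is left at zero, as in the displaced copy.  */
static void
encode_ldr_w_reg (unsigned rt, unsigned rn, unsigned rm,
                  uint16_t *insn1, uint16_t *insn2)
{
  *insn1 = 0xf850 | (rn & 0xf);
  *insn2 = (uint16_t) (((rt & 0xf) << 12) | (rm & 0xf));
}

int
main (void)
{
  uint16_t i1, i2;

  encode_ldr_w_reg (0, 2, 3, &i1, &i2);        /* LDR.W r0, [r2, r3].  */
  printf ("modinsn: %#06x %#06x\n", i1, i2);   /* 0xf852 0x0003.  */
  return 0;
}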

