This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.



Re: [PATCH 1/3 v2] Fast tracepoint for powerpc64le


Wei-cheng Wang wrote:

>I just found my mail client sent it to the wrong address.
>Here are some detailed explanations in my previous mails,
>in case you've not read them yet.
>https://sourceware.org/ml/gdb-patches/2015-02/msg00604.html
>https://sourceware.org/ml/gdb-patches/2015-02/msg00605.html

Sorry for the late reply; I didn't find the time to do a
thorough review before now.   Thanks again for working on
this feature.  In general, the patch is looking good; I do
have a couple of comments below.

See also Pedro's comments on the patch here:
https://sourceware.org/ml/gdb-patches/2015-03/msg00131.html

I'll follow up on the outstanding questions in the other
patches shortly.


>2. Add testcases for bytecode compilation in ftrace.exp
>    It is used to test various emit_OP functions.

Adding additional tests is good, but should be done as a separate patch
(can be done before the main ppc support patch).

>diff --git a/gdb/gdbserver/linux-ppc-low.c b/gdb/gdbserver/linux-ppc-low.c
>index 188fac0..0b47543 100644
>--- a/gdb/gdbserver/linux-ppc-low.c
>+++ b/gdb/gdbserver/linux-ppc-low.c
>
>
>+/* Put a 32-bit INSN instruction in BUF in target endian.  */
>+
>+static int
>+put_i32 (unsigned char *buf, uint32_t insn)
>+{
>+  if (__BYTE_ORDER == __LITTLE_ENDIAN)
>+    {
>+      buf[3] = (insn >> 24) & 0xff;
>+      buf[2] = (insn >> 16) & 0xff;
>+      buf[1] = (insn >> 8) & 0xff;
>+      buf[0] = insn & 0xff;
>+    }
>+  else
>+    {
>+      buf[0] = (insn >> 24) & 0xff;
>+      buf[1] = (insn >> 16) & 0xff;
>+      buf[2] = (insn >> 8) & 0xff;
>+      buf[3] = insn & 0xff;
>+    }
>+
>+  return 4;
>+}

This seems a bit overkill -- this is gdbserver code, which always
runs in the same endianness as the inferior.   So this could be
done via a simple copy.  (In order to avoid aliasing violations,
the copy should be done via memcpy -- which the compiler will
optimize away -- or, even better, the type of buf could be changed
to uint32_t throughout, since all instructions are 4 bytes.)

Returning "number of bytes" from all these routines is likewise a
bit odd on PowerPC.  (Obviously, it makes sense on Intel, which is
where you've probably copied it from.)

Maybe all the GEN_ routines should just return a uint32_t instruction
on PowerPC, which the user could then place into the buffer via e.g.
   *p++ = GEN_... 
(if p is a uint32_t *)?

>+/* Generate a ds-form instruction in BUF and return the number of bytes written
>+
>+   0      6     11   16          30 32
>+   | OPCD | RST | RA |     DS    |XO|  */
>+
>+__attribute__((unused)) /* Maybe unused due to conditional compilation.  */
>+static int
>+gen_ds_form (unsigned char *buf, int opcd, int rst, int ra, int ds, int xo)
>+{
>+  uint32_t insn = opcd << 26;
>+
>+  insn |= (rst << 21) | (ra << 16) | (ds & 0xfffc) | (xo & 0x3);

Maybe mask off excess bits of rst and rs here too?  Just to make sure
you don't get completely random instructions if the macro is used
incorrectly?   Or just assert the values are in range?  (Similarly
with the other gen_ routines.)

>+#define GEN_LWARX(buf, rt, ra, rb)	gen_x_form (buf, 31, rt, ra, rb, 20, 0)
Depending on which synchronization primitives are needed, we might want
to expose the EH flag.

>+/* Generate a md-form instruction in BUF and return the number of bytes written.
>+
>+   0      6    11   16   21   27   30 31 32
>+   | OPCD | RS | RA | sh | mb | XO |sh|Rc|  */
>+
>+static int
>+gen_md_form (unsigned char *buf, int opcd, int rs, int ra, int sh, int mb,
>+	     int xo, int rc)
>+{
>+  uint32_t insn = opcd << 26;
>+  unsigned int n = ((mb & 0x1f) << 1) | ((mb >> 5) & 0x1);
>+  unsigned int sh0_4 = sh & 0x1f;
>+  unsigned int sh5 = (sh >> 5) & 1;
>+
>+  insn |= (rs << 21) | (ra << 16) | (sh0_4 << 11) | (n << 5) | (sh5 << 1)
>+	  | (xo << 2);

"rc" is missing here.  (Doesn't matter right now, but should still be
fixed.)

>+/* Generate a sequence of instructions to load IMM in the register REG.
>+   Write the instructions in BUF and return the number of bytes written.  */
>+
>+static int
>+gen_limm (unsigned char *buf, int reg, uint64_t imm)
>+{
>+  unsigned char *p = buf;
>+
>+  if ((imm >> 8) == 0)
>+    {
>+      /* li	reg, imm[7:0] */
>+      p += GEN_LI (p, reg, imm);

Actually, you can load values up to 32767 with a single LI.

>+    }
>+  else if ((imm >> 16) == 0)
>+    {
>+      /* li	reg, 0
>+	 ori	reg, reg, imm[15:0] */
>+      p += GEN_LI (p, reg, 0);
>+      p += GEN_ORI (p, reg, reg, imm);
>+    }
>+  else if ((imm >> 32) == 0)
>+    {
>+      /* lis	reg, imm[31:16]
>+	 ori	reg, reg, imm[15:0]
>+	 rldicl	reg, reg, 0, 32 */
>+      p += GEN_LIS (p, reg, (imm >> 16) & 0xffff);
>+      p += GEN_ORI (p, reg, reg, imm & 0xffff);
>+      p += GEN_RLDICL (p, reg, reg, 0, 32);

You really need the rldicl only if the top bit
was set; otherwise, lis already zeros out the
high bits.

>+    }
>+  else
>+    {
>+      /* lis    reg, <imm[63:48]>
>+	 ori    reg, reg, <imm[48:32]>
>+	 rldicr reg, reg, 32, 31
>+	 oris   reg, reg, <imm[31:16]>
>+	 ori    reg, reg, <imm[15:0]> */
>+      p += GEN_LIS (p, reg, ((imm >> 48) & 0xffff));
>+      p += GEN_ORI (p, reg, reg, ((imm >> 32) & 0xffff));
>+      p += GEN_RLDICR (p, reg, reg, 32, 31);
>+      p += GEN_ORIS (p, reg, reg, ((imm >> 16) & 0xffff));
>+      p += GEN_ORI (p, reg, reg, (imm & 0xffff));
>+    }
>+
>+  return p - buf;
>+}


>+/* Generate a sequence for atomically exchange at location LOCK.
>+   This code sequence clobbers r6, r7, r8, r9.  */
>+
>+static int
>+gen_atomic_xchg (unsigned char *buf, CORE_ADDR lock, int old_value, int new_value)
>+{
>+  const int r_lock = 6;
>+  const int r_old = 7;
>+  const int r_new = 8;
>+  const int r_tmp = 9;
>+  unsigned char *p = buf;
>+
>+  /*
>+  1: lwsync
>+  2: lwarx   TMP, 0, LOCK
>+     cmpwi   TMP, OLD
>+     bne     1b
>+     stwcx.  NEW, 0, LOCK
>+     bne     2b */
>+
>+  p += gen_limm (p, r_lock, lock);
>+  p += gen_limm (p, r_new, new_value);
>+  p += gen_limm (p, r_old, old_value);
>+
>+  p += put_i32 (p, 0x7c2004ac);	/* lwsync */
>+  p += GEN_LWARX (p, r_tmp, 0, r_lock);
>+  p += GEN_CMPW (p, r_tmp, r_old);
>+  p += GEN_BNE (p, -12);
>+  p += GEN_STWCX (p, r_new, 0, r_lock);
>+  p += GEN_BNE (p, -16);
>+
>+  return p - buf;
>+}

A generic compare-and-swap will be correct, but probably not the most
efficient way to implement a spinlock on PowerPC.  We might want to
look into implementing release/acquire semantics along the lines of
the sample code in B.2.1.1 / B 2.2.1 of the PowerISA.  (I guess this
doesn't need to be done in the initial version of the patch.)


>+/* Implement install_fast_tracepoint_jump_pad of target_ops.
>+   See target.h for details.  */
>+
>+static int
>+ppc_install_fast_tracepoint_jump_pad (CORE_ADDR tpoint, CORE_ADDR tpaddr,
>+				      CORE_ADDR collector,
>+				      CORE_ADDR lockaddr,
>+				      ULONGEST orig_size,
>+				      CORE_ADDR *jump_entry,
>+				      CORE_ADDR *trampoline,
>+				      ULONGEST *trampoline_size,
>+				      unsigned char *jjump_pad_insn,
>+				      ULONGEST *jjump_pad_insn_size,
>+				      CORE_ADDR *adjusted_insn_addr,
>+				      CORE_ADDR *adjusted_insn_addr_end,
>+				      char *err)
>+{
>+  unsigned char buf[1024];
>+  unsigned char *p = buf;
>+  int j, offset;
>+  CORE_ADDR buildaddr = *jump_entry;
>+  const CORE_ADDR entryaddr = *jump_entry;
>+#if __PPC64__
>+  const int rsz = 8;
>+#else
>+  const int rsz = 4;
>+#endif
>+  const int frame_size = (((37 * rsz) + 112) + 0xf) & ~0xf;

See below for comments on the frame size (the 112-byte constant) ...

>+
>+  /* Stack frame layout for this jump pad,
>+
>+     High	CTR   -8(sp)
>+		LR   -16(sp)
>+		XER
>+		CR
>+		R31
>+		R29
>+		...
>+		R1
>+		R0
>+     Low	PC/<tpaddr>
>+
>+     The code flow of this jump pad,
>+
>+     1. Save GPR and SPR
>+     3. Adjust SP
>+     4. Prepare argument
>+     5. Call gdb_collector
>+     6. Restore SP
>+     7. Restore GPR and SPR
>+     8. Build a jump for back to the program
>+     9. Copy/relocate original instruction
>+    10. Build a jump for replacing orignal instruction.  */
>+
>+  for (j = 0; j < 32; j++)
>+    p += GEN_STORE (p, j, 1, (-rsz * 36 + j * rsz));

This writes to below the SP, which is OK on ppc64 since there is a
stack "red zone", but may fail on ppc32.  You should (save and) update
SP before saving the other registers there.

>+  /* Save PC<tpaddr>  */
>+  p += gen_limm (p, 3, tpaddr);
>+  p += GEN_STORE (p, 3, 1, (-rsz * 37));

This is actually a problem even on ELFv1 ppc64 since the red zone size
is only 288 bytes.  (Only on ELFv2, the red zone size is 512 bytes.)

>+  /* Save CR, XER, LR, and CTR.  */
>+  p += put_i32 (p, 0x7c600026);			/* mfcr   r3 */
>+  p += GEN_MFSPR (p, 4, 1);			/* mfxer  r4 */
>+  p += GEN_MFSPR (p, 5, 8);			/* mflr   r5 */
>+  p += GEN_MFSPR (p, 6, 9);			/* mfctr  r6 */
>+  p += GEN_STORE (p, 3, 1, -4 * rsz);		/* std    r3, -32(r1) */
>+  p += GEN_STORE (p, 4, 1, -3 * rsz);		/* std    r4, -24(r1) */
>+  p += GEN_STORE (p, 5, 1, -2 * rsz);		/* std    r5, -16(r1) */
>+  p += GEN_STORE (p, 6, 1, -1 * rsz);		/* std    r6, -8(r1) */
>+
>+  /* Adjust stack pointer.  */
>+  p += GEN_ADDI (p, 1, 1, -frame_size);		/* subi   r1,r1,FRAME_SIZE */

This violates the ABI because the back chain link is not maintained.
At any point, r1 should point to a word that holds the back chain
to the next higher frame.
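A sketch of keeping the back chain valid at every instruction
(assuming ppc64, where stdu stores the old SP and updates r1 in one
step):

```
	stdu  1, -FRAME_SIZE(1)	# store back chain and adjust SP atomically
	...			# save registers into the new frame
	ld    1, 0(1)		# restore SP via the back chain on exit
```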

>+  /* Set r4 to collected registers.  */
>+  p += GEN_ADDI (p, 4, 1, frame_size - rsz * 37);
>+  /* Set r3 to TPOINT.  */
>+  p += gen_limm (p, 3, tpoint);
>+
>+  p += gen_atomic_xchg (p, lockaddr, 0, 1);

This seems wrong.  Shouldn't *lockaddr be set to the address
of a collecting_t object, and not just "1"?

>+  /* Restore stack and registers.  */
>+  p += GEN_ADDI (p, 1, 1, frame_size);	/* addi	r1,r1,FRAME_SIZE */

Similar to above, this doesn't work on ppc32.

>+  p += GEN_LOAD (p, 3, 1, -4 * rsz);	/* ld	r3, -32(r1) */
>+  p += GEN_LOAD (p, 4, 1, -3 * rsz);	/* ld	r4, -24(r1) */
>+  p += GEN_LOAD (p, 5, 1, -2 * rsz);	/* ld	r5, -16(r1) */
>+  p += GEN_LOAD (p, 6, 1, -1 * rsz);	/* ld	r6, -8(r1) */
>+  p += put_i32 (p, 0x7c6ff120);		/* mtcr	r3 */
>+  p += GEN_MTSPR (p, 4, 1);		/* mtxer  r4 */
>+  p += GEN_MTSPR (p, 5, 8);		/* mtlr   r5 */
>+  p += GEN_MTSPR (p, 6, 9);		/* mtctr  r6 */
>+  for (j = 0; j < 32; j++)
>+    p += GEN_LOAD (p, j, 1, (-rsz * 36 + j * rsz));

>+  /* Now, insert the original instruction to execute in the jump pad.  */
>+  *adjusted_insn_addr = buildaddr + (p - buf);
>+  *adjusted_insn_addr_end = *adjusted_insn_addr;
>+  relocate_instruction (adjusted_insn_addr_end, tpaddr);
>+
>+  /* Verify the relocation size.  If should be 4 for normal copy, or 8
>+     for some conditional branch.  */
>+  if ((*adjusted_insn_addr_end - *adjusted_insn_addr == 0)
>+      || (*adjusted_insn_addr_end - *adjusted_insn_addr > 8))
>+    {
>+      sprintf (err, "E.Unexpected instruction length = %d"
>+		    "when relocate instruction.",
>+		    (int) (*adjusted_insn_addr_end - *adjusted_insn_addr));
>+      return 1;
>+    }

Hmm.  This calls back to GDB to perform the relocation of the
original instruction.  On PowerPC, there are only a handful of
instructions that need to be relocated; I'm not sure it is really
necessary to call back to GDB.  Can't those just be relocated
directly here?   This might even make the code simpler overall.

>+  buildaddr = *adjusted_insn_addr_end;
>+  p = buf;
>+  /* Finally, write a jump back to the program.  */
>+  offset = (tpaddr + 4) - buildaddr;
>+  if (offset >= (1 << 26) || offset < -(1 << 26))
I guess this needs to check for (1 << 25) like below, since we have
a signed displacement.

>+/*
>+
>+  Bytecode execution stack frame
>+
>+	|  Parameter save area    (SP + 48) [8 doublewords]
>+	|  TOC save area          (SP + 40)
>+	|  link editor doubleword (SP + 32)
>+	|  compiler doubleword    (SP + 24)  save TOP here during call
>+	|  LR save area           (SP + 16)
>+	|  CR save area           (SP + 8)
>+ SP' -> +- Back chain             (SP + 0)
>+	|  Save r31
>+	|  Save r30
>+	|  Save r4    for *value
>+	|  Save r3    for CTX
>+ r30 -> +- Bytecode execution stack
>+	|
>+	|  64-byte (8 doublewords) at initial.  Expand stack as needed.
>+	|
>+ r31 -> +-

Note that the stack frame layout as above only applies to ELFv1, but
you're actually only supporting ELFv2 at the moment.  For ELFv2, there
is no parameter save area (for this specific call), there is no compiler
or linker doubleword, and the TOC save area is at SP + 24.  (So this
location probably shouldn't be used to save something else ...)

>+  initial frame size
>+  = (48 + 8 * 8) + (4 * 8) + 64
>+  = 112 + 96
>+  = 208

This is also a bit bigger than required for ELFv2.  On the other hand,
having a larger buffer doesn't hurt.


>+static void
>+ppc64_emit_reg (int reg)
>+{
>+  unsigned char buf[10 * 4];
>+  unsigned char *p = buf;
>+
>+  p += GEN_LD (p, 3, 31, bc_framesz - 32);
>+  p += GEN_LD (p, 3, 3, 48);	/* offsetof (fast_tracepoint_ctx, regs) */

This seems a bit fragile, it would be better to determine the offset
automatically ...   (I don't quite understand why the x86 code works
either, as it is right now ...)


>+static void
>+ppc64_emit_stack_flush (void)
>+{
>+  /* Make sure bytecode stack is big enough before push.
>+     Otherwise, expand 64-byte more.  */
>+
>+  EMIT_ASM ("  std   3, 0(30)		\n"
>+	    "  addi  4, 30, -(112 + 8)	\n"
>+	    "  cmpd  7, 4, 1		\n"
>+	    "  bgt   1f			\n"
>+	    "  ld    4, 0(1)		\n"
>+	    "  addi  1, 1, -64		\n"
>+	    "  std   4, 0(1)		\n"

For full ABI compliance, the back chain needs to be maintained
at every instruction, so you always have to update the stack
pointer using stdu.  Should be simple enough to do:

 	    "  ld    4, 0(1)		\n"
 	    "  stdu  4, -64(1)		\n"


>+/* Discard N elements in the stack.  */
>+
>+static void
>+ppc64_emit_stack_adjust (int n)
>+{
>+  unsigned char buf[4];
>+  unsigned char *p = buf;
>+
>+  p += GEN_ADDI (p, 30, 30, n << 3);	/* addi	r30, r30, (n << 3) */

"n" probably isnt't too big for addi here, but should better be
verified, just in case new callers are ever added ...


>+static void
>+ppc64_emit_int_call_1 (CORE_ADDR fn, int arg1)
>+{
>+  unsigned char buf[8 * 4];
>+  unsigned char *p = buf;
>+
>+  /* Setup argument.  arg1 is a 16-bit value.  */
>+  p += GEN_LI (p, 3, arg1);		/* li	r3, arg1 */

Well ... even so, you still cannot load values > 32767 with LI.
Either check, or just call gen_limm, which should always do
the right thing.

>+static void
>+ppc64_emit_void_call_2 (CORE_ADDR fn, int arg1)
>+{
>+  unsigned char buf[12 * 4];
>+  unsigned char *p = buf;
>+
>+  /* Save TOP */
>+  p += GEN_STD (p, 3, 31, bc_framesz + 24);

On ELFv2, that is really the TOC save slot, see above.  Why not
just save TOP at 0(30)?  That should be available ...

>+  /* Setup argument.  arg1 is a 16-bit value.  */
>+  p += GEN_MR (p, 4, 3);		/* mr	r4, r3 */
>+  p += GEN_LI (p, 3, arg1);	/* li	r3, arg1 */

See above.


>+static void
>+ppc64_emit_if_goto (int *offset_p, int *size_p)
>+{
>+  EMIT_ASM ("mr     4, 3	\n"
>+	    "ldu    3, 8(30)	\n"
>+	    "cmpdi  7, 4, 0	\n"
>+	    "1:bne  7, 1b	\n");

Why not just:
    cmpdi 7, 3, 0
    ldu 3, 8(30)
    1:bne 7, 1b

>+static void
>+ppc_write_goto_address (CORE_ADDR from, CORE_ADDR to, int size)
>+{
>+  int rel = to - from;
>+  uint32_t insn;
>+  int opcd;
>+  unsigned char buf[4];
>+
>+  read_inferior_memory (from, buf, 4);
>+  insn = get_i32 (buf);
>+  opcd = (insn >> 26) & 0x3f;
>+
>+  switch (size)
>+    {
>+    case 14:
>+      if (opcd != 16)
>+	emit_error = 1;
>+      insn = (insn & ~0xfffc) | (rel & 0xfffc);
>+      break;
>+    case 24:
>+      if (opcd != 18)
>+	emit_error = 1;
>+      insn = (insn & ~0x3fffffc) | (rel & 0x3fffffc);
>+      break;

So this really should check for overflow -- I guess usually the code
generated here shouldn't be too big, but if it is, we really should
detect that and fail cleanly instead of just jumping to random
locations ...


>diff --git a/gdb/rs6000-tdep.c b/gdb/rs6000-tdep.c
>index ef94bba..dc27cfb 100644
>--- a/gdb/rs6000-tdep.c
>+++ b/gdb/rs6000-tdep.c
>@@ -966,6 +969,21 @@ rs6000_breakpoint_from_pc (struct gdbarch *gdbarch, CORE_ADDR *bp_addr,
>      return little_breakpoint;
>  }
>
>+/* Return true if ADDR is a valid address for tracepoint.  Set *ISZIE
>+   to the number of bytes the target should copy elsewhere for the
>+   tracepoint.  */
>+
>+static int
>+ppc_fast_tracepoint_valid_at (struct gdbarch *gdbarch,
>+			      CORE_ADDR addr, int *isize, char **msg)
>+{
>+  if (isize)
>+    *isize = gdbarch_max_insn_length (gdbarch);
>+  if (msg)
>+    *msg = NULL;
>+  return 1;
>+}

Should/can we check here whether the jump to the jump pad will be in
range?  Might be better to detect this early ...


>+/* Copy the instruction from OLDLOC to *TO, and update *TO to *TO + size
>+   of instruction.  This function is used to adjust pc-relative instructions
>+   when copying.  */
>+
>+static void
>+ppc_relocate_instruction (struct gdbarch *gdbarch,
>+			  CORE_ADDR *to, CORE_ADDR oldloc)

See above for whether we need this here; maybe all this should
be done directly in gdbserver.  Nothing in here seems to require
support from the full GDB code base.

>+    {
>+      /* conditional branch && AA = 0 */
>+
>+      rel = PPC_BD (insn);
>+      newrel = (oldloc - *to) + rel;
>+
>+      if (newrel >= (1 << 25) || newrel < -(1 << 25))
>+	return;
>+
>+      newrel -= 4;

Why is this correct?   If we fit in a conditional branch, the
value of newrel computed above should be correct.  Only if we
do the jump-over, we need to adjust newrel ...

>+      if (newrel >= (1 << 15) || newrel < -(1 << 15))
>+	{
>+	   /* The offset of to big for conditional-branch (16-bit).
>+	      Try to invert the condition and jump with 26-bit branch.
>+	      For example,
>+
>+		beq  .Lgoto
>+		INSN1
>+
>+	      =>
>+
>+		bne  1f
>+		b    .Lgoto
>+	      1:INSN1
>+
>+	    */
>+
>+	   /* Check whether BO is 001at or 011 at.  */
>+	   if ((PPC_BO (insn) & 0x14) != 0x4)
>+	     return;

Well, we really should handle the other cases too; there's no reason
to simply fail if this happens to be a branch on count or such ...

>+/* Implement gdbarch_gen_return_address.  Generate a bytecode expression
>+   to get the value of the saved PC.  SCOPE is the address we want to
>+   get return address for.  SCOPE maybe in the middle of a function.  */
>+
>+static void
>+ppc_gen_return_address (struct gdbarch *gdbarch,
>+			struct agent_expr *ax, struct axs_value *value,
>+			CORE_ADDR scope)
>+{
>+  struct rs6000_framedata frame;
>+  CORE_ADDR func_addr;
>+
>+  /* Try to find the start of the function and analyze the prologue.  */
>+  if (find_pc_partial_function (scope, NULL, &func_addr, NULL))
>+    {
>+      skip_prologue (gdbarch, func_addr, scope, &frame);
>+
>+      if (frame.lr_offset == 0)
>+	{
>+	  value->type = register_type (gdbarch, PPC_LR_REGNUM);
>+	  value->kind = axs_lvalue_register;
>+	  value->u.reg = PPC_LR_REGNUM;
>+	  return;
>+	}
>+    }
>+  else
>+    {
>+      /* If we don't where the function starts, we cannot analyze it.
>+	 Assuming it's not a leaf function, not frameless, and LR is
>+	 saved at back-chain + 16.  */
>+
>+      frame.frameless = 0;
>+      frame.lr_offset = 16;

This isn't correct for ppc32 ...

>+    }
>+
>+  /* if (frameless)
>+       load 16(SP)
>+     else
>+       BC = 0(SP)
>+       load 16(BC) */

In any case, this code makes many assumptions that may not always be
true.  But then again, the same is true for the i386 case, so maybe this
is OK for now ...   In general, if we have DWARF CFI for the function,
it would be much preferable to refer to that in order to determine the
exact stack layout.


Bye,
Ulrich

-- 
  Dr. Ulrich Weigand
  GNU/Linux compilers and toolchain
  Ulrich.Weigand@de.ibm.com

