[RFA-new version][gdbserver] x86 agent expression bytecode compiler (speed up conditional tracepoints)
> -----Original Message-----
> From: gdb-patches-owner@sourceware.org [mailto:gdb-patches-
> owner@sourceware.org] On Behalf Of Doug Evans
> Sent: Saturday, June 19, 2010 7:26 PM
> To: Hui Zhu
> Cc: gdb-patches@sourceware.org; Pedro Alves; Stan Shebs; Eli
> Zaretskii; tromey@redhat.com; Michael Snyder
> Subject: Re: [NEWS/RFA] Re: [gdbserver] x86 agent expression bytecode
> compiler (speed up conditional tracepoints)
>
> The fix to the compilation problem (for now) should be as trivial as
> applying Ian's suggested change.
> gcc doesn't optimize *inside* the asm statement.
As I said in a previous email, Ian's patch didn't work for me.
http://sourceware.org/ml/gdb-patches/2010-06/msg00424.html
I propose here another small patch that fixes the linking failure.
Using a volatile variable explicitly prevents the compiler from
optimizing the code out, because it can no longer assume that the
value will never change.
This works on gcc16, and the approach seems reasonable.
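To illustrate the idea outside of gdbserver, here is a minimal,
self-contained sketch of the same volatile-guard pattern (the names
are made up for the example and x86/GNU as syntax is assumed; this is
not part of the patch):

/* Volatile: the compiler must assume the value can change at run
   time, so the branch below cannot be folded away and the asm block
   between the start/end labels is kept in the object file.  */
static volatile int keep_code = 1;

int
main (void)
{
  extern unsigned char start_demo, end_demo;

  if (keep_code)
    goto skipover_demo;
  __asm__ ("start_demo:\n\tnop\n\tnop\n\tend_demo:\n\t");
 skipover_demo:
  /* The labels bracket the emitted bytes, so their difference gives
     the block length, the same way add_insns uses it below.  */
  return (int) (&end_demo - &start_demo);
}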
Pierre Muller
gdbserver/ChangeLog entry:
2010-06-20 Pierre Muller <muller@ics.u-strasbg.fr>
* linux-x86-low.c (always_true): Delete function.
(always_true): New volatile variable.
(EMIT_ASM, EMIT_ASM32): Adapt to always_true change.
Index: src/gdb/gdbserver/linux-x86-low.c
===================================================================
RCS file: /cvs/src/src/gdb/gdbserver/linux-x86-low.c,v
retrieving revision 1.19
diff -u -p -r1.19 linux-x86-low.c
--- src/gdb/gdbserver/linux-x86-low.c	15 Jun 2010 10:44:48 -0000	1.19
+++ src/gdb/gdbserver/linux-x86-low.c 20 Jun 2010 06:25:07 -0000
@@ -1484,13 +1484,12 @@ add_insns (unsigned char *start, int len
current_insn_ptr = buildaddr;
}
-/* A function used to trick optimizers. */
+/* A simple function returning the constant 1 is not enough
+   to trick modern optimizers anymore.  Using a volatile variable
+   instead seems to force inclusion of the code, as the compiler
+   must assume that the value could be changed by external code.  */
-int
-always_true (void)
-{
- return 1;
-}
+static volatile int always_true = 1;
/* Our general strategy for emitting code is to avoid specifying raw
bytes whenever possible, and instead copy a block of inline asm
@@ -1501,7 +1500,7 @@ always_true (void)
#define EMIT_ASM(NAME,INSNS) \
{ extern unsigned char start_ ## NAME, end_ ## NAME; \
add_insns (&start_ ## NAME, &end_ ## NAME - &start_ ## NAME); \
- if (always_true ()) \
+ if (always_true) \
goto skipover ## NAME; \
__asm__ ("start_" #NAME ":\n\t" INSNS "\n\tend_" #NAME ":\n\t"); \
skipover ## NAME: \
@@ -1513,7 +1512,7 @@ always_true (void)
#define EMIT_ASM32(NAME,INSNS) \
{ extern unsigned char start_ ## NAME, end_ ## NAME; \
add_insns (&start_ ## NAME, &end_ ## NAME - &start_ ## NAME); \
- if (always_true ()) \
+ if (always_true) \
goto skipover ## NAME; \
__asm__ (".code32\n\tstart_" #NAME ":\n\t" INSNS "\n\tend_" #NAME ":\n"
\
"\t.code64\n\t"); \