[NEWS/RFA] Re: [gdbserver] x86 agent expression bytecode compiler (speed up conditional tracepoints)
Doug Evans
dje@google.com
Wed Jun 16 19:21:00 GMT 2010
On Wed, Jun 16, 2010 at 12:16 PM, Pedro Alves <pedro@codesourcery.com> wrote:
> On Wednesday 16 June 2010 20:12:44, Stan Shebs wrote:
>> Doug Evans wrote:
>> > On Mon, Jun 14, 2010 at 3:19 PM, Pedro Alves <pedro@codesourcery.com> wrote:
>> >
>> >> Thanks. I've checked the whole thing in.
>> >>
>> >
>> > I'm getting build failures that go away when compiling linux-x86-low.c with -g.
>> >
>> > gcc is optimizing out the skipped over stuff I believe.
>> >
>>
>> Hmm, this particular trickery was my idea, but it always seemed
>> vulnerable to ever-smarter optimization. Does anybody have any better
>> strategy? Declaring the asm volatile didn't work, because the compiler
>> was whacking everything around it.
>>
>> The other approach is to build up instructions from bitfields, which is
>> reliable but needs a lot of setup and helper routines.
>
> Quick thought: can we stick a couple of __attribute__((used))'s in the
> macro, so the compiler doesn't optimize things away, thinking they're
> unused (given the uses are behind asm)?
>
>>
>> Stan
>> >
>> > /* Our general strategy for emitting code is to avoid specifying raw
>> > bytes whenever possible, and instead copy a block of inline asm
>> > that is embedded in the function. This is a little messy, because
>> > we need to keep the compiler from discarding what looks like dead
>> > code, plus suppress various warnings. */
>> >
>> > #define EMIT_ASM(NAME,INSNS) \
>> > { extern unsigned char start_ ## NAME, end_ ## NAME; \
>> > add_insns (&start_ ## NAME, &end_ ## NAME - &start_ ## NAME); \
>> > if (always_true ()) \
>> > goto skipover ## NAME; \
>> > __asm__ ("start_" #NAME ":\n\t" INSNS "\n\tend_" #NAME ":\n\t"); \
>> > skipover ## NAME: \
>> > ; }
This worked with the gcc 4.4.0 variant I was using.
Ian: Is there a Right way to do this?
2010-06-16 Doug Evans <dje@google.com>
* linux-x86-low.c (always_true): Change to global.
(EMIT_ASM, EMIT_ASM32): Update.
Index: linux-x86-low.c
===================================================================
RCS file: /cvs/src/src/gdb/gdbserver/linux-x86-low.c,v
retrieving revision 1.19
diff -u -p -r1.19 linux-x86-low.c
--- linux-x86-low.c 15 Jun 2010 10:44:48 -0000 1.19
+++ linux-x86-low.c 16 Jun 2010 19:15:21 -0000
@@ -1484,13 +1484,8 @@ add_insns (unsigned char *start, int len
current_insn_ptr = buildaddr;
}
-/* A function used to trick optimizers. */
-
-int
-always_true (void)
-{
- return 1;
-}
+/* Used to trick optimizers. */
+static volatile int always_true = 1;
/* Our general strategy for emitting code is to avoid specifying raw
bytes whenever possible, and instead copy a block of inline asm
@@ -1501,7 +1496,7 @@ always_true (void)
#define EMIT_ASM(NAME,INSNS) \
{ extern unsigned char start_ ## NAME, end_ ## NAME; \
add_insns (&start_ ## NAME, &end_ ## NAME - &start_ ## NAME); \
- if (always_true ()) \
+ if (always_true) \
goto skipover ## NAME; \
__asm__ ("start_" #NAME ":\n\t" INSNS "\n\tend_" #NAME ":\n\t"); \
skipover ## NAME: \
@@ -1513,7 +1508,7 @@ always_true (void)
#define EMIT_ASM32(NAME,INSNS) \
{ extern unsigned char start_ ## NAME, end_ ## NAME; \
add_insns (&start_ ## NAME, &end_ ## NAME - &start_ ## NAME); \
- if (always_true ()) \
+ if (always_true) \
goto skipover ## NAME; \
__asm__ (".code32\n\tstart_" #NAME ":\n\t" INSNS "\n\tend_" #NAME ":\n" \
"\t.code64\n\t"); \