This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.


Re: [PATCHv2] aarch64: detect atomic sequences like other ll/sc architectures


On Thu, Mar 27, 2014 at 02:07:35PM +0000, Marcus Shawcroft wrote:
> Are you sure these masks and patterns are accurate? It looks to me like
> this excludes many of the load-exclusive instructions and includes
> part of the unallocated encoding space. There are several different
> encodings to match here covering ld[a]xr{b,h,} and ld[a]xp.  The masks
> and patterns will be something like:
> 
> 0xbfff7c00 0x085f7c00
> 0xbfff7c00 0x885f7c00
> 0xbfff0000 0x887f0000
> 
> > +      if (decode_masked_match (insn, 0x3fc00000, 0x08000000))
> 
> This also looks wrong.
> 

Eh... I tested all 24 possible ldxr/stxr opcodes...
https://github.com/jkkm/aarch64-ldxr-stxr-match/blob/master/example.txt
Maybe I'm missing something, but I think it's alright.
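
To save chasing the link, what I ran boils down to something like the
standalone sketch below. It assumes decode_masked_match in the patch is just
(insn & mask) == pattern, and the encodings are hand-assembled by me, so
worth double-checking against the ARM ARM; it runs one representative
instruction from each group against the mask/pattern pairs you suggest:

  #include <stdio.h>
  #include <stdint.h>

  /* Assumption: decode_masked_match in the patch reduces to this test.  */
  static int
  decode_masked_match (uint32_t insn, uint32_t mask, uint32_t pattern)
  {
    return (insn & mask) == pattern;
  }

  int
  main (void)
  {
    /* Hand-assembled encodings -- please double-check:
         ldxrb w0, [x2]      -> 0x085f7c40
         ldxr  x0, [x2]      -> 0xc85f7c40
         ldxp  x0, x1, [x2]  -> 0xc87f0440  */
    struct { const char *name; uint32_t insn, mask, pattern; } tests[] = {
      { "ldxrb w0, [x2]",     0x085f7c40, 0xbfff7c00, 0x085f7c00 },
      { "ldxr  x0, [x2]",     0xc85f7c40, 0xbfff7c00, 0x885f7c00 },
      { "ldxp  x0, x1, [x2]", 0xc87f0440, 0xbfff0000, 0x887f0000 },
    };
    size_t i;

    for (i = 0; i < sizeof tests / sizeof tests[0]; i++)
      printf ("%-20s %s\n", tests[i].name,
              decode_masked_match (tests[i].insn, tests[i].mask,
                                   tests[i].pattern) ? "match" : "NO MATCH");
    return 0;
  }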

> > +  /* Test that we can step over ldxr/stxr. This sequence should step from
> > +     ldxr to the following __asm __volatile.  */
> > +  __asm __volatile ("1:     ldxr    %0,%2\n"                             \
> > +                    "       cmp     %0,#1\n"                             \
> > +                    "       b.eq    out\n"                               \
> > +                    "       add     %0,%0,1\n"                           \
> > +                    "       stxr    %w1,%0,%2\n"                         \
> > +                    "       cbnz    %w1,1b"                              \
> > +                    : "=&r" (tmp), "=&r" (cond), "+Q" (dword)            \
> > +                    : : "memory");
> > +
> > +  /* This sequence should take the conditional branch and step from ldxr
> > +     to the return dword line.  */
> > +  __asm __volatile ("1:     ldxr    %0,%2\n"                             \
> > +                    "       cmp     %0,#1\n"                             \
> > +                    "       b.eq    out\n"                               \
> > +                    "       add     %0,%0,1\n"                           \
> > +                    "       stxr    %w1,%0,%2\n"                         \
> > +                    "       cbnz    %w1,1b\n"                            \
> > +                    : "=&r" (tmp), "=&r" (cond), "+Q" (dword)            \
> > +                    : : "memory");
> > +
> > +  dword = -1;
> > +  __asm __volatile ("out:\n");
> > +  return dword;
> > +}
> 
> How about testing at least one instruction from each group of load
> store exclusives?
> 

I'm just following what PPC64 did. I think the only thing that test really
wants to check is that GDB correctly steps over the sequence (the original
PPC64 case didn't even test the conditional-branch path). I could add
further cases, but it seems a bit pointless; still, if you're going to
block the commit on that basis, I can cook something up.
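
For concreteness, I imagine the extra case would look roughly like the
untested sketch below: same shape as the existing dword loop, but using the
acquire/release byte forms so a second group of load/store exclusives gets
exercised. (The byte_val variable is new; shown here as a standalone program
rather than as a hunk against the test file.)

  #include <stdint.h>

  int
  main (void)
  {
    uint64_t tmp = 0, cond = 0;
    uint8_t byte_val = 0;

    /* Same shape as the existing dword loop -- load-exclusive, modify,
       store-exclusive, retry on failure -- but with ldaxrb/stlxrb so an
       acquire/release byte encoding is stepped over too.  */
    __asm __volatile ("1:     ldaxrb  %w0,%2\n"
                      "       add     %w0,%w0,#1\n"
                      "       stlxrb  %w1,%w0,%2\n"
                      "       cbnz    %w1,1b"
                      : "=&r" (tmp), "=&r" (cond), "+Q" (byte_val)
                      : : "memory");

    return byte_val;
  }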

regards, Kyle

> Cheers
> /Marcus

