Re: tutorial draft checked in
On Thu, 2006-03-09 at 19:51 -0500, Frank Ch. Eigler wrote:
> Hi -
>
> hunt wrote:
>
> > >This operation is efficient (taking a shared lock) because the
> > >aggregate values are kept separately on each processor, and are only
> > >aggregated across processors on request.
> >
> > Surprised me. I checked and this accurately described the current
> > implementation, but the shared lock is unnecessary and should probably
> > not be mentioned.
> > [...]
>
> This is the subject of bug #2224. The runtime is taking locks, and
> the translator is also emitting locks. In my opinion, the runtime
> should leave the maximum possible locking discretion to the
> translator, since e.g. only the latter knows how to enforce locking
> timeouts over contentious data.
We have argued this again and again. I see no reason to make the
translator more complicated and slower. Surely we have better things to
work on.
For the specific case of pmaps, I am sure I spent more time arguing about
it than writing it. The disadvantages of what you propose are:
1. Reader locks are slow; they do not scale as well as per-cpu spinlocks
   (see the sketch after this list).
2. The translator holds the lock for the duration of the whole probe,
   whereas the runtime holds it for as short a time as possible.
3. Having the translator handle low-level locking eliminates the
   possibility of switching the runtime to a more efficient lockless
   solution later.
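To make points 1 and 2 concrete, here is a minimal sketch in kernel-style
C of the per-cpu locking pattern the runtime pmap uses. The names
(pmap_shard, pmap_add, pmap_aggregate) are illustrative, not the real
runtime API: each CPU updates only its own copy of the data under its own
spinlock, held just long enough for the update, and the copies are
combined only on request.

    #include <linux/percpu.h>
    #include <linux/spinlock.h>

    struct pmap_shard {
            spinlock_t lock;        /* protects this CPU's copy only */
            long sum;               /* this CPU's partial aggregate */
    };

    static DEFINE_PER_CPU(struct pmap_shard, shard) = {
            .lock = __SPIN_LOCK_UNLOCKED(shard.lock),
    };

    /* Hot path, run on every probe hit: the lock is per-cpu and
     * held only for the update itself, so writers on different
     * CPUs never contend with each other. */
    static void pmap_add(long val)
    {
            struct pmap_shard *s = &get_cpu_var(shard);

            spin_lock(&s->lock);
            s->sum += val;
            spin_unlock(&s->lock);
            put_cpu_var(shard);
    }

    /* Cold path, run only on request: walk every CPU and combine
     * the partial aggregates into one total. */
    static long pmap_aggregate(void)
    {
            long total = 0;
            int cpu;

            for_each_possible_cpu(cpu) {
                    struct pmap_shard *s = &per_cpu(shard, cpu);

                    spin_lock(&s->lock);
                    total += s->sum;
                    spin_unlock(&s->lock);
            }
            return total;
    }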
> Anyway, if the advantage of having unshared per-cpu locks for the <<<
> case was large, the translator could adopt the technique just as
> easily.
Obviously not true: it is already done and working in the runtime pmap
implementation.
I ran a few benchmarks to demonstrate the scalability of pmaps and to
measure the additional overhead of the translator's reader-writer locks.
Regular maps
global syscalls
probe TEST {
    syscalls[probefunc()]++
}
Pmaps
global syscalls
probe TEST {
    syscalls[probefunc()] <<< 1
}
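Both scripts assume TEST is replaced with a real probe point before being
run through stap. The only difference between them is the operator: ++
increments an element of an ordinary global associative array, which the
translator protects with the locking it emits around the probe body,
while <<< 1 feeds a statistics aggregate (a pmap) whose values the
runtime keeps, and locks, separately on each CPU.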
The benchmarks ran on a dual-processor hyperthreaded machine, with
threads making syscalls as fast as possible. Results are in Kprobes/sec:
              1 thread   4 threads
    Regular        340         500
    Pmaps          340         940
    Pmaps*         380        1040
Pmaps* is pmaps with the redundant reader-writer locks removed. The
measured overhead of those locks is approximately 10% of the CPU time for
this test case.
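For concreteness, the redundant locking that Pmaps* removes corresponds
roughly to the pattern below. This is a sketch of the idea in
kernel-style C, not actual translator output; probe_body and globals_lock
are illustrative names.

    #include <linux/spinlock.h>

    static DEFINE_RWLOCK(globals_lock);  /* one lock shared by all CPUs */

    /* Sketch: the translator wraps the generated probe body in a
     * shared (read) lock on the script's globals and holds it for
     * the whole probe.  For a pmap update this is redundant: the
     * runtime already serializes the update with its own per-cpu
     * spinlock, so the outer lock only adds cross-cpu cacheline
     * traffic on every probe hit. */
    static void probe_body(void)
    {
            read_lock(&globals_lock);   /* held across the whole probe */
            /* ... the pmap update happens here, taking and releasing
             * its own per-cpu spinlock internally ... */
            read_unlock(&globals_lock);
    }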