Re: Performance analysis of Linux Kernel Markers 0.20 for 2.6.17
- From: Mathieu Desnoyers <compudj at krystal dot dyndns dot org>
- To: "Jose R. Santos" <jrs at us dot ibm dot com>
- Cc: Martin Bligh <mbligh at google dot com>, "Frank Ch. Eigler" <fche at redhat dot com>, Masami Hiramatsu <masami dot hiramatsu dot pt at hitachi dot com>, prasanna at in dot ibm dot com, Andrew Morton <akpm at osdl dot org>, Ingo Molnar <mingo at elte dot hu>, Paul Mundt <lethal at linux-sh dot org>, linux-kernel <linux-kernel at vger dot kernel dot org>, Jes Sorensen <jes at sgi dot com>, Tom Zanussi <zanussi at us dot ibm dot com>, Richard J Moore <richardj_moore at uk dot ibm dot com>, Michel Dagenais <michel dot dagenais at polymtl dot ca>, Christoph Hellwig <hch at infradead dot org>, Greg Kroah-Hartman <gregkh at suse dot de>, Thomas Gleixner <tglx at linutronix dot de>, William Cohen <wcohen at redhat dot com>, ltt-dev at shafik dot org, systemtap at sources dot redhat dot com, Alan Cox <alan at lxorguk dot ukuu dot org dot uk>, Jeremy Fitzhardinge <jeremy at goop dot org>, Karim Yaghmour <karim at opersys dot com>, Pavel Machek <pavel at suse dot cz>, Joe Perches <joe at perches dot com>, "Randy.Dunlap" <rdunlap at xenotime dot net>
- Date: Mon, 2 Oct 2006 11:38:49 -0400
- Subject: Re: Performance analysis of Linux Kernel Markers 0.20 for 2.6.17
- References: <20060930180157.GA25761@Krystal> <45212F1E.3080409@us.ibm.com>
Hi Jose,
* Jose R. Santos (jrs@us.ibm.com) wrote:
>
> The problem now is how do we define "high event rate". This is
> something that is highly dependent on the workload being run as well as
> the system configuration for such workload. There are a lot of places
> in the kernel that can be turned into high event rates with the right
> workload, even though they may not represent 99% of use cases.
>
> I would guess that anything above 500 event/s per-CPU on several
> realistic workloads is a good place to start.
>
Yes, it seems like a good starting point. But besides the event rate, having
the most widely used events marked in the code should also be a goal. The
markup mechanism serves two purposes (see the sketch below):
1 - identify important events in a way that follows code changes.
2 - speed up instrumentation.
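For illustration, purpose (1) looks more or less like this at a call site
(sketch only; the exact macro name and arguments in the 0.20 patch may differ):

    /* sketch: somewhere in the block I/O submission path */
    MARK(blk_request_submit, "rw %d sector %llu",
         rw, (unsigned long long)sector);

The marker sits in the source next to the code it describes, so it follows the
code as it gets refactored, and a probe can later attach to it by name rather
than by address.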
>
> >On the macro-benchmark side, no significant difference in performance
> >has been found between the vanilla kernel and a kernel "marked" with
> >the standard LTTng instrumentation.
> >
>
> Out of curiosity, how many cycles does it take to process a complete
> LTTng event, up to the point where it has been completely stored into
> the trace buffer? Since this should take a lot more than 55.74 cycles,
> it would be interesting to know at what event rate a static marker
> would stop showing as big a performance advantage over dynamic probing.
>
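(A rough way to frame the break-even question, with made-up round numbers: the
CPU overhead of a probe point is about event_rate * cycles_per_event /
cpu_frequency, so 1000 events/s per CPU at 300 cycles each on a 3 GHz CPU costs
roughly 1000 * 300 / 3e9 = 0.01% of that CPU. What matters for the comparison
is the difference in cycles per event between a static marker and a dynamic
probe, scaled the same way.)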
In my OLS paper, I pointed out that, in its current state, LTTng took about 278
cycles on the same Pentium 4. I think I could lower that by implementing per-CPU
atomic operations (removing the LOCK prefix, since the data is not shared between
CPUs; the atomic operations are only needed to protect against higher-priority
execution contexts on the same CPU).
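Roughly, the idea is something like this (x86 sketch only, not tested; the
in-kernel way to express it would be local_t style operations as in
asm/local.h):

    /* cmpxchg without the LOCK prefix: still atomic with respect to
     * interrupts and other execution contexts on the local CPU, but not
     * with respect to other CPUs -- which is enough for strictly per-CPU
     * trace buffers, and avoids the cost of bus-level serialization. */
    static inline unsigned int percpu_cmpxchg(volatile unsigned int *ptr,
                                              unsigned int old,
                                              unsigned int new)
    {
            unsigned int prev;
            asm volatile("cmpxchgl %2,%1"
                         : "=a" (prev), "+m" (*ptr)
                         : "r" (new), "0" (old)
                         : "memory");
            return prev;
    }

Reserving space in the per-CPU buffer with something like that, instead of the
LOCK-prefixed form, is the kind of change I have in mind.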
Regards,
Mathieu
OpenPGP public key: http://krystal.dyndns.org:8080/key/compudj.gpg
Key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68