This is the mail archive of the systemtap@sourceware.org mailing list for the systemtap project.
On 08/10/2016 07:47 PM, Frank Ch. Eigler wrote:
> Hi -
>
> On Wed, Aug 10, 2016 at 06:40:02PM +0300, Avi Kivity wrote:
> [...]
>> Yes. The problem is that if the function is called often (with a
>> usual short running time), then systemtap will eat all of the cpu
>> time spinning on an internal lock.
>
> Well, not just that ... trapping each function entry/exit has
> unavoidable kernel uprobes context-switchy-type overheads.
Like you say, those are unavoidable. But at least those costs are handled by scalable resources.
> Your perf report may well be misattributing the cost.
I think it's unlikely. When perf says __raw_spin_lock is guilty, it usually is.
> (Have you tried a stap script that merely traps all the same
> function calls, and has empty probe handlers?)
I can try it.
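
For reference, such a do-nothing script might look like the sketch below; the binary path /usr/bin/myapp and the "*" wildcard are illustrative assumptions, not from the thread:

```systemtap
# Trap every function entry and return in the target binary with
# empty handlers: any overhead that remains comes from the uprobes
# probe mechanism itself, not from handler work.
probe process("/usr/bin/myapp").function("*").call { }
probe process("/usr/bin/myapp").function("*").return { }
```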
>>> Note though that such analysis probably cannot be performed based
>>> only upon PC samples - or even backtrace samples. We seem to
>>> require trapping individual function entry/exit events.
>>
>> That's why I tried systemtap. It worked well on my desktop, but very
>> badly in production.
>
> It may be worth experimenting with "stap --runtime=dyninst" if your
> function analysis were restricted to basic Cish userspace that dyninst
> can handle.
Will timer.profile work with dyninst?
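
For context, timer.profile in the default kernel runtime fires on every CPU at the profiling tick; a minimal sampling sketch under that runtime might look like this (whether it is supported under --runtime=dyninst is exactly the open question here; the output formatting is illustrative):

```systemtap
# Sample the user-space PC on each profiling tick and report the
# ten most frequently sampled addresses at exit.
global samples
probe timer.profile { samples[uaddr()] <<< 1 }
probe end {
  foreach (addr in samples- limit 10)
    printf("%p: %d samples\n", addr, @count(samples[addr]))
}
```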