This is the mail archive of the mailing list for the systemtap project.


Re: script from chris mason

Hi Mike,

That is a good improvement over the original script, but I still see use of a raw kernel function probe for "schedule". We should use probes from tapsets as much as possible rather than probing kernel functions directly. Don't we already have the schedule function covered in one of our tapsets? If not, why not? And if it is there but not useful for this purpose, what should we do to improve that probe point in the tapset so it covers this case as well?
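For reference, later systemtap releases ship a scheduler tapset that exposes context-switch events. Assuming a tapset probe point along the lines of scheduler.cpu_off (present in newer tapsets; check what your release actually exports), the schedule probe in the script below could be written without naming the kernel function at all. This is only a sketch of the idea Vara is suggesting:

```
# Sketch only: assumes the scheduler tapset provides a cpu_off probe
# point (it does in later systemtap releases); adjust the probe name
# to whatever your tapset version actually exports.
probe scheduler.cpu_off {
	if (tid() in in_iosubmit)
		traces[backtrace()]++
}
```

The advantage is the same one Vara raises: if the kernel function is renamed or inlined, the tapset can absorb that change without every script breaking.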

Vara Prasad
Mike Mason wrote:

Hi Wenji,

Thanks for sending this to the mailing list. This is exactly what we want to encourage. I reviewed the script and made several changes:

- Uses system call tapset for io_submit probes.
- Uses tid() instead of pid(). That's the correct way to associate events within a thread.
- Removed separate variables for array indexes. They're unnecessary in this case.
- Uses the array membership check feature in the schedule probe.
- Uses the '-' operator on the traces array in the end probe. Eliminates the need to presort the array.
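To make those idioms concrete, here is a minimal sketch showing tid() keying, a system call tapset probe, and the '-' sort suffix in foreach. The probe point and counts are just illustrative, not part of the script under review:

```
global count_by_tid

# tid() distinguishes threads that share one pid(); syscall.open comes
# from the system call tapset rather than a raw kernel function probe.
probe syscall.open {
	count_by_tid[tid()]++
}

probe end {
	# the '-' suffix iterates in descending order of value, so the
	# array needs no explicit presorting before the limit is applied
	foreach (t in count_by_tid- limit 10)
		printf("tid %d: %d opens\n", t, count_by_tid[t])
}
```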

I confirmed that the script with my changes would build and run, but was unable to trigger the conditions that cause backtraces to be collected. Can you or Chris test this in your environment?

Once the script is finalized, it would be great to document it in the War Stories section of the wiki. We could also include it in Scripts & Tools on the wiki and in the examples directory in CVS if we think it is of general interest.


Wenji Huang wrote:


This is a script from Chris Mason <>. It finds the most common causes of schedule during the AIO io_submit call. I think it's very good that a kernel developer can use systemtap and share his script.

Please review it. Also, where should it be put? How about in examples?



#!/usr/bin/env stap
# Copyright (C) 2007 Oracle Corp.
# This was implemented to find the most common causes of schedule during
# the AIO io_submit call.  It does this by recording which pids are inside
# AIO, and recording the current stack trace if one of those pids is
# inside schedule.
# When the probe exits, it prints out the 30 most common call stacks for
# schedule().

global in_iosubmit
global traces

/* add a probe to sys_io_submit, on entry, record in the in_iosubmit
 * hash table that this proc is in io_submit */
probe syscall.io_submit {
	in_iosubmit[tid()] = 1
}

/* when we return from sys_io_submit, record that we're no longer there */
probe syscall.io_submit.return {
	/* this assumes a given proc will do lots of io_submit calls, and
	 * so doesn't do the more expensive "delete in_iosubmit[p]".  If
	 * there are lots of procs doing small numbers of io_submit calls,
	 * the hash may grow pretty big, so using delete may be better */
	in_iosubmit[tid()] = 0
}

/* every time we call schedule, check to see if we started off in
 * io_submit.  If so, record our backtrace into the traces histogram */
probe kernel.function("schedule") {
	if (tid() in in_iosubmit) {
		traces[backtrace()]++
		/* change this to if (1) if you want a backtrace every time
		 * you go into schedule from io_submit.  Unfortunately, the
		 * traces saved into the traces histogram above are truncated
		 * to just a few lines, so the only way to see the full trace
		 * is via the more verbose print_backtrace() right here. */
		if (0) {
			printf("schedule in io_submit!\n")
			print_backtrace()
		}
	}
}

/* when stap is done (via ctrl-c) go through the record of all the
 * trace paths and print the 30 most common. */
probe end {
	foreach (stack in traces- limit 30) {
		printf("%d:", traces[stack])
		print_stack(stack)
	}
}