This is the mail archive of the systemtap@sourceware.org mailing list for the systemtap project.



Re: Linux VFS cache hit rate script


On Mon, Apr 25, 2011 at 3:53 PM, Josh Stone <jistone@redhat.com> wrote:

> I was thinking about what could be causing such high N/A counts, even on
> my nearly-idle laptop.  I'm pretty sure now that these MAJOR==0 are
> actually "emulated" filesystems, like sysfs, procfs, etc.  So I don't
> think the N/A has anything to do with caching - it's just that there's
> literally no block device associated with the request.

Hmm, that would make a lot of sense. Can anyone verify this? Or how
would I go about verifying it?
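
One way I can think to check it -- just a sketch, assuming MAJOR()/MINOR()
from the dev tapset and that the f_path->dentry->d_inode->i_sb->s_type->name
chain resolves on this kernel -- would be to print the filesystem type
behind each MAJOR==0 read:

  # For every vfs_read on a MAJOR==0 device, print the filesystem type,
  # so we can see whether these really are sysfs, procfs, and friends.
  probe kernel.function("vfs_read") {
    dev = $file->f_path->dentry->d_inode->i_sb->s_dev
    if (MAJOR(dev) == 0)
      printf("%d:%d  %s\n", MAJOR(dev), MINOR(dev),
             kernel_string($file->f_path->dentry->d_inode->i_sb->s_type->name))
  }

If those all come back as sysfs, proc, tmpfs, and the like, that would
settle it.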

> Then I think the high counts here are because stap is getting into a
> feedback loop as it reads your printfs over debugfs.  A request comes
> in, your script printfs it; then stapio reads that printf and copies it
> to be read again by a tty emulator or your |sort|uniq pipe -- creating
> even more vfs_read events, in a never ending chain.  So you should
> probably at least filter stap's own events out of your results with a
> condition like:  if (pid() != stp_pid()) { printf... }

Heh... that would make some sense, indeed, at least for one of the
dev: numbers in my output.
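
Something like this is what I'll try -- a sketch of the filter Josh
suggested, folded into a per-device counter (vfs.read, dev, and stp_pid()
come from the tapsets/runtime; the counting scheme is my own):

  global reads

  probe vfs.read {
    # Skip stap's own reads so the debugfs feedback loop isn't counted.
    if (pid() != stp_pid())
      reads[MAJOR(dev), MINOR(dev)] <<< 1
  }

  probe end {
    foreach ([maj, min] in reads)
      printf("dev %d:%d  %d reads\n", maj, min, @count(reads[maj, min]))
  }

Aggregating in an array instead of printf-ing every event should also cut
down the debugfs traffic that feeds the loop in the first place.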

> Some of the other probes in the vfs tapset deal with the page cache
> directly, which I think you'll need to get true vfs caching rates.

Which probes are you thinking of? I see probes for when pages get added
to or removed from the cache, but nothing jumps out at me as the ideal
way to tell whether a given read was a hit or a miss.
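
For reference, what I can see how to do with those -- a sketch, assuming
the vfs.add_to_page_cache and vfs.remove_from_page_cache aliases as they
appear in the vfs tapset -- is count cache traffic per device, which isn't
quite the same thing as a hit rate:

  global added, removed

  # Tally page-cache insertions and evictions per device name.
  probe vfs.add_to_page_cache      { added[devname] <<< 1 }
  probe vfs.remove_from_page_cache { removed[devname] <<< 1 }

  probe end {
    foreach (d in added) {
      n = (d in removed) ? @count(removed[d]) : 0
      printf("%s: %d pages added, %d removed\n", d, @count(added[d]), n)
    }
  }

A read that adds pages to the cache presumably missed, but I don't see a
clean way to attribute the adds back to individual reads.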

Jake

