Re: Linux VFS cache hit rate script
On 04/21/2011 04:45 PM, Josh Stone wrote:
> On 04/21/2011 04:01 PM, Jake Maul wrote:
>> 2 dev: 0 devname: N/A
>> 762956 dev: 16 devname: N/A
>> 520 dev: 18 devname: N/A
>> 4183 dev: 22 devname: N/A
>> 4 dev: 23 devname: N/A
>> 1288 dev: 265289728 devname: dm-0
>> 1 dev: 27 devname: N/A
>> 872 dev: 3 devname: N/A
>> 3094 dev: 5 devname: N/A
>> 380875 dev: 6 devname: N/A
[...]
>> That bizarrely long dev number might be relevant... or maybe that's
>> just a normal quirk of LVM?
>
> It's not so bizarre - kernel device numbers are (MAJOR<<20)|MINOR, so
> this turns out to be device 253,0. That also means all those low dev
> numbers have MAJOR==0, which I think supports my theory that they are
> not normal.
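(For what it's worth, the dev tapset's MAJOR()/MINOR() helpers will do
that decode for you. Off the top of my head, something like:

  $ stap -e 'probe begin { printf("%d,%d\n", MAJOR(265289728), MINOR(265289728)); exit() }'
  253,0

since 253 << 20 == 265289728.)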
I was thinking about what could be causing such high N/A counts, even on
my nearly-idle laptop. I'm pretty sure now that these MAJOR==0 devices
are actually "emulated" filesystems, like sysfs, procfs, etc. So I don't
think the N/A has anything to do with caching - it's just that there's
literally no block device associated with the request.
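If you only care about real block devices, an untested sketch of a guard
that drops those pseudo-filesystem entries (assuming you're probing
vfs.read as before, which exposes dev and devname):

  global counts
  probe vfs.read {
    if (MAJOR(dev) == 0) next   # no backing block device (sysfs, procfs, ...)
    counts[devname] <<< 1
  }
  probe end {
    foreach (name in counts)
      printf("%d dev: %s\n", @count(counts[name]), name)
  }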
Then I think the high counts here are because stap is getting into a
feedback loop as it reads your printfs over debugfs. A request comes in
and your script printfs it; then stapio reads that printf and copies it
to be read again by a tty emulator or your |sort|uniq pipe, creating
even more vfs_read events in a never-ending chain. So you should
probably at least filter stap's own events out of your results with a
condition like: if (pid() != stp_pid()) { printf... }
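For example, guessing at what your probe and printf format look like:

  probe vfs.read {
    if (pid() != stp_pid())   # skip stapio reading our own output channel
      printf("dev: %d devname: %s\n", dev, devname)
  }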
Some of the other probes in the vfs tapset deal with the page cache
directly, which I think is what you'll need in order to measure true VFS
cache-hit rates.
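Untested, but comparing read traffic against page-cache insertions per
device might be a starting point, roughly along these lines:

  global reads, fills
  probe vfs.read {
    if (pid() != stp_pid())
      reads[devname] <<< 1
  }
  probe vfs.add_to_page_cache {
    if (pid() != stp_pid())
      fills[devname] <<< 1   # a page being inserted usually means a miss
  }
  probe end {
    foreach (name in reads)
      printf("%s: %d reads, %d page-cache fills\n", name,
             @count(reads[name]),
             (name in fills) ? @count(fills[name]) : 0)
  }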
Josh