When running with -t the statistics report only shows the statement probes
that were derived from the mark probes. That makes it hard to see which is
which.

$ stap -t -e 'probe process("stap").mark("*_start") { log("start: " . $$name) }
probe process("stap").mark("*_end") { log("end: " . $$name) }' -c "stap -e
'probe begin { exit() }'"
start: pass0__start
end: pass0__end
start: pass1a__start
start: pass1b__start
end: pass1__end
start: pass2__start
start: pass5__start
start: stap_system__start
end: pass5__end
start: pass6__start
start: stap_system__start
end: pass6__end
probe process("stap").statement(877442) (<input>:1:1), hits: 8, cycles: 2640min/5039avg/10040max
probe process("stap").statement(127589) (<input>:1:67), hits: 4, cycles: 6840min/7907avg/8980max

Also I would have expected to see all statement probes for all resolved mark
probes instead of one statement probe for each group of mark probes.
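For reference, listing mode does show which individual marks each wildcard
resolves to, which at least helps map the merged statement() entries back to
their sources by hand (a minimal sketch; the exact list depends on the stap
binary being probed):

$ stap -l 'process("stap").mark("*_start")'
$ stap -l 'process("stap").mark("*_end")'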
(In reply to comment #0)
> When running with -t the statistics report only shows the statement probes
> that were derived from the mark probes. That makes it hard to see which is
> which.
>
> $ stap -t -e 'probe process("stap").mark("*_start") { log("start: " . $$name) }
> probe process("stap").mark("*_end") { log("end: " . $$name) }' -c "stap -e
> 'probe begin { exit() }'"
> [...]
> probe process("stap").statement(877442) (<input>:1:1), hits: 8, cycles: 2640min/5039avg/10040max
> probe process("stap").statement(127589) (<input>:1:67), hits: 4, cycles: 6840min/7907avg/8980max

We currently kludge the .mark names back in for listing mode, but I've been
thinking of ways to make this appear more naturally. I may take a stab at
that, which would address your concern here as well.

> Also I would have expected to see all statement probes for all resolved mark
> probes instead of one statement probe for each group of mark probes.

This is a "feature": the timing stats are deliberately merged for each probe
body, which is why you see only two groups of timing info. If you do this on
"probe begin,end,error {...}", you'll see "probe begin+2" in the timing
stats. We should at least make this count the number of expanded wildcards
as well.

As an enhancement, maybe we could split up the timing info with additional
"-t" options. I'm not sure how much granularity we need between the current
"time each script-level probe body" and "time each resolved probe instance"
-- maybe just those two levels are enough.
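To illustrate the merging, a minimal sketch (probe begin/end/error and pp()
are standard stap; the "+2" suffix is the existing behavior described above,
not new output):

# one body attached to three probe points: -t reports a single
# merged timing entry, displayed as "probe begin+2"
probe begin, end, error { println(pp()) }

# versus separate declarations, where each body gets its own entry
probe begin { println("begin") }
probe end { println("end") }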
BTW, I saw your comment on IRC about wide timing variance in "probe syscall.*" -- if you're using the argstr, I'm not surprised at all that various syscalls take different times to compose it. Having more timer-stat granularity would allow us to see which ones were particularly high.
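For concreteness, I'm imagining something like the following (a sketch; name
and argstr are the usual syscall tapset convenience variables, and composing
argstr costs a different amount per syscall):

probe syscall.* { printf("%s(%s)\n", name, argstr) }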
(In reply to comment #2)
> BTW, I saw your comment on IRC about wide timing variance in "probe syscall.*"
> -- if you're using the argstr, I'm not surprised at all that various syscalls
> take different times to compose it. Having more timer-stat granularity would
> allow us to see which ones were particularly high.

Yes, if that were the case, then having more granular statistics would be
nice to have. But in this case you can see the variance even with really
simple probes like:

$ stap -t -e 'probe syscall.* { printf(".") }' -c 'sleep 5'
[... dots ...]
probe syscall.* (<input>:1:1), hits: 1236852, cycles: 440min/533avg/20108max

Maybe this deserves its own bug report, if it doesn't exist yet.
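Until finer-grained timing exists, the closest hand-rolled approximation I
can think of is counting hits per syscall (a sketch, not an existing -t
mode):

global hits
probe syscall.* { hits[name]++ }     # count hits per syscall
probe end {
  foreach (n in hits- limit 10)      # ten most frequent first
    printf("%s: %d\n", n, hits[n])
}

That narrows things down by frequency, though it still can't attribute the
per-hit cycle variance to particular syscalls.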
(In reply to comment #3)
> Yes, if that were the case, then having more granular statistics would be
> nice to have. But in this case you can see the variance even with really
> simple probes like:
>
> $ stap -t -e 'probe syscall.* { printf(".") }' -c 'sleep 5'
> [... dots ...]
> probe syscall.* (<input>:1:1), hits: 1236852, cycles: 440min/533avg/20108max
>
> Maybe this deserves its own bug report, if it doesn't exist yet.

I don't think so -- the average here is close to the minimum, over more than
a million hits. The maximum translates to roughly a hundred cache misses,
which is plausible considering the amount of code & data involved in
printf().
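(Back-of-the-envelope, assuming a miss to main memory costs on the order of
a couple hundred cycles: 20108max - 533avg is roughly 19600 extra cycles,
i.e. on the order of a hundred misses.)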
*** This bug has been marked as a duplicate of 11736 ***