This is the mail archive of the
systemtap@sourceware.org
mailing list for the systemtap project.
[Bug translator/2293] confirm preemption/interrupt blocking in systemtap probes
- From: "prasadav at us dot ibm dot com" <sourceware-bugzilla at sourceware dot org>
- To: systemtap at sources dot redhat dot com
- Date: 5 May 2006 16:09:51 -0000
- Subject: [Bug translator/2293] confirm preemption/interrupt blocking in systemtap probes
- References: <20060207171449.2293.fche@redhat.com>
- Reply-to: sourceware-bugzilla at sourceware dot org
------- Additional Comments From prasadav at us dot ibm dot com 2006-05-05 16:09 -------
Subject: Re: confirm preemption/interrupt blocking in systemtap probes
Martin Hunt wrote:
>On Wed, 2006-03-29 at 11:54 +0000, fche at redhat dot com wrote:
>
>>I recommend opening a new bug against the runtime, addressing specifically the
>>issue of I/O buffering near the time of shutdown. I recall suggesting looking
>>into whether stpd and the kernel-side runtime message handler can work together
>>to drain the buffers before starting the module_exit process, to provide the
>>maximum static space to the end probes. (That space amount would
>>uncoincidentally match the command line option "-s NUM" to the initial
>>compilation stage, and thus make some intuitive sense to the user.) Did you try
>>that?
>>
>>
>
>I think I originally suggested it, and I have thought further about it.
>I hoped to find a better solution than imposing another limit users have
>to compute. For collecting large amounts of data, MAXMAPENTRIES needs to
>be raised, and then you have to calculate how much space that data will
>take up when "printed" into the output buffers. Another problem is that
>for relayfs the buffer is divided into per-cpu sub-buffers, so the
>maximum data that can be sent is NUM/cpus.
>
>Martin
>
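Martin's per-cpu sub-buffer point above can be sketched in a few lines of plain Python (illustrative numbers only; this is not stpd code, and the even split is an assumption about how relayfs divides the buffer):

```python
# With relayfs, a total buffer of NUM bytes is split into per-cpu
# sub-buffers, so the data any one CPU can emit is bounded by
# NUM / ncpus rather than NUM.
def per_cpu_capacity(total_bytes, ncpus):
    """Bytes available to each CPU when the buffer is split evenly."""
    return total_bytes // ncpus

total = 8 * 1024 * 1024          # hypothetical 8 MB buffer ("-s 8")
print(per_cpu_capacity(total, 4))  # 2097152, i.e. 2 MB per CPU on a 4-way box
```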
There is a generic problem that we have to solve in SystemTap to support
long-running scripts or a large number of probes. The common problem in
these scenarios is that they generate far more data than the maps can
hold. There are two solutions I can think of to help address this:
1) We should have the capability to truncate the map, keeping only the
top "n" entries. If one is looking for general trends, the top few
entries are more than enough, so this solution could be useful.
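The truncation idea in (1) might look something like this (a plain Python sketch, not the SystemTap runtime; the sort-by-value ordering is my assumption about what "top" means here):

```python
# Keep only the n largest-valued entries of an aggregation map,
# discarding the long tail that would otherwise overflow the map.
def truncate_top_n(stats, n):
    """Return a new map holding only the n largest-valued entries."""
    keep = sorted(stats.items(), key=lambda kv: kv[1], reverse=True)[:n]
    return dict(keep)

hits = {"read": 950, "write": 400, "open": 120, "close": 80, "mmap": 15}
print(truncate_top_n(hits, 3))  # {'read': 950, 'write': 400, 'open': 120}
```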
2) The second solution is to periodically dump the maps to userspace and
have stpd reconstruct the full maps from the dumps during
post-processing. I have not looked at whether some maps have
mathematical properties that do not lend themselves to post-aggregation;
we have to look into that aspect.
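A minimal sketch of the merge step in (2), again in plain Python rather than stpd: sums and counts merge by simple addition, and an average is only reconstructible if each dump carries `(sum, count)` rather than the computed average — one concrete case of the post-aggregation concern above.

```python
# Merge per-interval dumps of {key: (sum, count)} into one map,
# as a userspace post-processor might do after the module has
# periodically flushed its maps.
def merge_dumps(dumps):
    """Combine partial dumps by adding sums and counts per key."""
    merged = {}
    for dump in dumps:
        for key, (s, c) in dump.items():
            old_s, old_c = merged.get(key, (0, 0))
            merged[key] = (old_s + s, old_c + c)
    return merged

dumps = [{"read": (300, 3)}, {"read": (100, 1), "write": (50, 2)}]
m = merge_dumps(dumps)
print(m)                            # {'read': (400, 4), 'write': (50, 2)}
print(m["read"][0] / m["read"][1])  # overall average: 100.0
```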
Coming to relayfs, I believe stpd has an option to specify the overall
buffer size, but not the number and size of the sub-buffers. As Frank
mentioned, I think it may be a good idea to be able to specify the
number of sub-buffers along with the overall buffer size. Script writers
are likely to have a better idea of how much data their script collects
than the person running it, which makes me think the script writer
should also have an option to specify the type of transport used, procfs
or relayfs, and, if it is relayfs, the number of sub-buffers and the
total buffer size. If the size is specified both on the command line and
in the script, I think we should use the max of the two.
bye,
Vara Prasad
--
http://sourceware.org/bugzilla/show_bug.cgi?id=2293
------- You are receiving this mail because: -------
You are the assignee for the bug, or are watching the assignee.