This is the mail archive of the systemtap@sourceware.org mailing list for the systemtap project.



Re: [pcp] suitability of PCP for event tracing


On Sun, 19 Sep 2010 00:21:34 +1000, Ken McDonell wrote:

 kenj> On 17/09/2010 9:18 AM, nathans@aconex.com wrote:

 kenj> For the existing tools, I think we'll probably end up adding a
 kenj> routine to libpcp to turn a PM_TYPE_EVENT "blob" into a
 kenj> pmResult ... this will work for pminfo and pmprobe where the
 kenj> timestamps are ignored.  For pmie, pmval, pmdumptext, pmchart,
 kenj> ... I'm not sure how they can make sense of the event trace
 kenj> data in real time, expecting data from time t, and getting a
 kenj> set of values with different timestamps smaller than t is going
 kenj> to be a bit odd for these ones.

I've been trying to come up with a use case where combining event data
and statistical data would be useful in live mode (current tooling
aside) and I cannot really see one - Nathan or Frank, what did you
have in mind?

I can see how the ability to combine events and statistical data into
an archive could be useful for post-mortem analysis (assuming there is
a suitable tool for the task), but even there the usefulness is
limited to having all the data in one place.

 >> Main concerns center around the PMDA buffering scheme ... things like,
 >> how does a PMDA decide what a sensible timeframe for buffering data is
 >> (probably will need some kind of per-PMDA memory limit on buffer size,
 >> rather than time frame).  Also, will the PMDA have to keep track of
 >> which clients have been sent which (portions of?) buffered data?  (in
 >> case of multiple clients with different request frequencies ... might
 >> get a bit hairy?).

 kenj> I'm bald, so hairy is no threat ... 8^)>

Wouldn't it disqualify you from working on hairy problems?

 kenj> I _do_ think this is simple
 kenj> - doubly linked list (or similar) of events
 kenj> - reference count when event arrives based on number of matching client 
 kenj> registrations
 kenj> - scan list for each client gathering matching events, decrementing 
 kenj> reference counts
 kenj> - free event record when reference count is zero
 kenj> - tune buffer depth per client with pmStore
 kenj> - cull list if client is not keeping up and return PM_ERR_TOOSLOW
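For concreteness, the list Ken sketches above might look roughly like this in C. All names here are illustrative, not actual libpcp API; per-client "seen" state is modelled with a sequence number, which is one of the per-client variants he mentions:

```c
/* Sketch of a reference-counted event list for a PMDA.
 * Hypothetical names throughout - not real libpcp structures. */
#include <stdlib.h>

typedef struct event {
    struct event *prev, *next;
    unsigned long seq;        /* monotonic sequence number */
    int           refcount;   /* clients still to see this event */
    /* ... event payload (timestamp, parameters) would go here ... */
} event_t;

typedef struct {
    event_t      *head, *tail;
    unsigned long next_seq;
    int           nclients;   /* matching client registrations */
} event_queue_t;

typedef struct {
    unsigned long last_seq;   /* highest sequence this client has seen */
} client_t;

/* On arrival: append, stamp a sequence number, and set the reference
 * count from the number of matching client registrations. */
static void event_enqueue(event_queue_t *q, event_t *e)
{
    e->seq = ++q->next_seq;
    e->refcount = q->nclients;
    e->prev = q->tail;
    e->next = NULL;
    if (q->tail) q->tail->next = e; else q->head = e;
    q->tail = e;
}

/* On each pmFetch: scan the list, gather events this client has not
 * yet seen, decrement reference counts, and free an event record once
 * its count drops to zero.  Returns the number of events delivered. */
static int event_drain(event_queue_t *q, client_t *c)
{
    int delivered = 0;
    for (event_t *e = q->head; e != NULL; ) {
        event_t *next = e->next;
        if (e->seq > c->last_seq) {
            delivered++;
            c->last_seq = e->seq;
            if (--e->refcount == 0) {
                if (e->prev) e->prev->next = e->next; else q->head = e->next;
                if (e->next) e->next->prev = e->prev; else q->tail = e->prev;
                free(e);
            }
        }
        e = next;
    }
    return delivered;
}
```

The culling-a-slow-client and PM_ERR_TOOSLOW parts are left out, but they would amount to forcing a laggard's last_seq forward and dropping its reference counts.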

I think it should be even simpler - one list per PMDA, a fixed number
of entries (changeable via pmStore), a "cookie" per event; the client
gets all the events and is responsible for figuring out which ones it
has already seen, or whether it was too slow and there is a gap in the
event list.
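The scheme above keeps all the state on the client side; a minimal sketch (names and ring depth are made up for illustration) could be:

```c
/* Sketch of the simpler scheme: one fixed-size ring per PMDA, a
 * monotonically increasing "cookie" per event.  Every client gets the
 * whole ring on fetch and uses the cookies to drop what it has seen
 * and to detect a gap when it was too slow.  No per-client state in
 * the PMDA at all.  Hypothetical names - not real libpcp API. */

#define RING_DEPTH 8   /* would be changeable via pmStore */

typedef struct {
    unsigned long cookie;   /* 0 = empty slot */
    /* ... event payload would go here ... */
} ring_event_t;

typedef struct {
    ring_event_t  slot[RING_DEPTH];
    unsigned long next_cookie;
} event_ring_t;

/* New events simply overwrite the oldest slot. */
static void ring_add(event_ring_t *r)
{
    unsigned long c = ++r->next_cookie;
    r->slot[c % RING_DEPTH].cookie = c;
}

/* Client side: given the cookie of the last event it saw, count the
 * new events available and report whether a gap (overrun) occurred. */
static int ring_news(const event_ring_t *r, unsigned long last_seen,
                     int *gap)
{
    unsigned long oldest = 0, newest = 0, from = last_seen + 1;

    for (int i = 0; i < RING_DEPTH; i++) {
        unsigned long c = r->slot[i].cookie;
        if (c == 0) continue;
        if (oldest == 0 || c < oldest) oldest = c;
        if (c > newest) newest = c;
    }
    *gap = (oldest > from);          /* events overwritten unseen */
    if (oldest > from) from = oldest;
    return newest >= from ? (int)(newest - from + 1) : 0;
}
```

The trade-off versus the reference-counted list is redundant transfer (every fetch carries the whole ring) in exchange for a PMDA that never has to track clients.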

 kenj> Plus several variants around lists per client or bit maps per client to 
 kenj> reduce matching overhead on each pmFetch.

How would per-client list entries be trimmed? Are you going to assume
a well-behaved client? And how do PM_CONTEXT_LOCAL and multiple
clients interact?

max

