This is the mail archive of the systemtap@sources.redhat.com mailing list for the systemtap project.



Re: more transport questions


On Thu, 2005-07-07 at 15:53 -0400, Frank Ch. Eigler wrote:
> Hi -
> 
> 
> On Wed, Jul 06, 2005 at 02:55:38PM -0500, Tom Zanussi wrote:
> > [...]
> >  > on-the-fly impractical?  On the transmission kernel-probe side, would
> >  > a relay_flush() after every probe handler be impractical?
> >
> > Merging on-the-fly would be practical if you flushed every per-cpu
> > channel at the same time, i.e. forced each per-cpu buffer to finish
> > the current sub-buffer so it could be read, but that would defeat
> > the purpose of using relayfs for high-speed logging.  [...]
> 
> OK, I didn't really mean relay_flush but rather stp_print_flush.
> Besides, given a globally unique message serial number, it would be
> straightforward to merge on the fly, even if the processors generate
> trace messages (roughly, run probe functions) at very different
> intervals/rates.
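
(For concreteness, a user-space consumer could merge the per-cpu
streams on such a serial number roughly as sketched below.  The record
header layout and all of the names here are illustrative assumptions,
not actual systemtap or relayfs code.)

#include <stdint.h>

/* Assumed record header: each probe record starts with a globally
 * unique, monotonically increasing sequence number. */
struct rec_hdr {
        uint64_t seq;   /* global serial number assigned at probe time */
        uint32_t len;   /* length of the payload that follows */
};

/* Pick the per-cpu stream whose pending record has the smallest
 * sequence number; returns -1 once every stream is drained. */
static int next_stream(struct rec_hdr *pending[], int ncpus)
{
        int best = -1, cpu;

        for (cpu = 0; cpu < ncpus; cpu++) {
                if (!pending[cpu])
                        continue;
                if (best < 0 || pending[cpu]->seq < pending[best]->seq)
                        best = cpu;
        }
        return best;
}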

Currently, we use _stp_print_flush() to tell the runtime to put a
timestamp in front of the accumulated data and send it to relayfs or
netlink.  I have wanted to make this more efficient for a long time.
Tom and I discussed it last month, but I haven't finished testing yet
(I'm waiting for better netlink and relayfs tuning info from Tom).
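
(Roughly, the current flush path amounts to something like the sketch
below.  The struct, the buffer size, the timestamp source, and the
channel handle are all assumptions for illustration; relay_write()
stands in for whatever write primitive the transport provides, and the
netlink path would be analogous.)

#include <linux/relay.h>
#include <linux/types.h>
#include <asm/timex.h>

#define STP_BUF_SIZE 8192               /* assumed per-cpu buffer size */

/* Illustrative per-cpu print buffer; not the real runtime struct. */
struct stp_print_buf {
        size_t len;                     /* bytes currently accumulated */
        char data[STP_BUF_SIZE];
};

/* Sketch of the current flush: stamp the accumulated data with a
 * timestamp header and hand both pieces to the transport.  Simplified
 * on purpose -- real code would presumably emit header and data as a
 * single record. */
static void stp_print_flush_sketch(struct rchan *chan,
                                   struct stp_print_buf *buf)
{
        struct {
                u64 timestamp;
                u32 len;
        } hdr;

        hdr.timestamp = (u64)get_cycles();   /* any timestamp source */
        hdr.len = (u32)buf->len;

        relay_write(chan, &hdr, sizeof(hdr));
        relay_write(chan, buf->data, buf->len);
        buf->len = 0;
}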

What I would like to do is have each probed function that outputs
anything call _stp_entry() at the start.  That puts a sequence number
in the output buffer.  No flush is done until a buffer fills.  We
write data sequentially, which may let us remove the internal buffer
in the runtime and write directly into the relayfs buffers.
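
(A minimal sketch of that idea, assuming an atomic global counter and
a relay_reserve()-style reservation primitive; the names and record
layout are illustrative, not the actual runtime code.)

#include <linux/relay.h>
#include <linux/types.h>
#include <linux/string.h>
#include <asm/atomic.h>

/* Global sequence counter; atomic_t is enough for a sketch, though a
 * wider counter might be wanted in practice. */
static atomic_t stp_seq = ATOMIC_INIT(0);

/* Sketch of the proposed _stp_entry(): reserve room for a small header
 * directly in the per-cpu relayfs buffer and stamp it with the next
 * global sequence number.  relay_reserve() stands in for whatever
 * reservation primitive the transport actually exposes. */
static void *stp_entry_sketch(struct rchan *chan)
{
        u32 seq = (u32)atomic_inc_return(&stp_seq);
        void *p = relay_reserve(chan, sizeof(seq));

        if (p)
                memcpy(p, &seq, sizeof(seq));
        /* Probe output is then appended sequentially after this header;
         * nothing is flushed explicitly -- a sub-buffer is handed to
         * user space only when it fills. */
        return p;
}

(With a sequence number in front of every record, user space can merge
the per-cpu streams on the fly by always consuming the record with the
smallest outstanding sequence number, as in the earlier merge sketch.)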

Does this make sense, and is it too disruptive to make such a change
at this date?

Martin


