
Re: [PATCH] kprobes for s390 architecture


On Thu, Jun 22, 2006 at 01:28:36PM +0200, Jan Glauber wrote:
> On Wed, 2006-06-21 at 10:34 -0700, Mike Grundy wrote:
> > On Wed, Jun 21, 2006 at 06:38:40PM +0200, Martin Schwidefsky wrote:
> > > You misunderstood me here. I'm not talking about storing the same piece
> > > of data to memory on each processor. I'm talking about isolating all
> > > other cpus so that the initiating cpu can store the breakpoint to memory
> > > without running into the danger that another cpu is trying to execute it
> > > at the same time. But probably the store should be atomic in regard to
> > > instruction fetching on the other cpus. It is only two bytes and it
> > > should be aligned.
> 
> Preemption disabling is not necessary around smp_call_function(), since
> smp_call_function() takes a spin lock. But smp_call_function() is wrong
> here, it calls the code on all other CPUs but not on our own. Please use
> on_each_cpu() instead.
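
A minimal sketch of the call-site difference being pointed out here, assuming
the 2006-era four-argument signatures quoted below; kprobes_sync_cpu() is a
made-up no-op helper, not code from the actual patch:

        #include <linux/smp.h>

        /* the callback only needs to exist: each cpu passing through it
         * is the serialization point */
        static void kprobes_sync_cpu(void *info)
        {
        }

        static void sync_other_cpus(void)
        {
                /* runs the callback on every cpu *except* this one */
                smp_call_function(kprobes_sync_cpu, NULL, 0, 1);
        }

        static void sync_all_cpus(void)
        {
                /* runs the callback on every cpu, this one included */
                on_each_cpu(kprobes_sync_cpu, NULL, 0, 1);
        }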

But on_each_cpu() does:

        preempt_disable();
        ret = smp_call_function(func, info, retry, wait);
        local_irq_disable();
        func(info);
        local_irq_enable();
        preempt_enable();
 
I'm confused. I don't actually need to swap the instruction on each cpu; I
need to make sure no cpu is fetching that instruction while I change it.
s390 doesn't have a flush_icache_range() (which the other arches use after
the swap). I thought the synchronization that smp_call_function() provides
was the primary reason for using it here, not repeatedly changing the same
area of memory. If you'd prefer I use on_each_cpu() instead of
smp_call_function(), no problem.
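
A rough sketch of that arming sequence (reusing the hypothetical
kprobes_sync_cpu() no-op from the sketch above; the breakpoint opcode value
and the exact fields used are assumptions, not the actual patch):

        #include <linux/kprobes.h>
        #include <linux/smp.h>

        /* assumed 2-byte s390 breakpoint opcode, for illustration only */
        #define S390_BREAKPOINT_INSN 0x0002

        static void sketch_arm_kprobe(struct kprobe *p)
        {
                /* aligned 2-byte store of the breakpoint opcode */
                *(u16 *) p->addr = S390_BREAKPOINT_INSN;

                /* no flush_icache_range() on s390, so instead make every
                 * cpu (including this one) run the no-op callback, so none
                 * of them can still be mid-fetch of the old opcode */
                on_each_cpu(kprobes_sync_cpu, NULL, 0, 1);
        }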

Thanks
Mike

