Re: sleeping, locks and debug kernels


On 12/12/2011 09:21 AM, Mark Wielaard wrote:

> Hi,
> 

... sleeping issue described ...

> Reviews of commit 262f75 and commit ab8633 would be appreciated, since any
> change in the locking code is always a little scary.

I'll look at these later.

> There is one issue I don't know how to solve: stap_start_task_finder().
> It takes rcu_read_lock(), goes over every task, inspects each one,
> calls utrace_attach on it if appropriate, gets the task->mm, adds the
> engines to some internal data structures, checks that unprivileged
> users don't get access to utrace engines of tasks that aren't theirs,
> and only after doing that for every task releases the lock. The
> problem is that utrace_attach_task() may sleep, since it must allocate
> memory to create a new engine, which is not nice while we hold the
> rcu_read_lock. But I don't immediately see how to split up this loop
> so that we only hold the lock while doing non-sleepy things.
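(For reference, the pattern being described looks roughly like the sketch
below. The function and ops names are illustrative assumptions, not the
actual stap_start_task_finder() code; the point is the GFP_KERNEL
allocation that can sleep inside the RCU read-side critical section.)

#include <linux/sched.h>
#include <linux/rcupdate.h>
#include <linux/utrace.h>
#include <linux/err.h>

/* Hypothetical ops structure, just to make the sketch compile. */
static const struct utrace_engine_ops my_ops;

static int attach_to_all_tasks(void)
{
	struct task_struct *tsk;
	int rc = 0;

	rcu_read_lock();
	for_each_process(tsk) {
		struct utrace_engine *engine;

		/* May sleep: utrace_attach_task() allocates a new engine
		 * with GFP_KERNEL, which is what trips the "sleeping
		 * function called from invalid context" checks on debug
		 * kernels while the RCU read lock is held. */
		engine = utrace_attach_task(tsk, UTRACE_ATTACH_CREATE,
					    &my_ops, NULL);
		if (IS_ERR(engine)) {
			rc = PTR_ERR(engine);
			break;
		}
		/* ... inspect tsk, record the engine, check
		 * unprivileged-user permissions, etc. ... */
	}
	rcu_read_unlock();
	return rc;
}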


The task_finder2 and task_finder code is identical in this area, but
running with task_finder2 won't trigger the sleeping errors.

The task_finder2 uses our internal mini-utrace, which also allocates
memory, but it uses GFP_IOFS instead of GFP_KERNEL for those
allocations.  GFP_IOFS doesn't include __GFP_WAIT, so we'll never wait
for memory, but it also means the allocation can fail.  I was OK with
that tradeoff.
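
To make the tradeoff concrete, a non-waiting allocation looks roughly
like this (a hypothetical helper, not the actual mini-utrace code):

#include <linux/slab.h>
#include <linux/gfp.h>

/* GFP_IOFS is __GFP_IO | __GFP_FS with no __GFP_WAIT, so kzalloc()
 * will not sleep here -- safe under rcu_read_lock() -- but it can
 * return NULL under memory pressure, and the caller has to treat that
 * as an attach failure rather than waiting for memory. */
static void *engine_alloc_nowait(size_t size)
{
	return kzalloc(size, GFP_IOFS);
}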

I don't know how we could split up that loop.  Perhaps Oleg might have
some thoughts or might be persuaded to change the memory allocation
flags in utrace itself.

-- 
David Smith
dsmith@redhat.com
Red Hat
http://www.redhat.com
256.217.0141 (direct)
256.837.0057 (fax)

