This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.



Re: [PATCH] Honour SIGILL and SIGSEGV in cancel breakpoint


On 09/23/2014 09:42 AM, Yao Qi wrote:
> Pedro Alves <palves@redhat.com> writes:
> 
>>> count_events_callback and select_event_lwp_callback in GDBServer need to
>>> honour SIGILL and SIGSEGV too.  I wrote a patch to call
>>> lp_status_is_sigtrap_like_event in them, but the regression test results
>>> aren't changed, which is a surprise to me.  I thought some failures
>>> should be fixed.  I'll look into it more deeply.
>>
>> Maybe you're getting lucky with scheduling.
>> pthreads.exp and schedlock.exp I think are the most sensitive to this.
> 
> I ran them ten times; the results didn't change.
> 
>>
>> See:
>>  https://www.sourceware.org/ml/gdb-patches/2001-06/msg00250.html
> 
> Random event lwp selection was added in the URL you gave above, to
> prevent thread starvation.  However, in my configuration
> (arm-linux with SIGILL), event lwp selection does nothing, yet no test
> failures result.  GDBserver processes events like this:
> 
>  1. GDBserver gets a breakpoint event from waitpid (-1, ).
>  2. GDBserver stops all lwps (stop_all_lwps), in which wait_for_sigstop
>  drains all pending reports from the kernel.
>  3. GDBserver selects one lwp and cancels the breakpoint on the rest.  If
>  event lwp selection does nothing, the selected lwp is the one GDBserver
>  got in step 1.
>  4. GDBserver steps over the breakpoint and resumes all the threads.
>  Go back to step 1 and wait until any thread hits a breakpoint.
> 
> As we can see, if waitpid (-1, ) (in step #1) returns the event lwp
> randomly, we don't have to randomly select the event lwp again in step
> #3.  IMO, which thread hits the breakpoint first in a multi-threaded
> program is naturally random.

Depends on scheduling.  When the program is resumed, the thread that had
last hit the breakpoint may manage to be scheduled before other threads
manage to be scheduled and hit a breakpoint themselves.

> That is the reason why no test failures result
> without event lwp selection in my experiments.  IOW, on a platform
> where waitpid (-1, ) returns the event lwp randomly,
> we don't need such
> random lwp selection at all.  However, if the kernel's waitpid
> implementation always iterates over the list of children in a fixed
> order, only events of the lwps at the front of the list may be reported,
> and the rest of the lwps may be starved.  In that case, we still have to
> rely on random selection inside GDB/GDBserver to avoid starvation.

I'm looking at kernel/exit.c in the Linux kernel sources I have handy
(14186fea0cb06bc43181ce239efe0df6f1af260a), specifically at
do_wait() / do_wait_thread() / ptrace_do_wait(), and it seems to me
that waitpid always walks the task list in the same order:

	set_current_state(TASK_INTERRUPTIBLE);
	read_lock(&tasklist_lock);
	tsk = current;
	do {
		retval = do_wait_thread(wo, tsk);
		if (retval)
			goto end;

		retval = ptrace_do_wait(wo, tsk);
		if (retval)
			goto end;

		if (wo->wo_flags & __WNOTHREAD)
			break;
	} while_each_thread(current, tsk);
	read_unlock(&tasklist_lock);

So it seems it's still as Michael said back then: "If more than one
LWP is currently stopped at a breakpoint, the highest-numbered one
will be returned.", and it's likely you're getting lucky with
scheduling.  E.g., multi-core vs. single-core differences, or
improvements in the kernel's scheduling algorithms, may be masking the
issue.  Or, simply, the tests don't really exercise the starvation
issue properly.

Anyway,

> The patch below is updated to call lp_status_maybe_breakpoint in both
> breakpoint cancellation and event lwp selection.

This patch is OK.

Thanks,
Pedro Alves

