Re: [RFC] Allowing all threads of all|current process(es) to be resumed [new command + docs]
- From: Doug Evans <dje at google dot com>
- To: Pedro Alves <pedro at codesourcery dot com>
- Cc: gdb-patches at sourceware dot org
- Date: Sun, 31 May 2009 09:33:57 -0700
- Subject: Re: [RFC] Allowing all threads of all|current process(es) to be resumed [new command + docs]
- References: <200905301151.52892.pedro@codesourcery.com>
On Sat, May 30, 2009 at 3:51 AM, Pedro Alves <pedro@codesourcery.com> wrote:
> Currently, with the generic framework, if GDB is attached to
> multiple processes, issuing a "continue", "next", etc., makes GDB
> resume all threads of all processes.  But, with the multi-forks
> framework, GDB only debugs one of the forks at a given time, while
> leaving the others stopped.
Except in non-stop mode when "c -a" is required to continue all
threads, "c" by itself just continues the current thread (right?).
[And IWBN if there were a way to continue a subgroup of threads, though
I realize "c N" is already taken. "c [-a] [-t T1 T2 T3] [--] [N]"? I
realize that's perhaps not ideal, but short of adding another command
it's the first thing that came to me. :-) And no claim is made that
this hasn't been discussed before ...]
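To spell out what I'm getting at, a rough non-stop session sketch (the
annotations are inline, and the last line is the missing piece, not
something that works today):

  (gdb) set non-stop on
  (gdb) continue &
  # resumes only the current thread
  (gdb) continue -a &
  # resumes all threads
  # but there's no way to say "resume just threads 1, 3 and 5"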
I wonder what the normal usage pattern for multiprocess debugging will be.
An alternative to "set scheduler-multiple on|off" is to add another
flag to the various commands.
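Something like this (the "-m" spelling below is just a placeholder for
whatever flag name would get picked):

  # global mode switch:
  (gdb) set scheduler-multiple on
  (gdb) continue
  # versus an option on the command itself:
  (gdb) continue -m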
"scheduler-multiple" may be The Right Thing To Do, but adding more
global state that controls command behaviour gives me pause
("exec-direction" anyone?). Another way to add scheduler-locking
would have been to add options to "step", etc. "s -l" or some such
["l" for "locking" though "locking" out of place here, it's just an
example anyway]. It's easier to script:
  # This isn't implementable today, it's just for illustration.
  define lstep
    set $save_scheduler_locking = [get scheduler-locking]
    set scheduler-locking on
    try
      step
    finally
      set scheduler-locking $save_scheduler_locking
    end
  end
versus
  define lstep
    step -l
  end
I'd be curious to hear what others think.