This is the mail archive of the
gdb@sourceware.org
mailing list for the GDB project.
Re: Single stepping and threads
- From: "Rob Quill" <rob dot quill at gmail dot com>
- To: "Michael Snyder" <Michael dot Snyder at palmsource dot com>
- Cc: "Joel Brobecker" <brobecker at adacore dot com>, gdb at sourceware dot org
- Date: Sat, 2 Dec 2006 16:27:12 +0000
- Subject: Re: Single stepping and threads
- References: <20061129052942.GA16029@nevyn.them.org> <20061129055915.GM9968@adacore.com> <20061129132535.GA28834@nevyn.them.org> <20061129163844.GN9968@adacore.com> <1164929776.14460.36.camel@localhost.localdomain>
On 30/11/06, Michael Snyder <Michael.Snyder@palmsource.com> wrote:
On Wed, 2006-11-29 at 08:38 -0800, Joel Brobecker wrote:
> > > I would say yes. A step should be a few instructions, while stepping
> > > over a call is potentially a much larger number of instructions.
> > > As a result, stepping over without letting the other threads go would
> > > more likely cause a lock.
> >
> > I think you mean "no" then?
>
> Oops, sorry, I meant "no".
>
> One of my coworkers expressed his opinion as follows:
>
> <<
> I would find it confusing if "step" and "next" behave differently with
> respect to threads, because they seem like basically the same thing.
> "Next is just like step, except that it goes over calls" seems simple to
> me. "Next is just like step, except that it goes over calls, and has
> some subtle difference regarding threads" seems more complicated to me.
>
> So I would suggest leaving the default as "off", or else changing it
> to "on".
Default on would be a disaster -- most threaded programs would
not behave even remotely the same under the debugger as they would
solo.
In fact, many would deadlock almost immediately.
I have a question regarding this. In concurrent programming (as we
were taught it), the principle is that the interleaving of
instructions from different threads is random. So, if "on" were the
default, and a few steps were done in GDB, in fact, as many as it took
to deadlock the program, then surely it is possible (however unlikely)
that, when the program is run without GDB, the interleaving happens to
be the same as the one GDB forced, and the code deadlocks anyway. That
would make the deadlock the code's fault, rather than the debugger's.
What I'm trying to say is that it was my understanding that, when
doing concurrent programming, the interleaving is random, and that for
the program to be "correct" it should not deadlock under any possible
interleaving.
I fail to see how stopping all threads and going forward with just one
should stop "correct" code from executing properly.
Rob