This is the mail archive of the gdb@sourceware.org mailing list for the GDB project.
Re: [RFC] What to do on VM exhaustion
- From: Eli Zaretskii <eliz at gnu dot org>
- To: Michael Snyder <michsnyd at cisco dot com>
- Cc: gdb at sources dot redhat dot com, gdb-patches at sources dot redhat dot com, wendyp at cisco dot com
- Date: Thu, 05 Jan 2006 07:13:18 +0200
- Subject: Re: [RFC] What to do on VM exhaustion
- References: <43BC6F36.3050000@cisco.com>
- Reply-to: Eli Zaretskii <eliz at gnu dot org>
> Date: Wed, 04 Jan 2006 16:58:30 -0800
> From: Michael Snyder <michsnyd@cisco.com>
>
> I actually ran into this once before, years ago -- in fact it was
> RMS himself who called me to beef about gdb bailing on him, when
> he was debugging emacs and crashed the stack with an infinite
> recursion. I think gdb ran out of memory while trying to do a
> backtrace. He wanted me to make it recover gracefully and let him
> keep debugging. I couldn't do it, but then I didn't have the
> luxury of having all you guys to ask for advice!
>
> In present time, I'm suggesting that nomem should just write
> a simple error msg to the console and abort. What do you think?
Perhaps we could do better: we could note the memory usage each time
through the top-level command loop, just before invoking the command
dispatch.  Then, if we run out of memory during a command, we could
conclude that the last command is the culprit and throw back to the
top level.  That would free the memory used up by that last command,
and GDB could ``recover gracefully'', as RMS wanted.  If that doesn't
help, then yes, abort with an internal error, since that means GDB
leaks some significant memory.  The ``doesn't help'' part could be
implemented by trying to allocate some memory, just to see whether we
now have something to continue with, or by comparing the break level
to the one we recorded before the command that ran out of memory.
WDYT?