This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.
Re: RFC: implement DW_OP_call_frame_cfa
- From: Daniel Jacobowitz <drow at false dot org>
- To: Tom Tromey <tromey at redhat dot com>
- Cc: gdb-patches at sourceware dot org
- Date: Tue, 9 Jun 2009 08:35:21 -0400
- Subject: Re: RFC: implement DW_OP_call_frame_cfa
- References: <email@example.com>
On Mon, Jun 08, 2009 at 04:23:20PM -0600, Tom Tromey wrote:
> GCC developers would like to change GCC to emit DW_OP_call_frame_cfa,
> as this would reduce the size of the generated debuginfo.
> A prerequisite to this is that GDB understand this. So, this patch
> implements this feature. This is PR 10224.
> I'm interested in feedback on this. I am not sure whether the
> implementation of dwarf2_frame_cfa is ok.
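(The size win Tom describes comes from sharing the frame-base computation: without DW_OP_call_frame_cfa, every variable's location expression repeats register-plus-offset arithmetic; with it, each location is just the CFA plus a small offset, and the CFA rule lives once in the call-frame information. A toy sketch of that, with an invented opcode subset and made-up register values, not GDB's actual evaluator:)

```python
# Illustrative sketch (not GDB code): why DW_OP_call_frame_cfa shrinks
# per-variable location expressions.  The CFA is computed once per frame
# by the CFI unwinder; each variable then needs only a tiny offset.

def eval_location(expr, regs, cfa):
    """Evaluate a toy subset of DWARF location opcodes."""
    stack = []
    for op, arg in expr:
        if op == "DW_OP_bregN":             # register value + signed offset
            stack.append(regs[arg[0]] + arg[1])
        elif op == "DW_OP_call_frame_cfa":  # push the CFA from the CFI rules
            stack.append(cfa)
        elif op == "DW_OP_plus_uconst":     # add an unsigned constant
            stack.append(stack.pop() + arg)
        else:
            raise NotImplementedError(op)
    return stack[-1]

regs = {"rbp": 0x7fff0100}                  # hypothetical frame pointer
cfa = 0x7fff0110                            # CFA computed once per frame

# Old style: each variable's expression repeats the frame-base arithmetic.
old_style = [("DW_OP_bregN", ("rbp", 16))]
# New style: shared CFA plus a per-variable offset.
new_style = [("DW_OP_call_frame_cfa", None), ("DW_OP_plus_uconst", 0)]

assert eval_location(old_style, regs, cfa) == eval_location(new_style, regs, cfa)
```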
It isn't, sorry. It will crash with bad debug info (e.g. manually
stripped .debug_frame), because it runs the unwinder without passing
through the sniffer. It also allocates an entire unwinding cache for
every local variable using this operation, which is very wasteful.
I think, as much as we've tried to avoid it, you're going to need a
back channel to find the existing cache iff the frame has a particular
unwinder.
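(One way to read that suggestion, sketched with invented names rather than GDB's real frame API: the CFA lookup should succeed only when the frame was claimed by the DWARF CFI unwinder, reusing that unwinder's cache instead of re-running the unwinder and allocating a new cache per variable.)

```python
# Hypothetical sketch of the "back channel" idea.  A Frame records which
# sniffer/unwinder claimed it; dwarf2_frame_cfa reuses that unwinder's
# existing cache, and refuses frames claimed by anything else (e.g. a
# heuristic unwinder used when .debug_frame was stripped) rather than
# crashing or re-unwinding.

class Frame:
    def __init__(self, unwinder, cache=None):
        self.unwinder = unwinder   # which sniffer accepted this frame
        self.cache = cache         # that unwinder's existing cache, if any

def dwarf2_frame_cfa(frame):
    if frame.unwinder != "dwarf2" or frame.cache is None:
        raise LookupError("frame has no DWARF CFI; can't compute CFA")
    return frame.cache["cfa"]      # reuse the cache, don't re-unwind

good = Frame("dwarf2", {"cfa": 0x7fffff00})
bad = Frame("heuristic")           # e.g. manually stripped .debug_frame

assert dwarf2_frame_cfa(good) == 0x7fffff00
try:
    dwarf2_frame_cfa(bad)          # rejected cleanly, no crash
except LookupError:
    pass
```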
> No test case since at some point GCC will start generating this
> (perhaps optionally -- but I feel certain we'll do it by default in
> Fedora), and since it therefore seemed like a lot of work for little
> gain.
IMO, not good enough; this is what gdb.dwarf2/ is for. My compiler
doesn't generate this extension but I'd still like to not break it.
Maybe an x86-specific test?