Bug 3068 - dwarf_find_proc_info is broken for remote unwinding
Summary: dwarf_find_proc_info is broken for remote unwinding
Status: RESOLVED WONTFIX
Alias: None
Product: frysk
Classification: Unclassified
Component: general
Version: unspecified
Importance: P2 normal
Target Milestone: ---
Assignee: Andrew Cagney
URL:
Keywords:
Depends on:
Blocks: 2936
 
Reported: 2006-08-15 18:07 UTC by Adam Jocksch
Modified: 2006-09-04 03:56 UTC
CC List: 5 users


Description Adam Jocksch 2006-08-15 18:07:40 UTC
dwarf_find_proc_info uses dl_iterate_phdr, which iterates through the caller's
.so headers. This will not work for remote unwinding, because the process being
unwound is not the process that is calling the function.
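
For context, dl_iterate_phdr (declared in <link.h>) walks only the program
headers of objects mapped into the calling process, so any unwind-info lookup
built on it can never describe a different (inferior) process. A minimal,
self-contained illustration of that behaviour (not frysk code):

#define _GNU_SOURCE
#include <link.h>
#include <stdio.h>

/* Callback invoked once per object mapped into *this* process. */
static int show_object(struct dl_phdr_info *info, size_t size, void *data)
{
    (void) size; (void) data;
    printf("caller-local object %s at %#lx\n",
           info->dlpi_name, (unsigned long) info->dlpi_addr);
    return 0;                      /* 0 = keep iterating */
}

int main(void)
{
    /* Only ever reports the caller's own mappings, never the inferior's. */
    dl_iterate_phdr(show_object, NULL);
    return 0;
}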
Comment 1 Alexandre Oliva 2006-08-18 08:40:47 UTC
dwarf_find_proc_info() is for internal use by the local process only, and it is
not supposed to be referenced from outside libunwind proper.  We will gain
nothing by using it.

If we're to support similar functionality (but see the comments in bug 3070),
we'd probably have to either export some find_proc_info entry point that takes
a need_unwind_info argument, or provide some default remote-compatible
implementation of find_proc_info, put_unwind_info et al.  The latter would peek
at unw_local_addr_space at the time we create our accessors data structure (the
one we then use to create the remote address space in
lib::unwind::StackTraceCreator::unwind_setup), and would have the default entry
points be wrappers that distinguish between local and remote address spaces and
optimize accordingly.
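
A rough sketch of what such a remote-compatible accessors setup could look
like, using libunwind's documented remote API (unw_accessors_t,
unw_create_addr_space).  The remote_* helpers are hypothetical stand-ins for
whatever mechanism reads the inferior's maps and memory; they are not existing
frysk or libunwind functions:

#include <libunwind.h>
#include <string.h>

/* Hypothetical helpers: resolve unwind info and read words by inspecting
   the *remote* (inferior) process rather than the caller. */
extern int  remote_find_proc_info(unw_word_t ip, unw_proc_info_t *pip,
                                  int need_unwind_info, void *arg);
extern void remote_put_unwind_info(unw_proc_info_t *pip, void *arg);
extern int  remote_read_word(unw_word_t addr, unw_word_t *valp, void *arg);

static int my_find_proc_info(unw_addr_space_t as, unw_word_t ip,
                             unw_proc_info_t *pip, int need_unwind_info,
                             void *arg)
{
    (void) as;
    return remote_find_proc_info(ip, pip, need_unwind_info, arg);
}

static void my_put_unwind_info(unw_addr_space_t as, unw_proc_info_t *pip,
                               void *arg)
{
    (void) as;
    remote_put_unwind_info(pip, arg);
}

static int my_access_mem(unw_addr_space_t as, unw_word_t addr,
                         unw_word_t *valp, int write, void *arg)
{
    (void) as;
    if (write)
        return -UNW_EINVAL;        /* unwinding only needs reads */
    return remote_read_word(addr, valp, arg);
}

/* Register, fp-register, resume and proc-name accessors omitted here. */

unw_addr_space_t make_remote_addr_space(void)
{
    static unw_accessors_t acc;
    memset(&acc, 0, sizeof acc);
    acc.find_proc_info  = my_find_proc_info;
    acc.put_unwind_info = my_put_unwind_info;
    acc.access_mem      = my_access_mem;
    /* The resulting address space is what unw_init_remote() would be given. */
    return unw_create_addr_space(&acc, 0 /* default byte order */);
}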

Alternatively, we might implement and export additional remote-friendly entry
points in the libunwind dwarf machinery to enable that function to use them
directly.
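
For comparison, libunwind's optional libunwind-ptrace component already ships
a remote-capable default of this kind: _UPT_accessors, whose find_proc_info
does not rely on dl_iterate_phdr in the caller.  A small usage example, with
backtrace_remote being my own illustrative name and the inferior assumed to be
already ptrace-attached and stopped:

#include <libunwind.h>
#include <libunwind-ptrace.h>
#include <sys/types.h>

/* Walk the stack of an already ptrace-stopped process 'pid'. */
int backtrace_remote(pid_t pid)
{
    unw_addr_space_t as = unw_create_addr_space(&_UPT_accessors, 0);
    void *upt = _UPT_create(pid);
    unw_cursor_t cursor;
    int rc = unw_init_remote(&cursor, as, upt);

    if (rc == 0) {
        do {
            unw_word_t ip;
            unw_get_reg(&cursor, UNW_REG_IP, &ip);
            /* ... record or print ip ... */
        } while (unw_step(&cursor) > 0);
    }

    _UPT_destroy(upt);
    unw_destroy_addr_space(as);
    return rc;
}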
Comment 2 Alexandre Oliva 2006-09-04 03:56:10 UTC
This is not a bug.  Using dwarf_find_proc_info for the wrong purpose was the
bug.  See the patch in bug 3070 for more details.