"previous frame inner to this frame" error when unwinding fibers

Andrey Turkin andrey.turkin@gmail.com
Thu Jan 4 10:00:59 GMT 2024


Hi Tom,

Thanks for the update.

Re stopping the unwind for green threads - a callback makes sense, I
think; I guess most users would want to chop off some useless tail
there. But custom unwinders might also want a clean way to do the
same, for whatever reason. Currently I just call create_unwind_info()
without filling in any registers; that works, but results in an ugly
error message.
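
For reference, a minimal sketch of that approach using the Python
Unwinder API (how the scheduler frame is recognized is just a
placeholder here, and the constant and class names are made up for the
example):

import gdb
from gdb.unwinder import Unwinder

# Placeholder: however the scheduler/trampoline frame is recognized.
SCHEDULER_PC = 0x0

class FrameId(object):
    def __init__(self, sp, pc):
        self.sp = sp
        self.pc = pc

class StopAtScheduler(Unwinder):
    def __init__(self):
        super(StopAtScheduler, self).__init__("stop-at-scheduler")

    def __call__(self, pending_frame):
        pc = int(pending_frame.read_register("pc"))
        if pc != SCHEDULER_PC:
            return None  # not our frame; let the other unwinders handle it
        sp = pending_frame.read_register("sp")
        # Claim the frame but record no saved registers: GDB cannot build
        # the caller frame, so the backtrace stops here -- with the ugly
        # error message mentioned above.
        return pending_frame.create_unwind_info(FrameId(sp, pc))

gdb.unwinder.register_unwinder(None, StopAtScheduler(), replace=True)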

Re the inner-than thing - this is orthogonal to green threads. It
happens because the unwinder stitches different stacks together; it
doesn't have to be due to green threads. In fact, with green thread
support it might not be necessary to do the stitching at all; we might
get away with doing several bts for the threads we want shown
together, or something like a custom command to switch between
callee/caller contexts, etc. But in any case, if this is to be
implemented, it seems to me it belongs in the unwinders' domain.
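
To illustrate the "several bts" idea, a rough sketch of such a command
is below; it assumes that the objects returned by create_green_thread
can be switched to like ordinary threads, and that a thread_map dict
like the one in your sample is being maintained (the "bt-green" name is
made up):

import gdb

class BtGreen(gdb.Command):
    """Print a backtrace for every registered green thread."""

    def __init__(self):
        super(BtGreen, self).__init__("bt-green", gdb.COMMAND_STACK)

    def invoke(self, arg, from_tty):
        orig = gdb.selected_thread()
        try:
            # thread_map is assumed to map tid -> green-thread object,
            # as in the sample script below.
            for tid, thread in thread_map.items():
                # Assumes green threads can be switched to like ordinary
                # gdb.InferiorThread objects.
                thread.switch()
                print("Green thread %d:" % tid)
                gdb.execute("bt")
        finally:
            orig.switch()

BtGreen()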

Re the PoC implementation on GH - I tried it out. There was an obvious
bug (see attached patch); after dealing with that, I was able to
create a green thread for my own test example (boost coroutines, using
a core file) as well as for the Vireo examples (live process) with
your Python file. The thread exists, it is shown in "info threads"
with the name I gave it in Python, and I can switch to it; but the
registers don't get fetched, and bt, "info frame", etc. show the
information of the native thread with id 1 (even if I switch from a
different thread, create the green thread while switched to another
thread, and pass a tid to create_green_thread that is some random
number which is definitely not 1). py_green_thread::fetch_registers
doesn't get called for some reason, so the Python counterpart isn't
called either.

On Fri, 22 Dec 2023 at 22:19, Tom Tromey <tom@tromey.com> wrote:
>
> Tom> Yeah, the API on the branch is very simple.
>
> I made a new branch and reworked it a bit, dropping a few stale parts.
> It's in my github as "green-threads-v2"; you can try it if you want.
>
> I've appended a sample .py that works with the "vireo" user-space
> threads package so you can see how it works.
>
> I still haven't implemented a way for green threads to know when to stop
> unwinding.  And, I haven't added a way to indicate that the "inner than"
> requirement should be lifted.
>
> If you have ideas for how those ought to work, that would be good to
> hear.  We talked about doing this in an unwinder but it seems kind of
> heavy to require an unwinder to supplement green threads; though maybe
> in your case that's ok.
>
> One thought I had was to have a callback on the green thread object that
> could be passed a frame to see if it is the outermost frame.  This way
> green threads, at least, could stop unwinding at their scheduler; and
> users wanting to debug a scheduler could switch to a "real" thread to
> see that.
>
> Maybe inner-than could just be an optional flag passed to
> create_green_thread.
>
> Tom
>
>
> import gdb
>
> thread_map = {}
>
> main_thread = None
>
> # From glibc/sysdeps/unix/sysv/linux/x86/sys/ucontext.h
> x8664_regs = [ 'r8', 'r9', 'r10', 'r11', 'r12', 'r13', 'r14',
>                'r15', 'rdi', 'rsi', 'rbp', 'rbx', 'rdx', 'rax',
>                'rcx', 'rsp', 'rip', 'efl', 'csgsfs', 'err',
>                'trapno', 'oldmask', 'cr2' ]
>
> def vireo_current():
>     return int(gdb.parse_and_eval('curenv')) + 1
>
> class VireoGreenThread:
>     def __init__(self, tid):
>         self.tid = tid
>
>     def _get_state(self):
>         return gdb.parse_and_eval('envs')[self.tid]['state']
>
>     def fetch(self, reg):
>         """Fetch REG from memory."""
>         global x8664_regs
>         global thread_map
>         thread = thread_map[self.tid]
>         state = self._get_state()
>         gregs = state['uc_mcontext']['gregs']
>         for i in range(0, len(x8664_regs)):
>             if reg is None or reg == x8664_regs[i]:
>                 thread.write_register(x8664_regs[i], gregs[i])
>
>     def store(self, reg):
>         """Store REG back to memory."""
>         global x8664_regs
>         global thread_map
>         thread = thread_map[self.tid]
>         state = self._get_state()
>         gregs = state['uc_mcontext']['gregs']
>         for i in range(0, len(x8664_regs)):
>             if reg is None or reg == x8664_regs[i]:
>                 gregs[i] = thread.read_register(x8664_regs[i])
>
>     def name(self):
>         return "Vireo Thread " + str(self.tid)
>
>     def underlying_thread(self):
>         if vireo_current() == self.tid:
>             global main_thread
>             return main_thread
>         return None
>
> class VFinish(gdb.FinishBreakpoint):
>     def stop(self):
>         tid = int(self.return_value) + 1
>         global thread_map
>         thread_map[tid] = gdb.create_green_thread(tid, VireoGreenThread(tid))
>         return False
>
> class VCreate(gdb.Breakpoint):
>     def stop(self):
>         VFinish(gdb.newest_frame(), True)
>         return False
>
> class VExit(gdb.Breakpoint):
>     def stop(self):
>         global main_thread
>         if main_thread is None:
>             main_thread = gdb.selected_thread()
>         global thread_map
>         tid = vireo_current()
>         if tid in thread_map:
>             thread_map[tid].set_exited()
>             del thread_map[tid]
>
> VCreate('vireo_create', internal=True)
> VExit('vireo_exit', internal=True)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: patch.py
Type: application/octet-stream
Size: 468 bytes
Desc: not available
URL: <https://sourceware.org/pipermail/gdb/attachments/20240104/1cfb5b48/attachment.obj>

