This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.



Re: [RFC 0/7] Support for Linux kernel debugging


On 17-01-26 14:12:25, Philipp Rudo wrote:
> > 
> > Live debug of a target is the main use case we are trying to support
> > with the linux-kthread patches. So for us on-going thread
> > synchronisation between GDB and the Linux target is a key feature we
> > need.
> 
> For us live debugging is more a nice-to-have. That's why we
> wanted to delay implementing on-going synchronisation until after the
> basic structure of our code was discussed on the mailing list. So we
> could avoid some work if we needed to rework it. Apparently we need to
> change this plan now ;)
>

Nowadays, when GDB debugs a normal application, it has four target layers.

The current target stack is:

  - multi-thread (multi-threaded child process.)
  - native (Native process)
  - exec (Local exec file)
  - None (None)

When it debugs a core file, the stack becomes

The current target stack is:
  - core (Local core dump file)
  - exec (Local exec file)
  - None (None)

The same can apply to kernel debugging.  GDB can have the right target layer
for live kernel debugging (Peter's patch), and when GDB debugs a kernel dump,
it has the target layer from your patch.  We can share code between these two
target layers; I think all the code for parsing kernel data structures can be
shared.  We could even add a shared target layer for Linux kernel debugging,
with the live-debugging and dump-debugging target layers sitting on top of
it.  They would use the Linux kernel target layer beneath them to fetch
registers, get thread names, etc.
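To make the layering concrete, here is a minimal sketch of the idea.  All
names are hypothetical and do not match GDB's real target_ops interface; it
only shows how a shared kernel-parsing layer could work against whichever
beneath target (live or dump) supplies raw memory:

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <string>
#include <vector>

// Hypothetical sketch; GDB's real target stack uses target_ops, not this.
struct target_layer {
  virtual ~target_layer() = default;
  // Read raw target memory; the concrete transport differs per target.
  virtual std::vector<unsigned char> read_memory(unsigned long addr,
                                                 std::size_t len) = 0;
};

// Live target: in reality this would talk to a JTAG probe or remote stub.
struct live_kernel_target : target_layer {
  std::vector<unsigned char> read_memory(unsigned long, std::size_t len) override {
    return std::vector<unsigned char>(len, 0xAA);  // stand-in data
  }
};

// Dump target: in reality this would read from the kernel core file.
struct dump_kernel_target : target_layer {
  std::vector<unsigned char> read_memory(unsigned long, std::size_t len) override {
    return std::vector<unsigned char>(len, 0xBB);  // stand-in data
  }
};

// Shared layer: parses kernel data structures (task lists, thread names,
// saved registers) using whatever beneath target supplies the memory.
struct linux_kernel_layer {
  explicit linux_kernel_layer(std::unique_ptr<target_layer> beneath)
      : beneath_(std::move(beneath)) {}

  std::string thread_name(unsigned long task_addr) {
    // In reality: read task_struct->comm through the beneath target.
    auto bytes = beneath_->read_memory(task_addr, 4);
    return bytes[0] == 0xAA ? "live-task" : "dump-task";
  }

 private:
  std::unique_ptr<target_layer> beneath_;
};
```

The point is that thread_name() (and friends) are written once, against the
abstract beneath layer, rather than duplicated in the live and dump targets.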

> > > Knowing
> > > this weakness i discussed quite long with Andreas how to improve
> > it. In this context we also discussed the ravenscar-approach Peter
> > > is using. In the end we decided against this approach. In
> > > particular we discussed a scenario when you also stack a userspace
> > > target on top of the kernel target.  
> > 
> > How do you stack a userspace target on top with a coredump?
> 
> You don't. At least with the current code base it is impossible.
> 
> Andreas and I see the ravenscar-approach as a workaround for
> limitations in GDB. Thus while discussing it we thought about possible
> scenarios for the future which would be impossible to implement using
> this approach. The userspace-on-kernel was just meant to be an example.
> Other examples would be a Go target where the libthread_db
> (POSIX-threads) and Go (goroutines) targets would compete on
> thread stratum. Or (for switching targets) a program that runs
> simultaneously on CPU and GPU and needs different targets for both code
> parts.
>  

IMO, https://sourceware.org/gdb/wiki/MultiTarget is the right way to
solve such problems.  We can have one JTAG remote target debugging the
kernel, and one GDBserver debugging a user-space app on that machine.
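For illustration, a session under that design could look roughly like the
following.  This is a sketch assuming the MultiTarget wiki proposal is
implemented; host names and port numbers are made up:

```
(gdb) target extended-remote jtag-probe:3333    # inferior 1: kernel, via JTAG
(gdb) add-inferior -no-connection               # inferior 2: separate connection
(gdb) inferior 2
(gdb) target remote board:2345                  # GDBserver for the user-space app
(gdb) info connections                          # one connection per target
```

Each inferior keeps its own target stack and connection, so the kernel view
and the user-space view coexist without competing for the thread stratum.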

> > > In this case
> > > you would have three different views on threads (hardware, kernel,
> > > userspace). With the ravenscar-approach this scenario is impossible
> > > to implement as the private_thread_info is already occupied by the
> > > kernel target and the userspace would have no chance to use it.
> > > Furthermore you would like to have the possibility to switch
> > > between different views, i.e. see what the kernel and userspace
> > > view on a given process is.  
> > 
> > Is this a feature you are actively using today with the coredump
> > (stacking userspace)?
> 
> We are not using it, as discussed above. In particular with our dumps
> it is even impossible as we strip it of all userspace memory (a crash
> of our build server created a ~9 GB dump (kdump, kernelspace only)
> imagine adding all of userspace to that ...). But for live debugging or
> smaller systems it could be an interesting scenario to find bugs
> triggered by "buggy" userspace behavior.
> 

With MultiTarget in place, we need to somehow associate a function call in
one target with the corresponding function in another target.  When a
user-space program makes a syscall and traps into the kernel, GDB would
associate the user-space call stack on one target with the matching
kernel-space call stack on another target.  I remember someone presented
work on making GDB show stack traces across RPC calls at a GNU Cauldron
several years ago.  That is about live debugging.
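A toy sketch of that stitching idea (all names hypothetical, nothing like
GDB's real frame machinery): when the thread is stopped inside a syscall,
splice the kernel-side frames from one target on top of the user-space
frames from the other:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical frame record; GDB's real frames carry far more state.
struct frame { std::string function; };

// Combine the kernel backtrace (from the kernel target) with the user-space
// backtrace (from the user-space target).  Frames are ordered innermost
// first, so the kernel frames go in front when the thread is in a syscall.
std::vector<frame> stitch_backtrace(const std::vector<frame>& kernel_frames,
                                    const std::vector<frame>& user_frames,
                                    bool in_syscall) {
  if (!in_syscall)
    return user_frames;
  std::vector<frame> out = kernel_frames;
  out.insert(out.end(), user_frames.begin(), user_frames.end());
  return out;
}
```

The hard part in practice is deciding *when* the two stacks correspond, i.e.
recognizing the syscall boundary on both sides; the splice itself is trivial.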

-- 
Yao (齐尧)

