This is the mail archive of the gdb@sourceware.org mailing list for the GDB project.


Re: GDB using a lot of CPU time and writing a lot to disk on startup


----- Original Message ----
> From: Paul Pluzhnikov <ppluzhnikov@google.com>
> To: Nick Savoiu <savoiu@yahoo.com>
> Cc: gdb@sourceware.org
> Sent: Monday, April 20, 2009 4:39:56 PM
> Subject: Re: GDB using a lot of CPU time and writing a lot to disk on startup
> 
> On Mon, Apr 20, 2009 at 3:40 PM, Nick Savoiu wrote:
> 
> > I'm using GDB 6.8 (x86_64-unknown-linux-gnu) from within KDevelop
> > 3.5.4. I've noticed that for some projects GDB uses 1 minute of CPU time
> > and seems to do a lot of disk writing during this time.
> 
> AFAIU, GDB doesn't write anything to disk (unless you ask it to with gcore
> that is).

Paul,

That's what I thought too. I should have said that I suspect GDB is the one
writing to the disk, but I could not find a way to verify or prove that. The
best I could do was track CPU usage for each process in the chain:

  kdevelop(2381)---gdb(7810)---kernel(7816)
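
For the record, what I was really after was a per-process write count rather
than CPU time. A rough sketch of that kind of check (it assumes the kernel
exposes per-task I/O accounting via /proc/<pid>/io; I have not actually run
this):

  # Sample cumulative write_bytes for each process in the chain above to see
  # which one is actually hitting the disk. Requires /proc/<pid>/io, i.e. a
  # kernel built with per-task I/O accounting.
  import time

  PIDS = {"kdevelop": 2381, "gdb": 7810, "kernel": 7816}

  def write_bytes(pid):
      with open("/proc/%d/io" % pid) as f:
          for line in f:
              if line.startswith("write_bytes:"):
                  return int(line.split()[1])
      return 0

  while True:
      print("  ".join("%s=%d" % (name, write_bytes(pid))
                      for name, pid in PIDS.items()))
      time.sleep(5)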

From 16:50:28 to 17:00:37 (start of debugging session until break in main) here are the stats from 'top':

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
2381 nsavoiu   15   0  594m 424m  34m S    0 10.9   0:49.13 kdevelop
2381 nsavoiu   16   0  594m 424m  34m S    1 10.9   0:49.31 kdevelop

7810 nsavoiu   16   0  140m 130m 2472 S   22  3.4   0:03.92 gdb
7810 nsavoiu   16   0  634m 622m 2536 S   73 16.0   1:51.12 gdb

7816 nsavoiu   25   0  153m 138m  61m T    0  3.6   0:08.23 kernel
7816 nsavoiu   25   0  153m 138m  61m T    0  3.6   0:08.23 kernel

so my best guess was GDB, given where the CPU time went. No other processes
were actively running, and there was enough memory:

Mem:   3975148k total,  3372012k used,   603136k free,    86136k buffers
Swap:  8385848k total,   756112k used,  7629736k free,  1407932k cached

so swapping should not be an issue.

The thing is that pretty much from the moment I click the 'debug' button in
KDevelop, the disk chugs continuously for the whole interval shown above
(~10 minutes in this case). KSysGuard shows the activity as 99% writes
(beats me what gets written).
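
Along the same lines, a rough way to at least see which files gdb has open
for writing (just a guess at an approach; it uses /proc/<pid>/fdinfo and
won't catch short-lived open/write/close cycles):

  # List gdb's (pid 7810) open file descriptors whose open flags include
  # write access, by reading /proc/<pid>/fd and /proc/<pid>/fdinfo.
  import os

  PID = 7810
  fd_dir = "/proc/%d/fd" % PID
  for fd in os.listdir(fd_dir):
      try:
          target = os.readlink(os.path.join(fd_dir, fd))
          with open("/proc/%d/fdinfo/%s" % (PID, fd)) as f:
              flags_line = next(l for l in f if l.startswith("flags:"))
          flags = int(flags_line.split()[1], 8)
      except (OSError, StopIteration):
          continue                     # fd went away or no fdinfo entry
      if flags & 0o3:                  # O_WRONLY (1) or O_RDWR (2)
          print(fd, target)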

> > I used pstack every second until the debugging actually starts and here
> > are all the unique #0 locations in the pstack output.
> >
> > #0  0x0000000000446d16 in msymbol_hash_iw ()
> > #0  0x0000000000446f97 in lookup_minimal_symbol ()
> > #0  0x00000000004bfda0 in symbol_natural_name ()
> > #0  0x00000000004bffe4 in find_pc_sect_psymtab ()
> > #0  0x00000000004c0118 in find_pc_sect_psymbol ()
> > #0  0x00000000004fd755 in bcache_data ()
> > #0  0x000000000050d11a in dwarf2_lookup_abbrev ()
> > #0  0x0000000000610c67 in d_print_comp ()
> > #0  0x00000035aae28250 in __ctype_b_loc () from /lib64/tls/libc.so.6
> > #0  0x00000035aae68ced in _int_free () from /lib64/tls/libc.so.6
> > #0  0x00000035aaeb94a5 in _xstat () from /lib64/tls/libc.so.6
> > #0  0x00000035aaeb9545 in _lxstat () from /lib64/tls/libc.so.6
> > #0  0x00000035aaeb9832 in __open_nocancel () from /lib64/tls/libc.so.6
> > #0  0x00000035aaebe18f in poll () from /lib64/tls/libc.so.6
> 
> This output is bogus.

These are only the unique top entries from pstack. Here's a typical full stack:

#0  0x00000035aaeb94a5 in _xstat () from /lib64/tls/libc.so.6
#1  0x00000000004aa827 in is_regular_file ()
#2  0x00000000004aa965 in openp ()
#3  0x00000000004aae12 in find_and_open_source ()
#4  0x00000000004aaff0 in psymtab_to_fullname ()
#5  0x00000000004bf581 in lookup_partial_symtab ()
#6  0x00000000004bf427 in lookup_symtab ()
#7  0x00000000004cd578 in symtab_from_filename ()
#8  0x00000000004cc8e0 in decode_line_1 ()
#9  0x00000000004a36c5 in breakpoint_re_set_one ()
#10 0x00000000004dee60 in catch_errors ()
#11 0x00000000004a393b in breakpoint_re_set ()
#12 0x00000000004c5917 in new_symfile_objfile ()
#13 0x00000000004c5b36 in symbol_file_add_with_addrs_or_offsets ()
#14 0x00000000004c5c85 in symbol_file_add_from_bfd ()
#15 0x000000000045889e in symbol_add_stub ()
#16 0x00000000004dee60 in catch_errors ()
#17 0x0000000000458946 in solib_read_symbols ()
#18 0x0000000000458c74 in solib_add ()
#19 0x00000000004d4d8f in handle_inferior_event ()
#20 0x00000000004d349a in wait_for_inferior ()
#21 0x00000000004d3207 in proceed ()
#22 0x00000000004cfd8e in run_command_1 ()
#23 0x000000000047d7cf in do_cfunc ()
#24 0x000000000047f73e in cmd_func ()
#25 0x0000000000448852 in execute_command ()
#26 0x0000000000489f77 in mi_execute_async_cli_command ()
#27 0x0000000000489cee in mi_cmd_execute ()
#28 0x0000000000489a1a in captured_mi_execute_command ()
#29 0x00000000004dec96 in catch_exception ()
#30 0x0000000000489b5f in mi_execute_command ()
#31 0x00000000004e34fb in gdb_readline2 ()
#32 0x00000000004e2c1b in stdin_event_handler ()
#33 0x00000000004e1b7c in handle_file_event ()
#34 0x00000000004e15cf in process_event ()
#35 0x00000000004e1621 in gdb_do_one_event ()
#36 0x00000000004dee60 in catch_errors ()
#37 0x00000000004e164a in start_event_loop ()
#38 0x00000000004df31f in current_interp_command_loop ()
#39 0x0000000000442989 in captured_command_loop ()
#40 0x00000000004dee60 in catch_errors ()
#41 0x00000000004434b6 in captured_main ()
#42 0x00000000004dee60 in catch_errors ()
#43 0x0000000000443694 in gdb_main ()
#44 0x0000000000442947 in main ()
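
(For reference, the "unique #0 locations" quoted above came from sampling
pstack roughly once a second; a sketch of that kind of loop, not the exact
commands I ran:)

  # Run pstack against gdb (pid 7810) once a second and print each innermost
  # (#0) frame the first time it is seen.
  import subprocess, time

  PID = 7810
  seen = set()
  while True:
      out = subprocess.run(["pstack", str(PID)],
                           capture_output=True, text=True).stdout
      for line in out.splitlines():
          if line.startswith("#0 ") and line not in seen:
              seen.add(line)
              print(line)
      time.sleep(1)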

Thanks for the link. I get hit by that issue too, but I only have ~200 solibs (shared libraries).
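
(In case it's useful, one quick way to double-check that count is to list the
distinct .so files mapped into the debuggee, pid 7816 above; GDB's
"info sharedlibrary" shows the same list:)

  # Count distinct shared objects in the inferior's address space.
  libs = set()
  with open("/proc/7816/maps") as f:
      for line in f:
          parts = line.split()
          if len(parts) >= 6:
              path = parts[5]
              if path.endswith(".so") or ".so." in path:
                  libs.add(path)
  print(len(libs), "shared objects mapped")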

Nick

