This is the mail archive of the gdb-prs@sourceware.org mailing list for the GDB project.
[Bug python/12533] New: gdb consumes large memory when millions gdb.values are created
- From: "joachim.protze at zih dot tu-dresden.de" <sourceware-bugzilla at sourceware dot org>
- To: gdb-prs at sourceware dot org
- Date: Wed, 2 Mar 2011 11:36:53 +0000
- Subject: [Bug python/12533] New: gdb consumes large memory when millions gdb.values are created
- Auto-submitted: auto-generated
http://sourceware.org/bugzilla/show_bug.cgi?id=12533
Summary: gdb consumes large memory when millions gdb.values are created
Product: gdb
Version: HEAD
Status: NEW
Severity: normal
Priority: P2
Component: python
AssignedTo: unassigned@sourceware.org
ReportedBy: joachim.protze@zih.tu-dresden.de
For my current pretty-printer project I collect some metadata in inferior
space. This metadata is used to decide how certain values - especially
send/recv buffers - should be formatted for printout. In some cases it is
necessary to evaluate metadata for each single value in a buffer, which can be
10k+ values.
I observed gdb consuming hundreds of MB of memory and tried to analyze the
source of the problem with valgrind + massif: most memory is allocated within
allocate_value().
What I don't understand is why the value structures are not freed while the
command is running. Is this an issue of the Python interpreter, or an issue of
the gdb-py API?
Is there a way to get the values freed?
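One way to narrow down which side retains the memory is to check whether the Python interpreter itself releases its wrapper objects. The sketch below (a hypothetical diagnostic, not from the report; it uses a plain `Dummy` class as a stand-in because `gdb.Value` only exists inside gdb) uses a weak reference to confirm that CPython frees an object once the last reference is dropped - if the same pattern holds for `gdb.Value` wrappers, the retention would lie in gdb's internal value bookkeeping rather than in Python:

```python
import gc
import weakref

class Dummy:
    """Stand-in for a gdb.Value wrapper; gdb.Value itself is only
    available inside gdb's embedded interpreter."""
    pass

obj = Dummy()
ref = weakref.ref(obj)   # weak reference: does not keep obj alive

del obj                  # drop the last strong reference
gc.collect()             # force a collection, in case of cycles

# If the interpreter released the object, the weak reference is dead.
print(ref() is None)     # True -> Python side freed the wrapper
```

Running the equivalent with `gdb.Value` objects inside gdb would distinguish a Python-level leak from memory held by gdb's allocate_value() machinery.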
-----------------------------------------
A short example to see this behaviour (on machines with less than 2 GB of RAM,
use smaller iteration counts!):
(gdb) python v=gdb.parse_and_eval("1")
(gdb) python
>for i in range(1000000):
> v+=1
>end
(gdb) python
>for i in range(1200000):
> v+=1
>end
(gdb)
gdb consumes 600 MB after the 1.0M iterations and 780 MB after the 1.2M
iterations.
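The growth can also be quantified without valgrind by sampling the gdb process's own resident set size from inside the embedded Python interpreter. The following is a sketch (not from the report) using only the stdlib `resource` module, which works on Unix-like systems; note that `ru_maxrss` is reported in kilobytes on Linux but in bytes on macOS:

```python
import resource

def rss_kb():
    """Peak resident set size of the current process.
    Units are KB on Linux, bytes on macOS."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

before = rss_kb()
# ... run the `v += 1` loop from the example above here, inside gdb ...
after = rss_kb()
print("growth:", after - before)
```

Sampling `rss_kb()` between the two loops of the example would show whether the memory attributed to allocate_value() by massif matches the observed process growth.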