This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.


Re: question: python gc doesn't collect buffer allocated by read_memory()


Hello Tom,

From: Tom Tromey <tromey@redhat.com>
Subject: Re: question: python gc doesn't collect buffer allocated by read_memory()
Date: Tue, 20 Mar 2012 14:58:46 -0600

>>>>>> ">" == HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com> writes:
> 
>>> count = 100000
>>> while count >= 0:
>>>     i.read_memory(buf.address, buf.type.sizeof)
>>>     count -= 1
> 
> You don't say what version of gdb you are using.

Sorry, I should have mentioned that. I first found this on
gdb-7.2-48.el6.x86_64; the example in my first mail used 7.4.

> What you are reporting sounds like PR 12533, which was fixed in CVS back
> in January.
> 
>>> 5. Looking at referrers of the buffer returned by read_memory(), they
>>> are all empty [], so it looks OK to me if garbage collector collects
>>> the memory...
> 
> In 12533 the problem was that intermediate values weren't properly
> deallocated.
> 
> You could test for this problem by hoisting 'buf.address' out of the
> loop and seeing if that has an effect.
> 
> Tom
> 

I tried today's development snapshot as well, but the situation didn't
change.
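
For reference, the variant Tom suggested, with buf.address hoisted out of
the loop, would look roughly like this (a sketch based on the loop quoted
above; buf and i are the same objects as in my first mail):

(gdb) python
>addr = buf.address          # hoisted out of the loop, per Tom's suggestion
>count = 100000
>while count >= 0:
>    i.read_memory(addr, buf.type.sizeof)
>    count -= 1
>end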

It seems to me that the issue here is different from the one in PR 12533.
In this case, the amount by which gdb's memory grows is roughly equal to
the total amount of data read by inferior.read_memory().

Below is a function that reads memory from the core file one page at a
time. Here size is 4096 bytes (one page).

(gdb) python
>def read_pages(n):
>  while n >= 0:
>    i.read_memory(addr, size)
>    n -= 1
>end
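
Here i, addr, and size were bound beforehand, roughly as follows (a
sketch; the address 0x400000 is only illustrative, any readable page in
the core works):

(gdb) python
>i = gdb.inferiors()[0]      # the only inferior, backed by the core file
>addr = 0x400000             # illustrative: any readable address in the core
>size = 4096                 # one page, in bytes
>end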

In its initial state, gdb uses 15600 kB of resident memory.

# ps aux | grep testpro | grep -v grep
root     29655  0.0  0.8 234104 15600 pts/2    Ss+  21:57   0:00 /usr/bin/gdb --annotate=3 testpro

After executing python read_pages(100), this grows to 16004 kB, an increase of about 400 kB.

# ps aux | grep testpro | grep -v grep
root     29655  0.0  0.8 234524 16004 pts/2    Ss+  21:57   0:00 /usr/bin/gdb --annotate=3 testpro

Then, after executing python read_pages(1000), it grows to 20060 kB, a further increase of about 4000 kB.

# ps aux | grep testpro | grep -v grep
root     29655  0.0  1.0 238624 20060 pts/2    Ss+  21:57   0:00 /usr/bin/gdb --annotate=3 testpro

I'm beginning to think that I don't understand the memory management
model of gdb's Python scripting well enough. The behaviour I'm expecting
is as follows.

Assume there is a file file_4KB.txt containing 4 KB of data, and the
simple script below.

# cat ./endlessread.py
with open('file_4KB.txt') as f:
    while True:
        f.read(4096)   # the returned string is dropped immediately
        f.seek(0)

Even though this script runs forever, its memory usage never grows
without bound.

# ps aux | grep endless | grep -v grep
root     29831 97.3  0.2 160796  5700 pts/1    R    22:13   0:16 python ./endlessread.py
# ps aux | grep endless | grep -v grep
root     29831 94.9  0.2 160796  5700 pts/1    R    22:13   0:18 python ./endlessread.py
# ps aux | grep endless | grep -v grep
root     29831 94.4  0.2 160796  5700 pts/1    R    22:13   0:18 python ./endlessread.py
# ps aux | grep endless | grep -v grep
root     29831 98.0  0.2 160796  5700 pts/1    R    22:13   0:19 python ./endlessread.py

This is because the string returned by f.read(4096) is not referenced by
any other object, so Python reclaims it immediately; nothing accumulates
between iterations.

On the other hand, the buffer objects returned by inferior.read_memory()
appear never to be reclaimed, not even by an explicit gc.collect().
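
For what it's worth, the referrer check mentioned in my first mail was
done roughly like this (a sketch, run at the gdb prompt with the same
addr and size as above):

(gdb) python
>import gc
>print(gc.get_referrers(i.read_memory(addr, size)))   # printed [] in my test
>end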

Thanks.
HATAYAMA, Daisuke

