Re: thread heap leak?


* David Muse:

> I've struggled to get backtraces.  The app has a crash-handler that
> prints a backtrace to the log, but that also crashes inside of a
> malloc.

Please consider disabling the crash handler.  It typically interferes
with debugging, particularly if it uses tricks like fork.  (Calling
malloc in a crash handler is certainly not a good sign.)
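
If you keep a handler, one way to avoid malloc in it is glibc's
backtrace_symbols_fd, which writes directly to a file descriptor
instead of allocating strings.  A minimal sketch (illustrative, not
your handler; the signal choice and frame count are my assumptions):

#include <execinfo.h>
#include <signal.h>
#include <unistd.h>

static void crash_handler(int sig) {
    // backtrace_symbols_fd does not call malloc, unlike
    // backtrace_symbols.
    void *frames[64];
    int n = backtrace(frames, 64);
    backtrace_symbols_fd(frames, n, STDERR_FILENO);
    // Restore the default disposition and re-raise so the crash
    // still terminates the process (and produces a core).
    signal(sig, SIG_DFL);
    raise(sig);
}

int main() {
    // The first backtrace call may load libgcc, which can allocate,
    // so warm it up outside the handler.
    void *warmup[1];
    backtrace(warmup, 1);
    signal(SIGSEGV, crash_handler);
    // ... application code ...
}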

> Code to launch detached threads:
>
> ... main ...
>
> 	cs->threadattr=new pthread_attr_t;
> 	pthread_attr_init(cs->threadattr);
> 	pthread_attr_setdetachstate(cs->threadattr,PTHREAD_CREATE_DETACHED);
> 	...
> 	cs->threadhandle=new pthread_t;
> 	if (pthread_create(cs->threadhandle,cs->threadattr,
> 				(void *(*)(void *))clientThread,
> 				(void *)cs)) {
> 		... error handling ...
> 	}
>
>
> ... inside of clientThread() ...
>
> 	pthread_attr_destroy(cs->threadattr);
> 	...
> 	pthread_exit(NULL);
>
>
> No other attributes.
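
For comparison, here is a minimal sketch of the same launch without
heap-allocating the handle or the attributes (illustrative; the conn
type and launch function are stand-ins for your code).  POSIX allows
destroying the attributes right after pthread_create, since they are
copied, and a detached thread's pthread_t need not be kept:

#include <pthread.h>

extern void *clientThread(void *arg);   // your thread function
struct conn;                            // stand-in for your cs type

bool launchClientThread(conn *cs) {
    pthread_attr_t attr;                // automatic, no new/delete
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    pthread_t tid;                      // need not outlive this call
    int err = pthread_create(&tid, &attr, clientThread, cs);
    pthread_attr_destroy(&attr);        // safe: create copies the attrs
    return err == 0;
}

Declaring clientThread with the correct signature also avoids the
function-pointer cast in the original, and it takes the per-thread
new allocations out of the picture when hunting heap growth.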

Hmm.  Computing the sizes of the mappings you quoted and the gaps
between them[1], I get this:

   0.016 2b933fa97000-2b933fa9b000 rw-p 00000000 00:00 0 
   0.004 2b933fa9b000-2b933fa9c000 ---p 00000000 00:00 0 
   2.000 2b933fa9c000-2b933fc9c000 rw-p 00000000 00:00 0 
   0.004 2b933fc9c000-2b933fc9d000 ---p 00000000 00:00 0 
   2.000 2b933fc9d000-2b933fe9d000 rw-p 00000000 00:00 0 
... 1.387 ...
   0.496 2b9340000000-2b934007f000 rw-p 00000000 00:00 0 
  63.504 2b934007f000-2b9344000000 ---p 00000000 00:00 0 
   0.195 2b9344000000-2b9344032000 rw-p 00000000 00:00 0 
  63.805 2b9344032000-2b9348000000 ---p 00000000 00:00 0 
   0.004 2b9348000000-2b9348001000 ---p 00000000 00:00 0 
   2.000 2b9348001000-2b9348201000 rw-p 00000000 00:00 0 
   0.004 2b9348201000-2b9348202000 ---p 00000000 00:00 0 
   2.000 2b9348202000-2b9348402000 rw-p 00000000 00:00 0 
   0.004 2b9348402000-2b9348403000 ---p 00000000 00:00 0 
   2.000 2b9348403000-2b9348603000 rw-p 00000000 00:00 0 
   0.004 2b9348603000-2b9348604000 ---p 00000000 00:00 0 
   2.000 2b9348604000-2b9348804000 rw-p 00000000 00:00 0 
... 55.984 ...
   0.137 2b934c000000-2b934c023000 rw-p 00000000 00:00 0 
  63.863 2b934c023000-2b9350000000 ---p 00000000 00:00 0 
   0.129 2b9350000000-2b9350021000 rw-p 00000000 00:00 0 
  63.871 2b9350021000-2b9354000000 ---p 00000000 00:00 0 
   0.516 2b9354000000-2b9354084000 rw-p 00000000 00:00 0 
  63.484 2b9354084000-2b9358000000 ---p 00000000 00:00 0 
   0.129 2b9358000000-2b9358021000 rw-p 00000000 00:00 0 
  63.871 2b9358021000-2b935c000000 ---p 00000000 00:00 0 

So some of these mappings are thread stacks: the 2.000 MiB rw-p
regions, each preceded by a 4 KiB PROT_NONE guard page, assuming that
you set the stack ulimit to 2 MiB (more usual would be 8 MiB).
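
One way to confirm the 2 MiB assumption (my sketch, not part of the
analysis above): query RLIMIT_STACK, which NPTL uses as the default
thread stack size when no attribute overrides it:

#include <sys/resource.h>
#include <cstdio>

int main() {
    struct rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) == 0)
        // A soft limit of 2 MiB here matches the 2.000 MiB
        // rw-p regions in the listing.
        std::printf("stack soft limit: %llu bytes\n",
                    (unsigned long long) rl.rlim_cur);
    return 0;
}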

There are also mostly-deallocated malloc heaps: a small rw-p head
followed by a large PROT_NONE region (PROT_NONE rather than unmapped,
probably due to vm.overcommit_memory=2 mode).  The reported gaps are
the result of malloc heap alignment.
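
The alignment is visible in the addresses: glibc places each arena
heap on a HEAP_MAX_SIZE boundary (64 MiB on 64-bit), so the rw-p
heads above all start at multiples of 64 MiB.  A quick check, using
one address from your listing:

#include <cstdio>

int main() {
    const unsigned long heap_max = 64UL << 20;    // HEAP_MAX_SIZE, 64-bit glibc
    const unsigned long addr = 0x2b9344000000UL;  // arena start from the maps
    // Prints 0: the heap begins on a HEAP_MAX_SIZE boundary, which
    // is why the gaps between arenas look so regular.
    std::printf("%lu\n", addr % heap_max);
    return 0;
}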

So this doesn't look like anything unusual so far.  I guess the next
step would be to look at the full list of mappings and check if the
number of thread stacks is reasonable (there should be about 20 of
them at most, I think).
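
To make that count mechanical, one could count 2 MiB rw-p anonymous
mappings in the maps output, a rough proxy for thread stacks given
the ulimit assumed above (a sketch; it will also count any unrelated
2 MiB rw-p mapping):

#include <cstdio>
#include <cstring>

int main() {
    char line[512];
    unsigned long low, high, count = 0;
    char perms[8];
    while (fgets(line, sizeof line, stdin)) {
        if (sscanf(line, "%lx-%lx %7s", &low, &high, perms) == 3
            && strcmp(perms, "rw-p") == 0
            && high - low == (2UL << 20))
            ++count;
    }
    std::printf("candidate thread stacks: %lu\n", count);
    return 0;
}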

[1] Script used:

# Usage: feed a /proc/<pid>/maps excerpt on stdin.  For each mapping,
# print its size in MiB; flag any gap between consecutive mappings.
import sys

last_address = 0
for line in sys.stdin:
    line = line.strip()
    comps = line.split(' ')
    # The first field of a maps line is "low-high" in hex.
    low, high = comps[0].split('-')
    low = int(low, 16)
    high = int(high, 16)
    size_mib = (high - low) / 2.**20
    # Report the hole between the previous mapping's end and this start.
    if last_address > 0 and last_address != low:
        print("... {:.3f} ...".format((low - last_address) / 2.**20))
    last_address = high
    print("  {:>6.3f} {}".format(size_mib, line))

