This is the mail archive of the mailing list for the glibc project.


Re: [RFC/PoC] malloc: use wfcqueue to speed up remote frees

On 07/31/2018 07:18 PM, Eric Wong wrote:
>> - Can you explain the RSS reduction given this patch? You
>> might think that just adding the frees to a queue wouldn't
>> result in any RSS gains.
> At least two reasons I can see:
> 1) With lock contention, the freeing thread can lose to the
>    allocating thread.  This makes the allocating thread hit
>    sysmalloc since it prevented the freeing thread from doing
>    its job.  sysmalloc is the slow path, so the lock gets held
>    even longer and the problem compounds from there.

How does this impact RSS? It would only block the remote thread
from freeing in a timely fashion, but the free would eventually
make progress once the lock was released.
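The contention pattern described in (1) above could be sketched roughly as
follows. This is a hedged illustration only: the names (arena_lock,
take_free_chunk, alloc_sketch) are made up for the sketch, and the real
allocator retries other arenas rather than falling straight through, but it
shows how a freeing thread holding the lock can push the allocator onto the
fresh-memory path that grows RSS.

```c
/* Illustrative sketch, not glibc internals: if the allocating thread
   cannot take the arena lock (a freeing thread holds it), it cannot
   reuse freed chunks and falls back to fresh memory.               */
#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t arena_lock = PTHREAD_MUTEX_INITIALIZER;
static void *free_list;                 /* chunks returned by free()  */

static void *take_free_chunk(void)      /* fast path: reuse a chunk   */
{
    void *p = free_list;
    free_list = NULL;
    return p;
}

void *alloc_sketch(size_t n)
{
    if (pthread_mutex_trylock(&arena_lock) == 0) {
        void *p = take_free_chunk();
        pthread_mutex_unlock(&arena_lock);
        if (p != NULL)
            return p;          /* reused memory: RSS unchanged        */
    }
    /* Contended (or bins empty): take fresh memory from the OS,
       growing RSS.  malloc() stands in for the sysmalloc slow path. */
    return malloc(n);
}
```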

> 2) thread caching - memory ends up in the wrong thread and
>    could never get used in some cases.  Fortunately this is
>    bounded, but still a waste.

We can't have memory end up in the wrong thread. The remote thread
computes the arena from the chunk it has, and then frees back to
the appropriate arena, even if it's not the arena that the thread
is attached to.
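The arena lookup works because non-main heaps are aligned to a fixed
power-of-two size, so the heap header (and from it the arena) can be
recovered from the chunk address alone. Below is a hedged sketch of that
masking trick in the spirit of glibc's heap_for_ptr/arena_for_chunk; the
constant and the struct layout are simplified illustrations, not glibc's
actual values.

```c
/* Simplified illustration of recovering the owning heap (and thus
   arena) from any pointer inside an aligned heap.  glibc's real
   HEAP_MAX_SIZE is larger and heap_info has more fields.           */
#include <stdint.h>
#include <stdlib.h>

#define HEAP_ALIGN ((uintptr_t)64 * 1024)  /* illustrative alignment */

struct heap_info { void *ar_ptr; };        /* arena back-pointer     */

struct heap_info *heap_for_ptr(void *p)
{
    /* Mask off the low bits: every address inside the heap maps
       back to the heap's aligned base, where the header lives.     */
    return (struct heap_info *)((uintptr_t)p & ~(HEAP_ALIGN - 1));
}
```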

> I'm still new to the code, but it looks like threads are pinned
> to the arena and the memory used for arenas never gets released.
> Is that correct?

Threads are pinned to their arenas, but they can move in the event
of allocation failures, particularly to the main arena, to attempt
to get more memory via sbrk.

> I was wondering if there was another possibility: the allocating
> thread gives up the arena and creates a new one because the
> freeing thread locked it, but I don't think that's the case.


> Also, if I spawn a bunch of threads and get a bunch of
> arenas early in the program lifetime; and then only have few
> threads later, there can be a lot of idle arenas.
Yes. That is true. We don't coalesce arenas to match the lower
thread count.

>> However, you are calling _int_free a lot in a row, and that
>> deinterleaving may help (you really want a vector free API here
>> so you don't walk all the lists so many times; tcache had the
>> same problem, but in reverse, for finding chunks).
> Maybe...  I think in the ideal case, the number of allocations
> and frees is close to 1:1, so the loop is kept short.
> What may be worth trying is to bypass _int_free for cases where
> a chunk can fulfill the allocation which triggers it.  Delaying
> or avoiding consolidation could worsen fragmentation, though. 
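The enqueue/drain shape under discussion could be sketched as below. This
is a hedged stand-in, not the patch's actual code: it uses a simple atomic
Treiber stack with C11 atomics in place of urcu's wfcqueue, and the names
(remote_free, remote_free_step) are illustrative. Remote threads push the
chunk itself onto a per-arena list instead of taking the arena lock; the
owning thread later drains the whole batch, calling the real free path
(here plain free()) once per chunk in a row.

```c
/* Sketch of remote frees batched onto a lock-free list; a Treiber
   stack stands in for urcu's wfcqueue.                             */
#include <stdatomic.h>
#include <stdlib.h>

struct remote_node { struct remote_node *next; };

static _Atomic(struct remote_node *) remote_head;

/* Called by a non-owning thread instead of taking the arena lock. */
void remote_free(void *p)
{
    struct remote_node *n = p;   /* reuse the chunk as the node     */
    n->next = atomic_load_explicit(&remote_head, memory_order_relaxed);
    while (!atomic_compare_exchange_weak_explicit(
               &remote_head, &n->next, n,
               memory_order_release, memory_order_relaxed))
        ;
}

/* Called by the owning thread (arena lock held); returns how many
   chunks were drained in this batch.                               */
size_t remote_free_step(void)
{
    struct remote_node *n =
        atomic_exchange_explicit(&remote_head, NULL, memory_order_acquire);
    size_t drained = 0;
    while (n != NULL) {
        struct remote_node *next = n->next;
        free(n);                 /* stands in for _int_free         */
        n = next;
        drained++;
    }
    return drained;
}
```

A vector-free API, as suggested above, would slot in where the per-chunk
free() call sits in the drain loop.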


>> - Adding urcu as a build-time dependency is not acceptable for
>> bootstrap, instead we would bundle a copy of urcu and keep it
>> in sync with upstream. Would that make your work easier?
> Yes, bundling that sounds great.  I assume it's something for
> you or one of the regular contributors to work on (build systems
> scare me :x)

Yes, that is something we'd have to do.

>> - What problems are you having with `make -j4 check?' Try
>> master and report back.  We are about to release 2.28 so it
>> should build and pass.
> My fault.  It seems like tests aren't automatically rerun when I
> change the code; so some of my broken work-in-progress changes
> ended up being false positives :x.  When working on this, I made
> the mistake of doing remote_free_step inside malloc_consolidate,
> which could recurse into _int_free or _int_malloc.

This depends a bit on what you touch.

> I guess I should remove the *.test-result files before rerunning
> tests?

Yes, that will definitely force the test to be re-run.
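Concretely, the mechanism is that make treats an up-to-date *.test-result
file as "already run", so deleting it forces the re-run. A hedged sketch
(the out-of-tree build directory layout "build/nptl" here is an
assumption):

```shell
# Stale result file: while it exists and is up to date, make check
# will not re-run the test.
mkdir -p build/nptl
touch build/nptl/tst-sched1.test-result   # stand-in for a stale result
rm -f build/nptl/tst-sched1.test-result   # force the test to re-run
ls build/nptl/tst-sched1.test-result 2>/dev/null || echo "will re-run"
# afterwards: make -C build -j4 check
```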

> I still get:
> FAIL: nptl/tst-sched1
> 	"Create failed"
> 	I guess my system was overloaded.  pthread_create
> 	failures seemed to happen a lot for me when working
> 	on Ruby, too, and POSIX forcing EAGAIN makes it
> 	hard to diagnose :< (ulimit -u 47999 and 12GB RAM)
> 	Removing the test-result and retrying seems OK.

OK. This one is new. There are a few tests where pthread_create
fails with EAGAIN because the kernel can't reap the children
fast enough.
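One way a test could paper over that transient EAGAIN is to retry
pthread_create with a short back-off instead of failing outright. A hedged
sketch; this is not what nptl/tst-sched1 actually does, and
create_with_retry is a made-up name:

```c
/* Retry pthread_create on transient EAGAIN (kernel hasn't reaped
   earlier children yet) with exponential back-off.                 */
#include <errno.h>
#include <pthread.h>
#include <unistd.h>

static void *noop(void *arg) { return arg; }   /* demo thread body  */

int create_with_retry(pthread_t *t, void *(*fn)(void *), void *arg)
{
    for (int attempt = 0; attempt < 5; attempt++) {
        int rc = pthread_create(t, NULL, fn, arg);
        if (rc != EAGAIN)
            return rc;            /* success or a persistent error  */
        usleep(1000 << attempt);  /* back off: 1ms, 2ms, 4ms, ...   */
    }
    return EAGAIN;
}
```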

> FAIL: resolv/tst-resolv-ai_idn
> FAIL: resolv/tst-resolv-ai_idn-latin1
> 	Not root, so no CLONE_NEWUTS
> So I think that's expected...


