Bug 13939 - Malloc can deadlock in retry paths
Summary: Malloc can deadlock in retry paths
Status: RESOLVED FIXED
Alias: None
Product: glibc
Classification: Unclassified
Component: malloc
Version: 2.15
Importance: P2 normal
Target Milestone: 2.17
Assignee: Not yet assigned to anyone
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2012-04-03 03:15 UTC by law
Modified: 2014-06-25 11:23 UTC
CC List: 4 users

See Also:
Host:
Target:
Build:
Last reconfirmed:
fweimer: security-


Attachments
Potential fix (1.71 KB, patch)
2012-04-03 03:15 UTC, law

Description law 2012-04-03 03:15:07 UTC
Created attachment 6312 [details]
Potential fix

Assume we're in libc_malloc and the call to _int_malloc has failed because we're unable to sbrk more memory.

We (sensibly) have code in the inner else clause below that will attempt to use mmap to allocate the memory:

  arena_lock(ar_ptr, bytes);
  if(!ar_ptr)
    return 0;
  victim = _int_malloc(ar_ptr, bytes);
  if(!victim) {
    /* Maybe the failure is due to running out of mmapped areas. */
    if(ar_ptr != &main_arena) {
      (void)mutex_unlock(&ar_ptr->mutex);
      ar_ptr = &main_arena;
      (void)mutex_lock(&ar_ptr->mutex);
      victim = _int_malloc(ar_ptr, bytes);
      (void)mutex_unlock(&ar_ptr->mutex);
    } else {
      /* ... or sbrk() has failed and there is still a chance to mmap() */
      ar_ptr = arena_get2(ar_ptr->next ? ar_ptr : 0, bytes);
      (void)mutex_unlock(&main_arena.mutex);
      if(ar_ptr) {
        victim = _int_malloc(ar_ptr, bytes);
        (void)mutex_unlock(&ar_ptr->mutex);
      }
    }
  } else
    (void)mutex_unlock(&ar_ptr->mutex);


Note that the arena referenced by ar_ptr is still locked when we call arena_get2 in that else clause.  Furthermore, ar_ptr must refer to the main arena, since that else branch is only reached when ar_ptr == &main_arena.

Now assume that there are no arenas on the free list and that we've already hit the limit for the number of arenas we're willing to create.  In that case arena_get2 will call reused_arena, which looks like this:


static mstate
reused_arena (void)
{
  mstate result;
  static mstate next_to_use;
  if (next_to_use == NULL)
    next_to_use = &main_arena;

  result = next_to_use;
  do
    {
      if (!mutex_trylock(&result->mutex))
        goto out;

      result = result->next;
    }
  while (result != next_to_use);

  /* No arena available.  Wait for the next in line.  */
  (void)mutex_lock(&result->mutex);

 out:
  next_to_use = result->next;

  return result;
}

So let's make a couple more assumptions.  First, assume that next_to_use refers to the main arena.  Second, assume that all the other arenas are currently locked by other threads.  And remember that the main arena is locked by the current thread.

In that case the do-while loop will look at every arena on the list and find them all locked.  When result cycles back around to the main arena, the loop exits and we call mutex_lock to acquire the main arena's lock.  But since the main arena is already locked by the current thread, we deadlock.
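
For illustration, here's a minimal standalone program (not glibc code; the single mutex stands in for the whole arena list) showing why this pattern self-deadlocks: the thread already holds a non-recursive mutex, the trylock fails, and the subsequent blocking lock waits on a mutex the thread itself owns, exactly as the retry path does against the main arena.

#include <pthread.h>
#include <stdio.h>

/* Stand-in for the main arena's mutex, already held by this thread,
   mirroring the state on entry to reused_arena in the retry path.  */
static pthread_mutex_t arena_mutex = PTHREAD_MUTEX_INITIALIZER;

int
main (void)
{
  /* __libc_malloc's retry path still holds the main arena lock.  */
  pthread_mutex_lock (&arena_mutex);

  /* reused_arena's trylock sweep finds every arena busy...  */
  if (pthread_mutex_trylock (&arena_mutex) != 0)
    {
      puts ("trylock failed; blocking on a lock we already own");
      /* ...and falls through to a blocking lock.  With the default
         non-recursive mutex type on Linux, this never returns.  */
      pthread_mutex_lock (&arena_mutex);
    }

  puts ("never reached");
  return 0;
}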

libc_valloc seems to have the same problem.

libc_memalign, libc_calloc, and libc_pvalloc all release the lock before calling arena_get2 in the case where sbrk has failed, which seems to be the right thing to do.

Closely related: fixing the deadlock gives us a great opportunity to unify the 5 implementations a little.  For example, we have this from libc_memalign:

      mstate prev = ar_ptr->next ? ar_ptr : 0;
      (void)mutex_unlock(&ar_ptr->mutex);
      ar_ptr = arena_get2(prev, bytes);


That seems like the right way to go, so I'd like to unify the 5 implementations to have a similar structure: release the lock within the conditional just prior to calling arena_get2, reading ar_ptr->next for the prev argument while the lock is still held so we're safe with respect to modification of ar_ptr->next.
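
Concretely, the unified retry path might look something like the sketch below.  This uses only names already quoted above (ar_ptr, arena_get2, _int_malloc); the actual committed change may differ in detail.

  victim = _int_malloc(ar_ptr, bytes);
  if(!victim) {
    /* Capture the argument for arena_get2 while we still hold the
       lock, then drop the lock *before* calling arena_get2, so that
       reused_arena can never block on a mutex this thread owns.  */
    mstate prev = ar_ptr->next ? ar_ptr : 0;
    (void)mutex_unlock(&ar_ptr->mutex);
    ar_ptr = arena_get2(prev, bytes);
    if(ar_ptr) {
      victim = _int_malloc(ar_ptr, bytes);
      (void)mutex_unlock(&ar_ptr->mutex);
    }
  } else
    (void)mutex_unlock(&ar_ptr->mutex);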


Dropping the lock prior to calling arena_get2 does avoid the deadlock.  However, in this case all that happens is that the retried allocation fails again: the original allocation was trying to allocate from the main arena, and so will the retry if it can't acquire the lock for any other arena.

Getting this 100% correct would require retrying in every arena, potentially blocking to acquire the lock at each step.  That seems rather excessive.  Instead, we can pass in a state variable indicating that we're in a retry situation and, if so, avoid retrying in the same arena that just failed.
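
Roughly, that means threading an avoid_arena argument down into reused_arena and skipping that arena if the trylock sweep comes up empty.  Here's a sketch of the shape (this is the idea behind the attached patch, not necessarily the verbatim diff):

static mstate
reused_arena (mstate avoid_arena)
{
  mstate result;
  static mstate next_to_use;
  if (next_to_use == NULL)
    next_to_use = &main_arena;

  result = next_to_use;
  do
    {
      if (!mutex_trylock(&result->mutex))
        goto out;

      result = result->next;
    }
  while (result != next_to_use);

  /* Every arena is busy.  Don't block on the arena whose allocation
     just failed; it may be locked by this very thread.  Skip to its
     neighbor instead.  */
  if (result == avoid_arena)
    result = result->next;

  /* No arena available.  Wait for the next in line.  */
  (void)mutex_lock(&result->mutex);

 out:
  next_to_use = result->next;
  return result;
}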

Attached is the fix we're currently using internally at Red Hat.  It needed slight updating after Uli's cleanups to malloc.c and arena.c, but the basic structure is unchanged from what we're running internally.  I've verified that glibc still builds after adjusting for those cleanups.
Comment 1 Carlos O'Donell 2012-04-03 03:45:38 UTC
Let's solve this for 2.16; setting the milestone.
Comment 2 Carlos O'Donell 2012-07-18 02:00:29 UTC
Maxim has provided review of this patch here:
http://sourceware.org/ml/libc-alpha/2012-07/msg00027.html

I think the next step is for Jeff to put out a new version of the patch with the new changes included.
Comment 3 law 2012-07-18 14:47:42 UTC
Carlos,

Yes, there are some minor edits I need to make based on Maxim's review.  I'm just a bit buried at the moment.
Comment 4 law 2012-11-29 17:37:52 UTC
commit bf51f568f19bc063e62904d18b77d7e449a6a44f
Author: Jeff Law <law@redhat.com>
Date:   Fri Aug 10 09:37:04 2012 -0600

            [BZ #13939]
            * malloc/arena.c (reused_arena): New parameter, avoid_arena.
            When avoid_arena is set, don't retry in that arena.  Pick the
            next one, whatever it might be.
            (arena_get2): New parameter avoid_arena, pass through to reused_arena.
            (arena_lock): Pass in new parameter to arena_get2.
            * malloc/malloc.c (__libc_memalign): Pass in new parameter to
            arena_get2.
            (__libc_malloc): Unify retrying after main arena failure with
            __libc_memalign version.
            (__libc_valloc, __libc_pvalloc, __libc_calloc): Likewise.