Re: [PATCH][BZ #11087] Use atomic operations to track memory
- From: "Carlos O'Donell" <carlos at redhat dot com>
- To: Ondřej Bílka <neleai at seznam dot cz>
- Cc: libc-alpha at sourceware dot org
- Date: Thu, 17 Oct 2013 14:33:03 -0400
- Subject: Re: [PATCH][BZ #11087] Use atomic operations to track memory
- Authentication-results: sourceware.org; auth=none
- References: <20131017114140 dot GA24230 at domone dot podge>
On 10/17/2013 07:41 AM, Ondřej Bílka wrote:
> I fixed this mostly because Ulrich was wrong here in several ways.
> Calling the added locking to update statistics too expensive is nonsense,
> since it is only needed after an mmap, and the mmap plus its associated
> minor faults are much more costly.
> Also, no locking is needed; an atomic add will do the job well.
> This bug also affects malloc_stats.
> * malloc/malloc.c: Accurately track mmaped memory.
While this patch fixes the consistency problems with the variables
in question, it doesn't fix concurrent stores of slightly different
values to the same variables.
> diff --git a/malloc/malloc.c b/malloc/malloc.c
> index 2938234..cdbd6f3 100644
> --- a/malloc/malloc.c
> +++ b/malloc/malloc.c
> @@ -2334,10 +2334,11 @@ static void* sysmalloc(INTERNAL_SIZE_T nb, mstate av)
> /* update statistics */
> - if (++mp_.n_mmaps > mp_.max_n_mmaps)
> + __sync_fetch_and_add (&mp_.n_mmaps, 1);
> + if (mp_.n_mmaps > mp_.max_n_mmaps)
> mp_.max_n_mmaps = mp_.n_mmaps;
Don't two threads race to update mp_.max_n_mmaps with potentially
lower values? e.g. starting from:

  mp_.n_mmaps = x
  mp_.max_n_mmaps = x

Thread 1: __sync_fetch_and_add (&mp_.n_mmaps, 1);  [mp_.n_mmaps == x+1]
Thread 2: __sync_fetch_and_add (&mp_.n_mmaps, 1);  [mp_.n_mmaps == x+2]
Thread 2: if (mp_.n_mmaps > mp_.max_n_mmaps)       [x+2 > x == true]
Thread 1: if (mp_.n_mmaps > mp_.max_n_mmaps)       [x+1 > x == true]
Thread 2: mp_.max_n_mmaps = mp_.n_mmaps;           [mp_.max_n_mmaps = x+2]
Thread 1: mp_.max_n_mmaps = mp_.n_mmaps;           [mp_.max_n_mmaps = x+1]

Now there are 2 new mmaps, but the recorded maximum has gone up by only 1.
If the store to mp_.max_n_mmaps were a compare-and-swap, then we would
know whether another thread had updated it, and we could retry the
update if the value we saw was lower.
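
Something along these lines is what I have in mind; this is only a
minimal standalone sketch using the same __sync builtins (the atomic_max
helper, the mapper thread function and the plain global counters standing
in for mp_.n_mmaps / mp_.max_n_mmaps are hypothetical, not malloc.c code):

/* cas-max.c: sketch of the compare-and-swap retry described above.  */
#include <pthread.h>
#include <stdio.h>

static int n_mmaps;     /* stands in for mp_.n_mmaps */
static int max_n_mmaps; /* stands in for mp_.max_n_mmaps */

/* Raise *MAX to at least VALUE, retrying while another thread races us.  */
static void
atomic_max (int *max, int value)
{
  int old = *max;
  while (value > old)
    {
      int seen = __sync_val_compare_and_swap (max, old, value);
      if (seen == old)
        break;          /* Our store won.  */
      old = seen;       /* Someone else stored first; compare again.  */
    }
}

static void *
mapper (void *arg)
{
  /* __sync_add_and_fetch returns the updated count, i.e. the value this
     thread should publish as a possible new maximum.  */
  int n = __sync_add_and_fetch (&n_mmaps, 1);
  atomic_max (&max_n_mmaps, n);
  return arg;
}

int
main (void)
{
  pthread_t t1, t2;
  pthread_create (&t1, NULL, mapper, NULL);
  pthread_create (&t2, NULL, mapper, NULL);
  pthread_join (t1, NULL);
  pthread_join (t2, NULL);
  /* With the CAS retry the recorded maximum can never lag the count.  */
  printf ("n_mmaps=%d max_n_mmaps=%d\n", n_mmaps, max_n_mmaps);
  return 0;
}

The losing thread re-reads the current maximum and stores only if its own
value is still larger, so a concurrently stored larger value can never be
overwritten with a smaller one.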
> - sum = mp_.mmapped_mem += size;
> + sum = __sync_fetch_and_add (&mp_.mmapped_mem, size);
> if (sum > (unsigned long)(mp_.max_mmapped_mem))
> mp_.max_mmapped_mem = sum;
Likewise for mp_.max_mmapped_mem?
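
Applied to the quoted hunk, the same idea would look roughly like this
(only a sketch, not a tested patch; "old" is a hypothetical new local of
the same type as mp_.max_mmapped_mem, and __sync_add_and_fetch is used
because it returns the value after the addition, which is what sum is
compared against):

  sum = __sync_add_and_fetch (&mp_.mmapped_mem, size);
  old = mp_.max_mmapped_mem;
  /* Retry until either sum is no longer larger or our CAS succeeds.  */
  while ((unsigned long) sum > (unsigned long) old
         && !__sync_bool_compare_and_swap (&mp_.max_mmapped_mem, old, sum))
    old = mp_.max_mmapped_mem;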
> @@ -2789,8 +2790,8 @@ munmap_chunk(mchunkptr p)
> - mp_.n_mmaps--;
> - mp_.mmapped_mem -= total_size;
> + __sync_fetch_and_sub (&mp_.n_mmaps, 1);
> + __sync_fetch_and_sub (&mp_.mmapped_mem, total_size);
> /* If munmap failed the process virtual memory address space is in a
> bad shape. Just leave the block hanging around, the process will
> @@ -2831,8 +2832,7 @@ mremap_chunk(mchunkptr p, size_t new_size)
> assert((p->prev_size == offset));
> set_head(p, (new_size - offset)|IS_MMAPPED);
> - mp_.mmapped_mem -= size + offset;
> - mp_.mmapped_mem += new_size;
> + __sync_fetch_and_add (&mp_.mmapped_mem, new_size - size - offset);
> if ((unsigned long)mp_.mmapped_mem > (unsigned long)mp_.max_mmapped_mem)
> mp_.max_mmapped_mem = mp_.mmapped_mem;
> return p;