Re: [PATCH] Annotate malloc conditions with glibc_likely.


On 9 December 2013 20:44, Ondřej Bílka <neleai@seznam.cz> wrote:
> Hi,
>
> Another little optimization in malloc is to annotate the fastbin
> check as the likely case. This should be the fast path, and since the
> other paths are slower, it should pay off.

In general I am not in favour of adding these types of annotations
unless we can show they improve performance, as they make the code
harder to read. They are also something of a blunt instrument - does
"likely" mean the branch is taken with 75% likelihood? 90%? 99%? I
suspect whether the annotation helps will also vary with compiler and
architecture.
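
For context, __glibc_likely is just a thin wrapper around GCC's
__builtin_expect, so the hint is purely binary - there is no way to
express *how* likely. If I remember misc/sys/cdefs.h correctly, the
definition is simply:

  /* Branch prediction hints; no probability attached, just a
     binary expected/unexpected marker for the compiler.  */
  #if __GNUC__ >= 3
  # define __glibc_unlikely(cond)  __builtin_expect ((cond), 0)
  # define __glibc_likely(cond)    __builtin_expect ((cond), 1)
  #else
  # define __glibc_unlikely(cond)  (cond)
  # define __glibc_likely(cond)    (cond)
  #endif

So the compiler only learns "this branch is expected" and is free to
weigh that however it likes.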

> I also moved the short mmapped case before the long normal one to
> make the code more readable.

I think this should be a separate patch.

> There is a discrepancy in freeing mmapped chunks: that is already
> done in __libc_free and __libc_realloc, so I wonder where an mmapped
> chunk reaching _int_free would come from.
>
>
>         * malloc/malloc.c (_int_malloc, _int_free): Annotate branches
>         by __glibc_likely/__glibc_unlikely.
>
>
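
Regarding the discrepancy: as far as I can see, __libc_free already
diverts mmapped chunks to munmap_chunk before _int_free is called,
roughly like this (a simplified sketch from my reading of malloc.c;
the free hook and trim-threshold handling are elided):

  void
  __libc_free (void *mem)
  {
    mstate ar_ptr;
    mchunkptr p;

    if (mem == 0)                    /* free(0) has no effect.  */
      return;

    p = mem2chunk (mem);

    if (chunk_is_mmapped (p))        /* Release mmapped memory.  */
      {
        munmap_chunk (p);
        return;
      }

    ar_ptr = arena_for_chunk (p);
    _int_free (ar_ptr, p, 0);
  }

So the mmapped case in _int_free does look hard to reach from the
public entry points.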
> diff --git a/malloc/malloc.c b/malloc/malloc.c
> index 393316e..c9d67f3 100644
> --- a/malloc/malloc.c
> +++ b/malloc/malloc.c
> @@ -3221,7 +3221,8 @@ _int_malloc(mstate av, size_t bytes)
>      can try it without checking, which saves some time on this fast path.
>    */
>
> -  if ((unsigned long)(nb) <= (unsigned long)(get_max_fast ())) {
> +  if (__glibc_likely((unsigned long)(nb) <= (unsigned long)(get_max_fast ()))
> +     ) {

The coding style looks a little odd (all the malloc code does, but
never mind!) and the casts to unsigned long are superfluous if
INTERNAL_SIZE_T becomes size_t.
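
With that change the whole test could shrink to something like this
(a sketch, assuming INTERNAL_SIZE_T is size_t so nb and
get_max_fast () compare without casts):

  if (__glibc_likely (nb <= get_max_fast ()))
    {
      /* Fast path: service the request from a fastbin.  */
    }

which would also avoid the awkward line break.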

>      idx = fastbin_index(nb);
>      mfastbinptr* fb = &fastbin (av, idx);
>      mchunkptr pp = *fb;
> @@ -3711,8 +3712,8 @@ _int_free(mstate av, mchunkptr p, int have_lock)
>      and used quickly in malloc.
>    */
>
> -  if ((unsigned long)(size) <= (unsigned long)(get_max_fast ())
> -
> +  if (__glibc_likely ((unsigned long)(size) <=
> +                     (unsigned long)(get_max_fast ()))

Again the casts can be removed if we are using size_t.

>  #if TRIM_FASTBINS
>        /*
>         If TRIM_FASTBINS set, don't place chunks
> @@ -3777,12 +3778,15 @@ _int_free(mstate av, mchunkptr p, int have_lock)
>         goto errout;
>        }
>    }
> +  else if (__glibc_unlikely(chunk_is_mmapped(p))) {
> +    munmap_chunk (p);
> +  }
> +  else {
>
>    /*
>      Consolidate other non-mmapped chunks as they arrive.
>    */
>
> -  else if (!chunk_is_mmapped(p)) {
>      if (! have_lock) {
>  #if THREAD_STATS
>        if(!mutex_trylock(&av->mutex))
> @@ -3929,13 +3933,6 @@ _int_free(mstate av, mchunkptr p, int have_lock)
>        (void)mutex_unlock(&av->mutex);
>      }
>    }
> -  /*
> -    If the chunk was allocated via mmap, release via munmap().
> -  */
> -
> -  else {
> -    munmap_chunk (p);
> -  }
>  }
>
>  /*



-- 
Will Newton
Toolchain Working Group, Linaro

