This is the mail archive of the
libc-alpha@sourceware.org
mailing list for the glibc project.
Re: [PATCH] Simple malloc benchtest.
- From: Ondřej Bílka <neleai at seznam dot cz>
- To: Siddhesh Poyarekar <siddhesh at redhat dot com>
- Cc: libc-alpha at sourceware dot org
- Date: Mon, 23 Dec 2013 14:31:57 +0100
- Subject: Re: [PATCH] Simple malloc benchtest.
- Authentication-results: sourceware.org; auth=none
- References: <20131221153303 dot GA8420 at domone dot podge> <20131223090627 dot GF4979 at spoyarek dot pnq dot redhat dot com> <20131223095034 dot GA20816 at domone> <20131223110912 dot GG4979 at spoyarek dot pnq dot redhat dot com>
On Mon, Dec 23, 2013 at 04:39:12PM +0530, Siddhesh Poyarekar wrote:
> On Mon, Dec 23, 2013 at 10:50:34AM +0100, Ondřej Bílka wrote:
> > You cannot do that; you would repeat the same mistake that plagued allocator
> > research in the seventies. Allocation patterns are simply different from
> > simulations, and all that you would get from that measurement is meaningless
> > garbage; see the following link:
> >
> > http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.97.5185&rep=rep1&type=pdf
>
> I don't think the conclusions of that paper are valid because their
> measurements are tweaked to give the most optimistic number possible.
> They do pretend to use a more pessimistic measurement as well, but its
> higher numbers are simply ignored in their conclusion, stating that
> they're 'misleading'.
>
Please justify your opinion; the relevant metric was:
"3. The maximum amount of memory used by the allocator
relative to the amount of memory requested by the pro-
gram at the point of maximal memory usage."
If that metric is valid, you have a severe problem with fragmentation in the
following program:
#include <stdlib.h>

int main (void)
{
  char *ary[1000];
  for (int i = 0; i < 1000; i++)
    ary[i] = malloc (10000);
  for (int i = 0; i < 1000; i++)
    ary[i] = realloc (ary[i], 100);
  char *next = malloc (10000);
  return 0;
}
According to that measure, this program has 10000% fragmentation.
> Additionally, we still need to account for allocator overhead (which
> that paper correctly ignores, given its scope),
Not quite.
> so I'm going to modify
> my request to ask for a simple measurement (which could get refined
> over time) of allocator overhead and fragmentation - a single number
> should be sufficient for now, since differentiating between allocator
> overhead and fragmentation is only useful when you're comparing
> different allocators.
>
> If you want to put out a more comprehensive measurement of
> fragmentation (+ overhead) over time, I'd suggest looking at memory
> used vs memory requested at specific intervals and simply plot them
> into a graph. Of course, the actual graph is out of scope for now,
> but you could at least get a limited set of plot points that a graph
> generator could use and print them out for now.
>
As you cannot use this benchmark to compare different allocators, what you
propose is useless. Since the sequence of allocated addresses will stay the
same, the resulting graph stays the same, and you cannot get any information
from a constant.
These measurements make sense only when you do whole-system profiling, where
you get real data.