This is the mail archive of the libc-alpha@sourceware.org
mailing list for the glibc project.
Re: [PATCH] Simple malloc benchtest.
- From: Siddhesh Poyarekar <siddhesh at redhat dot com>
- To: Ondřej Bílka <neleai at seznam dot cz>
- Cc: libc-alpha at sourceware dot org
- Date: Mon, 23 Dec 2013 16:39:12 +0530
- Subject: Re: [PATCH] Simple malloc benchtest.
- Authentication-results: sourceware.org; auth=none
- References: <20131221153303 dot GA8420 at domone dot podge> <20131223090627 dot GF4979 at spoyarek dot pnq dot redhat dot com> <20131223095034 dot GA20816 at domone>
On Mon, Dec 23, 2013 at 10:50:34AM +0100, Ondřej Bílka wrote:
> You cannot do that; you would repeat the same mistake that plagued
> allocator research in the seventies. Allocation patterns are simply
> different from simulations, and all you would get from that
> measurement is meaningless garbage; see the following link:
I don't think the conclusions of that paper are valid because their
measurements are tweaked to give the most optimistic number possible.
They do pretend to use a more pessimistic measurement as well, but its
higher numbers are simply ignored in their conclusion, stating that
Additionally, we still need to account for allocator overhead (which
that paper correctly ignores, given its scope), so I'm going to modify
my request to ask for a simple measurement (which could get refined
over time) of allocator overhead and fragmentation - a single number
should be sufficient for now, since differentiating between allocator
overhead and fragmentation is only useful when you're comparing
allocators.
If you want to put out a more comprehensive measurement of
fragmentation (+ overhead) over time, I'd suggest looking at memory
used vs. memory requested at specific intervals and plotting them on a
graph.  Of course, the actual graph is out of scope for now, but you
could at least print out a limited set of plot points that a graph
generator could use.