3 * - strdup? maybe shouldn't bother yet, it seems difficult to get includes
4 * right using dlmalloc.h
5 * - add STD_C prototyping
6 * - adhere to comment conventions
7 * - maybe fix ALLOCFILL vs. MOATFILL in do_init_realloced_chunk()
8 * - keep a list of mmaped regions for checking in malloc_update_mallinfo()
9 * - I think memalign() is wrong: it aligns the chunk rather than the memory
10 * portion of the chunk.
11 * - "& -alignment" in memalign() is suspect: should use "& ~alignment"
13 * - malloc.h doesn't need malloc_COPY or probably a bunch of other stuff
14 * - add mallopt options for e.g. fill?
15 * - come up with a non-BBC version of M_C
16 * - document necessity of checking chunk address in do_check_chunk prior to
17 * accessing any of its fields
19 * minor speedup due to extend check before mremap
20 * minor speedup due to returning malloc() result in memalign() if aligned
21 * made malloc_update_mallinfo() check alloced regions at start of sbrk area
22 * fixed bug: After discovering foreign sbrk, if old_top was MINSIZE, would
23 * reduce old_top_size to 0, thus making inuse(old_top) return 0; other
24 * functions would consequently attempt to access old_top->{fd,bk}, which
25 * were invalid. This is in malloc_extend_top(), in the "double
28 * malloc_usable_size(P) is equivalent to realloc(P, malloc_usable_size(P))
31 * Revision 1.5 2001/10/03 03:49:25 cgf
32 * * cygheap.cc (cfree): Remove malloc debugging probe.
33 * * dlmalloc.c (errprint): Remove abort() call which causes interesting error
34 * message printing to abort prematurely.
35 * * environ.cc: Sprinkle MALLOC_CHECKs liberally throughout.
36 * (_addenv): Allocate two empty elements at end of environ to
37 * (apparently) work around problems with some buggy applications.
38 * (winenv): Avoid calling alloca if no forced environment variable is present.
40 * * exceptions.cc (open_stackdumpfile): Don't print "Dumping stack trace to..."
41 * when running in a cygwin environment (i.e., the parent is a cygwin process).
43 * * dtable.cc (dtable::init_std_file_from_handle): Move device type detection
44 * code from build_fhandler here since it is only used by this function.
45 * (dtable::build_fhandler_from_name): New method. Renamed from
46 * dtable::build_fhandler.
47 * (dtable::build_fhandler): Use build_fhandler_from_name.
48 * (cygwin_attach_handle_to_fd): Ditto.
49 * * syscalls.cc (_open): Ditto.
50 * (stat_worker): Ditto.
51 * * dtable.h (dtable::build_fhandler_from_name): Rename declaration from
52 * dtable::build_fhandler.
54 * Revision 1.4 2001/09/07 21:32:04 cgf
55 * * cygheap.h (init_cygheap): Move heap pointers here.
56 * * include/sys/cygwin.h (perprocess): Remove heap pointers.
57 * * dcrt0.cc (__cygwin_user_data): Reflect obsolete perprocess stuff.
58 * (_dll_crt0): Don't initialize heap pointers.
59 * (cygwin_dll_init): Ditto.
60 * (release_upto): Use heap pointers from cygheap.
62 * * fork.cc (fork_parent): Ditto. Don't set heap pointers in ch.
63 * (fork_child): Remove obsolete sigproc_fixup_after_fork.
64 * * shared.cc (memory_init): Reorganize so that cygheap initialization is called
65 * prior to regular heap since regular heap uses cygheap now.
66 * * sigproc.cc (proc_subproc): Eliminate zombies allocation.
67 * (sigproc_init): Move zombies allocation here. Don't free up array on fork, just
69 * (sigproc_fixup_after_fork): Eliminate.
71 * * include/cygwin/version.h: Reflect change to perprocess structure.
73 * Revision 1.3 2001/06/26 14:47:48 cgf
74 * * mmap.cc: Clean up *ResourceLock calls throughout.
75 * * thread.cc (pthread_cond::TimedWait): Check for WAIT_TIMEOUT as well as
77 * (__pthread_cond_timedwait): Calculate a relative wait from the abstime
80 * Revision 1.2 2001/06/24 22:26:49 cgf
83 * Revision 1.1 2001/04/24 15:25:30 duda
84 * * dlmalloc.c: New file. Port of Doug Lea's malloc
85 * * dlmalloc.h: Ditto.
86 * * Makefile.in: Add support for MALLOC_DEBUG
87 * * config.h.in: Ditto.
89 * * configure.in: Add --enable-malloc-debugging option.
90 * * configure: Regenerate.
91 * * debug.h: Include declarations for debugging malloc.
92 * * tty.cc (grantpt): Fix definition.
95 * Revision 1.1 1997/12/24 18:34:47 nsd
99 /* ---------- To make a malloc.h, start cutting here ------------ */
102 A version of malloc/free/realloc written by Doug Lea and released to the
103 public domain. Send questions/comments/complaints/performance data
106 * VERSION 2.6.4 Thu Nov 28 07:54:55 1996 Doug Lea (dl at gee)
108 Note: There may be an updated version of this malloc obtainable at
109 ftp://g.oswego.edu/pub/misc/malloc.c
110 Check before installing!
112 * Why use this malloc?
114 This is not the fastest, most space-conserving, most portable, or
115 most tunable malloc ever written. However it is among the fastest
116 while also being among the most space-conserving, portable and tunable.
117 Consistent balance across these factors results in a good general-purpose
118 allocator. For a high-level description, see
119 http://g.oswego.edu/dl/html/malloc.html
121 * Synopsis of public routines
123 (Much fuller descriptions are contained in the program documentation below.)
126 Return a pointer to a newly allocated chunk of at least n bytes, or null
127 if no space is available.
129 Release the chunk of memory pointed to by p, or no effect if p is null.
130 realloc(Void_t* p, size_t n);
131 Return a pointer to a chunk of size n that contains the same data
132 as does chunk p up to the minimum of (n, p's size) bytes, or null
133 if no space is available. The returned pointer may or may not be
134 the same as p. If p is null, equivalent to malloc. Unless the
135 #define realloc_ZERO_BYTES_FREES below is set, realloc with a
136 size argument of zero (re)allocates a minimum-sized chunk.
137 memalign(size_t alignment, size_t n);
138 Return a pointer to a newly allocated chunk of n bytes, aligned
139 in accord with the alignment argument, which must be a power of
142 Equivalent to memalign(pagesize, n), where pagesize is the page
143 size of the system (or as near to this as can be figured out from
144 all the includes/defines below.)
146 Equivalent to valloc(minimum-page-that-holds(n)), that is,
147 round up n to nearest pagesize.
148 calloc(size_t unit, size_t quantity);
149 Returns a pointer to quantity * unit bytes, with all locations
152 Equivalent to free(p).
153 malloc_trim(size_t pad);
154 Release all but pad bytes of freed top-most memory back
155 to the system. Return 1 if successful, else 0.
156 malloc_usable_size(Void_t* p);
157 Report the number of usable allocated bytes associated with allocated
158 chunk p. This may or may not report more bytes than were requested,
159 due to alignment and minimum size constraints.
161 Prints brief summary statistics on stderr.
163 Returns (by copy) a struct containing various summary statistics.
164 mallopt(int parameter_number, int parameter_value)
165 Changes one of the tunable parameters described below. Returns
166 1 if successful in changing the parameter, else 0.
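The contract summarized above (realloc preserving contents up to the smaller of the two sizes, free of a null pointer being a no-op) can be exercised in a few lines of standard C. The demo function below is illustrative only and relies on nothing beyond the hosted C library:

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative check of the malloc/realloc/free contract described
   above; demo() returns 0 on success, -1 on allocation failure. */
static int demo(void)
{
  char *p = malloc(16);            /* at least 16 usable bytes, or NULL */
  if (p == NULL)
    return -1;
  strcpy(p, "hello");
  char *q = realloc(p, 64);        /* same data up to min(16, 64) bytes */
  if (q == NULL)
    {
      free(p);                     /* realloc failure leaves p valid */
      return -1;
    }
  int ok = (strcmp(q, "hello") == 0);
  free(q);
  free(NULL);                      /* releasing a null pointer has no effect */
  return ok ? 0 : -1;
}
```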
171 8 byte alignment is currently hardwired into the design. This
172 seems to suffice for all current machines and C compilers.
174 Assumed pointer representation: 4 or 8 bytes
175 Code for 8-byte pointers is untested by me but has worked
176 reliably by Wolfram Gloger, who contributed most of the
177 changes supporting this.
179 Assumed size_t representation: 4 or 8 bytes
180 Note that size_t is allowed to be 4 bytes even if pointers are 8.
182 Minimum overhead per allocated chunk: 4 or 8 bytes
183 Each malloced chunk has a hidden overhead of 4 bytes holding size
184 and status information.
186 Minimum allocated size: 4-byte ptrs: 16 bytes (including 4 overhead)
187 8-byte ptrs: 24/32 bytes (including 4/8 overhead)
189 When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
190 ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
191 needed; 4 (8) for a trailing size field
192 and 8 (16) bytes for free list pointers. Thus, the minimum
193 allocatable size is 16/24/32 bytes.
195 Even a request for zero bytes (i.e., malloc(0)) returns a
196 pointer to something of the minimum allocatable size.
198 Maximum allocated size: 4-byte size_t: 2^31 - 8 bytes
199 8-byte size_t: 2^63 - 16 bytes
201 It is assumed that (possibly signed) size_t bit values suffice to
202 represent chunk sizes. `Possibly signed' is due to the fact
203 that `size_t' may be defined on a system as either a signed or
204 an unsigned type. To be conservative, values that would appear
205 as negative numbers are avoided.
206 Requests for sizes with a negative sign bit will return a
209 Maximum overhead wastage per allocated chunk: normally 15 bytes
211 Alignment demands, plus the minimum allocatable size restriction
212 make the normal worst-case wastage 15 bytes (i.e., up to 15
213 more bytes will be allocated than were requested in malloc), with
215 1. Because requests for zero bytes allocate non-zero space,
216 the worst case wastage for a request of zero bytes is 24 bytes.
217 2. For requests >= mmap_threshold that are serviced via
218 mmap(), the worst case wastage is 8 bytes plus the remainder
219 from a system page (the minimal mmap unit); typically 4096 bytes.
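The worst-case arithmetic above can be made concrete. The helper below is a hypothetical sketch of the 4-byte-size_t case (4 bytes of size/status overhead, 8-byte alignment, 16-byte minimum chunk), not the allocator's actual internal macro:

```c
#include <stddef.h>

/* Hypothetical padding rule for the 4-byte size_t case: add the
   4-byte overhead, round up to an 8-byte multiple, and enforce the
   16-byte minimum allocatable size. */
static size_t pad_request(size_t n)
{
  size_t sz = (n + 4 + 7) & ~(size_t)7;
  return sz < 16 ? 16 : sz;
}
```

Under these assumptions a 13-byte request consumes 24 bytes, while even malloc(0) consumes a full minimum-size chunk.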
223 Here are some features that are NOT currently supported
225 * No user-definable hooks for callbacks and the like.
226 * No automated mechanism for fully checking that all accesses
227 to malloced memory stay within their bounds.
228 * No support for compaction.
230 * Synopsis of compile-time options:
232 People have reported using previous versions of this malloc on all
233 versions of Unix, sometimes by tweaking some of the defines
234 below. It has been tested most extensively on Solaris and
235 Linux. It is also reported to work on WIN32 platforms.
236 People have also reported adapting this malloc for use in
237 stand-alone embedded systems.
239 The implementation is in straight, hand-tuned ANSI C. Among other
240 consequences, it uses a lot of macros. Because of this, to be at
241 all usable, this code should be compiled using an optimizing compiler
242 (for example gcc -O2) that can simplify expressions and control
245 __STD_C (default: derived from C compiler defines)
246 Nonzero if using ANSI-standard C compiler, a C++ compiler, or
247 a C compiler sufficiently close to ANSI to get away with it.
248 DEBUG (default: NOT defined)
249 Define to enable debugging. Adds fairly extensive assertion-based
250 checking to help track down memory errors, but noticeably slows down
252 realloc_ZERO_BYTES_FREES (default: NOT defined)
253 Define this if you think that realloc(p, 0) should be equivalent
254 to free(p). Otherwise, since malloc returns a unique pointer for
255 malloc(0), so does realloc(p, 0).
256 HAVE_memcpy (default: defined)
257 Define if you are not otherwise using ANSI STD C, but still
258 have memcpy and memset in your C library and want to use them.
259 Otherwise, simple internal versions are supplied.
260 USE_memcpy (default: 1 if HAVE_memcpy is defined, 0 otherwise)
261 Define as 1 if you want the C library versions of memset and
262 memcpy called in realloc and calloc (otherwise macro versions are used).
263 At least on some platforms, the simple macro versions usually
264 outperform libc versions.
265 HAVE_MMAP (default: defined as 1)
266 Define to non-zero to optionally make malloc() use mmap() to
267 allocate very large blocks.
268 HAVE_MREMAP (default: defined as 0 unless Linux libc set)
269 Define to non-zero to optionally make realloc() use mremap() to
270 reallocate very large blocks.
271 malloc_getpagesize (default: derived from system #includes)
272 Either a constant or routine call returning the system page size.
273 HAVE_USR_INCLUDE_malloc_H (default: NOT defined)
274 Optionally define if you are on a system with a /usr/include/malloc.h
275 that declares struct mallinfo. It is not at all necessary to
276 define this even if you do, but will ensure consistency.
277 INTERNAL_SIZE_T (default: size_t)
278 Define to a 32-bit type (probably `unsigned int') if you are on a
279 64-bit machine, yet do not want or need to allow malloc requests of
280 greater than 2^31 to be handled. This saves space, especially for
282 INTERNAL_LINUX_C_LIB (default: NOT defined)
283 Defined only when compiled as part of Linux libc.
284 Also note that there is some odd internal name-mangling via defines
285 (for example, internally, `malloc' is named `mALLOc') needed
286 when compiling in this case. These look funny but don't otherwise
288 WIN32 (default: undefined)
289 Define this on MS win (95, nt) platforms to compile in sbrk emulation.
290 LACKS_UNISTD_H (default: undefined)
291 Define this if your system does not have a <unistd.h>.
292 MORECORE (default: sbrk)
293 The name of the routine to call to obtain more memory from the system.
294 MORECORE_FAILURE (default: -1)
295 The value returned upon failure of MORECORE.
296 MORECORE_CLEARS (default 0)
297 True (1) if the routine mapped to MORECORE zeroes out memory (which
299 DEFAULT_TRIM_THRESHOLD
301 DEFAULT_MMAP_THRESHOLD
303 Default values of tunable parameters (described in detail below)
304 controlling interaction with host system routines (sbrk, mmap, etc).
305 These values may also be changed dynamically via mallopt(). The
306 preset defaults are those that give best performance for typical
327 #endif /*__cplusplus*/
339 #define __MALLOC_H_INCLUDED
342 #include <stddef.h> /* for size_t */
344 #include <sys/types.h>
351 #include <stdio.h> /* needed for malloc_stats */
362 Because freed chunks may be overwritten with link fields, this
363 malloc will often die when freed memory is overwritten by user
364 programs. This can be very effective (albeit in an annoying way)
365 in helping track down dangling pointers.
367 If you compile with -DDEBUG, a number of assertion checks are
368 enabled that will catch more memory errors. You probably won't be
369 able to make much sense of the actual assertion errors, but they
370 should help you locate incorrectly overwritten memory. The
371 checking is fairly extensive, and will slow down execution
372 noticeably. Calling malloc_stats or mallinfo with DEBUG set will
373 attempt to check every non-mmapped allocated and free chunk in the
374 course of computing the summaries. (By nature, mmapped regions
375 cannot be checked very much automatically.)
377 Setting DEBUG may also be helpful if you are trying to modify
378 this code. The assertions in the check routines spell out in more
379 detail the assumptions and invariants underlying the algorithms.
393 #define assert(x) ((void)0)
397 INTERNAL_SIZE_T is the word-size used for internal bookkeeping
398 of chunk sizes. On a 64-bit machine, you can reduce malloc
399 overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
400 at the expense of not being able to handle requests greater than
401 2^31. This limitation is hardly ever a concern; you are encouraged
402 to set this. However, the default version is the same as size_t.
405 #ifndef INTERNAL_SIZE_T
406 #define INTERNAL_SIZE_T size_t
410 realloc_ZERO_BYTES_FREES should be set if a call to
411 realloc with zero bytes should be the same as a call to free.
412 Some people think it should. Otherwise, since this malloc
413 returns a unique pointer for malloc(0), so does realloc(p, 0).
417 /* #define realloc_ZERO_BYTES_FREES */
421 WIN32 causes an emulation of sbrk to be compiled in
422 mmap-based options are not currently supported in WIN32.
427 #define MORECORE wsbrk
433 HAVE_memcpy should be defined if you are not otherwise using
434 ANSI STD C, but still have memcpy and memset in your C library
435 and want to use them in calloc and realloc. Otherwise simple
436 macro versions are defined here.
438 USE_memcpy should be defined as 1 if you actually want to
439 have memset and memcpy called. People report that the macro
440 versions are often enough faster than libc versions on many
441 systems that it is better to use them.
455 #if (__STD_C || defined(HAVE_memcpy))
458 void* memset(void*, int, size_t);
459 void* memcpy(void*, const void*, size_t);
470 /* The following macros are only invoked with (2n+1)-multiples of
471 INTERNAL_SIZE_T units, with a positive integer n. This is exploited
472 for fast inline execution when n is small. */
474 #define malloc_ZERO(charp, nbytes) \
476 INTERNAL_SIZE_T mzsz = (nbytes); \
477 if(mzsz <= 9*sizeof(mzsz)) { \
478 INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp); \
479 if(mzsz >= 5*sizeof(mzsz)) { *mz++ = 0; \
481 if(mzsz >= 7*sizeof(mzsz)) { *mz++ = 0; \
483 if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0; \
488 } else memset((charp), 0, mzsz); \
491 #define malloc_COPY(dest,src,nbytes) \
493 INTERNAL_SIZE_T mcsz = (nbytes); \
494 if(mcsz <= 9*sizeof(mcsz)) { \
495 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src); \
496 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest); \
497 if(mcsz >= 5*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
498 *mcdst++ = *mcsrc++; \
499 if(mcsz >= 7*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
500 *mcdst++ = *mcsrc++; \
501 if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
502 *mcdst++ = *mcsrc++; }}} \
503 *mcdst++ = *mcsrc++; \
504 *mcdst++ = *mcsrc++; \
506 } else memcpy(dest, src, mcsz); \
509 #else /* !USE_memcpy */
511 /* Use Duff's device for good zeroing/copying performance. */
513 #define malloc_ZERO(charp, nbytes) \
515 INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp); \
516 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
517 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
519 case 0: for(;;) { *mzp++ = 0; \
520 case 7: *mzp++ = 0; \
521 case 6: *mzp++ = 0; \
522 case 5: *mzp++ = 0; \
523 case 4: *mzp++ = 0; \
524 case 3: *mzp++ = 0; \
525 case 2: *mzp++ = 0; \
526 case 1: *mzp++ = 0; if(mcn <= 0) break; mcn--; } \
530 #define malloc_COPY(dest,src,nbytes) \
532 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src; \
533 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest; \
534 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
535 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
537 case 0: for(;;) { *mcdst++ = *mcsrc++; \
538 case 7: *mcdst++ = *mcsrc++; \
539 case 6: *mcdst++ = *mcsrc++; \
540 case 5: *mcdst++ = *mcsrc++; \
541 case 4: *mcdst++ = *mcsrc++; \
542 case 3: *mcdst++ = *mcsrc++; \
543 case 2: *mcdst++ = *mcsrc++; \
544 case 1: *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; } \
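The same Duff's-device structure, pulled out of the macro into a plain function for readability (the names here are hypothetical; the real macros above operate on INTERNAL_SIZE_T words):

```c
#include <stddef.h>

/* Duff's-device word copy: the switch jumps into the middle of the
   unrolled do-while to handle the nwords % 8 leftover words first;
   each subsequent pass copies eight words. */
static void duff_copy(size_t *dst, const size_t *src, size_t nwords)
{
  size_t n = (nwords + 7) / 8;     /* number of unrolled passes */
  if (nwords == 0)
    return;                        /* guard: switch(0) would copy 8 */
  switch (nwords % 8)
    {
    case 0: do { *dst++ = *src++;
    case 7:      *dst++ = *src++;
    case 6:      *dst++ = *src++;
    case 5:      *dst++ = *src++;
    case 4:      *dst++ = *src++;
    case 3:      *dst++ = *src++;
    case 2:      *dst++ = *src++;
    case 1:      *dst++ = *src++;
            } while (--n > 0);
    }
}
```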
552 /* The trailing moat invalidates the above prediction about the nbytes
553 parameter to malloc_ZERO and malloc_COPY. */
555 #define malloc_ZERO(charp, nbytes) \
557 char *mzp = (char *)(charp); \
558 long mzn = (nbytes); \
563 #define malloc_COPY(dest,src,nbytes) \
565 char *mcsrc = (char *)(src); \
566 char *mcdst = (char *)(dest); \
567 long mcn = (nbytes); \
569 *mcdst++ = *mcsrc++; \
575 Define HAVE_MMAP to optionally make malloc() use mmap() to
576 allocate very large blocks. These will be returned to the
577 operating system immediately after a free().
585 Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
586 large blocks. This is currently only possible on Linux with
587 kernel versions newer than 1.3.77.
591 #ifdef INTERNAL_LINUX_C_LIB
592 #define HAVE_MREMAP 1
594 #define HAVE_MREMAP 0
602 #include <sys/mman.h>
604 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
605 #define MAP_ANONYMOUS MAP_ANON
608 #endif /* HAVE_MMAP */
611 Access to system page size. To the extent possible, this malloc
612 manages memory from the system in page-size units.
614 The following mechanics for getpagesize were adapted from
615 bsd/gnu getpagesize.h
618 #ifndef LACKS_UNISTD_H
622 #ifndef malloc_getpagesize
623 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
624 # ifndef _SC_PAGE_SIZE
625 # define _SC_PAGE_SIZE _SC_PAGESIZE
628 # ifdef _SC_PAGE_SIZE
629 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
631 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
633 extern size_t getpagesize(void);
635 extern size_t getpagesize();
637 # define malloc_getpagesize getpagesize()
639 # include <sys/param.h>
640 # ifdef EXEC_PAGESIZE
641 # define malloc_getpagesize EXEC_PAGESIZE
645 # define malloc_getpagesize NBPG
647 # define malloc_getpagesize (NBPG * CLSIZE)
651 # define malloc_getpagesize NBPC
654 # define malloc_getpagesize PAGESIZE
656 # define malloc_getpagesize (4096) /* just guess */
669 This version of malloc supports the standard SVID/XPG mallinfo
670 routine that returns a struct containing the same kind of
671 information you can get from malloc_stats. It should work on
672 any SVID/XPG compliant system that has a /usr/include/malloc.h
673 defining struct mallinfo. (If you'd like to install such a thing
674 yourself, cut out the preliminary declarations as described above
675 and below and save them in a malloc.h file. But there's no
676 compelling reason to bother to do this.)
678 The main declaration needed is the mallinfo struct that is returned
679 (by-copy) by mallinfo(). The SVID/XPG mallinfo struct contains a
680 bunch of fields, most of which are not even meaningful in this
681 version of malloc. Some of these fields are instead filled by
682 mallinfo() with other numbers that might possibly be of interest.
684 HAVE_USR_INCLUDE_malloc_H should be set if you have a
685 /usr/include/malloc.h file that includes a declaration of struct
686 mallinfo. If so, it is included; else an SVID2/XPG2 compliant
687 version is declared below. These must be precisely the same for
692 /* #define HAVE_USR_INCLUDE_malloc_H */
694 #if HAVE_USR_INCLUDE_malloc_H
695 #include "/usr/include/malloc.h"
698 /* SVID2/XPG mallinfo structure */
701 int arena;    /* total space allocated from system */
702 int ordblks;  /* number of non-inuse chunks */
703 int smblks;   /* unused -- always zero */
704 int hblks;    /* number of mmapped regions */
705 int hblkhd;   /* total space in mmapped regions */
706 int usmblks;  /* unused -- always zero */
707 int fsmblks;  /* unused -- always zero */
708 int uordblks; /* total allocated space */
709 int fordblks; /* total non-inuse space */
710 int keepcost; /* top-most, releasable (via malloc_trim) space */
713 /* SVID2/XPG mallopt options */
715 #define M_MXFAST 1 /* UNUSED in this malloc */
716 #define M_NLBLKS 2 /* UNUSED in this malloc */
717 #define M_GRAIN 3 /* UNUSED in this malloc */
718 #define M_KEEP 4 /* UNUSED in this malloc */
722 /* mallopt options that actually do something */
724 #define M_TRIM_THRESHOLD -1
726 #define M_MMAP_THRESHOLD -3
727 #define M_MMAP_MAX -4
728 #define M_SCANHEAP -5
733 #ifndef DEFAULT_TRIM_THRESHOLD
734 #define DEFAULT_TRIM_THRESHOLD (128 * 1024)
738 M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
739 to keep before releasing via malloc_trim in free().
741 Automatic trimming is mainly useful in long-lived programs.
742 Because trimming via sbrk can be slow on some systems, and can
743 sometimes be wasteful (in cases where programs immediately
744 afterward allocate more large chunks) the value should be high
745 enough so that your overall system performance would improve by
748 The trim threshold and the mmap control parameters (see below)
749 can be traded off with one another. Trimming and mmapping are
750 two different ways of releasing unused memory back to the
751 system. Between these two, it is often possible to keep
752 system-level demands of a long-lived program down to a bare
753 minimum. For example, in one test suite of sessions measuring
754 the XF86 X server on Linux, using a trim threshold of 128K and a
755 mmap threshold of 192K led to near-minimal long term resource
758 If you are using this malloc in a long-lived program, it should
759 pay to experiment with these values. As a rough guide, you
760 might set to a value close to the average size of a process
761 (program) running on your system. Releasing this much memory
762 would allow such a process to run in memory. Generally, it's
763 worth it to tune for trimming rather than memory mapping when a
764 program undergoes phases where several large chunks are
765 allocated and released in ways that can reuse each other's
766 storage, perhaps mixed with phases where there are no such
767 chunks at all. And in well-behaved long-lived programs,
768 controlling release of large blocks via trimming versus mapping
771 However, in most programs, these parameters serve mainly as
772 protection against the system-level effects of carrying around
773 massive amounts of unneeded memory. Since frequent calls to
774 sbrk, mmap, and munmap otherwise degrade performance, the default
775 parameters are set to relatively high values that serve only as
778 The default trim value is high enough to cause trimming only in
779 fairly extreme (by current memory consumption standards) cases.
780 It must be greater than page size to have any useful effect. To
781 disable trimming completely, you can set to (unsigned long)(-1);
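On systems whose C library exposes mallopt(), the experiment suggested above is a one-liner. The sketch below assumes a glibc-style <malloc.h> and its M_TRIM_THRESHOLD code; in this file the mallopt codes are defined a little further down:

```c
#include <malloc.h>

/* Raise the trim threshold to 256K. mallopt returns 1 on success,
   0 on failure. Assumes a glibc-style <malloc.h>. */
static int tune_trim(void)
{
  return mallopt(M_TRIM_THRESHOLD, 256 * 1024);
}
```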
787 #ifndef DEFAULT_TOP_PAD
788 #define DEFAULT_TOP_PAD (0)
792 M_TOP_PAD is the amount of extra `padding' space to allocate or
793 retain whenever sbrk is called. It is used in two ways internally:
795 * When sbrk is called to extend the top of the arena to satisfy
796 a new malloc request, this much padding is added to the sbrk
799 * When malloc_trim is called automatically from free(),
800 it is used as the `pad' argument.
802 In both cases, the actual amount of padding is rounded
803 so that the end of the arena is always a system page boundary.
805 The main reason for using padding is to avoid calling sbrk so
806 often. Having even a small pad greatly reduces the likelihood
807 that nearly every malloc request during program start-up (or
808 after trimming) will invoke sbrk, which needlessly wastes
811 Automatic rounding-up to page-size units is normally sufficient
812 to avoid measurable overhead, so the default is 0. However, in
813 systems where sbrk is relatively slow, it can pay to increase
814 this value, at the expense of carrying around more memory than
820 #ifndef DEFAULT_MMAP_THRESHOLD
821 #define DEFAULT_MMAP_THRESHOLD (128 * 1024)
826 M_MMAP_THRESHOLD is the request size threshold for using mmap()
827 to service a request. Requests of at least this size that cannot
828 be allocated using already-existing space will be serviced via mmap.
829 (If enough normal freed space already exists it is used instead.)
831 Using mmap segregates relatively large chunks of memory so that
832 they can be individually obtained and released from the host
833 system. A request serviced through mmap is never reused by any
834 other request (at least not directly; the system may just so
835 happen to remap successive requests to the same locations).
837 Segregating space in this way has the benefit that mmapped space
838 can ALWAYS be individually released back to the system, which
839 helps keep the system level memory demands of a long-lived
840 program low. Mapped memory can never become `locked' between
841 other chunks, as can happen with normally allocated chunks, which
842 means that even trimming via malloc_trim would not release them.
844 However, it has the disadvantages that:
846 1. The space cannot be reclaimed, consolidated, and then
847 used to service later requests, as happens with normal chunks.
848 2. It can lead to more wastage because of mmap page alignment
850 3. It causes malloc performance to be more dependent on host
851 system memory management support routines which may vary in
852 implementation quality and may impose arbitrary
853 limitations. Generally, servicing a request via normal
854 malloc steps is faster than going through a system's mmap.
856 All together, these considerations should lead you to use mmap
857 only for relatively large requests.
864 #ifndef DEFAULT_MMAP_MAX
866 #define DEFAULT_MMAP_MAX (64)
868 #define DEFAULT_MMAP_MAX (0)
873 M_MMAP_MAX is the maximum number of requests to simultaneously
874 service using mmap. This parameter exists because:
876 1. Some systems have a limited number of internal tables for
878 2. In most systems, overreliance on mmap can degrade overall
880 3. If a program allocates many large regions, it is probably
881 better off using normal sbrk-based allocation routines that
882 can reclaim and reallocate normal heap memory. Using a
883 small value allows transition into this mode after the
884 first few allocations.
886 Setting to 0 disables all use of mmap. If HAVE_MMAP is not set,
887 the default value is 0, and attempts to set it to non-zero values
888 in mallopt will fail.
896 Special defines for linux libc
898 Except when compiled using these special defines for Linux libc
899 using weak aliases, this malloc is NOT designed to work in
900 multithreaded applications. No semaphores or other concurrency
901 control are provided to ensure that multiple malloc or free calls
902 don't run at the same time, which could be disastrous. A single
903 semaphore could be used across malloc, realloc, and free (which is
904 essentially the effect of the linux weak alias approach). It would
905 be hard to obtain finer granularity.
910 #ifdef INTERNAL_LINUX_C_LIB
914 Void_t* __default_morecore_init (ptrdiff_t);
915 Void_t* (*__morecore)(ptrdiff_t) = __default_morecore_init;
919 Void_t* __default_morecore_init ();
920 Void_t* (*__morecore)() = __default_morecore_init;
924 #define MORECORE (*__morecore)
925 #define MORECORE_FAILURE 0
926 #define MORECORE_CLEARS 1
928 #else /* INTERNAL_LINUX_C_LIB */
931 /* extern Void_t* sbrk(ptrdiff_t);*/
933 extern Void_t* sbrk();
937 #define MORECORE sbrk
940 #ifndef MORECORE_FAILURE
941 #define MORECORE_FAILURE -1
944 #ifndef MORECORE_CLEARS
945 #define MORECORE_CLEARS 0
948 #endif /* INTERNAL_LINUX_C_LIB */
950 #if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__)
952 #define cALLOc __libc_calloc
953 #define fREe __libc_free
954 #define mALLOc __libc_malloc
955 #define mEMALIGn __libc_memalign
956 #define rEALLOc __libc_realloc
957 #define vALLOc __libc_valloc
958 #define pvALLOc __libc_pvalloc
959 #define mALLINFo __libc_mallinfo
960 #define mALLOPt __libc_mallopt
962 #pragma weak calloc = __libc_calloc
963 #pragma weak free = __libc_free
964 #pragma weak cfree = __libc_free
965 #pragma weak malloc = __libc_malloc
966 #pragma weak memalign = __libc_memalign
967 #pragma weak realloc = __libc_realloc
968 #pragma weak valloc = __libc_valloc
969 #pragma weak pvalloc = __libc_pvalloc
970 #pragma weak mallinfo = __libc_mallinfo
971 #pragma weak mallopt = __libc_mallopt
976 #define cALLOc calloc
982 #define mALLOc malloc
985 #define mEMALIGn memalign
988 #define rEALLOc realloc
991 #define vALLOc valloc
994 #define pvALLOc pvalloc
997 #define mALLINFo mallinfo
1000 #define mALLOPt mallopt
1005 /* Public routines */
1008 #define malloc(size) malloc_dbg(size, __FILE__, __LINE__)
1009 #define free(p) free_dbg(p, __FILE__, __LINE__)
1010 #define realloc(p, size) realloc_dbg(p, size, __FILE__, __LINE__)
1011 #define calloc(n, size) calloc_dbg(n, size, __FILE__, __LINE__)
1012 #define memalign(align, size) memalign_dbg(align, size, __FILE__, __LINE__)
1013 #define valloc(size) valloc_dbg(size, __FILE__, __LINE__)
1014 #define pvalloc(size) pvalloc_dbg(size, __FILE__, __LINE__)
1015 #define cfree(p) cfree_dbg(p, __FILE__, __LINE__)
1016 #define malloc_trim(pad) malloc_trim_dbg(pad, __FILE__, __LINE__)
1017 #define malloc_usable_size(p) malloc_usable_size_dbg(p, __FILE__, __LINE__)
1018 #define malloc_stats(void) malloc_stats_dbg(__FILE__, __LINE__)
1019 #define mallopt(flag, val) mallopt_dbg(flag, val, __FILE__, __LINE__)
1020 #define mallinfo(void) mallinfo_dbg(__FILE__, __LINE__)
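The wrapping pattern used by these macros is worth seeing in miniature: each call site is rewritten to pass its own __FILE__/__LINE__ to a _dbg variant. The names below (xmalloc, xmalloc_dbg) are hypothetical, not part of this allocator:

```c
#include <stdio.h>
#include <stdlib.h>

/* A _dbg variant receives the call site's location, so diagnostics
   can name the exact file and line that made the request. */
static void *xmalloc_dbg(size_t n, const char *file, int line)
{
  void *p = malloc(n);
  if (p == NULL)
    fprintf(stderr, "%s:%d: malloc(%lu) failed\n", file, line,
            (unsigned long)n);
  return p;
}

/* The macro makes every caller pass its location automatically. */
#define xmalloc(n) xmalloc_dbg((n), __FILE__, __LINE__)
```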
1023 Void_t* malloc_dbg(size_t, const char *, int);
1024 void free_dbg(Void_t*, const char *, int);
1025 Void_t* realloc_dbg(Void_t*, size_t, const char *, int);
1026 Void_t* calloc_dbg(size_t, size_t, const char *, int);
1027 Void_t* memalign_dbg(size_t, size_t, const char *, int);
1028 Void_t* valloc_dbg(size_t, const char *, int);
1029 Void_t* pvalloc_dbg(size_t, const char *, int);
1030 void cfree_dbg(Void_t*, const char *, int);
1031 int malloc_trim_dbg(size_t, const char *, int);
1032 size_t malloc_usable_size_dbg(Void_t*, const char *, int);
1033 void malloc_stats_dbg(const char *, int);
1034 int mallopt_dbg(int, int, const char *, int);
1035 struct mallinfo mallinfo_dbg(const char *, int);
1037 Void_t* malloc_dbg();
1039 Void_t* realloc_dbg();
1040 Void_t* calloc_dbg();
1041 Void_t* memalign_dbg();
1042 Void_t* valloc_dbg();
1043 Void_t* pvalloc_dbg();
1045 int malloc_trim_dbg();
1046 size_t malloc_usable_size_dbg();
1047 void malloc_stats_dbg();
1049 struct mallinfo mallinfo_dbg();
1050 #endif /* !__STD_C */
Void_t* mALLOc(size_t);
Void_t* rEALLOc(Void_t*, size_t);
Void_t* cALLOc(size_t, size_t);
Void_t* mEMALIGn(size_t, size_t);
Void_t* vALLOc(size_t);
Void_t* pvALLOc(size_t);
void cfree(Void_t*);
int malloc_trim(size_t);
size_t malloc_usable_size(Void_t*);
void malloc_stats(void);
int mALLOPt(int, int);
struct mallinfo mALLINFo(void);

size_t malloc_usable_size();
void malloc_stats();
struct mallinfo mALLINFo();
#endif /* !DEBUG2 */

}; /* end of extern "C" */

/* ---------- To make a malloc.h, end cutting here ------------ */
#undef malloc_usable_size

Void_t* mALLOc(size_t);
Void_t* rEALLOc(Void_t*, size_t);
Void_t* cALLOc(size_t, size_t);
Void_t* mEMALIGn(size_t, size_t);
Void_t* vALLOc(size_t);
Void_t* pvALLOc(size_t);
void cfree(Void_t*);
int malloc_trim(size_t);
size_t malloc_usable_size(Void_t*);
void malloc_stats(void);
int mALLOPt(int, int);
struct mallinfo mALLINFo(void);

size_t malloc_usable_size();
void malloc_stats();
struct mallinfo mALLINFo();

#include <ctype.h>  /* isprint() */
#include <stdlib.h> /* atexit() */

}; /* end of extern "C" */
  Emulation of sbrk for WIN32.
  All code within the ifdef WIN32 is untested by me.

#define AlignPage(add) (((add) + (malloc_getpagesize-1)) & \
~(malloc_getpagesize-1))

/* reserve 64MB to ensure large contiguous space */
#define RESERVED_SIZE (1024*1024*64)
#define NEXT_SIZE (2048*1024)
#define TOP_MEMORY ((unsigned long)2*1024*1024*1024)
struct GmListElement;
typedef struct GmListElement GmListElement;

struct GmListElement
{
	GmListElement* next;
	void* base;
};

static GmListElement* head = 0;
static unsigned int gNextAddress = 0;
static unsigned int gAddressBase = 0;
static unsigned int gAllocatedSize = 0;

static GmListElement* makeGmListElement (void* bas)
{
	GmListElement* this;
	this = (GmListElement*)(void*)LocalAlloc (0, sizeof (GmListElement));
	if (this)
	{
		this->base = bas;
		this->next = head;
		head = this;
	}
	return this;
}
	ASSERT ( (head == NULL) || (head->base == (void*)gAddressBase));
	if (gAddressBase && (gNextAddress - gAddressBase))
	{
		rval = VirtualFree ((void*)gAddressBase,
							gNextAddress - gAddressBase,
							MEM_DECOMMIT);
	}
	while (head)
	{
		GmListElement* next = head->next;
		rval = VirtualFree (head->base, 0, MEM_RELEASE);
		head = next;
	}
void* findRegion (void* start_address, unsigned long size)
{
	MEMORY_BASIC_INFORMATION info;

	while ((unsigned long)start_address < TOP_MEMORY)
	{
		VirtualQuery (start_address, &info, sizeof (info));
		if (info.State != MEM_FREE)
			start_address = (char*)info.BaseAddress + info.RegionSize;
		else if (info.RegionSize >= size)
			return start_address;
		else
			start_address = (char*)info.BaseAddress + info.RegionSize;
	}
	return NULL;
}
void* wsbrk (long size)
{
	void* tmp;
	if (size > 0)
	{
		if (gAddressBase == 0)
		{
			gAllocatedSize = max (RESERVED_SIZE, AlignPage (size));
			gNextAddress = gAddressBase =
				(unsigned int)VirtualAlloc (NULL, gAllocatedSize,
											MEM_RESERVE, PAGE_NOACCESS);
		} else if (AlignPage (gNextAddress + size) > (gAddressBase +
			gAllocatedSize))
		{
			long new_size = max (NEXT_SIZE, AlignPage (size));
			void* new_address = (void*)(gAddressBase+gAllocatedSize);
			do
			{
				new_address = findRegion (new_address, new_size);

				if (new_address == 0)
					return (void*)-1;

				gAddressBase = gNextAddress =
					(unsigned int)VirtualAlloc (new_address, new_size,
												MEM_RESERVE, PAGE_NOACCESS);
				// repeat in case of race condition
				// The region that we found has been snagged
				// by another thread
			}
			while (gAddressBase == 0);

			ASSERT (new_address == (void*)gAddressBase);

			gAllocatedSize = new_size;

			if (!makeGmListElement ((void*)gAddressBase))
				return (void*)-1;
		}
		if ((size + gNextAddress) > AlignPage (gNextAddress))
		{
			void* res;
			res = VirtualAlloc ((void*)AlignPage (gNextAddress),
								(size + gNextAddress -
								 AlignPage (gNextAddress)),
								MEM_COMMIT, PAGE_READWRITE);
		}
		tmp = (void*)gNextAddress;
		gNextAddress = (unsigned int)tmp + size;
		return tmp;
	}
	else if (size < 0)
	{
		unsigned int alignedGoal = AlignPage (gNextAddress + size);
		/* Trim by releasing the virtual memory */
		if (alignedGoal >= gAddressBase)
		{
			VirtualFree ((void*)alignedGoal, gNextAddress - alignedGoal,
						 MEM_DECOMMIT);
			gNextAddress = gNextAddress + size;
			return (void*)gNextAddress;
		}
		else
		{
			VirtualFree ((void*)gAddressBase, gNextAddress - gAddressBase,
						 MEM_DECOMMIT);
			gNextAddress = gAddressBase;
		}
	}
	return (void*)gNextAddress;
}
# define MOATWIDTH 4 /* number of guard bytes at each end of
                        allocated regions */
# define MOATFILL 5  /* moat fill character */
# define ALLOCFILL 1 /* fill char for allocated */
# define FREEFILL 2  /* and freed regions */

typedef struct malloc_chunk
{
  INTERNAL_SIZE_T prev_size;  /* Size of previous chunk (if free). */
  INTERNAL_SIZE_T size;       /* Size in bytes, including overhead. */
  struct malloc_chunk* fd;    /* double links -- used only if free. */
  struct malloc_chunk* bk;
  const char *file;           /* file and */
  int line;                   /* line number of [re]allocation */
  size_t pad;                 /* nr pad bytes at mem end, excluding moat */
  int alloced;                /* whether the chunk is allocated -- less prone
                                 to segv than inuse(chunk) */
  char moat[MOATWIDTH];       /* actual leading moat is last MOATWIDTH bytes
                                 of chunk header; those bytes may follow this
                                 field due to header alignment padding */
} Chunk;

typedef Chunk* mchunkptr;
  malloc_chunk details:

  (The following includes lightly edited explanations by Colin Plumb.)

  Chunks of memory are maintained using a `boundary tag' method as
  described in e.g., Knuth or Standish. (See the paper by Paul
  Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
  survey of such techniques.) Sizes of free chunks are stored both
  in the front of each chunk and at the end. This makes
  consolidating fragmented chunks into bigger chunks very fast. The
  size fields also hold bits representing whether chunks are free or
  in use.

  An allocated chunk looks like this:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |          Size of previous chunk, if allocated               | |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |          Size of chunk, in bytes                            |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |          User data starts here...                             .
            .          (malloc_usable_space() bytes)                        .
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  Where "chunk" is the front of the chunk for the purpose of most of
  the malloc code, but "mem" is the pointer that is returned to the
  user. "Nextchunk" is the beginning of the next contiguous chunk.

  Chunks always begin on even word boundaries, so the mem portion
  (which is returned to the user) is also on an even word boundary, and
  thus double-word aligned.

  Free chunks are stored in circular doubly-linked lists, and look like this:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |          Size of previous chunk                               |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |          Size of chunk, in bytes                            |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |          Forward pointer to next chunk in list                |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |          Back pointer to previous chunk in list               |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |          Unused space (may be 0 bytes long)                   .
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |          Size of chunk, in bytes                              |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  The P (PREV_INUSE) bit, stored in the unused low-order bit of the
  chunk size (which is always a multiple of two words), is an in-use
  bit for the *previous* chunk. If that bit is *clear*, then the
  word before the current chunk size contains the previous chunk
  size, and can be used to find the front of the previous chunk.
  (The very first chunk allocated always has this bit set,
  preventing access to non-existent (or non-owned) memory.)

  Note that the `foot' of the current chunk is actually represented
  as the prev_size of the NEXT chunk. (This makes it easier to
  deal with alignments etc).

  The two exceptions to all this are

   1. The special chunk `top', which doesn't bother using the
      trailing size field since there is no
      next contiguous chunk that would have to index off it. (After
      initialization, `top' is forced to always exist. If it would
      become less than MINSIZE bytes long, it is replenished via
      malloc_extend_top.)

   2. Chunks allocated via mmap, which have the second-lowest-order
      bit (IS_MMAPPED) set in their size fields. Because they are
      never merged or traversed from any other chunk, they have no
      foot size or inuse information.

  Available chunks are kept in any of several places (all declared below):

  * `av': An array of chunks serving as bin headers for consolidated
    chunks. Each bin is doubly linked. The bins are approximately
    proportionally (log) spaced. There are a lot of these bins
    (128). This may look excessive, but works very well in
    practice. All procedures maintain the invariant that no
    consolidated chunk physically borders another one. Chunks in
    bins are kept in size order, with ties going to the
    approximately least recently used chunk.

    The chunks in each bin are maintained in decreasing sorted order by
    size. This is irrelevant for the small bins, which all contain
    the same-sized chunks, but facilitates best-fit allocation for
    larger chunks. (These lists are just sequential. Keeping them in
    order almost never requires enough traversal to warrant using
    fancier ordered data structures.) Chunks of the same size are
    linked with the most recently freed at the front, and allocations
    are taken from the back. This results in LRU or FIFO allocation
    order, which tends to give each chunk an equal opportunity to be
    consolidated with adjacent freed chunks, resulting in larger free
    chunks and less fragmentation.

  * `top': The top-most available chunk (i.e., the one bordering the
    end of available memory) is treated specially. It is never
    included in any bin, is used only if no other chunk is
    available, and is released back to the system if it is very
    large (see M_TRIM_THRESHOLD).

  * `last_remainder': A bin holding only the remainder of the
    most recently split (non-top) chunk. This bin is checked
    before other non-fitting chunks, so as to provide better
    locality for runs of sequentially allocated chunks.

  * Implicitly, through the host system's memory mapping tables.
    If supported, requests greater than a threshold are usually
    serviced via calls to mmap, and then later released via munmap.
/* sizes, alignments */

#define SIZE_SZ sizeof(INTERNAL_SIZE_T)
#define ALIGNMENT (SIZE_SZ + SIZE_SZ)
#define ALIGN_MASK (ALIGNMENT - 1)

# define MEMOFFSET (2*SIZE_SZ)
# define OVERHEAD SIZE_SZ
# define MMAP_EXTRA SIZE_SZ /* for correct alignment */
# define MINSIZE sizeof(Chunk)

  char strut[(sizeof(Chunk) - 1) / ALIGNMENT + 1][ALIGNMENT];

# define MEMOFFSET sizeof(PaddedChunk)
# define OVERHEAD (MEMOFFSET + MOATWIDTH)
# define MMAP_EXTRA 0
# define MINSIZE ((OVERHEAD + ALIGN_MASK) & ~ALIGN_MASK)

/* conversion from malloc headers to user pointers, and back */

#define chunk2mem(p) ((Void_t*)((char*)(p) + MEMOFFSET))
#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - MEMOFFSET))

/* pad request bytes into a usable size, including overhead */

#define request2size(req) \
 ((long)((req) + OVERHEAD) < (long)MINSIZE ? MINSIZE : \
  ((req) + OVERHEAD + ALIGN_MASK) & ~ALIGN_MASK)

/* Check if m has acceptable alignment */

#define aligned_OK(m) (((unsigned long)((m)) & ALIGN_MASK) == 0)
  Physical chunk operations

/* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */

#define PREV_INUSE 0x1

/* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */

#define IS_MMAPPED 0x2

/* Bits to mask off when extracting size */

#define SIZE_BITS (PREV_INUSE|IS_MMAPPED)

/* Ptr to next physical malloc_chunk. */

#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))

/* Ptr to previous physical malloc_chunk */

#define prev_chunk(p)\
 ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))

/* Treat space at ptr + offset as a chunk */

#define chunk_at_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))

  Dealing with use bits

/* extract p's inuse bit */

#define inuse(p)\
 ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)

/* extract inuse bit of previous chunk */

#define prev_inuse(p) ((p)->size & PREV_INUSE)

/* check for mmap()'ed chunk */

# define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
# define chunk_is_mmapped(p) 0

/* set/clear chunk as in use without otherwise disturbing */

#define set_inuse(p)\
 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE

#define clear_inuse(p)\
 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)

/* check/set/clear inuse bits in known places */

#define inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)

#define set_inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)

#define clear_inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))

  Dealing with size fields

/* Get size, ignoring use bits */

#define chunksize(p) ((p)->size & ~(SIZE_BITS))

/* Set size at head, without disturbing its use bit */

#define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s)))

/* Set size/use ignoring previous bits in header */

#define set_head(p, s) ((p)->size = (s))

/* Set size at footer (only when chunk is not in use) */

#define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
  The bins, `av_' are an array of pairs of pointers serving as the
  heads of (initially empty) doubly-linked lists of chunks, laid out
  in a way so that each pair can be treated as if it were in a
  malloc_chunk. (This way, the fd/bk offsets for linking bin heads
  and chunks are the same).

  Bins for sizes < 512 bytes contain chunks of all the same size, spaced
  8 bytes apart. Larger bins are approximately logarithmically
  spaced. (See the table below.) The `av_' array is never mentioned
  directly in the code, but instead via bin access macros.

    4 bins of size   32768
    2 bins of size  262144
    1 bin  of size what's left

  There is actually a little bit of slop in the numbers in bin_index
  for the sake of speed. This makes no difference elsewhere.

  The special chunks `top' and `last_remainder' get their own bins,
  (this is implemented via yet more trickery with the av_ array),
  although `top' is never properly linked to its bin since it is
  always handled specially.

#define NAV 128 /* number of bins */

typedef Chunk* mbinptr;

#define bin_at(i) ((mbinptr)((char*)&(av_[2*(i) + 2]) - 2*SIZE_SZ))
#define next_bin(b) ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
#define prev_bin(b) ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))

  The first 2 bins are never indexed. The corresponding av_ cells are instead
  used for bookkeeping. This is not to save space, but to simplify
  indexing, maintain locality, and avoid some initialization tests.

#define top (bin_at(0)->fd) /* The topmost chunk */
#define last_remainder (bin_at(1)) /* remainder from last split */

  Because top initially points to its own bin with initial
  zero size, thus forcing extension on the first malloc request,
  we avoid having any special code in malloc to check whether
  it even exists yet. But we still need to in malloc_extend_top.

#define initial_top ((mchunkptr)(bin_at(0)))

/* Helper macro to initialize bins */

#define IAV(i) bin_at(i), bin_at(i)

static mbinptr av_[NAV * 2 + 2] = {
 IAV(0), IAV(1), IAV(2), IAV(3), IAV(4), IAV(5), IAV(6), IAV(7),
 IAV(8), IAV(9), IAV(10), IAV(11), IAV(12), IAV(13), IAV(14), IAV(15),
 IAV(16), IAV(17), IAV(18), IAV(19), IAV(20), IAV(21), IAV(22), IAV(23),
 IAV(24), IAV(25), IAV(26), IAV(27), IAV(28), IAV(29), IAV(30), IAV(31),
 IAV(32), IAV(33), IAV(34), IAV(35), IAV(36), IAV(37), IAV(38), IAV(39),
 IAV(40), IAV(41), IAV(42), IAV(43), IAV(44), IAV(45), IAV(46), IAV(47),
 IAV(48), IAV(49), IAV(50), IAV(51), IAV(52), IAV(53), IAV(54), IAV(55),
 IAV(56), IAV(57), IAV(58), IAV(59), IAV(60), IAV(61), IAV(62), IAV(63),
 IAV(64), IAV(65), IAV(66), IAV(67), IAV(68), IAV(69), IAV(70), IAV(71),
 IAV(72), IAV(73), IAV(74), IAV(75), IAV(76), IAV(77), IAV(78), IAV(79),
 IAV(80), IAV(81), IAV(82), IAV(83), IAV(84), IAV(85), IAV(86), IAV(87),
 IAV(88), IAV(89), IAV(90), IAV(91), IAV(92), IAV(93), IAV(94), IAV(95),
 IAV(96), IAV(97), IAV(98), IAV(99), IAV(100), IAV(101), IAV(102), IAV(103),
 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
};
/* field-extraction macros */

#define first(b) ((b)->fd)
#define last(b) ((b)->bk)

#define bin_index(sz) \
 (((((unsigned long)(sz)) >> 9) == 0) ? (((unsigned long)(sz)) >> 3): \
  ((((unsigned long)(sz)) >> 9) <= 4) ? 56 + (((unsigned long)(sz)) >> 6): \
  ((((unsigned long)(sz)) >> 9) <= 20) ? 91 + (((unsigned long)(sz)) >> 9): \
  ((((unsigned long)(sz)) >> 9) <= 84) ? 110 + (((unsigned long)(sz)) >> 12): \
  ((((unsigned long)(sz)) >> 9) <= 340) ? 119 + (((unsigned long)(sz)) >> 15): \
  ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18): \
  126)

  bins for chunks < 512 are all spaced 8 bytes apart, and hold
  identically sized chunks. This is exploited in malloc.

#define MAX_SMALLBIN 63
#define MAX_SMALLBIN_SIZE 512
#define SMALLBIN_WIDTH 8

#define smallbin_index(sz) (((unsigned long)(sz)) >> 3)

  Requests are `small' if both the corresponding and the next bin are small

#define is_small_request(nb) (nb < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
  To help compensate for the large number of bins, a one-level index
  structure is used for bin-by-bin searching. `binblocks' is a
  one-word bitvector recording whether groups of BINBLOCKWIDTH bins
  have any (possibly) non-empty bins, so they can be skipped over
  all at once during traversals. The bits are NOT always
  cleared as soon as all bins in a block are empty, but instead only
  when all are noticed to be empty during traversal in malloc.

#define BINBLOCKWIDTH 4 /* bins per block */

#define binblocks (bin_at(0)->size) /* bitvector of nonempty blocks */

/* bin<->block macros */

#define idx2binblock(ix) ((unsigned)1 << (ix / BINBLOCKWIDTH))
#define mark_binblock(ii) (binblocks |= idx2binblock(ii))
#define clear_binblock(ii) (binblocks &= ~(idx2binblock(ii)))
/* Other static bookkeeping data */

/* variables holding tunable values */

static unsigned long trim_threshold = DEFAULT_TRIM_THRESHOLD;
static unsigned long top_pad = DEFAULT_TOP_PAD;
static unsigned int n_mmaps_max = DEFAULT_MMAP_MAX;
static unsigned long mmap_threshold = DEFAULT_MMAP_THRESHOLD;
static int scanheap = 1;

/* The first value returned from sbrk */
static char* sbrk_base = (char*)(-1);

/* The maximum memory obtained from system via sbrk */
static unsigned long max_sbrked_mem = 0;

/* The maximum via either sbrk or mmap */
static unsigned long max_total_mem = 0;

/* internal working copy of mallinfo */
static struct mallinfo current_mallinfo = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };

/* The total memory obtained from system via sbrk */
#define sbrked_mem (current_mallinfo.arena)

/* Tracking mmaps */

static unsigned int n_mmaps = 0;
static unsigned long mmapped_mem = 0;
static unsigned int max_n_mmaps = 0;
static unsigned long max_mmapped_mem = 0;
# define unless(cond, err, p) assert(cond)
# define unless(cond, err, p) do { if (!(cond)) malloc_err(err, p); } while (0)

/*
 * When debug_file is non-null, it and debug_line respectively contain the
 * file and line number of the current invocation of malloc(), calloc(),
 * realloc(), or free().
 */
static const char *debug_file = NULL;
static int debug_line;

/*
 * Avoid dereferencing invalid chunk.file pointers by tracking the range of
 * valid ones. Could add an "unallocated" flag to init_freed_chunk() for
 * more protection, but that's probably not necessary.
 */
static const char *debug_file_min = (char *)~0;
static const char *debug_file_max = NULL;

static char *itos(int n)
{
#define NDIGITS (sizeof(int) * 3)
  static char s[NDIGITS + 1];

  s[--i] = '0' + n % 10;

static int recurs = 0;

static void errprint(const char *file, int line, const char *err)
{
  write(2, file, strlen(file));
  write(2, itos(line), strlen(itos(line)));
  write(2, err, strlen(err));

static void malloc_err(const char *err, mchunkptr p)
{
  /*
   * Display ERR on stderr, accompanying it with the caller's file and line
   * number if available. If P is non-null, also attempt to display the file
   * and line number at which P was most recently [re]allocated.
   *
   * This function's name begins with "malloc_" to make setting debugger
   * breakpoints here more convenient.
   */
  errprint(debug_file, debug_line, err);

  p = 0; /* avoid "unused param" warning */

  /* avoid invalid pointers */
  p->file >= debug_file_min &&
  p->file <= debug_file_max &&
  /* try to avoid garbage file names */
  errprint(p->file, p->line, "in block allocated here");
#undef malloc_usable_size

static void malloc_update_mallinfo(void);

/*
 * Define front-end functions for all user-visible entry points that may
 */
#define skel(retdecl, retassign, call, retstmt) \
  debug_file = file; \
  debug_line = line; \
  if (debug_file < debug_file_min) \
    debug_file_min = debug_file; \
  if (debug_file > debug_file_max) \
    debug_file_max = debug_file; \
  malloc_update_mallinfo(); \
  malloc_update_mallinfo(); \
  debug_file = NULL; \

/*
 * The final letter of the names of the following macros is either r or v,
 * indicating that the macro handles functions with or without a return value,
 */
# define skelr(rettype, call) \
  skel(rettype ret;, ret = , call, return ret)

/*
 * AIX's xlc compiler doesn't like empty macro args, so specify useless but
 * compilable retdecl, retassign, and retstmt args:
 */
#define skelv(call) \
  skel(line += 0;, if (1), call, return)

#define dbgargs const char *file, int line

/*
 * Front-end function definitions:
 */
Void_t* malloc_dbg(size_t bytes, dbgargs) {
  skelr(Void_t*, malloc(bytes));
}

void free_dbg(Void_t *mem, dbgargs) {
  skelv(free(mem));
}

Void_t* realloc_dbg(Void_t *oldmem, size_t bytes, dbgargs) {
  skelr(Void_t*, realloc(oldmem, bytes));
}

Void_t* memalign_dbg(size_t alignment, size_t bytes, dbgargs) {
  skelr(Void_t*, memalign(alignment, bytes));
}

Void_t* valloc_dbg(size_t bytes, dbgargs) {
  skelr(Void_t*, valloc(bytes));
}

Void_t* pvalloc_dbg(size_t bytes, dbgargs) {
  skelr(Void_t*, pvalloc(bytes));
}

Void_t* calloc_dbg(size_t n, size_t elem_size, dbgargs) {
  skelr(Void_t*, calloc(n, elem_size));
}

void cfree_dbg(Void_t *mem, dbgargs) {
  skelv(cfree(mem));
}

int malloc_trim_dbg(size_t pad, dbgargs) {
  skelr(int, malloc_trim(pad));
}

size_t malloc_usable_size_dbg(Void_t *mem, dbgargs) {
  skelr(size_t, malloc_usable_size(mem));
}

void malloc_stats_dbg(dbgargs) {
  skelv(malloc_stats());
}

int mallopt_dbg(int flag, int value, dbgargs) {
  skelr(int, mallopt(flag, value));
}

struct mallinfo mallinfo_dbg(dbgargs) {
  skelr(struct mallinfo, mallinfo());
}
  These routines make a number of assertions about the states
  of data structures that should be true at all times. If any
  are not true, it's very likely that a user program has somehow
  trashed memory. (It's also possible that there is a coding error
  in malloc. In which case, please report it!)

static int memtest(void *s, int c, size_t n)
{
  /*
   * Return whether the N-byte memory region starting at S consists
   * entirely of bytes with value C.
   */
  unsigned char *p = (unsigned char *)s;
  size_t i;

  for (i = 0; i < n; i++)
    if (p[i] != (unsigned char)c)
      return 0;
  return 1;
}
#define check_moats(P)
#define check_moats do_check_moats
static void do_check_moats(mchunkptr p)
{
  INTERNAL_SIZE_T sz = chunksize(p);

  unless(memtest((char *)chunk2mem(p) - MOATWIDTH, MOATFILL,
                 MOATWIDTH), "region underflow", p);
  unless(memtest((char *)p + sz - MOATWIDTH - p->pad, MOATFILL,
                 MOATWIDTH + p->pad), "region overflow", p);
}
static void do_check_chunk(mchunkptr p)
static void do_check_chunk(p) mchunkptr p;
{
  /* Try to ensure legal addresses before accessing any chunk fields, in the
   * hope of issuing an informative message rather than causing a segv.
   *
   * The following chunk_is_mmapped() call accesses p->size #if HAVE_MMAP.
   * This is unavoidable without maintaining a record of mmapped regions.
   */
  if (!chunk_is_mmapped(p))
  {
    unless((char*)p >= sbrk_base, "chunk precedes sbrk_base", p);
    unless((char*)p + MINSIZE <= (char*)top + chunksize(top),
           "chunk past sbrk area", p);

    unless((char*)p + sz <= (char*)top, "chunk extends beyond top", p);
    unless((char*)p + sz <= sbrk_base + sbrked_mem,
           "chunk extends past sbrk area", p);

static void do_check_free_chunk(mchunkptr p)
static void do_check_free_chunk(p) mchunkptr p;
{
  INTERNAL_SIZE_T sz = chunksize(p);
  mchunkptr next = chunk_at_offset(p, sz);

  /* Check whether it claims to be free ... */
  unless(!inuse(p), "free chunk marked inuse", p);

  /* Unless a special marker, must have OK fields */
  if ((long)sz >= (long)MINSIZE)
  {
    unless((sz & ALIGN_MASK) == 0, "freed size defies alignment", p);
    unless(aligned_OK(chunk2mem(p)), "misaligned freed region", p);
    /* ... matching footer field */
    unless(next->prev_size == sz, "chunk size mismatch", p);
    /* ... and is fully consolidated */
    unless(prev_inuse(p), "free chunk not joined with prev", p);
    unless(next == top || inuse(next), "free chunk not joined with next", p);

    /* ... and has minimally sane links */
    unless(p->fd->bk == p, "broken forward link", p);
    unless(p->bk->fd == p, "broken backward link", p);
  }
  else /* markers are always of size SIZE_SZ */
    unless(sz == SIZE_SZ, "invalid small chunk size", p);
static void do_check_inuse_chunk(mchunkptr p)
static void do_check_inuse_chunk(p) mchunkptr p;
{
  mchunkptr next;

  if (chunk_is_mmapped(p))
    return;

  /* Check whether it claims to be in use ... */
  unless(p->alloced, "memory not allocated", p);
  unless(inuse(p), "memory not allocated", p);

  /* ... and is surrounded by OK chunks.
     Since more things can be checked with free chunks than inuse ones,
     if an inuse chunk borders them and debug is on, it's worth doing them.
  */
    mchunkptr prv = prev_chunk(p);
    unless(next_chunk(prv) == p, "prev link scrambled", p);
    do_check_free_chunk(prv);

  next = next_chunk(p);

    unless(prev_inuse(next), "top chunk wrongly thinks prev is unused", p);
    unless(chunksize(next) >= MINSIZE, "top chunk too small", p);
  else if (!inuse(next))
    do_check_free_chunk(next);

static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
{
  INTERNAL_SIZE_T sz = chunksize(p);
  long room = sz - s;

  do_check_inuse_chunk(p);

  /* Legal size ... */
  unless((long)sz >= (long)MINSIZE, "chunk size too small", p);
  unless((sz & ALIGN_MASK) == 0, "malloced size defies alignment", p);
  unless(room >= 0, "chunk size too small for contents", p);
  unless(room < (long)MINSIZE, "chunk size leaves too much spare room", p);

  /* ... and alignment */
  unless(aligned_OK(chunk2mem(p)), "misaligned malloced region", p);

  /* ... and was allocated at front of an available chunk */
  unless(prev_inuse(p), "malloced from the middle of a free chunk", p);
static void init_alloced_chunk(mchunkptr p, size_t bytes)
{
  Void_t* mem = chunk2mem(p);
  p->file = debug_file;
  p->line = debug_line;
  p->pad = chunksize(p) - OVERHEAD - bytes;
  memset((char *)mem + bytes, MOATFILL, p->pad + MOATWIDTH);
}

static void do_init_malloced_chunk(mchunkptr p, size_t bytes)
{
  Void_t* mem = chunk2mem(p);
  init_alloced_chunk(p, bytes);
  memset((char *)mem - MOATWIDTH, MOATFILL, MOATWIDTH);
  memset(mem, ALLOCFILL, bytes);
}

static void do_init_realloced_chunk(mchunkptr p, size_t bytes,
                                    INTERNAL_SIZE_T oldsize)
{
  Void_t* mem = chunk2mem(p);
  INTERNAL_SIZE_T newsize = chunksize(p);
  init_alloced_chunk(p, bytes);
  if (oldsize < newsize)
    /* This incorrectly leaves the leading pad area of the old trailing moat
     * set to MOATFILL rather than ALLOCFILL. An alternative is to save the
     * old p->pad in rEALLOc() below and pass it to this function.
     */
    memset((char *)mem + oldsize - OVERHEAD, ALLOCFILL,
           bytes - (oldsize - OVERHEAD));
}
static void do_check_freefill(mchunkptr p, long newsize,
                              INTERNAL_SIZE_T oldsize)
{
  /* The first newsize bytes of oldsize-byte chunk p are about to be
   * allocated. Issue a warning if any freefill locations in p that are about
   * to be overwritten do not contain the character FREEFILL.
   */
  size_t bytes, maxbytes;

  bytes = newsize - MEMOFFSET  /* don't check p's header */
          + MEMOFFSET;         /* header of split-off remainder */
  maxbytes = oldsize - OVERHEAD;
  if (bytes > maxbytes)
    bytes = maxbytes;
  unless(memtest(chunk2mem(p), FREEFILL, bytes),
         "detected write to freed region", p);
}

static void do_init_freed_chunk(mchunkptr p, INTERNAL_SIZE_T freehead,
                                INTERNAL_SIZE_T freetail)
{
  /* freehead and freetail are the number of bytes at the beginning of p and
   * end of p respectively that should already be initialized as free regions.
   */
  Void_t* mem = chunk2mem(p);
  size_t size = chunksize(p);
  size_t bytes = size - OVERHEAD;

  memset((char *)mem - MOATWIDTH, MOATFILL, MOATWIDTH);
  memset((char *)mem + bytes, MOATFILL, MOATWIDTH);

  /* To avoid terrible O(n^2) performance when free() repeatedly grows a free
   * chunk, it's important not to free-fill regions that are already
   * free-filled.
   */
  if (freehead + freetail < size) {
    Void_t* start = !freehead ? mem : (char *)p + freehead - MOATWIDTH;
    size_t len = (char *)p + size - (char *)start -
                 (!freetail ? MOATWIDTH : freetail - OVERHEAD);
    memset(start, FREEFILL, len);
  }
}
static void do_init_freeable_chunk(mchunkptr p)
{
  /* Arrange for the subsequent fREe(p) not to generate any warnings. */
  init_alloced_chunk(p, chunksize(p) - OVERHEAD);
  memset((char *)chunk2mem(p) - MOATWIDTH, MOATFILL, MOATWIDTH);
}
static void do_maximize_chunk(mchunkptr p)
{
  Void_t* mem = chunk2mem(p);
  size_t bytes = chunksize(p) - OVERHEAD - p->pad;
  memset((char *)mem + bytes, ALLOCFILL, p->pad);
  p->pad = 0;
}
static int do_check_init(void)
{
  /* Called from the first invocation of malloc_extend_top(), as detected by
   * sbrk_base == -1.  Return whether this function allocated any memory. */
  static int state = 0;  /* 1 => initializing, 2 => initialized */

  unless(state == 0, "multiple calls to check_init", NULL);
  state = 1;
  atexit(malloc_update_mallinfo); /* calls malloc on WinNT */
  state = 2;
  return sbrk_base != (char *)-1;
}

static mchunkptr lowest_chunk;
#if DEBUG
#define check_free_chunk(P) do_check_free_chunk(P)
#define check_inuse_chunk(P) do_check_inuse_chunk(P)
#define check_chunk(P) do_check_chunk(P)
#define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N)
#else
#define check_free_chunk(P)
#define check_inuse_chunk(P)
#define check_chunk(P)
#define check_malloced_chunk(P,N)
#endif

#ifdef DEBUG3
#define check_init do_check_init
#define init_malloced_chunk do_init_malloced_chunk
#define init_realloced_chunk do_init_realloced_chunk
#define check_freefill do_check_freefill
#define init_freed_chunk do_init_freed_chunk
#define init_freeable_chunk do_init_freeable_chunk
#define maximize_chunk do_maximize_chunk
#else
#define check_init() 0
#define init_malloced_chunk(P,B)
#define init_realloced_chunk(P,B,O)
#define check_freefill(P,N,O)
#define init_freed_chunk(P,H,T)
#define init_freeable_chunk(P)
#define maximize_chunk(P)
#endif /* !DEBUG3 */
/*
  Macro-based internal utilities
*/

/*
  Linking chunks in bin lists.
  Call these only with variables, not arbitrary expressions, as arguments.
*/

/*
  Place chunk p of size s in its bin, in size order,
  putting it ahead of others of same size.
*/

#define frontlink(P, S, IDX, BK, FD)                                       \
{                                                                          \
  if (S < MAX_SMALLBIN_SIZE)                                               \
  {                                                                        \
    IDX = smallbin_index(S);                                               \
    mark_binblock(IDX);                                                    \
    BK = bin_at(IDX);                                                      \
    FD = BK->fd;                                                           \
    P->bk = BK;                                                            \
    P->fd = FD;                                                            \
    FD->bk = BK->fd = P;                                                   \
  }                                                                        \
  else                                                                     \
  {                                                                        \
    IDX = bin_index(S);                                                    \
    BK = bin_at(IDX);                                                      \
    FD = BK->fd;                                                           \
    if (FD == BK) mark_binblock(IDX);                                      \
    else                                                                   \
    {                                                                      \
      while (FD != BK && S < chunksize(FD)) FD = FD->fd;                   \
      BK = FD->bk;                                                         \
    }                                                                      \
    P->bk = BK;                                                            \
    P->fd = FD;                                                            \
    FD->bk = BK->fd = P;                                                   \
  }                                                                        \
}

/* take a chunk off a list */

#define unlink(P, BK, FD)                                                  \
{                                                                          \
  BK = P->bk;                                                              \
  FD = P->fd;                                                              \
  FD->bk = BK;                                                             \
  BK->fd = FD;                                                             \
}

/* Place p as the last remainder */

#define link_last_remainder(P)                                             \
{                                                                          \
  last_remainder->fd = last_remainder->bk = P;                             \
  P->fd = P->bk = last_remainder;                                          \
}

/* Clear the last_remainder bin */

#define clear_last_remainder \
  (last_remainder->fd = last_remainder->bk = last_remainder)
/* Routines dealing with mmap(). */

#if HAVE_MMAP

#if __STD_C
static mchunkptr mmap_chunk(size_t size)
#else
static mchunkptr mmap_chunk(size) size_t size;
#endif
{
  size_t page_mask = malloc_getpagesize - 1;
  mchunkptr p;

#ifndef MAP_ANONYMOUS
  static int fd = -1;
#endif

  if (n_mmaps >= n_mmaps_max) return 0; /* too many regions */

  size = (size + MMAP_EXTRA + page_mask) & ~page_mask;

#ifdef MAP_ANONYMOUS
  p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE,
                      MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
#else /* !MAP_ANONYMOUS */
  if (fd < 0)
  {
    fd = open("/dev/zero", O_RDWR);
    if (fd < 0) return 0;
  }
  p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
#endif

  if (p == (mchunkptr)-1) return 0;

  n_mmaps++;
  if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;

  /* We demand that eight bytes into a page must be 8-byte aligned. */
  assert(aligned_OK(chunk2mem(p)));

  /* The offset to the start of the mmapped region is stored
   * in the prev_size field of the chunk; normally it is zero,
   * but that can be changed in memalign().
   */
  p->prev_size = 0;
  set_head(p, size|IS_MMAPPED);

  mmapped_mem += size;
  if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
    max_mmapped_mem = mmapped_mem;
  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
    max_total_mem = mmapped_mem + sbrked_mem;
  return p;
}
#if __STD_C
static void munmap_chunk(mchunkptr p)
#else
static void munmap_chunk(p) mchunkptr p;
#endif
{
  INTERNAL_SIZE_T size = chunksize(p);
  int ret;

  assert(chunk_is_mmapped(p));
  assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
  assert(n_mmaps > 0);
  assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);

  n_mmaps--;
  mmapped_mem -= (size + p->prev_size);

  ret = munmap((char *)p - p->prev_size, size + p->prev_size);

  /* munmap returns non-zero on failure */
  assert(ret == 0);
}
#if HAVE_MREMAP

#if __STD_C
static mchunkptr mremap_chunk(mchunkptr p, size_t new_size)
#else
static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
#endif
{
  size_t page_mask = malloc_getpagesize - 1;
  INTERNAL_SIZE_T offset = p->prev_size;
  INTERNAL_SIZE_T size = chunksize(p);
  char *cp;

  assert(chunk_is_mmapped(p));
  assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
  assert(n_mmaps > 0);
  assert(((size + offset) & (malloc_getpagesize-1)) == 0);

  new_size = (new_size + offset + MMAP_EXTRA + page_mask) & ~page_mask;

  cp = (char *)mremap((char *)p - offset, size + offset, new_size, 1);

  if (cp == (char *)-1) return 0;

  p = (mchunkptr)(cp + offset);

  assert(aligned_OK(chunk2mem(p)));

  assert(p->prev_size == offset);
  set_head(p, (new_size - offset)|IS_MMAPPED);

  mmapped_mem -= size + offset;
  mmapped_mem += new_size;
  if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
    max_mmapped_mem = mmapped_mem;
  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
    max_total_mem = mmapped_mem + sbrked_mem;
  return p;
}

#endif /* HAVE_MREMAP */

#endif /* HAVE_MMAP */
/*
  Extend the top-most chunk by obtaining memory from system.
  Main interface to sbrk (but see also malloc_trim).
*/

#if __STD_C
static void malloc_extend_top(INTERNAL_SIZE_T nb)
#else
static void malloc_extend_top(nb) INTERNAL_SIZE_T nb;
#endif
{
  char*     lim;                  /* return value from sbrk */
  INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
  INTERNAL_SIZE_T correction;     /* bytes for 2nd sbrk call */
  char*     new_lim;              /* return of 2nd sbrk call */
  INTERNAL_SIZE_T top_size;       /* new size of top chunk */

  mchunkptr old_top = top;        /* Record state of old top */
  INTERNAL_SIZE_T old_top_size = chunksize(old_top);
  char*     old_end = (char*)(chunk_at_offset(old_top, old_top_size));

  /* Pad request with top_pad plus minimal overhead */

  INTERNAL_SIZE_T sbrk_size = nb + top_pad + MINSIZE;
  unsigned long pagesz = malloc_getpagesize;

  /* If not the first time through, round to preserve page boundary */
  /* Otherwise, we need to correct to a page size below anyway. */
  /* (We also correct below if an intervening foreign sbrk call.) */

  if (sbrk_base != (char*)(-1))
    sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
  else if (check_init()) {
    /* check_init() allocated memory, so top may already be big enough. */
    if (chunksize(top) - nb < (long)MINSIZE)
      malloc_extend_top(nb);
    return;
  }

  lim = (char*)(MORECORE (sbrk_size));

  /* Fail if sbrk failed or if a foreign sbrk call killed our space */
  if (lim == (char*)(MORECORE_FAILURE) ||
      (lim < old_end && old_top != initial_top))
    return;

  sbrked_mem += sbrk_size;

  if (lim == old_end) /* can just add bytes to current top */
  {
    top_size = sbrk_size + old_top_size;
    set_head(top, top_size | PREV_INUSE);
  }
  else
  {
    /* Count any space sbrk handed out beyond what we asked for. */
    INTERNAL_SIZE_T padding = (char *)sbrk (0) - (lim + sbrk_size);
    sbrk_size += padding;
    sbrked_mem += padding;

    if (sbrk_base == (char*)(-1)) /* First time through. Record base */
      sbrk_base = lim;
    else /* Someone else called sbrk().  Count those bytes as sbrked_mem. */
      sbrked_mem += lim - (char*)old_end;

    /* Guarantee alignment of first new chunk made from this space */
    front_misalign = (unsigned long)chunk2mem(lim) & ALIGN_MASK;
    if (front_misalign > 0)
    {
      correction = (ALIGNMENT) - front_misalign;
      lim += correction;
    }
    else
      correction = 0;

    /* Guarantee the next brk will be at a page boundary */
    correction += pagesz - ((unsigned long)(lim + sbrk_size) & (pagesz - 1));

    /* Allocate correction */
    new_lim = (char*)(MORECORE (correction));
    if (new_lim == (char*)(MORECORE_FAILURE)) return;

    sbrked_mem += correction;

    top = (mchunkptr)lim;
    top_size = new_lim - lim + correction;
    set_head(top, top_size | PREV_INUSE);

#ifdef OTHER_SBRKS
    if (old_top != initial_top)
    {
      /* There must have been an intervening foreign sbrk call. */
      /* A double fencepost is necessary to prevent consolidation */

      /* If not enough space to do this, then user did something very wrong */
      if (old_top_size < MINSIZE)
      {
        set_head(top, PREV_INUSE); /* will force null return from malloc */
        return;
      }

      old_top_size -= 2*SIZE_SZ;
      chunk_at_offset(old_top, old_top_size)->size =
        SIZE_SZ|PREV_INUSE;
      chunk_at_offset(old_top, old_top_size + SIZE_SZ)->size =
        SIZE_SZ|PREV_INUSE;
      set_head_size(old_top, old_top_size);
      /* If possible, release the rest. */
      if (old_top_size >= MINSIZE) {
        init_freeable_chunk(old_top);
        fREe(chunk2mem(old_top));
      }
    }
#endif /* OTHER_SBRKS */
  }

  init_freed_chunk(top, old_top == initial_top ? old_top_size : 0, 0);

  if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
    max_sbrked_mem = sbrked_mem;
  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
    max_total_mem = mmapped_mem + sbrked_mem;

  /* We always land on a page boundary */
  assert(((unsigned long)((char*)top + top_size) & (pagesz - 1)) == 0);
}
/* Main public routines */

/*
  Malloc Algorithm:

    The requested size is first converted into a usable form, `nb'.
    This currently means to add 4 bytes overhead plus possibly more to
    obtain 8-byte alignment and/or to obtain a size of at least
    MINSIZE (currently 16 bytes), the smallest allocatable size.
    (All fits are considered `exact' if they are within MINSIZE bytes.)

    From there, the first of the following steps to succeed is taken:

      1. The bin corresponding to the request size is scanned, and if
         a chunk of exactly the right size is found, it is taken.

      2. The most recently remaindered chunk is used if it is big
         enough.  This is a form of (roving) first fit, used only in
         the absence of exact fits.  Runs of consecutive requests use
         the remainder of the chunk used for the previous such request
         whenever possible.  This limited use of a first-fit style
         allocation strategy tends to give contiguous chunks
         coextensive lifetimes, which improves locality and can reduce
         fragmentation in the long run.

      3. Other bins are scanned in increasing size order, using a
         chunk big enough to fulfill the request, and splitting off
         any remainder.  This search is strictly by best-fit; i.e.,
         the smallest (with ties going to approximately the least
         recently used) chunk that fits is selected.

      4. If large enough, the chunk bordering the end of memory
         (`top') is split off.  (This use of `top' is in accord with
         the best-fit search rule.  In effect, `top' is treated as
         larger (and thus less well fitting) than any other available
         chunk, since it can be extended to be as large as necessary
         (up to system limitations).)

      5. If the request size meets the mmap threshold and the
         system supports mmap, and there are few enough currently
         allocated mmapped regions, and a call to mmap succeeds,
         the request is allocated via direct memory mapping.

      6. Otherwise, the top of memory is extended by
         obtaining more space from the system (normally using sbrk,
         but definable to anything else via the MORECORE macro).
         Memory is gathered from the system (in system page-sized
         units) in a way that allows chunks obtained across different
         sbrk calls to be consolidated, but does not require
         contiguous memory.  Thus, it should be safe to intersperse
         mallocs with other sbrk calls.

    All allocations are made from the `lowest' part of any found
    chunk.  (The implementation invariant is that prev_inuse is
    always true of any allocated chunk; i.e., that each allocated
    chunk borders either a previously allocated and still in-use chunk,
    or the base of its memory arena.)
*/
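/* Worked example of the size normalization described above, assuming the
 * 4-byte overhead, 8-byte alignment, and 16-byte MINSIZE quoted there
 * (illustrative only; the real values come from request2size()):
 *
 *   malloc(1)  -> nb = max(MINSIZE, align8(1 + 4))  = 16
 *   malloc(13) -> nb = max(MINSIZE, align8(13 + 4)) = 24
 *   malloc(28) -> nb = max(MINSIZE, align8(28 + 4)) = 32
 *
 * A free chunk is an `exact' fit for any of these if its size exceeds nb
 * by less than MINSIZE; larger surpluses are split off as a remainder.
 */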
#if __STD_C
Void_t* mALLOc(size_t bytes)
#else
Void_t* mALLOc(bytes) size_t bytes;
#endif
{
  mchunkptr victim;            /* inspected/selected chunk */
  INTERNAL_SIZE_T victim_size; /* its size */
  int       idx;               /* index for bin traversal */
  mbinptr   bin;               /* associated bin */
  mchunkptr remainder;         /* remainder from a split */
  long      remainder_size;    /* its size */
  int       remainder_index;   /* its bin index */
  unsigned long block;         /* block traverser bit */
  int       startidx;          /* first bin of a traversed block */
  mchunkptr fwd;               /* misc temp for linking */
  mchunkptr bck;               /* misc temp for linking */
  mbinptr   q;                 /* misc temp */

  INTERNAL_SIZE_T nb = request2size(bytes); /* padded request size */

  /* Check for exact match in a bin */

  if (is_small_request(nb)) /* Faster version for small requests */
  {
    idx = smallbin_index(nb);

    /* No traversal or size check necessary for small bins. */

    q = bin_at(idx);
    victim = last(q);

    /* Also scan the next one, since it would have a remainder < MINSIZE */
    if (victim == q)
    {
      q = next_bin(q);
      victim = last(q);
    }
    if (victim != q)
    {
      victim_size = chunksize(victim);
      unlink(victim, bck, fwd);
      set_inuse_bit_at_offset(victim, victim_size);
      check_freefill(victim, victim_size, victim_size);
      init_malloced_chunk(victim, bytes);
      check_malloced_chunk(victim, nb);
      return chunk2mem(victim);
    }

    idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
  }
  else
  {
    idx = bin_index(nb);
    bin = bin_at(idx);

    for (victim = last(bin); victim != bin; victim = victim->bk)
    {
      victim_size = chunksize(victim);
      remainder_size = victim_size - nb;

      if (remainder_size >= (long)MINSIZE) /* too big */
      {
        --idx; /* adjust to rescan below after checking last remainder */
        break;
      }
      else if (remainder_size >= 0) /* exact fit */
      {
        unlink(victim, bck, fwd);
        set_inuse_bit_at_offset(victim, victim_size);
        check_freefill(victim, victim_size, victim_size);
        init_malloced_chunk(victim, bytes);
        check_malloced_chunk(victim, nb);
        return chunk2mem(victim);
      }
    }

    ++idx;
  }

  /* Try to use the last split-off remainder */

  if ( (victim = last_remainder->fd) != last_remainder)
  {
    victim_size = chunksize(victim);
    remainder_size = victim_size - nb;

    if (remainder_size >= (long)MINSIZE) /* re-split */
    {
      remainder = chunk_at_offset(victim, nb);
      set_head(victim, nb | PREV_INUSE);
      check_freefill(victim, nb, victim_size);
      init_malloced_chunk(victim, bytes);
      link_last_remainder(remainder);
      set_head(remainder, remainder_size | PREV_INUSE);
      set_foot(remainder, remainder_size);
      init_freed_chunk(remainder, remainder_size, 0);
      check_malloced_chunk(victim, nb);
      return chunk2mem(victim);
    }

    clear_last_remainder;

    if (remainder_size >= 0) /* exhaust */
    {
      set_inuse_bit_at_offset(victim, victim_size);
      check_freefill(victim, victim_size, victim_size);
      init_malloced_chunk(victim, bytes);
      check_malloced_chunk(victim, nb);
      return chunk2mem(victim);
    }

    /* Else place in bin */

    frontlink(victim, victim_size, remainder_index, bck, fwd);
  }

  /*
     If there are any possibly nonempty big-enough blocks,
     search for best fitting chunk by scanning bins in blockwidth units.
  */

  if ( (block = idx2binblock(idx)) <= binblocks)
  {
    /* Get to the first marked block */

    if ( (block & binblocks) == 0)
    {
      /* force to an even block boundary */
      idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
      block <<= 1;
      while ((block & binblocks) == 0)
      {
        idx += BINBLOCKWIDTH;
        block <<= 1;
      }
    }

    /* For each possibly nonempty block ... */
    for (;;)
    {
      startidx = idx; /* (track incomplete blocks) */
      q = bin = bin_at(idx);

      /* For each bin in this block ... */
      do
      {
        /* Find and use first big enough chunk ... */

        for (victim = last(bin); victim != bin; victim = victim->bk)
        {
          victim_size = chunksize(victim);
          remainder_size = victim_size - nb;

          if (remainder_size >= (long)MINSIZE) /* split */
          {
            remainder = chunk_at_offset(victim, nb);
            set_head(victim, nb | PREV_INUSE);
            check_freefill(victim, nb, victim_size);
            unlink(victim, bck, fwd);
            init_malloced_chunk(victim, bytes);
            link_last_remainder(remainder);
            set_head(remainder, remainder_size | PREV_INUSE);
            set_foot(remainder, remainder_size);
            init_freed_chunk(remainder, remainder_size, 0);
            check_malloced_chunk(victim, nb);
            return chunk2mem(victim);
          }
          else if (remainder_size >= 0) /* take */
          {
            check_freefill(victim, victim_size, victim_size);
            set_inuse_bit_at_offset(victim, victim_size);
            unlink(victim, bck, fwd);
            init_malloced_chunk(victim, bytes);
            check_malloced_chunk(victim, nb);
            return chunk2mem(victim);
          }
        }

        bin = next_bin(bin);

      } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);

      /* Clear out the block bit. */

      do /* Possibly backtrack to try to clear a partial block */
      {
        if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
        {
          binblocks &= ~block;
          break;
        }
        --startidx;
        q = prev_bin(q);
      } while (first(q) == q);

      /* Get to the next possibly nonempty block */

      if ( (block <<= 1) <= binblocks && (block != 0) )
      {
        while ((block & binblocks) == 0)
        {
          idx += BINBLOCKWIDTH;
          block <<= 1;
        }
      }
      else
        break;
    }
  }

  /* Try to use top chunk */

  /* Require that there be a remainder, ensuring top always exists */
  if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
  {
#if HAVE_MMAP
    /* If big and would otherwise need to extend, try to use mmap instead */
    if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
        (victim = mmap_chunk(nb)) != 0) {
      init_malloced_chunk(victim, bytes);
      return chunk2mem(victim);
    }
#endif

    /* Try to extend */
    malloc_extend_top(nb);
    if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
      return 0; /* propagate failure */
  }

  victim = top;
  set_head(victim, nb | PREV_INUSE);
  check_freefill(victim, nb, nb + remainder_size);
  init_malloced_chunk(victim, bytes);
  top = chunk_at_offset(victim, nb);
  set_head(top, remainder_size | PREV_INUSE);
  init_freed_chunk(top, remainder_size, 0);
  check_malloced_chunk(victim, nb);
  return chunk2mem(victim);
}
/*
  free() algorithm:

    cases:

       1. free(0) has no effect.

       2. If the chunk was allocated via mmap, it is released via munmap().

       3. If a returned chunk borders the current high end of memory,
          it is consolidated into the top, and if the total unused
          topmost memory exceeds the trim threshold, malloc_trim is
          called.

       4. Other chunks are consolidated as they arrive, and
          placed in corresponding bins.  (This includes the case of
          consolidating with the current `last_remainder').
*/
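/* Illustration of cases 1-3 above (hypothetical caller code, not part of
 * the allocator; sizes are arbitrary):
 *
 *   free(0);                   // case 1: no effect
 *   p = malloc(1024 * 1024);   // a large request may be mmapped...
 *   free(p);                   // case 2: ...then released via munmap()
 *   q = malloc(100);           // small request carved from top
 *   free(q);                   // case 3: borders top, so it is merged back
 *                              // in; a big enough top may trigger
 *                              // malloc_trim()
 */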
#if __STD_C
void fREe(Void_t* mem)
#else
void fREe(mem) Void_t* mem;
#endif
{
  mchunkptr p;            /* chunk corresponding to mem */
  INTERNAL_SIZE_T hd;     /* its head field */
  INTERNAL_SIZE_T sz;     /* its size */
  int       idx;          /* its bin index */
  mchunkptr next;         /* next contiguous chunk */
  INTERNAL_SIZE_T nextsz; /* its size */
  INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
  mchunkptr bck;          /* misc temp for linking */
  mchunkptr fwd;          /* misc temp for linking */
  int       islr;         /* track whether merging with last_remainder */

  if (mem == 0) /* free(0) has no effect */
    return;

  p = mem2chunk(mem);
  check_inuse_chunk(p);

  hd = p->size;

#if HAVE_MMAP
  if (hd & IS_MMAPPED) /* release mmapped memory. */
  {
    munmap_chunk(p);
    return;
  }
#endif

  sz = hd & ~PREV_INUSE;
  next = chunk_at_offset(p, sz);
  nextsz = chunksize(next);
  prevsz = 0; /* avoid compiler warnings */

  if (next == top) /* merge with top */
  {
    sz += nextsz;

    if (!(hd & PREV_INUSE)) /* consolidate backward */
    {
      prevsz = p->prev_size;
      p = chunk_at_offset(p, -(long)prevsz);
      sz += prevsz;
      unlink(p, bck, fwd);
    }

    set_head(p, sz | PREV_INUSE);
    top = p;
    init_freed_chunk(top, !(hd & PREV_INUSE) ? prevsz : 0, nextsz);
    if ((unsigned long)(sz) >= trim_threshold)
      malloc_trim(top_pad);
    return;
  }

  set_head(next, nextsz); /* clear inuse bit */

  islr = 0;

  if (!(hd & PREV_INUSE)) /* consolidate backward */
  {
    prevsz = p->prev_size;
    p = chunk_at_offset(p, -(long)prevsz);
    sz += prevsz;

    if (p->fd == last_remainder) /* keep as last_remainder */
      islr = 1;
    else
      unlink(p, bck, fwd);
  }

  if (!(inuse_bit_at_offset(next, nextsz))) /* consolidate forward */
  {
    sz += nextsz;

    if (!islr && next->fd == last_remainder) /* re-insert last_remainder */
    {
      islr = 1;
      link_last_remainder(p);
    }
    else
      unlink(next, bck, fwd);
  }

  set_head(p, sz | PREV_INUSE);
  set_foot(p, sz);
  if (!islr)
    frontlink(p, sz, idx, bck, fwd);
  init_freed_chunk(p, !(hd & PREV_INUSE) ? prevsz : 0,
                   !inuse_bit_at_offset(next, nextsz) ? nextsz : 0);
}
/*
  Realloc algorithm:

    Chunks that were obtained via mmap cannot be extended or shrunk
    unless HAVE_MREMAP is defined, in which case mremap is used.
    Otherwise, if their reallocation is for additional space, they are
    copied.  If for less, they are just left alone.

    Otherwise, if the reallocation is for additional space, and the
    chunk can be extended, it is, else a malloc-copy-free sequence is
    taken.  There are several different ways that a chunk could be
    extended.  All are tried:

       * Extending forward into following adjacent free chunk.
       * Shifting backwards, joining preceding adjacent space
       * Both shifting backwards and extending forward.
       * Extending into newly sbrked space

    Unless the #define realloc_ZERO_BYTES_FREES is set, realloc with a
    size argument of zero (re)allocates a minimum-sized chunk.

    If the reallocation is for less space, and the new request is for
    a `small' (<512 bytes) size, then the newly unused space is lopped
    off and freed.

    The old unix realloc convention of allowing the last-freed chunk
    to be used as an argument to realloc is no longer supported.
    I don't know of any programs still relying on this feature,
    and allowing it would also allow too many other incorrect
    usages of realloc to be sensible.
*/
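/* Illustration of the extension rules above (hypothetical caller code;
 * whether the pointer moves depends on the neighboring chunks, and both
 * outcomes are legal):
 *
 *   char *p = malloc(100);
 *   char *q = realloc(p, 200);  // may extend forward, shift backward,
 *                               // or fall back to malloc-copy-free
 *   // q, not p, must be used afterwards in every case
 */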
#if __STD_C
Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
#else
Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
#endif
{
  INTERNAL_SIZE_T nb;       /* padded request size */

  mchunkptr oldp;           /* chunk corresponding to oldmem */
  INTERNAL_SIZE_T oldsize;  /* its size */

  mchunkptr newp;           /* chunk to return */
  INTERNAL_SIZE_T newsize;  /* its size */
  Void_t*   newmem;         /* corresponding user mem */

  mchunkptr next;           /* next contiguous chunk after oldp */
  INTERNAL_SIZE_T nextsize; /* its size */

  mchunkptr prev;           /* previous contiguous chunk before oldp */
  INTERNAL_SIZE_T prevsize; /* its size */

  mchunkptr remainder;      /* holds split off extra space from newp */
  INTERNAL_SIZE_T remainder_size; /* its size */

  mchunkptr bck;            /* misc temp for linking */
  mchunkptr fwd;            /* misc temp for linking */

#ifdef realloc_ZERO_BYTES_FREES
  if (bytes == 0) { fREe(oldmem); return 0; }
#endif

  /* realloc of null is supposed to be same as malloc */
  if (oldmem == 0) return mALLOc(bytes);

  newp = oldp = mem2chunk(oldmem);
  newsize = oldsize = chunksize(oldp);

  nb = request2size(bytes);

  check_inuse_chunk(oldp);

#if HAVE_MMAP
  if (chunk_is_mmapped(oldp))
  {
    if (oldsize - MMAP_EXTRA >= nb) {
      init_realloced_chunk(oldp, bytes, oldsize);
      return oldmem; /* do nothing */
    }
#if HAVE_MREMAP
    newp = mremap_chunk(oldp, nb);
    if (newp) {
      init_realloced_chunk(newp, bytes, oldsize);
      return chunk2mem(newp);
    }
#endif
    /* Must alloc, copy, free. */
    newmem = mALLOc(bytes);
    if (newmem == 0) return 0; /* propagate failure */
    malloc_COPY(newmem, oldmem, oldsize - OVERHEAD - MMAP_EXTRA);
    munmap_chunk(oldp);
    return newmem;
  }
#endif

  if ((long)oldsize < (long)nb)
  {
    /* Try expanding forward */

    next = chunk_at_offset(oldp, oldsize);
    if (next == top || !inuse(next))
    {
      nextsize = chunksize(next);

      /* Forward into top only if a remainder */
      if (next == top)
      {
        if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
        {
          check_freefill(next, nb - oldsize, nextsize);
          newsize += nextsize;
          top = chunk_at_offset(oldp, nb);
          set_head(top, (newsize - nb) | PREV_INUSE);
          init_freed_chunk(top, newsize - nb, 0);
          set_head_size(oldp, nb);
          init_realloced_chunk(oldp, bytes, oldsize);
          return chunk2mem(oldp);
        }
      }

      /* Forward into next chunk */
      else if (((long)(nextsize + newsize) >= (long)nb))
      {
        check_freefill(next, nb - oldsize, nextsize);
        unlink(next, bck, fwd);
        newsize += nextsize;
        goto split;
      }
    }
    else
    {
      next = 0;
      nextsize = 0;
    }

    /* Try shifting backwards. */

    if (!prev_inuse(oldp))
    {
      prev = prev_chunk(oldp);
      prevsize = chunksize(prev);

      /* try forward + backward first to save a later consolidation */

      if (next != 0)
      {
        /* into top */
        if (next == top)
        {
          if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
          {
            check_freefill(prev, nb, prevsize);
            check_freefill(next, nb - (prevsize + newsize), nextsize);
            unlink(prev, bck, fwd);
            newp = prev;
            newsize += prevsize + nextsize;
            newmem = chunk2mem(newp);
            malloc_COPY(newmem, oldmem, oldsize - OVERHEAD);
            top = chunk_at_offset(newp, nb);
            set_head(top, (newsize - nb) | PREV_INUSE);
            init_freed_chunk(top, newsize - nb, 0);
            set_head_size(newp, nb);
            init_realloced_chunk(newp, bytes, oldsize);
            return newmem;
          }
        }

        /* into next chunk */
        else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
        {
          check_freefill(prev, nb, prevsize);
          check_freefill(next, nb - (prevsize + newsize), nextsize);
          unlink(next, bck, fwd);
          unlink(prev, bck, fwd);
          newp = prev;
          newsize += nextsize + prevsize;
          newmem = chunk2mem(newp);
          malloc_COPY(newmem, oldmem, oldsize - OVERHEAD);
          goto split;
        }
      }

      /* backward only */
      if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)
      {
        check_freefill(prev, nb, prevsize);
        unlink(prev, bck, fwd);
        newp = prev;
        newsize += prevsize;
        newmem = chunk2mem(newp);
        malloc_COPY(newmem, oldmem, oldsize - OVERHEAD);
        goto split;
      }
    }

    /* Must allocate */

    newmem = mALLOc (bytes);

    if (newmem == 0) /* propagate failure */
      return 0;

    /* Avoid copy if newp is next chunk after oldp. */
    /* (This can only happen when new chunk is sbrk'ed.) */

    if ( (newp = mem2chunk(newmem)) == next_chunk(oldp))
    {
      newsize += chunksize(newp);
      newp = oldp;
      goto split;
    }

    /* Otherwise copy, free, and exit */
    malloc_COPY(newmem, oldmem, oldsize - OVERHEAD);
    fREe(oldmem);
    return newmem;
  }

 split: /* split off extra room in old or expanded chunk */

  if (newsize - nb >= MINSIZE) /* split off remainder */
  {
    remainder = chunk_at_offset(newp, nb);
    remainder_size = newsize - nb;
    set_head_size(newp, nb);
    set_head(remainder, remainder_size | PREV_INUSE);
    set_inuse_bit_at_offset(remainder, remainder_size);
    init_malloced_chunk(remainder, remainder_size - OVERHEAD);
    fREe(chunk2mem(remainder)); /* let free() deal with it */
  }
  else
  {
    set_head_size(newp, newsize);
    set_inuse_bit_at_offset(newp, newsize);
  }

  init_realloced_chunk(newp, bytes, oldsize);
  check_inuse_chunk(newp);
  return chunk2mem(newp);
}
/*
  memalign algorithm:

    memalign requests more than enough space from malloc, finds a spot
    within that chunk that meets the alignment request, and then
    possibly frees the leading and trailing space.

    The alignment argument must be a power of two.  This property is not
    checked by memalign, so misuse may result in random runtime errors.

    8-byte alignment is guaranteed by normal malloc calls, so don't
    bother calling memalign with an argument of 8 or less.

    Overreliance on memalign is a sure way to fragment space.
*/
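/* Usage sketch (hypothetical caller code): aligning a buffer to a 64-byte
 * boundary, e.g. for a cache line.  The alignment must be a power of two;
 * memalign() does not verify this.
 *
 *   void *buf = memalign(64, 1000);
 *   assert(((unsigned long)buf & 63) == 0);
 *   free(buf);
 */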
#if __STD_C
Void_t* mEMALIGn(size_t alignment, size_t bytes)
#else
Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
#endif
{
  INTERNAL_SIZE_T nb;       /* padded request size */
  char*     m;              /* memory returned by malloc call */
  mchunkptr p;              /* corresponding chunk */
  char*     lim;            /* alignment point within p */
  mchunkptr newp;           /* chunk to return */
  INTERNAL_SIZE_T newsize;  /* its size */
  INTERNAL_SIZE_T leadsize; /* leading space before alignment point */
  mchunkptr remainder;      /* spare room at end to split off */
  long      remainder_size; /* its size */

  /* If need less alignment than we give anyway, just relay to malloc */

  if (alignment <= ALIGNMENT) return mALLOc(bytes);

  /* Otherwise, ensure that it is at least a minimum chunk size */

  if (alignment < MINSIZE) alignment = MINSIZE;

  /* Call malloc with worst case padding to hit alignment. */

  nb = request2size(bytes);
  m = (char*)mALLOc(nb + alignment + MINSIZE);

  if (m == 0) return 0; /* propagate failure */

  p = mem2chunk(m);

  if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
  {
    init_realloced_chunk(p, bytes, chunksize(p));
    return chunk2mem(p); /* nothing more to do */
  }
  else /* misaligned */
  {
    /*
      Find an aligned spot inside chunk.
      Since we need to give back leading space in a chunk of at
      least MINSIZE, if the first calculation places us at
      a spot with less than MINSIZE leader, we can move to the
      next aligned spot -- we've allocated enough total room so that
      this is always possible.
    */

    lim = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) &
                           -alignment);
    if ((lim - (char*)p) < (long)MINSIZE) lim = lim + alignment;

    newp = (mchunkptr)lim;
    leadsize = lim - (char*)p;
    newsize = chunksize(p) - leadsize;

#if HAVE_MMAP
    if (chunk_is_mmapped(p))
    {
      newp->prev_size = p->prev_size + leadsize;
      set_head(newp, newsize|IS_MMAPPED);
      init_malloced_chunk(newp, bytes);
      return chunk2mem(newp);
    }
#endif

    /* give back leader, use the rest */

    set_head(newp, newsize | PREV_INUSE);
    set_inuse_bit_at_offset(newp, newsize);
    set_head_size(p, leadsize);
    init_freeable_chunk(p);
    fREe(chunk2mem(p));
    p = newp;

    assert (newsize >= nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
  }

  /* Also give back spare room at the end */

  remainder_size = chunksize(p) - nb;

  if (remainder_size >= (long)MINSIZE)
  {
    remainder = chunk_at_offset(p, nb);
    set_head(remainder, remainder_size | PREV_INUSE);
    set_head_size(p, nb);
    init_freeable_chunk(remainder);
    fREe(chunk2mem(remainder));
  }

  init_malloced_chunk(p, bytes);
  check_inuse_chunk(p);
  return chunk2mem(p);
}
/*
    valloc just invokes memalign with alignment argument equal
    to the page size of the system (or as near to this as can
    be figured out from all the includes/defines above.)
*/

#if __STD_C
Void_t* vALLOc(size_t bytes)
#else
Void_t* vALLOc(bytes) size_t bytes;
#endif
{
  return mEMALIGn (malloc_getpagesize, bytes);
}

/*
  pvalloc just invokes valloc for the nearest pagesize
  that will accommodate the request
*/

#if __STD_C
Void_t* pvALLOc(size_t bytes)
#else
Void_t* pvALLOc(bytes) size_t bytes;
#endif
{
  size_t pagesize = malloc_getpagesize;
  return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
}
/*
  calloc calls malloc, then zeroes out the allocated chunk.
*/

#if __STD_C
Void_t* cALLOc(size_t n, size_t elem_size)
#else
Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size;
#endif
{
  mchunkptr p;
  INTERNAL_SIZE_T csz;

  INTERNAL_SIZE_T sz = n * elem_size;

#if MORECORE_CLEARS
  /* check if expand_top called, in which case don't need to clear */
  mchunkptr oldtop = top;
  INTERNAL_SIZE_T oldtopsize = chunksize(top);
#endif

  Void_t* mem = mALLOc (sz);

  if (mem == 0)
    return 0;
  else
  {
    p = mem2chunk(mem);
    csz = chunksize(p);

    /* Two optional cases in which clearing not necessary */

#if HAVE_MMAP
    if (chunk_is_mmapped(p)) return mem;
#endif

#if MORECORE_CLEARS
    if (p == oldtop && csz > oldtopsize)
      /* clear only the bytes from non-freshly-sbrked memory */
      csz = oldtopsize;
#endif

    malloc_ZERO(mem, csz - OVERHEAD);
    /* reinstate moat fill in pad region */
    init_realloced_chunk(p, sz, chunksize(p));
    return mem;
  }
}
/*
  cfree just calls free. It is needed/defined on some systems
  that pair it with calloc, presumably for odd historical reasons.
*/

#if !defined(INTERNAL_LINUX_C_LIB) || !defined(__ELF__)
#if __STD_C
void cfree(Void_t *mem)
#else
void cfree(mem) Void_t *mem;
#endif
{
  fREe(mem);
}
#endif
/*
  Malloc_trim gives memory back to the system (via negative
  arguments to sbrk) if there is unused memory at the `high' end of
  the malloc pool. You can call this after freeing large blocks of
  memory to potentially reduce the system-level memory requirements
  of a program. However, it cannot guarantee to reduce memory. Under
  some allocation patterns, some large free blocks of memory will be
  locked between two used chunks, so they cannot be given back to
  the system.

  The `pad' argument to malloc_trim represents the amount of free
  trailing space to leave untrimmed. If this argument is zero,
  only the minimum amount of memory to maintain internal data
  structures will be left (one page or less). Non-zero arguments
  can be supplied to maintain enough trailing space to service
  future expected allocations without having to re-obtain memory
  from the system.

  Malloc_trim returns 1 if it actually released any memory, else 0.
*/

#if __STD_C
int malloc_trim(size_t pad)
#else
int malloc_trim(pad) size_t pad;
#endif
{
  long top_size;      /* Amount of top-most memory */
  long extra;         /* Amount to release */
  char* current_lim;  /* address returned by pre-check sbrk call */
  char* new_lim;      /* address returned by negative sbrk call */

  unsigned long pagesz = malloc_getpagesize;

  top_size = chunksize(top);
  extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;

  if (extra < (long)pagesz)  /* Not enough memory to release */
    return 0;

  /* Test to make sure no one else called sbrk */
  current_lim = (char*)(MORECORE (0));
  if (current_lim != (char*)(top) + top_size)
    return 0;  /* Apparently we don't own memory; must fail */

  new_lim = (char*)(MORECORE (-extra));

  if (new_lim == (char*)(MORECORE_FAILURE)) /* sbrk failed? */
  {
    /* Try to figure out what we have */
    current_lim = (char*)(MORECORE (0));
    top_size = current_lim - (char*)top;
    if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
    {
      sbrked_mem = current_lim - sbrk_base;
      set_head(top, top_size | PREV_INUSE);
      init_freed_chunk(top, top_size, 0);
    }
    return 0;
  }

  /* Success. Adjust top accordingly. */
  set_head(top, (top_size - extra) | PREV_INUSE);
  sbrked_mem -= extra;
  init_freed_chunk(top, top_size - extra, 0);
  return 1;
}
/*
  malloc_usable_size:

  This routine tells you how many bytes you can actually use in an
  allocated chunk, which may be more than you requested (although
  often not). You can use this many bytes without worrying about
  overwriting other allocated objects. Not a particularly great
  programming practice, but still sometimes useful.
*/

#if __STD_C
size_t malloc_usable_size(Void_t* mem)
#else
size_t malloc_usable_size(mem) Void_t* mem;
#endif
{
  mchunkptr p;

  if (mem == 0)
    return 0;

  p = mem2chunk(mem);
  check_inuse_chunk(p);

  if (!chunk_is_mmapped(p))
  {
    if (!inuse(p)) return 0;
    return chunksize(p) - OVERHEAD;
  }
  return chunksize(p) - OVERHEAD - MMAP_EXTRA;
}
/* Utility to update current_mallinfo for malloc_stats and mallinfo() */

static void malloc_update_mallinfo(void)
{
  int i;
  mbinptr b;
  mchunkptr p, q;

  INTERNAL_SIZE_T avail = chunksize(top);
  int navail = avail >= MINSIZE ? 1 : 0;
  check_freefill(top, avail, avail);

  for (p = lowest_chunk;
       p < top && inuse(p) && chunksize(p) >= MINSIZE;
       p = next_chunk(p))
    check_inuse_chunk(p);

  for (i = 1; i < NAV; ++i)
  {
    b = bin_at(i);
    for (p = last(b); p != b; p = p->bk)
    {
      check_free_chunk(p);
      check_freefill(p, chunksize(p), chunksize(p));
      for (q = next_chunk(p);
           q < top && inuse(q) && chunksize(q) >= MINSIZE;
           q = next_chunk(q))
        check_inuse_chunk(q);

      avail += chunksize(p);
      navail++;
    }
  }

  current_mallinfo.ordblks = navail;
  current_mallinfo.uordblks = sbrked_mem - avail;
  current_mallinfo.fordblks = avail;
  current_mallinfo.hblks = n_mmaps;
  current_mallinfo.hblkhd = mmapped_mem;
  current_mallinfo.keepcost = chunksize(top);
}
/*
  malloc_stats:

  Prints on stderr the amount of space obtained from the system (both
  via sbrk and mmap), the maximum amount (which may be more than
  current if malloc_trim and/or munmap got called), the maximum
  number of simultaneous mmap regions used, and the current number
  of bytes allocated via malloc (or realloc, etc) but not yet
  freed. (Note that this is the number of bytes allocated, not the
  number requested. It will be larger than the number requested
  because of alignment and bookkeeping overhead.)
*/

void malloc_stats(void)
{
  malloc_update_mallinfo();
  fprintf(stderr, "max system bytes = %10u\n",
          (unsigned int)(max_total_mem));
  fprintf(stderr, "system bytes     = %10u\n",
          (unsigned int)(sbrked_mem + mmapped_mem));
  fprintf(stderr, "in use bytes     = %10u\n",
          (unsigned int)(current_mallinfo.uordblks + mmapped_mem));
  fprintf(stderr, "max mmap regions = %10u\n",
          (unsigned int)max_n_mmaps);
}
/*
  mallinfo returns a copy of updated current mallinfo.
*/

struct mallinfo mALLINFo(void)
{
  malloc_update_mallinfo();
  return current_mallinfo;
}
/*
  mallopt:

  mallopt is the general SVID/XPG interface to tunable parameters.
  The format is to provide a (parameter-number, parameter-value) pair.
  mallopt then sets the corresponding parameter to the argument
  value if it can (i.e., so long as the value is meaningful),
  and returns 1 if successful else 0.

  See descriptions of tunable parameters above.
*/

#if __STD_C
int mALLOPt(int param_number, int value)
#else
int mALLOPt(param_number, value) int param_number; int value;
#endif
{
  switch(param_number)
  {
    case M_TRIM_THRESHOLD:
      trim_threshold = value; return 1;
    case M_TOP_PAD:
      top_pad = value; return 1;
    case M_MMAP_THRESHOLD:
      mmap_threshold = value; return 1;
    case M_MMAP_MAX:
#if HAVE_MMAP
      n_mmaps_max = value; return 1;
#else
      if (value != 0) return 0; else n_mmaps_max = value; return 1;
#endif
    default:
      return 0;
  }
}
/*

History:

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
        (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
        from kpv@research.att.com

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
        with gcc & native cc (hp, dec only) allowing
        Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
        structure of old version, but most details differ.)