1 /*
2 * To do:
3 * - strdup? maybe shouldn't bother yet, it seems difficult to get includes
4 * right using dlmalloc.h
5 * - add STD_C prototyping
6 * - adhere to comment conventions
7 * - maybe fix ALLOCFILL vs. MOATFILL in do_init_realloced_chunk()
8 * - keep a list of mmaped regions for checking in malloc_update_mallinfo()
9 * - I think memalign() is wrong: it aligns the chunk rather than the memory
10 * portion of the chunk.
11 * - "& -alignment" in memalign() is suspect: should use "& ~alignment"
12 * instead?
13 * - malloc.h doesn't need malloc_COPY or probably a bunch of other stuff
14 * - add mallopt options for e.g. fill?
15 * - come up with a non-BBC version of M_C
16 * - document necessity of checking chunk address in do_check_chunk prior to
17 * accessing any of its fields
18 * Done:
19 * minor speedup due to extend check before mremap
20 * minor speedup due to returning malloc() result in memalign() if aligned
21 * made malloc_update_mallinfo() check alloced regions at start of sbrk area
22 * fixed bug: After discovering foreign sbrk, if old_top was MINSIZE, would
23 * reduce old_top_size to 0, thus making inuse(old_top) return 0; other
24 * functions would consequently attempt to access old_top->{fd,bk}, which
25 * were invalid. This is in malloc_extend_top(), in the "double
26 * fencepost" section.
27 * Documentation:
28 * malloc_usable_size(P) is equivalent to realloc(P, malloc_usable_size(P))
29 *
30 * $Log$
31 * Revision 1.5 2001/10/03 03:49:25 cgf
32 * * cygheap.cc (cfree): Remove malloc debugging probe.
33 * * dlmalloc.c (errprint): Remove abort() call which causes interesting error
34 * message printing to abort prematurely.
35 * * environ.cc: Sprinkle MALLOC_CHECKs liberally throughout.
36 * (_addenv): Allocate two empty elements at end of environ to
37 * (apparently) work around problems with some buggy applications.
38 * (winenv): Avoid calling alloca if no forced environment variable is present.
39 *
40 * * exceptions.cc (open_stackdumpfile): Don't print "Dumping stack trace to..."
41 * when running in a cygwin environment (i.e., the parent is a cygwin process).
42 *
43 * * dtable.cc (dtable::init_std_file_from_handle): Move device type detection
44 * code from build_fhandler here since it is only used by this function.
45 * (dtable::build_fhandler_from_name): New method. Renamed from
46 * dtable::build_fhandler.
47 * (dtable::build_fhandler): Use build_fhandler_from_name.
48 * (cygwin_attach_handle_to_fd): Ditto.
49 * * syscalls.cc (_open): Ditto.
50 * (stat_worker): Ditto.
51 * * dtable.h (dtable::build_fhandler_from_name): Rename declaration from
52 * dtable::build_fhandler.
53 *
54 * Revision 1.4 2001/09/07 21:32:04 cgf
55 * * cygheap.h (init_cygheap): Move heap pointers here.
56 * * include/sys/cygwin.h (perprocess): Remove heap pointers.
57 * * dcrt0.cc (__cygwin_user_data): Reflect obsolete perprocess stuff.
58 * (_dll_crt0): Don't initialize heap pointers.
59 * (cygwin_dll_init): Ditto.
60 * (release_upto): Use heap pointers from cygheap.
61 * * heap.h: Ditto.
62 * * fork.cc (fork_parent): Ditto. Don't set heap pointers in ch.
63 * (fork_child): Remove obsolete sigproc_fixup_after_fork.
64 * * shared.cc (memory_init): Reorganize so that cygheap initialization is called
65 * prior to regular heap since regular heap uses cygheap now.
66 * * sigproc.cc (proc_subproc): Eliminate zombies allocation.
67 * (sigproc_init): Move zombies alloation here. Don't free up array on fork, just
68 * reuse it.
69 * (sigproc_fixup_after_fork): Eliminate.
70 * * sigproc.h: Ditto.
71 * * include/cygwin/version.h: Reflect change to perprocess structure.
72 *
73 * Revision 1.3 2001/06/26 14:47:48 cgf
74 * * mmap.cc: Clean up *ResourceLock calls throughout.
75 * * thread.cc (pthread_cond::TimedWait): Check for WAIT_TIMEOUT as well as
76 * WAIT_ABANDONED.
77 * (__pthread_cond_timedwait): Calculate a relative wait from the abstime
78 * parameter.
79 *
80 * Revision 1.2 2001/06/24 22:26:49 cgf
81 * forced commit
82 *
83 * Revision 1.1 2001/04/24 15:25:30 duda
84 * * dlmalloc.c: New file. Port of Doug Lea's malloc
85 * * dlmalloc.h: Ditto.
86 * * Makefile.in: Add support for MALLOC_DEBUG
87 * * config.h.in: Ditto.
88 * * winsup.h: Ditto.
89 * * configure.in: Add --enable-malloc-debugging option.
90 * * configure: Regenerate.
91 * * debug.h: Include declarations for debugging malloc.
92 * * tty.cc (grantpt): Fix definition.
93 * (unlockpt): Ditto.
94 *
95 * Revision 1.1 1997/12/24 18:34:47 nsd
96 * Initial revision
97 *
98 */
99 /* ---------- To make a malloc.h, start cutting here ------------ */
100
101 /*
102 A version of malloc/free/realloc written by Doug Lea and released to the
103 public domain. Send questions/comments/complaints/performance data
104 to dl@cs.oswego.edu
105
106 * VERSION 2.6.4 Thu Nov 28 07:54:55 1996 Doug Lea (dl at gee)
107
108 Note: There may be an updated version of this malloc obtainable at
109 ftp://g.oswego.edu/pub/misc/malloc.c
110 Check before installing!
111
112 * Why use this malloc?
113
114 This is not the fastest, most space-conserving, most portable, or
115 most tunable malloc ever written. However it is among the fastest
116 while also being among the most space-conserving, portable and tunable.
117 Consistent balance across these factors results in a good general-purpose
118 allocator. For a high-level description, see
119 http://g.oswego.edu/dl/html/malloc.html
120
121 * Synopsis of public routines
122
123 (Much fuller descriptions are contained in the program documentation below.)
124
125 malloc(size_t n);
126 Return a pointer to a newly allocated chunk of at least n bytes, or null
127 if no space is available.
128 free(Void_t* p);
129 Release the chunk of memory pointed to by p, or no effect if p is null.
130 realloc(Void_t* p, size_t n);
131 Return a pointer to a chunk of size n that contains the same data
132 as does chunk p up to the minimum of (n, p's size) bytes, or null
133 if no space is available. The returned pointer may or may not be
134 the same as p. If p is null, equivalent to malloc. Unless the
135 #define realloc_ZERO_BYTES_FREES below is set, realloc with a
136 size argument of zero (re)allocates a minimum-sized chunk.
137 memalign(size_t alignment, size_t n);
138 Return a pointer to a newly allocated chunk of n bytes, aligned
139 in accord with the alignment argument, which must be a power of
140 two.
141 valloc(size_t n);
142 Equivalent to memalign(pagesize, n), where pagesize is the page
143 size of the system (or as near to this as can be figured out from
144 all the includes/defines below.)
145 pvalloc(size_t n);
146 Equivalent to valloc(minimum-page-that-holds(n)), that is,
147 round up n to nearest pagesize.
148 calloc(size_t unit, size_t quantity);
149 Returns a pointer to quantity * unit bytes, with all locations
150 set to zero.
151 cfree(Void_t* p);
152 Equivalent to free(p).
153 malloc_trim(size_t pad);
154 Release all but pad bytes of freed top-most memory back
155 to the system. Return 1 if successful, else 0.
156 malloc_usable_size(Void_t* p);
157     Report the number of usable allocated bytes associated with allocated
158 chunk p. This may or may not report more bytes than were requested,
159 due to alignment and minimum size constraints.
160 malloc_stats();
161 Prints brief summary statistics on stderr.
162 mallinfo()
163 Returns (by copy) a struct containing various summary statistics.
164 mallopt(int parameter_number, int parameter_value)
165 Changes one of the tunable parameters described below. Returns
166 1 if successful in changing the parameter, else 0.
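
    For illustration, a minimal usage sketch of the routines above
    (error checking omitted):

        char *p = (char *) malloc (100);          100 usable bytes, or null
        p = (char *) realloc (p, 200);            grow (or move) to 200 bytes
        char *q = (char *) memalign (64, 128);    128 bytes, 64-byte aligned
        free (q);
        free (p);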
167
168 * Vital statistics:
169
170 Alignment: 8-byte
171 8 byte alignment is currently hardwired into the design. This
172 seems to suffice for all current machines and C compilers.
173
174 Assumed pointer representation: 4 or 8 bytes
175       Code for 8-byte pointers is untested by me but is reported to work
176       reliably by Wolfram Gloger, who contributed most of the
177       changes supporting this.
178
179 Assumed size_t representation: 4 or 8 bytes
180 Note that size_t is allowed to be 4 bytes even if pointers are 8.
181
182 Minimum overhead per allocated chunk: 4 or 8 bytes
183 Each malloced chunk has a hidden overhead of 4 bytes holding size
184 and status information.
185
186 Minimum allocated size: 4-byte ptrs: 16 bytes (including 4 overhead)
187                              8-byte ptrs:  24/32 bytes (including 4/8 overhead)
188
189 When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
190 ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
191 needed; 4 (8) for a trailing size field
192 and 8 (16) bytes for free list pointers. Thus, the minimum
193 allocatable size is 16/24/32 bytes.
194
195 Even a request for zero bytes (i.e., malloc(0)) returns a
196 pointer to something of the minimum allocatable size.
197
198 Maximum allocated size: 4-byte size_t: 2^31 - 8 bytes
199 8-byte size_t: 2^63 - 16 bytes
200
201 It is assumed that (possibly signed) size_t bit values suffice to
202 represent chunk sizes. `Possibly signed' is due to the fact
203 that `size_t' may be defined on a system as either a signed or
204 an unsigned type. To be conservative, values that would appear
205 as negative numbers are avoided.
206 Requests for sizes with a negative sign bit will return a
207 minimum-sized chunk.
208
209 Maximum overhead wastage per allocated chunk: normally 15 bytes
210
211       Alignment demands, plus the minimum allocatable size restriction
212 make the normal worst-case wastage 15 bytes (i.e., up to 15
213 more bytes will be allocated than were requested in malloc), with
214 two exceptions:
215 1. Because requests for zero bytes allocate non-zero space,
216 the worst case wastage for a request of zero bytes is 24 bytes.
217 2. For requests >= mmap_threshold that are serviced via
218 mmap(), the worst case wastage is 8 bytes plus the remainder
219 from a system page (the minimal mmap unit); typically 4096 bytes.
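
    As a rough worked example of the above (assuming 4-byte pointers and
    size_t): a request for 1 byte is padded up to the 16-byte minimum chunk,
    i.e. 4 bytes of overhead plus 11 bytes of padding, which is the worst-case
    15 bytes of wastage; a request for 20 bytes becomes a 24-byte chunk
    (4 overhead + 20 requested) with no padding at all.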
220
221 * Limitations
222
223 Here are some features that are NOT currently supported
224
225 * No user-definable hooks for callbacks and the like.
226 * No automated mechanism for fully checking that all accesses
227 to malloced memory stay within their bounds.
228 * No support for compaction.
229
230 * Synopsis of compile-time options:
231
232 People have reported using previous versions of this malloc on all
233 versions of Unix, sometimes by tweaking some of the defines
234 below. It has been tested most extensively on Solaris and
235 Linux. It is also reported to work on WIN32 platforms.
236 People have also reported adapting this malloc for use in
237 stand-alone embedded systems.
238
239 The implementation is in straight, hand-tuned ANSI C. Among other
240 consequences, it uses a lot of macros. Because of this, to be at
241 all usable, this code should be compiled using an optimizing compiler
242 (for example gcc -O2) that can simplify expressions and control
243 paths.
244
245 __STD_C (default: derived from C compiler defines)
246 Nonzero if using ANSI-standard C compiler, a C++ compiler, or
247 a C compiler sufficiently close to ANSI to get away with it.
248 DEBUG (default: NOT defined)
249 Define to enable debugging. Adds fairly extensive assertion-based
250 checking to help track down memory errors, but noticeably slows down
251 execution.
252 realloc_ZERO_BYTES_FREES (default: NOT defined)
253 Define this if you think that realloc(p, 0) should be equivalent
254 to free(p). Otherwise, since malloc returns a unique pointer for
255 malloc(0), so does realloc(p, 0).
256 HAVE_memcpy (default: defined)
257 Define if you are not otherwise using ANSI STD C, but still
258 have memcpy and memset in your C library and want to use them.
259 Otherwise, simple internal versions are supplied.
260 USE_memcpy (default: 1 if HAVE_memcpy is defined, 0 otherwise)
261 Define as 1 if you want the C library versions of memset and
262 memcpy called in realloc and calloc (otherwise macro versions are used).
263 At least on some platforms, the simple macro versions usually
264 outperform libc versions.
265 HAVE_MMAP (default: defined as 1)
266 Define to non-zero to optionally make malloc() use mmap() to
267 allocate very large blocks.
268 HAVE_MREMAP (default: defined as 0 unless Linux libc set)
269 Define to non-zero to optionally make realloc() use mremap() to
270 reallocate very large blocks.
271 malloc_getpagesize (default: derived from system #includes)
272 Either a constant or routine call returning the system page size.
273 HAVE_USR_INCLUDE_malloc_H (default: NOT defined)
274 Optionally define if you are on a system with a /usr/include/malloc.h
275 that declares struct mallinfo. It is not at all necessary to
276 define this even if you do, but will ensure consistency.
277 INTERNAL_SIZE_T (default: size_t)
278 Define to a 32-bit type (probably `unsigned int') if you are on a
279 64-bit machine, yet do not want or need to allow malloc requests of
280 greater than 2^31 to be handled. This saves space, especially for
281 very small chunks.
282 INTERNAL_LINUX_C_LIB (default: NOT defined)
283 Defined only when compiled as part of Linux libc.
284 Also note that there is some odd internal name-mangling via defines
285 (for example, internally, `malloc' is named `mALLOc') needed
286 when compiling in this case. These look funny but don't otherwise
287 affect anything.
288 WIN32 (default: undefined)
289 Define this on MS win (95, nt) platforms to compile in sbrk emulation.
290 LACKS_UNISTD_H (default: undefined)
291 Define this if your system does not have a <unistd.h>.
292 MORECORE (default: sbrk)
293 The name of the routine to call to obtain more memory from the system.
294 MORECORE_FAILURE (default: -1)
295 The value returned upon failure of MORECORE.
296 MORECORE_CLEARS (default 0)
297 True (1) if the routine mapped to MORECORE zeroes out memory (which
298 holds for sbrk).
299 DEFAULT_TRIM_THRESHOLD
300 DEFAULT_TOP_PAD
301 DEFAULT_MMAP_THRESHOLD
302 DEFAULT_MMAP_MAX
303 Default values of tunable parameters (described in detail below)
304 controlling interaction with host system routines (sbrk, mmap, etc).
305 These values may also be changed dynamically via mallopt(). The
306 preset defaults are those that give best performance for typical
307 programs/systems.
308
309
310 */
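
/*
  For illustration only (an example, not a prescribed build): the
  assertion-based debugging described above is enabled by defining
  MALLOC_DEBUG when compiling this file with an optimizing compiler, e.g.

      gcc -O2 -DMALLOC_DEBUG -c dlmalloc.c

  In the Cygwin tree this is wired up through the --enable-malloc-debugging
  configure option mentioned in the log above.
*/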
311
312 \f
313
314
315 /* Preliminaries */
316
317 #include "winsup.h"
318
319 #ifndef __STD_C
320 #ifdef __STDC__
321 #define __STD_C 1
322 #else
323 #if __cplusplus
324 #define __STD_C 1
325 #else
326 #define __STD_C 0
327 #endif /*__cplusplus*/
328 #endif /*__STDC__*/
329 #endif /*__STD_C*/
330
331 #ifndef Void_t
332 #if __STD_C
333 #define Void_t void
334 #else
335 #define Void_t char
336 #endif
337 #endif /*Void_t*/
338
339 #define __MALLOC_H_INCLUDED
340
341 #if __STD_C
342 #include <stddef.h> /* for size_t */
343 #else
344 #include <sys/types.h>
345 #endif
346
347 #ifdef __cplusplus
348 extern "C" {
349 #endif
350
351 #include <stdio.h> /* needed for malloc_stats */
352
353
354 /*
355 Compile-time options
356 */
357
358
359 /*
360 Debugging:
361
362 Because freed chunks may be overwritten with link fields, this
363 malloc will often die when freed memory is overwritten by user
364 programs. This can be very effective (albeit in an annoying way)
365 in helping track down dangling pointers.
366
367 If you compile with -DDEBUG, a number of assertion checks are
368 enabled that will catch more memory errors. You probably won't be
369 able to make much sense of the actual assertion errors, but they
370 should help you locate incorrectly overwritten memory. The
371 checking is fairly extensive, and will slow down execution
372 noticeably. Calling malloc_stats or mallinfo with DEBUG set will
373 attempt to check every non-mmapped allocated and free chunk in the
374   course of computing the summaries. (By nature, mmapped regions
375 cannot be checked very much automatically.)
376
377 Setting DEBUG may also be helpful if you are trying to modify
378 this code. The assertions in the check routines spell out in more
379 detail the assumptions and invariants underlying the algorithms.
380
381 */
382
383 #ifdef MALLOC_DEBUG
384 #define DEBUG 1
385 #define DEBUG1 1
386 #define DEBUG2 1
387 #define DEBUG3 1
388 #endif
389
390 #if DEBUG
391 #include <assert.h>
392 #else
393 #define assert(x) ((void)0)
394 #endif
395
396 /*
397 INTERNAL_SIZE_T is the word-size used for internal bookkeeping
398 of chunk sizes. On a 64-bit machine, you can reduce malloc
399 overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
400 at the expense of not being able to handle requests greater than
401 2^31. This limitation is hardly ever a concern; you are encouraged
402 to set this. However, the default version is the same as size_t.
403 */
404
405 #ifndef INTERNAL_SIZE_T
406 #define INTERNAL_SIZE_T size_t
407 #endif
408
409 /*
410 realloc_ZERO_BYTES_FREES should be set if a call to
411 realloc with zero bytes should be the same as a call to free.
412 Some people think it should. Otherwise, since this malloc
413 returns a unique pointer for malloc(0), so does realloc(p, 0).
414 */
415
416
417 /* #define realloc_ZERO_BYTES_FREES */
418
419
420 /*
421 WIN32 causes an emulation of sbrk to be compiled in
422 mmap-based options are not currently supported in WIN32.
423 */
424
425 /* #define WIN32 */
426 #ifdef WIN32
427 #define MORECORE wsbrk
428 #define HAVE_MMAP 0
429 #endif
430
431
432 /*
433 HAVE_memcpy should be defined if you are not otherwise using
434 ANSI STD C, but still have memcpy and memset in your C library
435 and want to use them in calloc and realloc. Otherwise simple
436 macro versions are defined here.
437
438 USE_memcpy should be defined as 1 if you actually want to
439 have memset and memcpy called. People report that the macro
440 versions are often enough faster than libc versions on many
441 systems that it is better to use them.
442
443 */
444
445 #define HAVE_memcpy
446
447 #ifndef USE_memcpy
448 #ifdef HAVE_memcpy
449 #define USE_memcpy 1
450 #else
451 #define USE_memcpy 0
452 #endif
453 #endif
454
455 #if (__STD_C || defined(HAVE_memcpy))
456
457 #if __STD_C
458 void* memset(void*, int, size_t);
459 void* memcpy(void*, const void*, size_t);
460 #else
461 Void_t* memset();
462 Void_t* memcpy();
463 #endif
464 #endif
465
466 #ifndef DEBUG3
467
468 #if USE_memcpy
469
470 /* The following macros are only invoked with (2n+1)-multiples of
471 INTERNAL_SIZE_T units, with a positive integer n. This is exploited
472 for fast inline execution when n is small. */
473
474 #define malloc_ZERO(charp, nbytes) \
475 do { \
476 INTERNAL_SIZE_T mzsz = (nbytes); \
477 if(mzsz <= 9*sizeof(mzsz)) { \
478 INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp); \
479 if(mzsz >= 5*sizeof(mzsz)) { *mz++ = 0; \
480 *mz++ = 0; \
481 if(mzsz >= 7*sizeof(mzsz)) { *mz++ = 0; \
482 *mz++ = 0; \
483 if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0; \
484 *mz++ = 0; }}} \
485 *mz++ = 0; \
486 *mz++ = 0; \
487 *mz = 0; \
488 } else memset((charp), 0, mzsz); \
489 } while(0)
490
491 #define malloc_COPY(dest,src,nbytes) \
492 do { \
493 INTERNAL_SIZE_T mcsz = (nbytes); \
494 if(mcsz <= 9*sizeof(mcsz)) { \
495 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src); \
496 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest); \
497 if(mcsz >= 5*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
498 *mcdst++ = *mcsrc++; \
499 if(mcsz >= 7*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
500 *mcdst++ = *mcsrc++; \
501 if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
502 *mcdst++ = *mcsrc++; }}} \
503 *mcdst++ = *mcsrc++; \
504 *mcdst++ = *mcsrc++; \
505 *mcdst = *mcsrc ; \
506 } else memcpy(dest, src, mcsz); \
507 } while(0)
508
509 #else /* !USE_memcpy */
510
511 /* Use Duff's device for good zeroing/copying performance. */
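/* How the device works here: the switch jumps into the middle of the
   unrolled loop so that the first pass handles (nbytes/wordsize) % 8 words
   (or a full 8 when that remainder is zero); every later pass around the
   for(;;) handles 8 more words, with mcn counting the remaining full
   groups of 8. */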
512
513 #define malloc_ZERO(charp, nbytes) \
514 do { \
515 INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp); \
516 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
517 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
518 switch (mctmp) { \
519 case 0: for(;;) { *mzp++ = 0; \
520 case 7: *mzp++ = 0; \
521 case 6: *mzp++ = 0; \
522 case 5: *mzp++ = 0; \
523 case 4: *mzp++ = 0; \
524 case 3: *mzp++ = 0; \
525 case 2: *mzp++ = 0; \
526 case 1: *mzp++ = 0; if(mcn <= 0) break; mcn--; } \
527 } \
528 } while(0)
529
530 #define malloc_COPY(dest,src,nbytes) \
531 do { \
532 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src; \
533 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest; \
534 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
535 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
536 switch (mctmp) { \
537 case 0: for(;;) { *mcdst++ = *mcsrc++; \
538 case 7: *mcdst++ = *mcsrc++; \
539 case 6: *mcdst++ = *mcsrc++; \
540 case 5: *mcdst++ = *mcsrc++; \
541 case 4: *mcdst++ = *mcsrc++; \
542 case 3: *mcdst++ = *mcsrc++; \
543 case 2: *mcdst++ = *mcsrc++; \
544 case 1: *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; } \
545 } \
546 } while(0)
547
548 #endif
549
550 #else /* DEBUG3 */
551
552 /* The trailing moat invalidates the above assumption about the nbytes
553 parameter to malloc_ZERO and malloc_COPY. */
554
555 #define malloc_ZERO(charp, nbytes) \
556 do { \
557 char *mzp = (char *)(charp); \
558 long mzn = (nbytes); \
559 while (mzn--) \
560 *mzp++ = '\0'; \
561 } while(0)
562
563 #define malloc_COPY(dest,src,nbytes) \
564 do { \
565 char *mcsrc = (char *)(src); \
566 char *mcdst = (char *)(dest); \
567 long mcn = (nbytes); \
568 while (mcn--) \
569 *mcdst++ = *mcsrc++; \
570 } while(0)
571
572 #endif /* DEBUG3 */
573
574 /*
575 Define HAVE_MMAP to optionally make malloc() use mmap() to
576 allocate very large blocks. These will be returned to the
577 operating system immediately after a free().
578 */
579
580 #ifndef HAVE_MMAP
581 #define HAVE_MMAP 1
582 #endif
583
584 /*
585 Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
586 large blocks. This is currently only possible on Linux with
587 kernel versions newer than 1.3.77.
588 */
589
590 #ifndef HAVE_MREMAP
591 #ifdef INTERNAL_LINUX_C_LIB
592 #define HAVE_MREMAP 1
593 #else
594 #define HAVE_MREMAP 0
595 #endif
596 #endif
597
598 #if HAVE_MMAP
599
600 #include <unistd.h>
601 #include <fcntl.h>
602 #include <sys/mman.h>
603
604 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
605 #define MAP_ANONYMOUS MAP_ANON
606 #endif
607
608 #endif /* HAVE_MMAP */
609
610 /*
611 Access to system page size. To the extent possible, this malloc
612 manages memory from the system in page-size units.
613
614 The following mechanics for getpagesize were adapted from
615 bsd/gnu getpagesize.h
616 */
617
618 #ifndef LACKS_UNISTD_H
619 # include <unistd.h>
620 #endif
621
622 #ifndef malloc_getpagesize
623 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
624 # ifndef _SC_PAGE_SIZE
625 # define _SC_PAGE_SIZE _SC_PAGESIZE
626 # endif
627 # endif
628 # ifdef _SC_PAGE_SIZE
629 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
630 # else
631 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
632 # if __STD_C
633 extern size_t getpagesize(void);
634 # else
635 extern size_t getpagesize();
636 # endif
637 # define malloc_getpagesize getpagesize()
638 # else
639 # include <sys/param.h>
640 # ifdef EXEC_PAGESIZE
641 # define malloc_getpagesize EXEC_PAGESIZE
642 # else
643 # ifdef NBPG
644 # ifndef CLSIZE
645 # define malloc_getpagesize NBPG
646 # else
647 # define malloc_getpagesize (NBPG * CLSIZE)
648 # endif
649 # else
650 # ifdef NBPC
651 # define malloc_getpagesize NBPC
652 # else
653 # ifdef PAGESIZE
654 # define malloc_getpagesize PAGESIZE
655 # else
656 # define malloc_getpagesize (4096) /* just guess */
657 # endif
658 # endif
659 # endif
660 # endif
661 # endif
662 # endif
663 #endif
664
665
666
667 /*
668
669 This version of malloc supports the standard SVID/XPG mallinfo
670 routine that returns a struct containing the same kind of
671 information you can get from malloc_stats. It should work on
672 any SVID/XPG compliant system that has a /usr/include/malloc.h
673 defining struct mallinfo. (If you'd like to install such a thing
674 yourself, cut out the preliminary declarations as described above
675 and below and save them in a malloc.h file. But there's no
676 compelling reason to bother to do this.)
677
678 The main declaration needed is the mallinfo struct that is returned
679   (by-copy) by mallinfo().  The SVID/XPG mallinfo struct contains a
680 bunch of fields, most of which are not even meaningful in this
681   version of malloc. Some of these fields are instead filled by
682 mallinfo() with other numbers that might possibly be of interest.
683
684 HAVE_USR_INCLUDE_malloc_H should be set if you have a
685 /usr/include/malloc.h file that includes a declaration of struct
686 mallinfo. If so, it is included; else an SVID2/XPG2 compliant
687 version is declared below. These must be precisely the same for
688 mallinfo() to work.
689
690 */
691
692 /* #define HAVE_USR_INCLUDE_malloc_H */
693
694 #if HAVE_USR_INCLUDE_malloc_H
695 #include "/usr/include/malloc.h"
696 #else
697
698 /* SVID2/XPG mallinfo structure */
699
700 struct mallinfo {
701 int arena; /* total space allocated from system */
702 int ordblks; /* number of non-inuse chunks */
703 int smblks; /* unused -- always zero */
704 int hblks; /* number of mmapped regions */
705 int hblkhd; /* total space in mmapped regions */
706 int usmblks; /* unused -- always zero */
707 int fsmblks; /* unused -- always zero */
708 int uordblks; /* total allocated space */
709 int fordblks; /* total non-inuse space */
710 int keepcost; /* top-most, releasable (via malloc_trim) space */
711 };
712
713 /* SVID2/XPG mallopt options */
714
715 #define M_MXFAST 1 /* UNUSED in this malloc */
716 #define M_NLBLKS 2 /* UNUSED in this malloc */
717 #define M_GRAIN 3 /* UNUSED in this malloc */
718 #define M_KEEP 4 /* UNUSED in this malloc */
719
720 #endif
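
/*
  Illustrative sketch only (not compiled): one way a program using this
  malloc could read the statistics described above at run time.
*/
#if 0
static void show_malloc_stats (void)
{
  struct mallinfo mi = mallinfo ();

  fprintf (stderr, "arena=%d uordblks=%d fordblks=%d keepcost=%d\n",
           mi.arena, mi.uordblks, mi.fordblks, mi.keepcost);
  malloc_stats ();              /* brief summary on stderr */
}
#endif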
721
722 /* mallopt options that actually do something */
723
724 #define M_TRIM_THRESHOLD -1
725 #define M_TOP_PAD -2
726 #define M_MMAP_THRESHOLD -3
727 #define M_MMAP_MAX -4
728 #define M_SCANHEAP -5
729 #define M_FILL
730
731
732
733 #ifndef DEFAULT_TRIM_THRESHOLD
734 #define DEFAULT_TRIM_THRESHOLD (128 * 1024)
735 #endif
736
737 /*
738 M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
739 to keep before releasing via malloc_trim in free().
740
741 Automatic trimming is mainly useful in long-lived programs.
742 Because trimming via sbrk can be slow on some systems, and can
743 sometimes be wasteful (in cases where programs immediately
744       afterward allocate more large chunks), the value should be high
745 enough so that your overall system performance would improve by
746 releasing.
747
748 The trim threshold and the mmap control parameters (see below)
749 can be traded off with one another. Trimming and mmapping are
750 two different ways of releasing unused memory back to the
751 system. Between these two, it is often possible to keep
752 system-level demands of a long-lived program down to a bare
753 minimum. For example, in one test suite of sessions measuring
754 the XF86 X server on Linux, using a trim threshold of 128K and a
755 mmap threshold of 192K led to near-minimal long term resource
756 consumption.
757
758 If you are using this malloc in a long-lived program, it should
759 pay to experiment with these values. As a rough guide, you
760       might set it to a value close to the average size of a process
761 (program) running on your system. Releasing this much memory
762 would allow such a process to run in memory. Generally, it's
763       worth it to tune for trimming rather than memory mapping when a
764 program undergoes phases where several large chunks are
765 allocated and released in ways that can reuse each other's
766 storage, perhaps mixed with phases where there are no such
767 chunks at all. And in well-behaved long-lived programs,
768 controlling release of large blocks via trimming versus mapping
769 is usually faster.
770
771 However, in most programs, these parameters serve mainly as
772 protection against the system-level effects of carrying around
773 massive amounts of unneeded memory. Since frequent calls to
774 sbrk, mmap, and munmap otherwise degrade performance, the default
775 parameters are set to relatively high values that serve only as
776 safeguards.
777
778 The default trim value is high enough to cause trimming only in
779 fairly extreme (by current memory consumption standards) cases.
780 It must be greater than page size to have any useful effect. To
781       disable trimming completely, you can set it to (unsigned long)(-1).
782
783
784 */
785
786
787 #ifndef DEFAULT_TOP_PAD
788 #define DEFAULT_TOP_PAD (0)
789 #endif
790
791 /*
792 M_TOP_PAD is the amount of extra `padding' space to allocate or
793 retain whenever sbrk is called. It is used in two ways internally:
794
795 * When sbrk is called to extend the top of the arena to satisfy
796 a new malloc request, this much padding is added to the sbrk
797 request.
798
799 * When malloc_trim is called automatically from free(),
800 it is used as the `pad' argument.
801
802 In both cases, the actual amount of padding is rounded
803 so that the end of the arena is always a system page boundary.
804
805 The main reason for using padding is to avoid calling sbrk so
806 often. Having even a small pad greatly reduces the likelihood
807 that nearly every malloc request during program start-up (or
808 after trimming) will invoke sbrk, which needlessly wastes
809 time.
810
811 Automatic rounding-up to page-size units is normally sufficient
812 to avoid measurable overhead, so the default is 0. However, in
813 systems where sbrk is relatively slow, it can pay to increase
814 this value, at the expense of carrying around more memory than
815 the program needs.
816
817 */
818
819
820 #ifndef DEFAULT_MMAP_THRESHOLD
821 #define DEFAULT_MMAP_THRESHOLD (128 * 1024)
822 #endif
823
824 /*
825
826 M_MMAP_THRESHOLD is the request size threshold for using mmap()
827 to service a request. Requests of at least this size that cannot
828 be allocated using already-existing space will be serviced via mmap.
829 (If enough normal freed space already exists it is used instead.)
830
831 Using mmap segregates relatively large chunks of memory so that
832 they can be individually obtained and released from the host
833 system. A request serviced through mmap is never reused by any
834 other request (at least not directly; the system may just so
835 happen to remap successive requests to the same locations).
836
837 Segregating space in this way has the benefit that mmapped space
838 can ALWAYS be individually released back to the system, which
839 helps keep the system level memory demands of a long-lived
840 program low. Mapped memory can never become `locked' between
841 other chunks, as can happen with normally allocated chunks, which
842       means that even trimming via malloc_trim would not release them.
843
844 However, it has the disadvantages that:
845
846 1. The space cannot be reclaimed, consolidated, and then
847 used to service later requests, as happens with normal chunks.
848 2. It can lead to more wastage because of mmap page alignment
849 requirements
850 3. It causes malloc performance to be more dependent on host
851 system memory management support routines which may vary in
852 implementation quality and may impose arbitrary
853 limitations. Generally, servicing a request via normal
854 malloc steps is faster than going through a system's mmap.
855
856 All together, these considerations should lead you to use mmap
857 only for relatively large requests.
858
859
860 */
861
862
863
864 #ifndef DEFAULT_MMAP_MAX
865 #if HAVE_MMAP
866 #define DEFAULT_MMAP_MAX (64)
867 #else
868 #define DEFAULT_MMAP_MAX (0)
869 #endif
870 #endif
871
872 /*
873 M_MMAP_MAX is the maximum number of requests to simultaneously
874 service using mmap. This parameter exists because:
875
876 1. Some systems have a limited number of internal tables for
877 use by mmap.
878 2. In most systems, overreliance on mmap can degrade overall
879 performance.
880 3. If a program allocates many large regions, it is probably
881 better off using normal sbrk-based allocation routines that
882 can reclaim and reallocate normal heap memory. Using a
883 small value allows transition into this mode after the
884 first few allocations.
885
886 Setting to 0 disables all use of mmap. If HAVE_MMAP is not set,
887 the default value is 0, and attempts to set it to non-zero values
888 in mallopt will fail.
889 */
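
/*
  Illustrative sketch only (not compiled; the values are examples echoing
  the discussion above, not recommendations): all four parameters can be
  adjusted at run time through mallopt().
*/
#if 0
static void tune_malloc_example (void)
{
  mallopt (M_TRIM_THRESHOLD, 128 * 1024);  /* trim when >128K sits unused on top */
  mallopt (M_TOP_PAD, 16 * 1024);          /* pad each sbrk request by 16K */
  mallopt (M_MMAP_THRESHOLD, 192 * 1024);  /* mmap requests of 192K and larger */
  mallopt (M_MMAP_MAX, 64);                /* at most 64 simultaneous mmapped regions */
}
#endif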
890
891
892
893
894 /*
895
896 Special defines for linux libc
897
898 Except when compiled using these special defines for Linux libc
899 using weak aliases, this malloc is NOT designed to work in
900 multithreaded applications. No semaphores or other concurrency
901 control are provided to ensure that multiple malloc or free calls
902   don't run at the same time, which could be disastrous. A single
903 semaphore could be used across malloc, realloc, and free (which is
904 essentially the effect of the linux weak alias approach). It would
905 be hard to obtain finer granularity.
906
907 */
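
/*
  A minimal sketch of the single-lock idea mentioned above (hypothetical;
  this file provides no such wrapper, and pthread mutexes are only an
  assumed example of a lock): every entry point would simply be bracketed
  by the same lock.
*/
#if 0
#include <pthread.h>

static pthread_mutex_t malloc_lock = PTHREAD_MUTEX_INITIALIZER;

static Void_t* locked_malloc (size_t n)
{
  Void_t* p;
  pthread_mutex_lock (&malloc_lock);
  p = malloc (n);                 /* same pattern for realloc and free */
  pthread_mutex_unlock (&malloc_lock);
  return p;
}
#endif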
908
909
910 #ifdef INTERNAL_LINUX_C_LIB
911
912 #if __STD_C
913
914 Void_t * __default_morecore_init (ptrdiff_t);
915 Void_t *(*__morecore)(ptrdiff_t) = __default_morecore_init;
916
917 #else
918
919 Void_t * __default_morecore_init ();
920 Void_t *(*__morecore)() = __default_morecore_init;
921
922 #endif
923
924 #define MORECORE (*__morecore)
925 #define MORECORE_FAILURE 0
926 #define MORECORE_CLEARS 1
927
928 #else /* INTERNAL_LINUX_C_LIB */
929
930 #if __STD_C
931 /* extern Void_t* sbrk(ptrdiff_t);*/
932 #else
933 extern Void_t* sbrk();
934 #endif
935
936 #ifndef MORECORE
937 #define MORECORE sbrk
938 #endif
939
940 #ifndef MORECORE_FAILURE
941 #define MORECORE_FAILURE -1
942 #endif
943
944 #ifndef MORECORE_CLEARS
945 #define MORECORE_CLEARS 0
946 #endif
947
948 #endif /* INTERNAL_LINUX_C_LIB */
949
950 #if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__)
951
952 #define cALLOc __libc_calloc
953 #define fREe __libc_free
954 #define mALLOc __libc_malloc
955 #define mEMALIGn __libc_memalign
956 #define rEALLOc __libc_realloc
957 #define vALLOc __libc_valloc
958 #define pvALLOc __libc_pvalloc
959 #define mALLINFo __libc_mallinfo
960 #define mALLOPt __libc_mallopt
961
962 #pragma weak calloc = __libc_calloc
963 #pragma weak free = __libc_free
964 #pragma weak cfree = __libc_free
965 #pragma weak malloc = __libc_malloc
966 #pragma weak memalign = __libc_memalign
967 #pragma weak realloc = __libc_realloc
968 #pragma weak valloc = __libc_valloc
969 #pragma weak pvalloc = __libc_pvalloc
970 #pragma weak mallinfo = __libc_mallinfo
971 #pragma weak mallopt = __libc_mallopt
972
973 #else
974
975 #ifndef cALLOc
976 #define cALLOc calloc
977 #endif
978 #ifndef fREe
979 #define fREe free
980 #endif
981 #ifndef mALLOc
982 #define mALLOc malloc
983 #endif
984 #ifndef mEMALIGn
985 #define mEMALIGn memalign
986 #endif
987 #ifndef rEALLOc
988 #define rEALLOc realloc
989 #endif
990 #ifndef vALLOc
991 #define vALLOc valloc
992 #endif
993 #ifndef pvALLOc
994 #define pvALLOc pvalloc
995 #endif
996 #ifndef mALLINFo
997 #define mALLINFo mallinfo
998 #endif
999 #ifndef mALLOPt
1000 #define mALLOPt mallopt
1001 #endif
1002
1003 #endif
1004
1005 /* Public routines */
1006
1007 #ifdef DEBUG2
1008 #define malloc(size) malloc_dbg(size, __FILE__, __LINE__)
1009 #define free(p) free_dbg(p, __FILE__, __LINE__)
1010 #define realloc(p, size) realloc_dbg(p, size, __FILE__, __LINE__)
1011 #define calloc(n, size) calloc_dbg(n, size, __FILE__, __LINE__)
1012 #define memalign(align, size) memalign_dbg(align, size, __FILE__, __LINE__)
1013 #define valloc(size) valloc_dbg(size, __FILE__, __LINE__)
1014 #define pvalloc(size) pvalloc_dbg(size, __FILE__, __LINE__)
1015 #define cfree(p) cfree_dbg(p, __FILE__, __LINE__)
1016 #define malloc_trim(pad) malloc_trim_dbg(pad, __FILE__, __LINE__)
1017 #define malloc_usable_size(p) malloc_usable_size_dbg(p, __FILE__, __LINE__)
1018 #define malloc_stats(void) malloc_stats_dbg(__FILE__, __LINE__)
1019 #define mallopt(flag, val) mallopt_dbg(flag, val, __FILE__, __LINE__)
1020 #define mallinfo(void) mallinfo_dbg(__FILE__, __LINE__)
1021
1022 #if __STD_C
1023 Void_t* malloc_dbg(size_t, const char *, int);
1024 void free_dbg(Void_t*, const char *, int);
1025 Void_t* realloc_dbg(Void_t*, size_t, const char *, int);
1026 Void_t* calloc_dbg(size_t, size_t, const char *, int);
1027 Void_t* memalign_dbg(size_t, size_t, const char *, int);
1028 Void_t* valloc_dbg(size_t, const char *, int);
1029 Void_t* pvalloc_dbg(size_t, const char *, int);
1030 void cfree_dbg(Void_t*, const char *, int);
1031 int malloc_trim_dbg(size_t, const char *, int);
1032 size_t malloc_usable_size_dbg(Void_t*, const char *, int);
1033 void malloc_stats_dbg(const char *, int);
1034 int mallopt_dbg(int, int, const char *, int);
1035 struct mallinfo mallinfo_dbg(const char *, int);
1036 #else
1037 Void_t* malloc_dbg();
1038 void free_dbg();
1039 Void_t* realloc_dbg();
1040 Void_t* calloc_dbg();
1041 Void_t* memalign_dbg();
1042 Void_t* valloc_dbg();
1043 Void_t* pvalloc_dbg();
1044 void cfree_dbg();
1045 int malloc_trim_dbg();
1046 size_t malloc_usable_size_dbg();
1047 void malloc_stats_dbg();
1048 int mallopt_dbg();
1049 struct mallinfo mallinfo_dbg();
1050 #endif /* !__STD_C */
1051
1052 #else /* !DEBUG2 */
1053
1054 #if __STD_C
1055
1056 Void_t* mALLOc(size_t);
1057 void fREe(Void_t*);
1058 Void_t* rEALLOc(Void_t*, size_t);
1059 Void_t* cALLOc(size_t, size_t);
1060 Void_t* mEMALIGn(size_t, size_t);
1061 Void_t* vALLOc(size_t);
1062 Void_t* pvALLOc(size_t);
1063 void cfree(Void_t*);
1064 int malloc_trim(size_t);
1065 size_t malloc_usable_size(Void_t*);
1066 void malloc_stats(void);
1067 int mALLOPt(int, int);
1068 struct mallinfo mALLINFo(void);
1069 #else
1070 Void_t* mALLOc();
1071 void fREe();
1072 Void_t* rEALLOc();
1073 Void_t* cALLOc();
1074 Void_t* mEMALIGn();
1075 Void_t* vALLOc();
1076 Void_t* pvALLOc();
1077 void cfree();
1078 int malloc_trim();
1079 size_t malloc_usable_size();
1080 void malloc_stats();
1081 int mALLOPt();
1082 struct mallinfo mALLINFo();
1083 #endif
1084 #endif /* !DEBUG2 */
1085
1086 #ifdef __cplusplus
1087 }; /* end of extern "C" */
1088 #endif
1089
1090 /* ---------- To make a malloc.h, end cutting here ------------ */
1091
1092 #ifdef DEBUG2
1093
1094 #ifdef __cplusplus
1095 extern "C" {
1096 #endif
1097
1098 #undef malloc
1099 #undef free
1100 #undef realloc
1101 #undef calloc
1102 #undef memalign
1103 #undef valloc
1104 #undef pvalloc
1105 #undef cfree
1106 #undef malloc_trim
1107 #undef malloc_usable_size
1108 #undef malloc_stats
1109 #undef mallopt
1110 #undef mallinfo
1111
1112 #if __STD_C
1113 Void_t* mALLOc(size_t);
1114 void fREe(Void_t*);
1115 Void_t* rEALLOc(Void_t*, size_t);
1116 Void_t* cALLOc(size_t, size_t);
1117 Void_t* mEMALIGn(size_t, size_t);
1118 Void_t* vALLOc(size_t);
1119 Void_t* pvALLOc(size_t);
1120 void cfree(Void_t*);
1121 int malloc_trim(size_t);
1122 size_t malloc_usable_size(Void_t*);
1123 void malloc_stats(void);
1124 int mALLOPt(int, int);
1125 struct mallinfo mALLINFo(void);
1126 #else
1127 Void_t* mALLOc();
1128 void fREe();
1129 Void_t* rEALLOc();
1130 Void_t* cALLOc();
1131 Void_t* mEMALIGn();
1132 Void_t* vALLOc();
1133 Void_t* pvALLOc();
1134 void cfree();
1135 int malloc_trim();
1136 size_t malloc_usable_size();
1137 void malloc_stats();
1138 int mALLOPt();
1139 struct mallinfo mALLINFo();
1140 #endif
1141
1142 #include <ctype.h> /* isprint() */
1143 #ifdef DEBUG3
1144 #include <stdlib.h> /* atexit() */
1145 #endif
1146
1147 #ifdef __cplusplus
1148 }; /* end of extern "C" */
1149 #endif
1150
1151 #endif /* DEBUG2 */
1152
1153 /*
1154 Emulation of sbrk for WIN32
1155 All code within the ifdef WIN32 is untested by me.
1156 */
1157
1158
1159 #ifdef WIN32
1160
1161 #define AlignPage(add) (((add) + (malloc_getpagesize-1)) & \
1162 ~(malloc_getpagesize-1))
1163
1164 /* reserve 64MB to ensure large contiguous space */
1165 #define RESERVED_SIZE (1024*1024*64)
1166 #define NEXT_SIZE (2048*1024)
1167 #define TOP_MEMORY ((unsigned long)2*1024*1024*1024)
1168
1169 struct GmListElement;
1170 typedef struct GmListElement GmListElement;
1171
1172 struct GmListElement
1173 {
1174 GmListElement* next;
1175 void* base;
1176 };
1177
1178 static GmListElement* head = 0;
1179 static unsigned int gNextAddress = 0;
1180 static unsigned int gAddressBase = 0;
1181 static unsigned int gAllocatedSize = 0;
1182
1183 static
1184 GmListElement* makeGmListElement (void* bas)
1185 {
1186 GmListElement* this;
1187 this = (GmListElement*)(void*)LocalAlloc (0, sizeof (GmListElement));
1188 ASSERT (this);
1189 if (this)
1190 {
1191 this->base = bas;
1192 this->next = head;
1193 head = this;
1194 }
1195 return this;
1196 }
1197
1198 void gcleanup ()
1199 {
1200 BOOL rval;
1201 ASSERT ( (head == NULL) || (head->base == (void*)gAddressBase));
1202 if (gAddressBase && (gNextAddress - gAddressBase))
1203 {
1204 rval = VirtualFree ((void*)gAddressBase,
1205 gNextAddress - gAddressBase,
1206 MEM_DECOMMIT);
1207 ASSERT (rval);
1208 }
1209 while (head)
1210 {
1211 GmListElement* next = head->next;
1212 rval = VirtualFree (head->base, 0, MEM_RELEASE);
1213 ASSERT (rval);
1214 LocalFree (head);
1215 head = next;
1216 }
1217 }
1218
1219 static
1220 void* findRegion (void* start_address, unsigned long size)
1221 {
1222 MEMORY_BASIC_INFORMATION info;
1223 while ((unsigned long)start_address < TOP_MEMORY)
1224 {
1225 VirtualQuery (start_address, &info, sizeof (info));
1226 if (info.State != MEM_FREE)
1227 start_address = (char*)info.BaseAddress + info.RegionSize;
1228 else if (info.RegionSize >= size)
1229 return start_address;
1230 else
1231 start_address = (char*)info.BaseAddress + info.RegionSize;
1232 }
1233 return NULL;
1234
1235 }
1236
1237
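/* sbrk-style interface over VirtualAlloc: a positive size commits pages and
   returns the previous break, a negative size decommits back toward the
   reserved base, zero just returns the current break, and (void*)-1 signals
   failure, as with sbrk. */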
1238 void* wsbrk (long size)
1239 {
1240 void* tmp;
1241 if (size > 0)
1242 {
1243 if (gAddressBase == 0)
1244 {
1245 gAllocatedSize = max (RESERVED_SIZE, AlignPage (size));
1246 gNextAddress = gAddressBase =
1247 (unsigned int)VirtualAlloc (NULL, gAllocatedSize,
1248 MEM_RESERVE, PAGE_NOACCESS);
1249 } else if (AlignPage (gNextAddress + size) > (gAddressBase +
1250 gAllocatedSize))
1251 {
1252 long new_size = max (NEXT_SIZE, AlignPage (size));
1253 void* new_address = (void*)(gAddressBase+gAllocatedSize);
1254 do
1255 {
1256 new_address = findRegion (new_address, new_size);
1257
1258 if (new_address == 0)
1259 return (void*)-1;
1260
1261 gAddressBase = gNextAddress =
1262 (unsigned int)VirtualAlloc (new_address, new_size,
1263 MEM_RESERVE, PAGE_NOACCESS);
1264 // repeat in case of race condition
1265 // The region that we found has been snagged
1266 // by another thread
1267 }
1268 while (gAddressBase == 0);
1269
1270 ASSERT (new_address == (void*)gAddressBase);
1271
1272 gAllocatedSize = new_size;
1273
1274 if (!makeGmListElement ((void*)gAddressBase))
1275 return (void*)-1;
1276 }
1277 if ((size + gNextAddress) > AlignPage (gNextAddress))
1278 {
1279 void* res;
1280 res = VirtualAlloc ((void*)AlignPage (gNextAddress),
1281 (size + gNextAddress -
1282 AlignPage (gNextAddress)),
1283 MEM_COMMIT, PAGE_READWRITE);
1284 if (res == 0)
1285 return (void*)-1;
1286 }
1287 tmp = (void*)gNextAddress;
1288 gNextAddress = (unsigned int)tmp + size;
1289 return tmp;
1290 }
1291 else if (size < 0)
1292 {
1293 unsigned int alignedGoal = AlignPage (gNextAddress + size);
1294 /* Trim by releasing the virtual memory */
1295 if (alignedGoal >= gAddressBase)
1296 {
1297 VirtualFree ((void*)alignedGoal, gNextAddress - alignedGoal,
1298 MEM_DECOMMIT);
1299 gNextAddress = gNextAddress + size;
1300 return (void*)gNextAddress;
1301 }
1302 else
1303 {
1304 VirtualFree ((void*)gAddressBase, gNextAddress - gAddressBase,
1305 MEM_DECOMMIT);
1306 gNextAddress = gAddressBase;
1307 return (void*)-1;
1308 }
1309 }
1310 else
1311 {
1312 return (void*)gNextAddress;
1313 }
1314 }
1315
1316 #endif
1317
1318 \f
1319
1320 /*
1321 Type declarations
1322 */
1323
1324 #ifdef DEBUG3
1325 # define MOATWIDTH 4 /* number of guard bytes at each end of
1326 allocated region */
1327 # define MOATFILL 5 /* moat fill character */
1328 # define ALLOCFILL 1 /* fill char for allocated */
1329 # define FREEFILL 2 /* and freed regions */
1330 #endif
1331
1332 typedef struct malloc_chunk
1333 {
1334 INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
1335 INTERNAL_SIZE_T size; /* Size in bytes, including overhead. */
1336 struct malloc_chunk* fd; /* double links -- used only if free. */
1337 struct malloc_chunk* bk;
1338 #ifdef DEBUG3
1339 const char *file; /* file and */
1340 int line; /* line number of [re]allocation */
1341 size_t pad; /* nr pad bytes at mem end, excluding moat */
1342 int alloced; /* whether the chunk is allocated -- less prone
1343 to segv than inuse(chunk) */
1344 char moat[MOATWIDTH]; /* actual leading moat is last MOATWIDTH bytes
1345 of chunk header; those bytes may follow this
1346 field due to header alignment padding */
1347 #endif
1348 } Chunk;
1349
1350 typedef Chunk* mchunkptr;
1351
1352 /*
1353
1354 malloc_chunk details:
1355
1356 (The following includes lightly edited explanations by Colin Plumb.)
1357
1358 Chunks of memory are maintained using a `boundary tag' method as
1359 described in e.g., Knuth or Standish. (See the paper by Paul
1360 Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
1361 survey of such techniques.) Sizes of free chunks are stored both
1362 in the front of each chunk and at the end. This makes
1363 consolidating fragmented chunks into bigger chunks very fast. The
1364 size fields also hold bits representing whether chunks are free or
1365 in use.
1366
1367 An allocated chunk looks like this:
1368
1369
1370 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1371 | Size of previous chunk, if allocated | |
1372 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1373 | Size of chunk, in bytes |P|
1374 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1375 | User data starts here... .
1376 . .
1377 . (malloc_usable_space() bytes) .
1378 . |
1379 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1380 | Size of chunk |
1381 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1382
1383
1384 Where "chunk" is the front of the chunk for the purpose of most of
1385 the malloc code, but "mem" is the pointer that is returned to the
1386 user. "Nextchunk" is the beginning of the next contiguous chunk.
1387
1388     Chunks always begin on even word boundaries, so the mem portion
1389 (which is returned to the user) is also on an even word boundary, and
1390 thus double-word aligned.
1391
1392 Free chunks are stored in circular doubly-linked lists, and look like this:
1393
1394 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1395 | Size of previous chunk |
1396 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1397 `head:' | Size of chunk, in bytes |P|
1398 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1399 | Forward pointer to next chunk in list |
1400 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1401 | Back pointer to previous chunk in list |
1402 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1403 | Unused space (may be 0 bytes long) .
1404 . .
1405 . |
1406 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1407 `foot:' | Size of chunk, in bytes |
1408 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1409
1410 The P (PREV_INUSE) bit, stored in the unused low-order bit of the
1411 chunk size (which is always a multiple of two words), is an in-use
1412 bit for the *previous* chunk. If that bit is *clear*, then the
1413 word before the current chunk size contains the previous chunk
1414 size, and can be used to find the front of the previous chunk.
1415 (The very first chunk allocated always has this bit set,
1416 preventing access to non-existent (or non-owned) memory.)
1417
1418 Note that the `foot' of the current chunk is actually represented
1419 as the prev_size of the NEXT chunk. (This makes it easier to
1420 deal with alignments etc).
1421
1422 The two exceptions to all this are
1423
1424 1. The special chunk `top', which doesn't bother using the
1425 trailing size field since there is no
1426 next contiguous chunk that would have to index off it. (After
1427 initialization, `top' is forced to always exist. If it would
1428 become less than MINSIZE bytes long, it is replenished via
1429 malloc_extend_top.)
1430
1431 2. Chunks allocated via mmap, which have the second-lowest-order
1432 bit (IS_MMAPPED) set in their size fields. Because they are
1433 never merged or traversed from any other chunk, they have no
1434 foot size or inuse information.
1435
1436 Available chunks are kept in any of several places (all declared below):
1437
1438 * `av': An array of chunks serving as bin headers for consolidated
1439 chunks. Each bin is doubly linked. The bins are approximately
1440 proportionally (log) spaced. There are a lot of these bins
1441 (128). This may look excessive, but works very well in
1442 practice. All procedures maintain the invariant that no
1443 consolidated chunk physically borders another one. Chunks in
1444 bins are kept in size order, with ties going to the
1445 approximately least recently used chunk.
1446
1447 The chunks in each bin are maintained in decreasing sorted order by
1448 size. This is irrelevant for the small bins, which all contain
1449 the same-sized chunks, but facilitates best-fit allocation for
1450 larger chunks. (These lists are just sequential. Keeping them in
1451 order almost never requires enough traversal to warrant using
1452 fancier ordered data structures.) Chunks of the same size are
1453 linked with the most recently freed at the front, and allocations
1454 are taken from the back. This results in LRU or FIFO allocation
1455 order, which tends to give each chunk an equal opportunity to be
1456 consolidated with adjacent freed chunks, resulting in larger free
1457 chunks and less fragmentation.
1458
1459 * `top': The top-most available chunk (i.e., the one bordering the
1460 end of available memory) is treated specially. It is never
1461 included in any bin, is used only if no other chunk is
1462 available, and is released back to the system if it is very
1463 large (see M_TRIM_THRESHOLD).
1464
1465 * `last_remainder': A bin holding only the remainder of the
1466 most recently split (non-top) chunk. This bin is checked
1467 before other non-fitting chunks, so as to provide better
1468 locality for runs of sequentially allocated chunks.
1469
1470 * Implicitly, through the host system's memory mapping tables.
1471 If supported, requests greater than a threshold are usually
1472 serviced via calls to mmap, and then later released via munmap.
1473
1474 */
1475
1476
1477
1478 \f
1479
1480
1481 /* sizes, alignments */
1482
1483 #define SIZE_SZ sizeof(INTERNAL_SIZE_T)
1484 #define ALIGNMENT (SIZE_SZ + SIZE_SZ)
1485 #define ALIGN_MASK (ALIGNMENT - 1)
1486 #ifndef DEBUG3
1487 # define MEMOFFSET (2*SIZE_SZ)
1488 # define OVERHEAD SIZE_SZ
1489 # define MMAP_EXTRA SIZE_SZ /* for correct alignment */
1490 # define MINSIZE sizeof(Chunk)
1491 #else
1492 typedef union {
1493 char strut[(sizeof(Chunk) - 1) / ALIGNMENT + 1][ALIGNMENT];
1494 Chunk chunk;
1495 } PaddedChunk;
1496 # define MEMOFFSET sizeof(PaddedChunk)
1497 # define OVERHEAD (MEMOFFSET + MOATWIDTH)
1498 # define MMAP_EXTRA 0
1499 # define MINSIZE ((OVERHEAD + ALIGN_MASK) & ~ALIGN_MASK)
1500 #endif
1501
1502 /* conversion from malloc headers to user pointers, and back */
1503
1504 #define chunk2mem(p) ((Void_t*)((char*)(p) + MEMOFFSET))
1505 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - MEMOFFSET))
1506
1507 /* pad request bytes into a usable size, including overhead */
1508
1509 #define request2size(req) \
1510 ((long)((req) + OVERHEAD) < (long)MINSIZE ? MINSIZE : \
1511 ((req) + OVERHEAD + ALIGN_MASK) & ~ALIGN_MASK)
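/* For example, with the non-DEBUG3 layout and 4-byte pointers/size_t
   (OVERHEAD == 4, ALIGNMENT == 8, MINSIZE == 16):
       request2size(1)  == 16   padded up to the minimum chunk
       request2size(13) == 24   13 + 4 rounded up to a multiple of 8
       request2size(20) == 24   20 + 4 is already a multiple of 8      */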
1512
1513 /* Check if m has acceptable alignment */
1514
1515 #define aligned_OK(m) (((unsigned long)((m)) & ALIGN_MASK) == 0)
1516
1517
1518 \f
1519
1520 /*
1521 Physical chunk operations
1522 */
1523
1524
1525 /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
1526
1527 #define PREV_INUSE 0x1
1528
1529 /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
1530
1531 #define IS_MMAPPED 0x2
1532
1533 /* Bits to mask off when extracting size */
1534
1535 #define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
1536
1537
1538 /* Ptr to next physical malloc_chunk. */
1539
1540 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
1541
1542 /* Ptr to previous physical malloc_chunk */
1543
1544 #define prev_chunk(p)\
1545 ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
1546
1547
1548 /* Treat space at ptr + offset as a chunk */
1549
1550 #define chunk_at_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
1551
1552
1553 \f
1554
1555 /*
1556 Dealing with use bits
1557 */
1558
1559 /* extract p's inuse bit */
1560
1561 #define inuse(p)\
1562 ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
1563
1564 /* extract inuse bit of previous chunk */
1565
1566 #define prev_inuse(p) ((p)->size & PREV_INUSE)
1567
1568 /* check for mmap()'ed chunk */
1569
1570 #if HAVE_MMAP
1571 # define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
1572 #else
1573 # define chunk_is_mmapped(p) 0
1574 #endif
1575
1576 /* set/clear chunk as in use without otherwise disturbing */
1577
1578 #define set_inuse(p)\
1579 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
1580
1581 #define clear_inuse(p)\
1582 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
1583
1584 /* check/set/clear inuse bits in known places */
1585
1586 #define inuse_bit_at_offset(p, s)\
1587 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
1588
1589 #define set_inuse_bit_at_offset(p, s)\
1590 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
1591
1592 #define clear_inuse_bit_at_offset(p, s)\
1593 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
1594
1595
1596 \f
1597
1598 /*
1599 Dealing with size fields
1600 */
1601
1602 /* Get size, ignoring use bits */
1603
1604 #define chunksize(p) ((p)->size & ~(SIZE_BITS))
1605
1606 /* Set size at head, without disturbing its use bit */
1607
1608 #define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s)))
1609
1610 /* Set size/use ignoring previous bits in header */
1611
1612 #define set_head(p, s) ((p)->size = (s))
1613
1614 /* Set size at footer (only when chunk is not in use) */
1615
1616 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
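
/*
  Illustrative sketch only (not compiled, and not used by this allocator):
  the boundary-tag layout described earlier lets code step through
  physically contiguous chunks using the macros above.
*/
#if 0
static void walk_contiguous_chunks (mchunkptr p, mchunkptr end)
{
  while (p < end)
  {
    INTERNAL_SIZE_T sz = chunksize (p);   /* size with the flag bits masked off */
    fprintf (stderr, "chunk %p: %lu bytes, %s\n",
             (void *) p, (unsigned long) sz,
             inuse (p) ? "in use" : "free");
    p = next_chunk (p);                   /* advance by the size field */
  }
}
#endif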
1617
1618
1619 \f
1620
1621
1622 /*
1623 Bins
1624
1625 The bins, `av_' are an array of pairs of pointers serving as the
1626 heads of (initially empty) doubly-linked lists of chunks, laid out
1627 in a way so that each pair can be treated as if it were in a
1628 malloc_chunk. (This way, the fd/bk offsets for linking bin heads
1629 and chunks are the same).
1630
1631 Bins for sizes < 512 bytes contain chunks of all the same size, spaced
1632 8 bytes apart. Larger bins are approximately logarithmically
1633 spaced. (See the table below.) The `av_' array is never mentioned
1634 directly in the code, but instead via bin access macros.
1635
1636 Bin layout:
1637
1638 64 bins of size 8
1639 32 bins of size 64
1640 16 bins of size 512
1641 8 bins of size 4096
1642 4 bins of size 32768
1643 2 bins of size 262144
1644 1 bin of size what's left
1645
1646 There is actually a little bit of slop in the numbers in bin_index
1647 for the sake of speed. This makes no difference elsewhere.
1648
1649 The special chunks `top' and `last_remainder' get their own bins,
1650 (this is implemented via yet more trickery with the av_ array),
1651 although `top' is never properly linked to its bin since it is
1652 always handled specially.
1653
1654 */
1655
1656 #define NAV 128 /* number of bins */
1657
1658 typedef Chunk* mbinptr;
1659
1660 /* access macros */
1661
1662 #define bin_at(i) ((mbinptr)((char*)&(av_[2*(i) + 2]) - 2*SIZE_SZ))
1663 #define next_bin(b) ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
1664 #define prev_bin(b) ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))
1665
1666 /*
1667 The first 2 bins are never indexed. The corresponding av_ cells are instead
1668 used for bookkeeping. This is not to save space, but to simplify
1669 indexing, maintain locality, and avoid some initialization tests.
1670 */
1671
1672 #define top (bin_at(0)->fd) /* The topmost chunk */
1673 #define last_remainder (bin_at(1)) /* remainder from last split */
1674
1675
1676 /*
1677 Because top initially points to its own bin with initial
1678 zero size, thus forcing extension on the first malloc request,
1679 we avoid having any special code in malloc to check whether
1680 it even exists yet. But we still need to check in malloc_extend_top.
1681 */
1682
1683 #define initial_top ((mchunkptr)(bin_at(0)))
1684
1685 /* Helper macro to initialize bins */
1686
1687 #define IAV(i) bin_at(i), bin_at(i)
1688
1689 static mbinptr av_[NAV * 2 + 2] = {
1690 0, 0,
1691 IAV(0), IAV(1), IAV(2), IAV(3), IAV(4), IAV(5), IAV(6), IAV(7),
1692 IAV(8), IAV(9), IAV(10), IAV(11), IAV(12), IAV(13), IAV(14), IAV(15),
1693 IAV(16), IAV(17), IAV(18), IAV(19), IAV(20), IAV(21), IAV(22), IAV(23),
1694 IAV(24), IAV(25), IAV(26), IAV(27), IAV(28), IAV(29), IAV(30), IAV(31),
1695 IAV(32), IAV(33), IAV(34), IAV(35), IAV(36), IAV(37), IAV(38), IAV(39),
1696 IAV(40), IAV(41), IAV(42), IAV(43), IAV(44), IAV(45), IAV(46), IAV(47),
1697 IAV(48), IAV(49), IAV(50), IAV(51), IAV(52), IAV(53), IAV(54), IAV(55),
1698 IAV(56), IAV(57), IAV(58), IAV(59), IAV(60), IAV(61), IAV(62), IAV(63),
1699 IAV(64), IAV(65), IAV(66), IAV(67), IAV(68), IAV(69), IAV(70), IAV(71),
1700 IAV(72), IAV(73), IAV(74), IAV(75), IAV(76), IAV(77), IAV(78), IAV(79),
1701 IAV(80), IAV(81), IAV(82), IAV(83), IAV(84), IAV(85), IAV(86), IAV(87),
1702 IAV(88), IAV(89), IAV(90), IAV(91), IAV(92), IAV(93), IAV(94), IAV(95),
1703 IAV(96), IAV(97), IAV(98), IAV(99), IAV(100), IAV(101), IAV(102), IAV(103),
1704 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
1705 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
1706 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
1707 };
1708
1709 \f
1710
1711 /* field-extraction macros */
1712
1713 #define first(b) ((b)->fd)
1714 #define last(b) ((b)->bk)
1715
1716 /*
1717 Indexing into bins
1718 */
1719
1720 #define bin_index(sz) \
1721 (((((unsigned long)(sz)) >> 9) == 0) ? (((unsigned long)(sz)) >> 3): \
1722 ((((unsigned long)(sz)) >> 9) <= 4) ? 56 + (((unsigned long)(sz)) >> 6): \
1723 ((((unsigned long)(sz)) >> 9) <= 20) ? 91 + (((unsigned long)(sz)) >> 9): \
1724 ((((unsigned long)(sz)) >> 9) <= 84) ? 110 + (((unsigned long)(sz)) >> 12): \
1725 ((((unsigned long)(sz)) >> 9) <= 340) ? 119 + (((unsigned long)(sz)) >> 15): \
1726 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18): \
1727 126)
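
/*
 * Worked examples of bin_index() (sizes here are already padded chunk sizes):
 *
 *     bin_index(40)   ==  40 >> 3           ==   5   (a small bin)
 *     bin_index(600)  ==  56 + (600 >> 6)   ==  65
 *     bin_index(4096) ==  91 + (4096 >> 9)  ==  99
 *
 * which matches the layout table above: 8-byte-spaced bins below 512 bytes,
 * then 64-byte spacing, then 512-byte spacing, and so on.
 */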
1728 /*
1729 bins for chunks < 512 are all spaced 8 bytes apart, and hold
1730 identically sized chunks. This is exploited in malloc.
1731 */
1732
1733 #define MAX_SMALLBIN 63
1734 #define MAX_SMALLBIN_SIZE 512
1735 #define SMALLBIN_WIDTH 8
1736
1737 #define smallbin_index(sz) (((unsigned long)(sz)) >> 3)
1738
1739 /*
1740 Requests are `small' if both the corresponding and the next bin are small
1741 */
1742
1743 #define is_small_request(nb) ((nb) < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
1744
1745 \f
1746
1747 /*
1748 To help compensate for the large number of bins, a one-level index
1749 structure is used for bin-by-bin searching. `binblocks' is a
1750 one-word bitvector recording whether groups of BINBLOCKWIDTH bins
1751 have any (possibly) non-empty bins, so they can be skipped over
1752 all at once during traversals. The bits are NOT always
1753 cleared as soon as all bins in a block are empty, but instead only
1754 when all are noticed to be empty during traversal in malloc.
1755 */
1756
1757 #define BINBLOCKWIDTH 4 /* bins per block */
1758
1759 #define binblocks (bin_at(0)->size) /* bitvector of nonempty blocks */
1760
1761 /* bin<->block macros */
1762
1763 #define idx2binblock(ix) ((unsigned)1 << ((ix) / BINBLOCKWIDTH))
1764 #define mark_binblock(ii) (binblocks |= idx2binblock(ii))
1765 #define clear_binblock(ii) (binblocks &= ~(idx2binblock(ii)))
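
/*
 * Example: with BINBLOCKWIDTH == 4, bins 20..23 all share bit 5 of
 * binblocks, since idx2binblock(20) == 1 << (20/4) == 0x20. frontlink()
 * ensures the bit is set when it files a chunk into one of those bins; the
 * bit is only cleared when a malloc() traversal notices that the whole
 * block is empty.
 */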
1766
1767
1768 \f
1769
1770
1771 /* Other static bookkeeping data */
1772
1773 /* variables holding tunable values */
1774
1775 static unsigned long trim_threshold = DEFAULT_TRIM_THRESHOLD;
1776 static unsigned long top_pad = DEFAULT_TOP_PAD;
1777 static unsigned int n_mmaps_max = DEFAULT_MMAP_MAX;
1778 static unsigned long mmap_threshold = DEFAULT_MMAP_THRESHOLD;
1779 #ifdef DEBUG2
1780 static int scanheap = 1;
1781 #endif
1782
1783 /* The first value returned from sbrk */
1784 static char* sbrk_base = (char*)(-1);
1785
1786 /* The maximum memory obtained from system via sbrk */
1787 static unsigned long max_sbrked_mem = 0;
1788
1789 /* The maximum via either sbrk or mmap */
1790 static unsigned long max_total_mem = 0;
1791
1792 /* internal working copy of mallinfo */
1793 static struct mallinfo current_mallinfo = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
1794
1795 /* The total memory obtained from system via sbrk */
1796 #define sbrked_mem (current_mallinfo.arena)
1797
1798 /* Tracking mmaps */
1799
1800 static unsigned int n_mmaps = 0;
1801 static unsigned long mmapped_mem = 0;
1802 #if HAVE_MMAP
1803 static unsigned int max_n_mmaps = 0;
1804 static unsigned long max_mmapped_mem = 0;
1805 #endif
1806
1807 \f
1808
1809 /*
1810 Debugging support
1811 */
1812
1813 #if DEBUG
1814
1815 #ifndef DEBUG2
1816 # define unless(cond, err, p) assert(cond)
1817 #else
1818 # define unless(cond, err, p) do { if (!(cond)) malloc_err(err, p); } while (0)
1819
1820 /*
1821 * When debug_file is non-null, it and debug_line respectively contain the
1822 * file and line number of the current invocation of malloc(), calloc(),
1823 * realloc(), or free().
1824 */
1825 static const char *debug_file = NULL;
1826 static int debug_line;
1827
1828 /*
1829 * Avoid dereferencing invalid chunk.file pointers by tracking the range of
1830 * valid ones. Could add an "unallocated" flag to init_freed_chunk() for
1831 * more protection, but that's probably not necessary.
1832 */
1833 static const char *debug_file_min = (char *)~0;
1834 static const char *debug_file_max = NULL;
1835
1836 static char *itos(int n)
1837 {
1838 #define NDIGITS (sizeof(int) * 3)
1839 static char s[NDIGITS + 1];
1840 int i = NDIGITS;
1841 do {
1842 s[--i] = '0' + n % 10;
1843 n /= 10;
1844 } while (n);
1845 return s + i;
1846 #undef NDIGITS
1847 }
1848
1849 static int recurs = 0;
1850
1851 static void errprint(const char *file, int line, const char *err)
1852 {
1853 if (recurs++) {
1854 recurs--;
1855 return;
1856 }
1857
1858 if (file) {
1859 write(2, file, strlen(file));
1860 if (line) {
1861 write(2, ":", 1);
1862 write(2, itos(line), strlen(itos(line)));
1863 }
1864 write(2, ": ", 2);
1865 }
1866 write(2, err, strlen(err));
1867 write(2, "\n", 1);
1868 recurs--;
1869 }
1870
1871 static void malloc_err(const char *err, mchunkptr p)
1872 {
1873 /*
1874 * Display ERR on stderr, accompanying it with the caller's file and line
1875 * number if available. If P is non-null, also attempt to display the file
1876 * and line number at which P was most recently [re]allocated.
1877 *
1878 * This function's name begins with "malloc_" to make setting debugger
1879 * breakpoints here more convenient.
1880 */
1881 errprint(debug_file, debug_line, err);
1882
1883 # ifndef DEBUG3
1884 p = 0; /* avoid "unused param" warning */
1885 # else
1886 if (p && p->file &&
1887 /* avoid invalid pointers */
1888 debug_file_min &&
1889 p->file >= debug_file_min &&
1890 p->file <= debug_file_max &&
1891 /* try to avoid garbage file names */
1892 isprint(*p->file))
1893 errprint(p->file, p->line, "in block allocated here");
1894 # endif
1895 }
1896
1897 #undef malloc
1898 #undef free
1899 #undef realloc
1900 #undef memalign
1901 #undef valloc
1902 #undef pvalloc
1903 #undef calloc
1904 #undef cfree
1905 #undef malloc_trim
1906 #undef malloc_usable_size
1907 #undef malloc_stats
1908 #undef mallopt
1909 #undef mallinfo
1910
1911 static void malloc_update_mallinfo(void);
1912
1913 /*
1914 * Define front-end functions for all user-visible entry points that may
1915 * trigger error().
1916 */
1917 #define skel(retdecl, retassign, call, retstmt) \
1918 retdecl \
1919 debug_file = file; \
1920 debug_line = line; \
1921 if (debug_file < debug_file_min) \
1922 debug_file_min = debug_file; \
1923 if (debug_file > debug_file_max) \
1924 debug_file_max = debug_file; \
1925 if (scanheap) \
1926 malloc_update_mallinfo(); \
1927 retassign call; \
1928 if (scanheap) \
1929 malloc_update_mallinfo(); \
1930 debug_file = NULL; \
1931 retstmt
1932
1933 /*
1934 * The final letter of the names of the following macros is either r or v,
1935 * indicating that the macro handles functions with or without a return value,
1936 * respectively.
1937 */
1938 # define skelr(rettype, call) \
1939 skel(rettype ret;, ret = , call, return ret)
1940 /*
1941 * AIX's xlc compiler doesn't like empty macro args, so specify useless but
1942 * compilable retdecl, retassign, and retstmt args:
1943 */
1944 #define skelv(call) \
1945 skel(line += 0;, if (1), call, return)
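
/*
 * For illustration (hand-expanded, modulo whitespace), malloc_dbg() below
 * comes out as roughly:
 *
 *     Void_t* malloc_dbg(size_t bytes, const char *file, int line) {
 *         Void_t* ret;
 *         debug_file = file;
 *         debug_line = line;
 *         if (debug_file < debug_file_min)
 *             debug_file_min = debug_file;
 *         if (debug_file > debug_file_max)
 *             debug_file_max = debug_file;
 *         if (scanheap)
 *             malloc_update_mallinfo();
 *         ret = malloc(bytes);
 *         if (scanheap)
 *             malloc_update_mallinfo();
 *         debug_file = NULL;
 *         return ret;
 *     }
 */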
1946
1947 #define dbgargs const char *file, int line
1948
1949 /*
1950 * Front-end function definitions:
1951 */
1952 Void_t* malloc_dbg(size_t bytes, dbgargs) {
1953 skelr(Void_t*, malloc(bytes));
1954 }
1955 void free_dbg(Void_t *mem, dbgargs) {
1956 skelv(free(mem));
1957 }
1958 Void_t* realloc_dbg(Void_t *oldmem, size_t bytes, dbgargs) {
1959 skelr(Void_t*, realloc(oldmem, bytes));
1960 }
1961 Void_t* memalign_dbg(size_t alignment, size_t bytes, dbgargs) {
1962 skelr(Void_t*, memalign(alignment, bytes));
1963 }
1964 Void_t* valloc_dbg(size_t bytes, dbgargs) {
1965 skelr(Void_t*, valloc(bytes));
1966 }
1967 Void_t* pvalloc_dbg(size_t bytes, dbgargs) {
1968 skelr(Void_t*, pvalloc(bytes));
1969 }
1970 Void_t* calloc_dbg(size_t n, size_t elem_size, dbgargs) {
1971 skelr(Void_t*, calloc(n, elem_size));
1972 }
1973 void cfree_dbg(Void_t *mem, dbgargs) {
1974 skelv(cfree(mem));
1975 }
1976 int malloc_trim_dbg(size_t pad, dbgargs) {
1977 skelr(int, malloc_trim(pad));
1978 }
1979 size_t malloc_usable_size_dbg(Void_t *mem, dbgargs) {
1980 skelr(size_t, malloc_usable_size(mem));
1981 }
1982 void malloc_stats_dbg(dbgargs) {
1983 skelv(malloc_stats());
1984 }
1985 int mallopt_dbg(int flag, int value, dbgargs) {
1986 skelr(int, mallopt(flag, value));
1987 }
1988 struct mallinfo mallinfo_dbg(dbgargs) {
1989 skelr(struct mallinfo, mallinfo());
1990 }
1991
1992 #undef skel
1993 #undef skelr
1994 #undef skelv
1995 #undef dbgargs
1996
1997 #endif /* DEBUG2 */
1998
1999 /*
2000 These routines make a number of assertions about the states
2001 of data structures that should be true at all times. If any
2002 are not true, it's very likely that a user program has somehow
2003 trashed memory. (It's also possible that there is a coding error
2004 in malloc. In which case, please report it!)
2005 */
2006
2007 #ifdef DEBUG3
2008 static int memtest(void *s, int c, size_t n)
2009 {
2010 /*
2011 * Return whether the N-byte memory region starting at S consists
2012 * entirely of bytes with value C.
2013 */
2014 unsigned char *p = (unsigned char *)s;
2015 size_t i;
2016 for (i = 0; i < n; i++)
2017 if (p[i] != (unsigned char)c)
2018 return 0;
2019 return 1;
2020 }
2021 #endif /* DEBUG3 */
2022
2023 #ifndef DEBUG3
2024 #define check_moats(P)
2025 #else
2026 #define check_moats do_check_moats
2027 static void do_check_moats(mchunkptr p)
2028 {
2029 INTERNAL_SIZE_T sz = chunksize(p);
2030 unless(memtest((char *)chunk2mem(p) - MOATWIDTH, MOATFILL,
2031 MOATWIDTH), "region underflow", p);
2032 unless(memtest((char *)p + sz - MOATWIDTH - p->pad, MOATFILL,
2033 MOATWIDTH + p->pad), "region overflow", p);
2034 }
2035 #endif /* DEBUG3 */
2036
2037 #if __STD_C
2038 static void do_check_chunk(mchunkptr p)
2039 #else
2040 static void do_check_chunk(p) mchunkptr p;
2041 #endif
2042 {
2043 /* Try to ensure legal addresses before accessing any chunk fields, in the
2044 * hope of issuing an informative message rather than causing a segv.
2045 *
2046 * The following chunk_is_mmapped() call accesses p->size #if HAVE_MMAP.
2047 * This is unavoidable without maintaining a record of mmapped regions.
2048 */
2049 if (!chunk_is_mmapped(p))
2050 {
2051 INTERNAL_SIZE_T sz;
2052
2053 unless((char*)p >= sbrk_base, "chunk precedes sbrk_base", p);
2054 unless((char*)p + MINSIZE <= (char*)top + chunksize(top),
2055 "chunk past sbrk area", p);
2056
2057 sz = chunksize(p);
2058 if (p != top)
2059 unless((char*)p + sz <= (char*)top, "chunk extends beyond top", p);
2060 else
2061 unless((char*)p + sz <= sbrk_base + sbrked_mem,
2062 "chunk extends past sbrk area", p);
2063 }
2064 check_moats(p);
2065 }
2066
2067 #if __STD_C
2068 static void do_check_free_chunk(mchunkptr p)
2069 #else
2070 static void do_check_free_chunk(p) mchunkptr p;
2071 #endif
2072 {
2073 INTERNAL_SIZE_T sz = chunksize(p);
2074 mchunkptr next = chunk_at_offset(p, sz);
2075
2076 do_check_chunk(p);
2077
2078 /* Check whether it claims to be free ... */
2079 unless(!inuse(p), "free chunk marked inuse", p);
2080
2081 /* Unless a special marker, must have OK fields */
2082 if ((long)sz >= (long)MINSIZE)
2083 {
2084 unless((sz & ALIGN_MASK) == 0, "freed size defies alignment", p);
2085 unless(aligned_OK(chunk2mem(p)), "misaligned freed region", p);
2086 /* ... matching footer field */
2087 unless(next->prev_size == sz, "chunk size mismatch", p);
2088 /* ... and is fully consolidated */
2089 unless(prev_inuse(p), "free chunk not joined with prev", p);
2090 unless(next == top || inuse(next), "free chunk not joined with next", p);
2091
2092 /* ... and has minimally sane links */
2093 unless(p->fd->bk == p, "broken forward link", p);
2094 unless(p->bk->fd == p, "broken backward link", p);
2095 }
2096 else /* markers are always of size SIZE_SZ */
2097 unless(sz == SIZE_SZ, "invalid small chunk size", p);
2098 }
2099
2100 #if __STD_C
2101 static void do_check_inuse_chunk(mchunkptr p)
2102 #else
2103 static void do_check_inuse_chunk(p) mchunkptr p;
2104 #endif
2105 {
2106 mchunkptr next;
2107 do_check_chunk(p);
2108
2109 if (chunk_is_mmapped(p))
2110 return;
2111
2112 /* Check whether it claims to be in use ... */
2113 #ifdef DEBUG3
2114 unless(p->alloced, "memory not allocated", p);
2115 #endif
2116 unless(inuse(p), "memory not allocated", p);
2117
2118 /* ... and is surrounded by OK chunks.
2119 Since more can be checked for free chunks than for inuse ones,
2120 when an inuse chunk borders free chunks and debugging is on, it's worth checking them too.
2121 */
2122 if (!prev_inuse(p))
2123 {
2124 mchunkptr prv = prev_chunk(p);
2125 unless(next_chunk(prv) == p, "prev link scrambled", p);
2126 do_check_free_chunk(prv);
2127 }
2128 next = next_chunk(p);
2129 if (next == top)
2130 {
2131 unless(prev_inuse(next), "top chunk wrongly thinks prev is unused", p);
2132 unless(chunksize(next) >= MINSIZE, "top chunk too small", p);
2133 }
2134 else if (!inuse(next))
2135 do_check_free_chunk(next);
2136 }
2137
2138 #if __STD_C
2139 static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
2140 #else
2141 static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
2142 #endif
2143 {
2144 INTERNAL_SIZE_T sz = chunksize(p);
2145 long room = sz - s;
2146
2147 do_check_inuse_chunk(p);
2148
2149 /* Legal size ... */
2150 unless((long)sz >= (long)MINSIZE, "chunk size too small", p);
2151 unless((sz & ALIGN_MASK) == 0, "malloced size defies alignment", p);
2152 unless(room >= 0, "chunk size too small for contents", p);
2153 unless(room < (long)MINSIZE, "chunk size leaves too much spare room", p);
2154
2155 /* ... and alignment */
2156 unless(aligned_OK(chunk2mem(p)), "misaligned malloced region", p);
2157
2158
2159 /* ... and was allocated at front of an available chunk */
2160 unless(prev_inuse(p), "malloced from the middle of a free chunk", p);
2161 }
2162
2163 #ifdef DEBUG3
2164 static void init_alloced_chunk(mchunkptr p, size_t bytes)
2165 {
2166 Void_t* mem = chunk2mem(p);
2167 p->file = debug_file;
2168 p->line = debug_line;
2169 p->pad = chunksize(p) - OVERHEAD - bytes;
2170 p->alloced = 1;
2171 memset((char *)mem + bytes, MOATFILL, p->pad + MOATWIDTH);
2172 }
2173
2174 static void do_init_malloced_chunk(mchunkptr p, size_t bytes)
2175 {
2176 Void_t* mem = chunk2mem(p);
2177 init_alloced_chunk(p, bytes);
2178 memset((char *)mem - MOATWIDTH, MOATFILL, MOATWIDTH);
2179 memset(mem, ALLOCFILL, bytes);
2180 }
2181
2182 static void do_init_realloced_chunk(mchunkptr p, size_t bytes,
2183 INTERNAL_SIZE_T oldsize)
2184 {
2185 Void_t* mem = chunk2mem(p);
2186 INTERNAL_SIZE_T newsize = chunksize(p);
2187 init_alloced_chunk(p, bytes);
2188 if (oldsize < newsize)
2189 /* This incorrectly leaves the leading pad area of the old trailing moat
2190 * set to MOATFILL rather than ALLOCFILL. An alternative is to save the
2191 * old p->pad in rEALLOc() below and pass it to this function.
2192 */
2193 memset((char *)mem + oldsize - OVERHEAD, ALLOCFILL,
2194 bytes - (oldsize - OVERHEAD));
2195 }
2196
2197 static void do_check_freefill(mchunkptr p, long newsize,
2198 INTERNAL_SIZE_T oldsize)
2199 {
2200 /* The first newsize bytes of oldsize-byte chunk p are about to be
2201 * allocated. Issue a warning if any freefill locations in p that are about
2202 * to be overwritten do not contain the character FREEFILL.
2203 */
2204 size_t bytes, maxbytes;
2205 if (newsize <= 0)
2206 return;
2207 bytes = newsize - MEMOFFSET /* don't check p's header */
2208 + MEMOFFSET; /* header of split-off remainder */
2209 maxbytes = oldsize - OVERHEAD;
2210 if (bytes > maxbytes)
2211 bytes = maxbytes;
2212 unless(memtest(chunk2mem(p), FREEFILL, bytes),
2213 "detected write to freed region", p);
2214 }
2215
2216 static void do_init_freed_chunk(mchunkptr p, INTERNAL_SIZE_T freehead,
2217 INTERNAL_SIZE_T freetail)
2218 {
2219 /* freehead and freetail are the number of bytes at the beginning of p and
2220 * end of p respectively that should already be initialized as free regions.
2221 */
2222 Void_t* mem = chunk2mem(p);
2223 size_t size = chunksize(p);
2224 size_t bytes = size - OVERHEAD;
2225 p->pad = 0;
2226 p->alloced = 0;
2227 memset((char *)mem - MOATWIDTH, MOATFILL, MOATWIDTH);
2228 memset((char *)mem + bytes, MOATFILL, MOATWIDTH);
2229
2230 /* To avoid terrible O(n^2) performance when free() repeatedly grows a free
2231 * chunk, it's important not to free-fill regions that are already
2232 * free-filled.
2233 */
2234 if (freehead + freetail < size) {
2235 Void_t* start = !freehead ? mem : (char *)p + freehead - MOATWIDTH;
2236 size_t len = (char *)p + size - (char *)start -
2237 (!freetail ? MOATWIDTH : freetail - OVERHEAD);
2238 memset(start, FREEFILL, len);
2239 }
2240 }
2241
2242 static void do_init_freeable_chunk(mchunkptr p)
2243 {
2244 /* Arrange for the subsequent fREe(p) not to generate any warnings. */
2245 init_alloced_chunk(p, chunksize(p) - OVERHEAD);
2246 memset((char *)chunk2mem(p) - MOATWIDTH, MOATFILL, MOATWIDTH);
2247 }
2248
2249 static void do_maximize_chunk(mchunkptr p)
2250 {
2251 if (p->pad) {
2252 Void_t* mem = chunk2mem(p);
2253 size_t bytes = chunksize(p) - OVERHEAD - p->pad;
2254 memset((char *)mem + bytes, ALLOCFILL, p->pad);
2255 p->pad = 0;
2256 }
2257 }
2258
2259 static int do_check_init(void)
2260 {
2261 /* Called from the first invocation of malloc_extend_top(), as detected by
2262 * sbrk_base == -1. Return whether this function allocated any memory.
2263 */
2264 static int state = 0; /* 1 => initializing, 2 => initialized */
2265 if (state == 1)
2266 return 0;
2267 unless(state == 0, "multiple calls to check_init", NULL);
2268 state++;
2269 atexit(malloc_update_mallinfo); /* calls malloc on WinNT */
2270 return sbrk_base != (char *)-1;
2271 }
2272 #endif /* DEBUG3 */
2273
2274 static mchunkptr lowest_chunk;
2275
2276 #define check_free_chunk(P) do_check_free_chunk(P)
2277 #define check_inuse_chunk(P) do_check_inuse_chunk(P)
2278 #define check_chunk(P) do_check_chunk(P)
2279 #define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N)
2280 #else /* !DEBUG */
2281 #define check_free_chunk(P)
2282 #define check_inuse_chunk(P)
2283 #define check_chunk(P)
2284 #define check_malloced_chunk(P,N)
2285 #endif /* !DEBUG */
2286
2287 #ifdef DEBUG3
2288 #define check_init do_check_init
2289 #define init_malloced_chunk do_init_malloced_chunk
2290 #define init_realloced_chunk do_init_realloced_chunk
2291 #define check_freefill do_check_freefill
2292 #define init_freed_chunk do_init_freed_chunk
2293 #define init_freeable_chunk do_init_freeable_chunk
2294 #define maximize_chunk do_maximize_chunk
2295 #else
2296 #define check_init() 0
2297 #define init_malloced_chunk(P,B)
2298 #define init_realloced_chunk(P,B,O)
2299 #define check_freefill(P,N,O)
2300 #define init_freed_chunk(P,H,T)
2301 #define init_freeable_chunk(P)
2302 #define maximize_chunk(P)
2303 #endif /* !DEBUG3 */
2304
2305 \f
2306
2307 /*
2308 Macro-based internal utilities
2309 */
2310
2311
2312 /*
2313 Linking chunks in bin lists.
2314 Call these only with variables, not arbitrary expressions, as arguments.
2315 */
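
/*
 * Hypothetical illustration: the parameters are substituted textually, read
 * more than once, and (for BK/FD/IDX) assigned to, so only plain variables
 * are safe as arguments:
 *
 *     mchunkptr bck, fwd;
 *     unlink(p, bck, fwd);               -- fine
 *     unlink(pop_pending(), bck, fwd);   -- wrong: the (hypothetical) call
 *                                           would be expanded and evaluated
 *                                           twice
 */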
2316
2317 /*
2318 Place chunk p of size s in its bin, in size order,
2319 putting it ahead of others of same size.
2320 */
2321
2322
2323 #define frontlink(P, S, IDX, BK, FD) \
2324 { \
2325 if (S < MAX_SMALLBIN_SIZE) \
2326 { \
2327 IDX = smallbin_index(S); \
2328 mark_binblock(IDX); \
2329 BK = bin_at(IDX); \
2330 FD = BK->fd; \
2331 P->bk = BK; \
2332 P->fd = FD; \
2333 FD->bk = BK->fd = P; \
2334 } \
2335 else \
2336 { \
2337 IDX = bin_index(S); \
2338 BK = bin_at(IDX); \
2339 FD = BK->fd; \
2340 if (FD == BK) mark_binblock(IDX); \
2341 else \
2342 { \
2343 while (FD != BK && S < chunksize(FD)) FD = FD->fd; \
2344 BK = FD->bk; \
2345 } \
2346 P->bk = BK; \
2347 P->fd = FD; \
2348 FD->bk = BK->fd = P; \
2349 } \
2350 }
2351
2352
2353 /* take a chunk off a list */
2354
2355 #define unlink(P, BK, FD) \
2356 { \
2357 BK = P->bk; \
2358 FD = P->fd; \
2359 FD->bk = BK; \
2360 BK->fd = FD; \
2361 } \
2362
2363 /* Place p as the last remainder */
2364
2365 #define link_last_remainder(P) \
2366 { \
2367 last_remainder->fd = last_remainder->bk = P; \
2368 P->fd = P->bk = last_remainder; \
2369 }
2370
2371 /* Clear the last_remainder bin */
2372
2373 #define clear_last_remainder \
2374 (last_remainder->fd = last_remainder->bk = last_remainder)
2375
2376
2377
2378 \f
2379
2380
2381 /* Routines dealing with mmap(). */
2382
2383 #if HAVE_MMAP
2384
2385 #if __STD_C
2386 static mchunkptr mmap_chunk(size_t size)
2387 #else
2388 static mchunkptr mmap_chunk(size) size_t size;
2389 #endif
2390 {
2391 size_t page_mask = malloc_getpagesize - 1;
2392 mchunkptr p;
2393
2394 #ifndef MAP_ANONYMOUS
2395 static int fd = -1;
2396 #endif
2397
2398 if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */
2399
2400 size = (size + MMAP_EXTRA + page_mask) & ~page_mask;
2401
2402 #ifdef MAP_ANONYMOUS
2403 p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE,
2404 MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
2405 #else /* !MAP_ANONYMOUS */
2406 if (fd < 0)
2407 {
2408 fd = open("/dev/zero", O_RDWR);
2409 if(fd < 0) return 0;
2410 }
2411 p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
2412 #endif
2413
2414 if(p == (mchunkptr)-1) return 0;
2415
2416 n_mmaps++;
2417 if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
2418
2419 /* We demand that eight bytes into a page be 8-byte aligned. */
2420 assert(aligned_OK(chunk2mem(p)));
2421
2422 /* The offset to the start of the mmapped region is stored
2423 * in the prev_size field of the chunk; normally it is zero,
2424 * but that can be changed in memalign().
2425 */
2426 p->prev_size = 0;
2427 set_head(p, size|IS_MMAPPED);
2428
2429 mmapped_mem += size;
2430 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
2431 max_mmapped_mem = mmapped_mem;
2432 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
2433 max_total_mem = mmapped_mem + sbrked_mem;
2434 return p;
2435 }
2436
2437 #if __STD_C
2438 static void munmap_chunk(mchunkptr p)
2439 #else
2440 static void munmap_chunk(p) mchunkptr p;
2441 #endif
2442 {
2443 INTERNAL_SIZE_T size = chunksize(p);
2444 int ret;
2445
2446 assert (chunk_is_mmapped(p));
2447 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
2448 assert((n_mmaps > 0));
2449 assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
2450
2451 n_mmaps--;
2452 mmapped_mem -= (size + p->prev_size);
2453
2454 ret = munmap((char *)p - p->prev_size, size + p->prev_size);
2455
2456 /* munmap returns non-zero on failure */
2457 assert(ret == 0);
2458 }
2459
2460 #if HAVE_MREMAP
2461
2462 #if __STD_C
2463 static mchunkptr mremap_chunk(mchunkptr p, size_t new_size)
2464 #else
2465 static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
2466 #endif
2467 {
2468 size_t page_mask = malloc_getpagesize - 1;
2469 INTERNAL_SIZE_T offset = p->prev_size;
2470 INTERNAL_SIZE_T size = chunksize(p);
2471 char *cp;
2472
2473 assert (chunk_is_mmapped(p));
2474 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
2475 assert((n_mmaps > 0));
2476 assert(((size + offset) & (malloc_getpagesize-1)) == 0);
2477
2478 new_size = (new_size + offset + MMAP_EXTRA + page_mask) & ~page_mask;
2479
2480 cp = (char *)mremap((char *)p - offset, size + offset, new_size, 1);
2481
2482 if (cp == (char *)-1) return 0;
2483
2484 p = (mchunkptr)(cp + offset);
2485
2486 assert(aligned_OK(chunk2mem(p)));
2487
2488 assert(p->prev_size == offset);
2489 set_head(p, (new_size - offset)|IS_MMAPPED);
2490
2491 mmapped_mem -= size + offset;
2492 mmapped_mem += new_size;
2493 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
2494 max_mmapped_mem = mmapped_mem;
2495 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
2496 max_total_mem = mmapped_mem + sbrked_mem;
2497 return p;
2498 }
2499
2500 #endif /* HAVE_MREMAP */
2501
2502 #endif /* HAVE_MMAP */
2503
2504
2505 \f
2506
2507 /*
2508 Extend the top-most chunk by obtaining memory from system.
2509 Main interface to sbrk (but see also malloc_trim).
2510 */
2511
2512 #if __STD_C
2513 static void malloc_extend_top(INTERNAL_SIZE_T nb)
2514 #else
2515 static void malloc_extend_top(nb) INTERNAL_SIZE_T nb;
2516 #endif
2517 {
2518 char* lim; /* return value from sbrk */
2519 INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
2520 INTERNAL_SIZE_T correction; /* bytes for 2nd sbrk call */
2521 char* new_lim; /* return of 2nd sbrk call */
2522 INTERNAL_SIZE_T top_size; /* new size of top chunk */
2523
2524 mchunkptr old_top = top; /* Record state of old top */
2525 INTERNAL_SIZE_T old_top_size = chunksize(old_top);
2526 char* old_end = (char*)(chunk_at_offset(old_top, old_top_size));
2527
2528 /* Pad request with top_pad plus minimal overhead */
2529
2530 INTERNAL_SIZE_T sbrk_size = nb + top_pad + MINSIZE;
2531 unsigned long pagesz = malloc_getpagesize;
2532
2533 /* If not the first time through, round to preserve page boundary */
2534 /* Otherwise, we need to correct to a page size below anyway. */
2535 /* (We also correct below if there was an intervening foreign sbrk call.) */
2536
2537 if (sbrk_base != (char*)(-1))
2538 sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
2539
2540 else if (check_init()) {
2541 if (chunksize(top) - nb < (long)MINSIZE)
2542 malloc_extend_top(nb);
2543 return;
2544 }
2545
2546 lim = (char*)(MORECORE (sbrk_size));
2547
2548 /* Fail if sbrk failed or if a foreign sbrk call killed our space */
2549 if (lim == (char*)(MORECORE_FAILURE) ||
2550 (lim < old_end && old_top != initial_top))
2551 return;
2552
2553 sbrked_mem += sbrk_size;
2554
2555 if (lim == old_end) /* can just add bytes to current top */
2556 {
2557 top_size = sbrk_size + old_top_size;
2558 set_head(top, top_size | PREV_INUSE);
2559 }
2560 else
2561 {
2562 #ifdef SBRKDBG
2563 INTERNAL_SIZE_T padding = (char *)sbrk (0) - (lim + sbrk_size);
2564 sbrk_size += padding;
2565 sbrked_mem += padding;
2566 #endif
2567
2568 if (sbrk_base == (char*)(-1)) /* First time through. Record base */
2569 sbrk_base = lim;
2570 else /* Someone else called sbrk(). Count those bytes as sbrked_mem. */
2571 sbrked_mem += lim - (char*)old_end;
2572
2573 /* Guarantee alignment of first new chunk made from this space */
2574 front_misalign = (unsigned long)chunk2mem(lim) & ALIGN_MASK;
2575 if (front_misalign > 0)
2576 {
2577 correction = (ALIGNMENT) - front_misalign;
2578 lim += correction;
2579 }
2580 else
2581 correction = 0;
2582
2583 /* Guarantee the next brk will be at a page boundary */
2584 correction += pagesz - ((unsigned long)(lim + sbrk_size) & (pagesz - 1));
2585
2586 /* Allocate correction */
2587 new_lim = (char*)(MORECORE (correction));
2588 if (new_lim == (char*)(MORECORE_FAILURE)) return;
2589
2590 sbrked_mem += correction;
2591
2592 top = (mchunkptr)lim;
2593 top_size = new_lim - lim + correction;
2594 set_head(top, top_size | PREV_INUSE);
2595 #if DEBUG
2596 lowest_chunk = top;
2597 #endif
2598
2599 #ifdef OTHER_SBRKS
2600 if (old_top != initial_top)
2601 {
2602
2603 /* There must have been an intervening foreign sbrk call. */
2604 /* A double fencepost is necessary to prevent consolidation */
2605
2606 /* If not enough space to do this, then user did something very wrong */
2607 if (old_top_size < MINSIZE)
2608 {
2609 set_head(top, PREV_INUSE); /* will force null return from malloc */
2610 return;
2611 }
2612
2613 old_top_size -= 2*SIZE_SZ;
2614 chunk_at_offset(old_top, old_top_size )->size =
2615 SIZE_SZ|PREV_INUSE;
2616 chunk_at_offset(old_top, old_top_size + SIZE_SZ)->size =
2617 SIZE_SZ|PREV_INUSE;
2618 set_head_size(old_top, old_top_size);
2619 /* If possible, release the rest. */
2620 if (old_top_size >= MINSIZE) {
2621 init_freeable_chunk(old_top);
2622 fREe(chunk2mem(old_top));
2623 }
2624 }
2625 #endif /* OTHER_SBRKS */
2626 }
2627
2628 init_freed_chunk(top, old_top == initial_top ? old_top_size : 0, 0);
2629
2630 if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
2631 max_sbrked_mem = sbrked_mem;
2632 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
2633 max_total_mem = mmapped_mem + sbrked_mem;
2634
2635 /* We always land on a page boundary */
2636 assert(((unsigned long)((char*)top + top_size) & (pagesz - 1)) == 0);
2637 }
2638
2639
2640 \f
2641
2642 /* Main public routines */
2643
2644
2645 /*
2646 Malloc Algorithm:
2647
2648 The requested size is first converted into a usable form, `nb'.
2649 This currently means to add 4 bytes overhead plus possibly more to
2650 obtain 8-byte alignment and/or to obtain a size of at least
2651 MINSIZE (currently 16 bytes), the smallest allocatable size.
2652 (All fits are considered `exact' if they are within MINSIZE bytes.)
2653
2654 From there, the first of the following steps to succeed is taken:
2655
2656 1. The bin corresponding to the request size is scanned, and if
2657 a chunk of exactly the right size is found, it is taken.
2658
2659 2. The most recently remaindered chunk is used if it is big
2660 enough. This is a form of (roving) first fit, used only in
2661 the absence of exact fits. Runs of consecutive requests use
2662 the remainder of the chunk used for the previous such request
2663 whenever possible. This limited use of a first-fit style
2664 allocation strategy tends to give contiguous chunks
2665 coextensive lifetimes, which improves locality and can reduce
2666 fragmentation in the long run.
2667
2668 3. Other bins are scanned in increasing size order, using a
2669 chunk big enough to fulfill the request, and splitting off
2670 any remainder. This search is strictly by best-fit; i.e.,
2671 the smallest (with ties going to approximately the least
2672 recently used) chunk that fits is selected.
2673
2674 4. If large enough, the chunk bordering the end of memory
2675 (`top') is split off. (This use of `top' is in accord with
2676 the best-fit search rule. In effect, `top' is treated as
2677 larger (and thus less well fitting) than any other available
2678 chunk since it can be extended to be as large as necessary
2679 (up to system limitations).)
2680
2681 5. If the request size meets the mmap threshold and the
2682 system supports mmap, and there are few enough currently
2683 allocated mmapped regions, and a call to mmap succeeds,
2684 the request is allocated via direct memory mapping.
2685
2686 6. Otherwise, the top of memory is extended by
2687 obtaining more space from the system (normally using sbrk,
2688 but definable to anything else via the MORECORE macro).
2689 Memory is gathered from the system (in system page-sized
2690 units) in a way that allows chunks obtained across different
2691 sbrk calls to be consolidated, but does not require
2692 contiguous memory. Thus, it should be safe to intersperse
2693 mallocs with other sbrk calls.
2694
2695
2696 All allocations are made from the `lowest' part of any found
2697 chunk. (The implementation invariant is that prev_inuse is
2698 always true of any allocated chunk; i.e., that each allocated
2699 chunk borders either a previously allocated and still in-use chunk,
2700 or the base of its memory arena.)
2701
2702 */
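
/*
 * Rough sizing sketch (assuming a 32-bit build with debug moats disabled,
 * so the padded size is the request plus 4 bytes of overhead, rounded up to
 * a multiple of 8, with a 16-byte minimum): malloc(10) works with nb == 16,
 * while malloc(100) works with nb == 104. With DEBUG3 enabled the per-chunk
 * overhead is larger, so these numbers grow accordingly.
 */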
2703
2704 #if __STD_C
2705 Void_t* mALLOc(size_t bytes)
2706 #else
2707 Void_t* mALLOc(bytes) size_t bytes;
2708 #endif
2709 {
2710 mchunkptr victim; /* inspected/selected chunk */
2711 INTERNAL_SIZE_T victim_size; /* its size */
2712 int idx; /* index for bin traversal */
2713 mbinptr bin; /* associated bin */
2714 mchunkptr remainder; /* remainder from a split */
2715 long remainder_size; /* its size */
2716 int remainder_index; /* its bin index */
2717 unsigned long block; /* block traverser bit */
2718 int startidx; /* first bin of a traversed block */
2719 mchunkptr fwd; /* misc temp for linking */
2720 mchunkptr bck; /* misc temp for linking */
2721 mbinptr q; /* misc temp */
2722
2723 INTERNAL_SIZE_T nb = request2size(bytes); /* padded request size; */
2724
2725 /* Check for exact match in a bin */
2726
2727 if (is_small_request(nb)) /* Faster version for small requests */
2728 {
2729 idx = smallbin_index(nb);
2730
2731 /* No traversal or size check necessary for small bins. */
2732
2733 q = bin_at(idx);
2734 victim = last(q);
2735
2736 /* Also scan the next bin, since using a chunk from it would leave a remainder < MINSIZE */
2737 if (victim == q)
2738 {
2739 q = next_bin(q);
2740 victim = last(q);
2741 }
2742 if (victim != q)
2743 {
2744 victim_size = chunksize(victim);
2745 unlink(victim, bck, fwd);
2746 set_inuse_bit_at_offset(victim, victim_size);
2747 check_freefill(victim, victim_size, victim_size);
2748 init_malloced_chunk(victim, bytes);
2749 check_malloced_chunk(victim, nb);
2750
2751 return chunk2mem(victim);
2752 }
2753
2754 idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
2755
2756 }
2757 else
2758 {
2759 idx = bin_index(nb);
2760 bin = bin_at(idx);
2761
2762 for (victim = last(bin); victim != bin; victim = victim->bk)
2763 {
2764 victim_size = chunksize(victim);
2765 remainder_size = victim_size - nb;
2766
2767 if (remainder_size >= (long)MINSIZE) /* too big */
2768 {
2769 --idx; /* adjust to rescan below after checking last remainder */
2770 break;
2771 }
2772
2773 else if (remainder_size >= 0) /* exact fit */
2774 {
2775 unlink(victim, bck, fwd);
2776 set_inuse_bit_at_offset(victim, victim_size);
2777 check_freefill(victim, victim_size, victim_size);
2778 init_malloced_chunk(victim, bytes);
2779 check_malloced_chunk(victim, nb);
2780 return chunk2mem(victim);
2781 }
2782 }
2783
2784 ++idx;
2785
2786 }
2787
2788 /* Try to use the last split-off remainder */
2789
2790 if ( (victim = last_remainder->fd) != last_remainder)
2791 {
2792 victim_size = chunksize(victim);
2793 remainder_size = victim_size - nb;
2794
2795 if (remainder_size >= (long)MINSIZE) /* re-split */
2796 {
2797 remainder = chunk_at_offset(victim, nb);
2798 set_head(victim, nb | PREV_INUSE);
2799 check_freefill(victim, nb, victim_size);
2800 init_malloced_chunk(victim, bytes);
2801 link_last_remainder(remainder);
2802 set_head(remainder, remainder_size | PREV_INUSE);
2803 set_foot(remainder, remainder_size);
2804 init_freed_chunk(remainder, remainder_size, 0);
2805 check_malloced_chunk(victim, nb);
2806 return chunk2mem(victim);
2807 }
2808
2809 clear_last_remainder;
2810
2811 if (remainder_size >= 0) /* exhaust */
2812 {
2813 set_inuse_bit_at_offset(victim, victim_size);
2814 check_freefill(victim, victim_size, victim_size);
2815 init_malloced_chunk(victim, bytes);
2816 check_malloced_chunk(victim, nb);
2817 return chunk2mem(victim);
2818 }
2819
2820 /* Else place in bin */
2821
2822 frontlink(victim, victim_size, remainder_index, bck, fwd);
2823 }
2824
2825 /*
2826 If there are any possibly nonempty big-enough blocks,
2827 search for best fitting chunk by scanning bins in blockwidth units.
2828 */
2829
2830 if ( (block = idx2binblock(idx)) <= binblocks)
2831 {
2832
2833 /* Get to the first marked block */
2834
2835 if ( (block & binblocks) == 0)
2836 {
2837 /* force to an even block boundary */
2838 idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
2839 block <<= 1;
2840 while ((block & binblocks) == 0)
2841 {
2842 idx += BINBLOCKWIDTH;
2843 block <<= 1;
2844 }
2845 }
2846
2847 /* For each possibly nonempty block ... */
2848 for (;;)
2849 {
2850 startidx = idx; /* (track incomplete blocks) */
2851 q = bin = bin_at(idx);
2852
2853 /* For each bin in this block ... */
2854 do
2855 {
2856 /* Find and use first big enough chunk ... */
2857
2858 for (victim = last(bin); victim != bin; victim = victim->bk)
2859 {
2860 victim_size = chunksize(victim);
2861 remainder_size = victim_size - nb;
2862
2863 if (remainder_size >= (long)MINSIZE) /* split */
2864 {
2865 remainder = chunk_at_offset(victim, nb);
2866 set_head(victim, nb | PREV_INUSE);
2867 check_freefill(victim, nb, victim_size);
2868 unlink(victim, bck, fwd);
2869 init_malloced_chunk(victim, bytes);
2870 link_last_remainder(remainder);
2871 set_head(remainder, remainder_size | PREV_INUSE);
2872 set_foot(remainder, remainder_size);
2873 init_freed_chunk(remainder, remainder_size, 0);
2874 check_malloced_chunk(victim, nb);
2875 return chunk2mem(victim);
2876 }
2877
2878 else if (remainder_size >= 0) /* take */
2879 {
2880 check_freefill(victim, victim_size, victim_size);
2881 set_inuse_bit_at_offset(victim, victim_size);
2882 unlink(victim, bck, fwd);
2883 init_malloced_chunk(victim, bytes);
2884 check_malloced_chunk(victim, nb);
2885 return chunk2mem(victim);
2886 }
2887
2888 }
2889
2890 bin = next_bin(bin);
2891
2892 } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
2893
2894 /* Clear out the block bit. */
2895
2896 do /* Possibly backtrack to try to clear a partial block */
2897 {
2898 if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
2899 {
2900 binblocks &= ~block;
2901 break;
2902 }
2903 --startidx;
2904 q = prev_bin(q);
2905 } while (first(q) == q);
2906
2907 /* Get to the next possibly nonempty block */
2908
2909 if ( (block <<= 1) <= binblocks && (block != 0) )
2910 {
2911 while ((block & binblocks) == 0)
2912 {
2913 idx += BINBLOCKWIDTH;
2914 block <<= 1;
2915 }
2916 }
2917 else
2918 break;
2919 }
2920 }
2921
2922
2923 /* Try to use top chunk */
2924
2925 /* Require that there be a remainder, ensuring top always exists */
2926 if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
2927 {
2928
2929 #if HAVE_MMAP
2930 /* If big and would otherwise need to extend, try to use mmap instead */
2931 if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
2932 (victim = mmap_chunk(nb)) != 0) {
2933 init_malloced_chunk(victim, bytes);
2934 return chunk2mem(victim);
2935 }
2936 #endif
2937
2938 /* Try to extend */
2939 malloc_extend_top(nb);
2940 if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
2941 return 0; /* propagate failure */
2942 }
2943
2944 victim = top;
2945 set_head(victim, nb | PREV_INUSE);
2946 check_freefill(victim, nb, nb + remainder_size);
2947 init_malloced_chunk(victim, bytes);
2948 top = chunk_at_offset(victim, nb);
2949 set_head(top, remainder_size | PREV_INUSE);
2950 init_freed_chunk(top, remainder_size, 0);
2951 check_malloced_chunk(victim, nb);
2952 return chunk2mem(victim);
2953
2954 }
2955
2956
2957 \f
2958
2959 /*
2960
2961 free() algorithm :
2962
2963 cases:
2964
2965 1. free(0) has no effect.
2966
2967 2. If the chunk was allocated via mmap, it is released via munmap().
2968
2969 3. If a returned chunk borders the current high end of memory,
2970 it is consolidated into the top, and if the total unused
2971 topmost memory exceeds the trim threshold, malloc_trim is
2972 called.
2973
2974 4. Other chunks are consolidated as they arrive, and
2975 placed in corresponding bins. (This includes the case of
2976 consolidating with the current `last_remainder').
2977
2978 */
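
/*
 * Worked example of case 4 (hypothetical sizes): freeing a 64-byte chunk
 * whose following 32-byte neighbour is already free while its preceding
 * neighbour is in use merges the two into a 96-byte chunk, writes 96 into
 * both its head and its foot, and frontlink()s it into small bin
 * smallbin_index(96) == 12.
 */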
2979
2980
2981 #if __STD_C
2982 void fREe(Void_t* mem)
2983 #else
2984 void fREe(mem) Void_t* mem;
2985 #endif
2986 {
2987 mchunkptr p; /* chunk corresponding to mem */
2988 INTERNAL_SIZE_T hd; /* its head field */
2989 INTERNAL_SIZE_T sz; /* its size */
2990 int idx; /* its bin index */
2991 mchunkptr next; /* next contiguous chunk */
2992 INTERNAL_SIZE_T nextsz; /* its size */
2993 INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
2994 mchunkptr bck; /* misc temp for linking */
2995 mchunkptr fwd; /* misc temp for linking */
2996 int islr; /* track whether merging with last_remainder */
2997
2998 if (mem == 0) /* free(0) has no effect */
2999 return;
3000
3001 p = mem2chunk(mem);
3002 check_inuse_chunk(p);
3003
3004 hd = p->size;
3005
3006 #if HAVE_MMAP
3007 if (hd & IS_MMAPPED) /* release mmapped memory. */
3008 {
3009 munmap_chunk(p);
3010 return;
3011 }
3012 #endif
3013
3014 sz = hd & ~PREV_INUSE;
3015 next = chunk_at_offset(p, sz);
3016 nextsz = chunksize(next);
3017 prevsz = 0; /* avoid compiler warnings */
3018
3019 if (next == top) /* merge with top */
3020 {
3021 sz += nextsz;
3022
3023 if (!(hd & PREV_INUSE)) /* consolidate backward */
3024 {
3025 prevsz = p->prev_size;
3026 p = chunk_at_offset(p, -(long)prevsz);
3027 sz += prevsz;
3028 unlink(p, bck, fwd);
3029 }
3030
3031 set_head(p, sz | PREV_INUSE);
3032 top = p;
3033 init_freed_chunk(top, !(hd & PREV_INUSE) ? prevsz : 0, nextsz);
3034 if ((unsigned long)(sz) >= trim_threshold)
3035 malloc_trim(top_pad);
3036 return;
3037 }
3038
3039 set_head(next, nextsz); /* clear inuse bit */
3040
3041 islr = 0;
3042
3043 if (!(hd & PREV_INUSE)) /* consolidate backward */
3044 {
3045 prevsz = p->prev_size;
3046 p = chunk_at_offset(p, -(long)prevsz);
3047 sz += prevsz;
3048
3049 if (p->fd == last_remainder) /* keep as last_remainder */
3050 islr = 1;
3051 else
3052 unlink(p, bck, fwd);
3053 }
3054
3055 if (!(inuse_bit_at_offset(next, nextsz))) /* consolidate forward */
3056 {
3057 sz += nextsz;
3058
3059 if (!islr && next->fd == last_remainder) /* re-insert last_remainder */
3060 {
3061 islr = 1;
3062 link_last_remainder(p);
3063 }
3064 else
3065 unlink(next, bck, fwd);
3066 }
3067
3068
3069 set_head(p, sz | PREV_INUSE);
3070 set_foot(p, sz);
3071 if (!islr)
3072 frontlink(p, sz, idx, bck, fwd);
3073 init_freed_chunk(p, !(hd & PREV_INUSE) ? prevsz : 0,
3074 !inuse_bit_at_offset(next, nextsz) ? nextsz : 0);
3075 }
3076
3077
3078 \f
3079
3080
3081 /*
3082
3083 Realloc algorithm:
3084
3085 Chunks that were obtained via mmap cannot be extended or shrunk
3086 unless HAVE_MREMAP is defined, in which case mremap is used.
3087 Otherwise, if their reallocation is for additional space, they are
3088 copied. If for less, they are just left alone.
3089
3090 Otherwise, if the reallocation is for additional space, and the
3091 chunk can be extended, it is, else a malloc-copy-free sequence is
3092 taken. There are several different ways that a chunk could be
3093 extended. All are tried:
3094
3095 * Extending forward into following adjacent free chunk.
3096 * Shifting backwards, joining preceding adjacent space
3097 * Both shifting backwards and extending forward.
3098 * Extending into newly sbrked space
3099
3100 Unless the #define realloc_ZERO_BYTES_FREES is set, realloc with a
3101 size argument of zero (re)allocates a minimum-sized chunk.
3102
3103 If the reallocation is for less space, and the new request is for
3104 a `small' (<512 bytes) size, then the newly unused space is lopped
3105 off and freed.
3106
3107 The old unix realloc convention of allowing the last-free'd chunk
3108 to be used as an argument to realloc is no longer supported.
3109 I don't know of any programs still relying on this feature,
3110 and allowing it would also allow too many other incorrect
3111 usages of realloc to be sensible.
3112
3113
3114 */
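
/*
 * Hypothetical use, matching the rules above:
 *
 *     char *q = realloc(NULL, 100);   -- same as malloc(100)
 *     q = realloc(q, 200);            -- tries to grow in place (forward,
 *                                        backward, or into new sbrk space)
 *                                        before falling back to
 *                                        malloc-copy-free
 *
 * (Overwriting the only copy of the pointer as above would leak the block
 * if realloc failed; it is written this way purely for brevity.)
 */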
3115
3116
3117 #if __STD_C
3118 Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
3119 #else
3120 Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
3121 #endif
3122 {
3123 INTERNAL_SIZE_T nb; /* padded request size */
3124
3125 mchunkptr oldp; /* chunk corresponding to oldmem */
3126 INTERNAL_SIZE_T oldsize; /* its size */
3127
3128 mchunkptr newp; /* chunk to return */
3129 INTERNAL_SIZE_T newsize; /* its size */
3130 Void_t* newmem; /* corresponding user mem */
3131
3132 mchunkptr next; /* next contiguous chunk after oldp */
3133 INTERNAL_SIZE_T nextsize; /* its size */
3134
3135 mchunkptr prev; /* previous contiguous chunk before oldp */
3136 INTERNAL_SIZE_T prevsize; /* its size */
3137
3138 mchunkptr remainder; /* holds split off extra space from newp */
3139 INTERNAL_SIZE_T remainder_size; /* its size */
3140
3141 mchunkptr bck; /* misc temp for linking */
3142 mchunkptr fwd; /* misc temp for linking */
3143
3144 #ifdef realloc_ZERO_BYTES_FREES
3145 if (bytes == 0) { fREe(oldmem); return 0; }
3146 #endif
3147
3148
3149 /* realloc of null is supposed to be same as malloc */
3150 if (oldmem == 0) return mALLOc(bytes);
3151
3152 newp = oldp = mem2chunk(oldmem);
3153 newsize = oldsize = chunksize(oldp);
3154
3155
3156 nb = request2size(bytes);
3157
3158 check_inuse_chunk(oldp);
3159
3160 #if HAVE_MMAP
3161 if (chunk_is_mmapped(oldp))
3162 {
3163 if (oldsize - MMAP_EXTRA >= nb) {
3164 init_realloced_chunk(oldp, bytes, oldsize);
3165 return oldmem; /* do nothing */
3166 }
3167 #if HAVE_MREMAP
3168 newp = mremap_chunk(oldp, nb);
3169 if (newp) {
3170 init_realloced_chunk(newp, bytes, oldsize);
3171 return chunk2mem(newp);
3172 }
3173 #endif
3174 /* Must alloc, copy, free. */
3175 newmem = mALLOc(bytes);
3176 if (newmem == 0) return 0; /* propagate failure */
3177 malloc_COPY(newmem, oldmem, oldsize - OVERHEAD - MMAP_EXTRA);
3178 munmap_chunk(oldp);
3179 return newmem;
3180 }
3181 #endif
3182
3183 if (oldsize < nb)
3184 {
3185
3186 /* Try expanding forward */
3187
3188 next = chunk_at_offset(oldp, oldsize);
3189 if (next == top || !inuse(next))
3190 {
3191 nextsize = chunksize(next);
3192
3193 /* Forward into top only if a remainder */
3194 if (next == top)
3195 {
3196 if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
3197 {
3198 check_freefill(next, nb - oldsize, nextsize);
3199 newsize += nextsize;
3200 top = chunk_at_offset(oldp, nb);
3201 set_head(top, (newsize - nb) | PREV_INUSE);
3202 init_freed_chunk(top, newsize - nb, 0);
3203 set_head_size(oldp, nb);
3204 init_realloced_chunk(oldp, bytes, oldsize);
3205 return chunk2mem(oldp);
3206 }
3207 }
3208
3209 /* Forward into next chunk */
3210 else if (((long)(nextsize + newsize) >= (long)nb))
3211 {
3212 check_freefill(next, nb - oldsize, nextsize);
3213 unlink(next, bck, fwd);
3214 newsize += nextsize;
3215 goto split;
3216 }
3217 }
3218 else
3219 {
3220 next = 0;
3221 nextsize = 0;
3222 }
3223
3224 /* Try shifting backwards. */
3225
3226 if (!prev_inuse(oldp))
3227 {
3228 prev = prev_chunk(oldp);
3229 prevsize = chunksize(prev);
3230
3231 /* try forward + backward first to save a later consolidation */
3232
3233 if (next != 0)
3234 {
3235 /* into top */
3236 if (next == top)
3237 {
3238 if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
3239 {
3240 check_freefill(prev, nb, prevsize);
3241 check_freefill(next, nb - (prevsize + newsize), nextsize);
3242 unlink(prev, bck, fwd);
3243 newp = prev;
3244 newsize += prevsize + nextsize;
3245 newmem = chunk2mem(newp);
3246 malloc_COPY(newmem, oldmem, oldsize - OVERHEAD);
3247 top = chunk_at_offset(newp, nb);
3248 set_head(top, (newsize - nb) | PREV_INUSE);
3249 init_freed_chunk(top, newsize - nb, 0);
3250 set_head_size(newp, nb);
3251 init_realloced_chunk(newp, bytes, oldsize);
3252 return newmem;
3253 }
3254 }
3255
3256 /* into next chunk */
3257 else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
3258 {
3259 check_freefill(prev, nb, prevsize);
3260 check_freefill(next, nb - (prevsize + newsize), nextsize);
3261 unlink(next, bck, fwd);
3262 unlink(prev, bck, fwd);
3263 newp = prev;
3264 newsize += nextsize + prevsize;
3265 newmem = chunk2mem(newp);
3266 malloc_COPY(newmem, oldmem, oldsize - OVERHEAD);
3267 goto split;
3268 }
3269 }
3270
3271 /* backward only */
3272 if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)
3273 {
3274 check_freefill(prev, nb, prevsize);
3275 unlink(prev, bck, fwd);
3276 newp = prev;
3277 newsize += prevsize;
3278 newmem = chunk2mem(newp);
3279 malloc_COPY(newmem, oldmem, oldsize - OVERHEAD);
3280 goto split;
3281 }
3282 }
3283
3284 /* Must allocate */
3285
3286 newmem = mALLOc (bytes);
3287
3288 if (newmem == 0) /* propagate failure */
3289 return 0;
3290
3291 /* Avoid copy if newp is next chunk after oldp. */
3292 /* (This can only happen when new chunk is sbrk'ed.) */
3293
3294 if ( (newp = mem2chunk(newmem)) == next_chunk(oldp))
3295 {
3296 newsize += chunksize(newp);
3297 newp = oldp;
3298 goto split;
3299 }
3300
3301 /* Otherwise copy, free, and exit */
3302 malloc_COPY(newmem, oldmem, oldsize - OVERHEAD);
3303 fREe(oldmem);
3304 return newmem;
3305 }
3306
3307
3308 split: /* split off extra room in old or expanded chunk */
3309
3310 if (newsize - nb >= MINSIZE) /* split off remainder */
3311 {
3312 remainder = chunk_at_offset(newp, nb);
3313 remainder_size = newsize - nb;
3314 set_head_size(newp, nb);
3315 set_head(remainder, remainder_size | PREV_INUSE);
3316 set_inuse_bit_at_offset(remainder, remainder_size);
3317 init_malloced_chunk(remainder, remainder_size - OVERHEAD);
3318 fREe(chunk2mem(remainder)); /* let free() deal with it */
3319 }
3320 else
3321 {
3322 set_head_size(newp, newsize);
3323 set_inuse_bit_at_offset(newp, newsize);
3324 }
3325
3326 init_realloced_chunk(newp, bytes, oldsize);
3327 check_inuse_chunk(newp);
3328 return chunk2mem(newp);
3329 }
3330
3331
3332 \f
3333
3334 /*
3335
3336 memalign algorithm:
3337
3338 memalign requests more than enough space from malloc, finds a spot
3339 within that chunk that meets the alignment request, and then
3340 possibly frees the leading and trailing space.
3341
3342 The alignment argument must be a power of two. This property is not
3343 checked by memalign, so misuse may result in random runtime errors.
3344
3345 8-byte alignment is guaranteed by normal malloc calls, so don't
3346 bother calling memalign with an argument of 8 or less.
3347
3348 Overreliance on memalign is a sure way to fragment space.
3349
3350 */
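
/*
 * Hypothetical use:
 *
 *     void *buf = memalign(64, 1000);   -- buf % 64 == 0
 *     ...
 *     free(buf);                        -- memalign'ed blocks are ordinary
 *                                          chunks and may be free()d
 */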
3351
3352
3353 #if __STD_C
3354 Void_t* mEMALIGn(size_t alignment, size_t bytes)
3355 #else
3356 Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
3357 #endif
3358 {
3359 INTERNAL_SIZE_T nb; /* padded request size */
3360 char* m; /* memory returned by malloc call */
3361 mchunkptr p; /* corresponding chunk */
3362 char* lim; /* alignment point within p */
3363 mchunkptr newp; /* chunk to return */
3364 INTERNAL_SIZE_T newsize; /* its size */
3365 INTERNAL_SIZE_T leadsize; /* leading space before the alignment point */
3366 mchunkptr remainder; /* spare room at end to split off */
3367 long remainder_size; /* its size */
3368
3369 /* If need less alignment than we give anyway, just relay to malloc */
3370
3371 if (alignment <= ALIGNMENT) return mALLOc(bytes);
3372
3373 /* Otherwise, ensure that it is at least a minimum chunk size */
3374
3375 if (alignment < MINSIZE) alignment = MINSIZE;
3376
3377 /* Call malloc with worst case padding to hit alignment. */
3378
3379 nb = request2size(bytes);
3380 m = (char*)mALLOc(nb + alignment + MINSIZE);
3381
3382 if (m == 0) return 0; /* propagate failure */
3383
3384 p = mem2chunk(m);
3385
3386 if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
3387 {
3388 init_realloced_chunk(p, bytes, chunksize(p));
3389 return chunk2mem(p); /* nothing more to do */
3390 }
3391 else /* misaligned */
3392 {
3393 /*
3394 Find an aligned spot inside chunk.
3395 Since we need to give back leading space in a chunk of at
3396 least MINSIZE, if the first calculation places us at
3397 a spot with less than MINSIZE leader, we can move to the
3398 next aligned spot -- we've allocated enough total room so that
3399 this is always possible.
3400 */
3401
3402 lim = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) &
3403 ~(alignment - 1));
3404 if ((lim - (char*)p) < (long)MINSIZE) lim = lim + alignment;
3405
3406 newp = (mchunkptr)lim;
3407 leadsize = lim - (char*)p;
3408 newsize = chunksize(p) - leadsize;
3409
3410 #if HAVE_MMAP
3411 if(chunk_is_mmapped(p))
3412 {
3413 newp->prev_size = p->prev_size + leadsize;
3414 set_head(newp, newsize|IS_MMAPPED);
3415 init_malloced_chunk(newp, bytes);
3416 return chunk2mem(newp);
3417 }
3418 #endif
3419
3420 /* give back leader, use the rest */
3421
3422 set_head(newp, newsize | PREV_INUSE);
3423 set_inuse_bit_at_offset(newp, newsize);
3424 set_head_size(p, leadsize);
3425 init_freeable_chunk(p);
3426 fREe(chunk2mem(p));
3427 p = newp;
3428
3429 assert (newsize >= nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
3430 }
3431
3432 /* Also give back spare room at the end */
3433
3434 remainder_size = chunksize(p) - nb;
3435
3436 if (remainder_size >= (long)MINSIZE)
3437 {
3438 remainder = chunk_at_offset(p, nb);
3439 set_head(remainder, remainder_size | PREV_INUSE);
3440 set_head_size(p, nb);
3441 init_freeable_chunk(remainder);
3442 fREe(chunk2mem(remainder));
3443 }
3444
3445 init_malloced_chunk(p, bytes);
3446 check_inuse_chunk(p);
3447 return chunk2mem(p);
3448
3449 }
3450
3451 \f
3452
3453
3454 /*
3455 valloc just invokes memalign with alignment argument equal
3456 to the page size of the system (or as near to this as can
3457 be figured out from all the includes/defines above).
3458 */
3459
3460 #if __STD_C
3461 Void_t* vALLOc(size_t bytes)
3462 #else
3463 Void_t* vALLOc(bytes) size_t bytes;
3464 #endif
3465 {
3466 return mEMALIGn (malloc_getpagesize, bytes);
3467 }
3468
3469 /*
3470 pvalloc rounds the request up to the nearest multiple of the page size
3471 and then returns page-aligned memory for that size (via memalign)
3472 */
3473
3474
3475 #if __STD_C
3476 Void_t* pvALLOc(size_t bytes)
3477 #else
3478 Void_t* pvALLOc(bytes) size_t bytes;
3479 #endif
3480 {
3481 size_t pagesize = malloc_getpagesize;
3482 return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
3483 }
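
/*
 * Example: with a 4096-byte page, pvalloc(1) rounds the request up to 4096
 * and returns page-aligned memory, whereas valloc(1) returns page-aligned
 * memory whose usable size is only the usual minimum-chunk amount.
 */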
3484
3485 /*
3486
3487 calloc calls malloc, then zeroes out the allocated chunk.
3488
3489 */
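
/*
 * Example: calloc(16, 25) behaves like malloc(400) followed by clearing at
 * least the 400 requested bytes, except that freshly mmap'ed memory (and,
 * with MORECORE_CLEARS, freshly sbrk'ed memory) is already zero and is not
 * cleared again.
 */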
3490
3491 #if __STD_C
3492 Void_t* cALLOc(size_t n, size_t elem_size)
3493 #else
3494 Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size;
3495 #endif
3496 {
3497 mchunkptr p;
3498 INTERNAL_SIZE_T csz;
3499
3500 INTERNAL_SIZE_T sz = n * elem_size;
3501
3502 /* check whether malloc_extend_top was called, in which case we don't need to clear */
3503 #if MORECORE_CLEARS
3504 mchunkptr oldtop = top;
3505 INTERNAL_SIZE_T oldtopsize = chunksize(top);
3506 #endif
3507 Void_t* mem = mALLOc (sz);
3508
3509 if (mem == 0)
3510 return 0;
3511 else
3512 {
3513 p = mem2chunk(mem);
3514
3515 /* Two optional cases in which clearing not necessary */
3516
3517
3518 #if HAVE_MMAP
3519 if (chunk_is_mmapped(p)) return mem;
3520 #endif
3521
3522 csz = chunksize(p);
3523
3524 #if MORECORE_CLEARS
3525 if (p == oldtop && csz > oldtopsize)
3526 {
3527 /* clear only the bytes from non-freshly-sbrked memory */
3528 csz = oldtopsize;
3529 }
3530 #endif
3531
3532 malloc_ZERO(mem, csz - OVERHEAD);
3533 /* reinstate moat fill in pad region */
3534 init_realloced_chunk(p, sz, chunksize(p));
3535 return mem;
3536 }
3537 }
3538
3539 /*
3540
3541 cfree just calls free. It is needed/defined on some systems
3542 that pair it with calloc, presumably for odd historical reasons.
3543
3544 */
3545
3546 #if !defined(INTERNAL_LINUX_C_LIB) || !defined(__ELF__)
3547 #if __STD_C
3548 void cfree(Void_t *mem)
3549 #else
3550 void cfree(mem) Void_t *mem;
3551 #endif
3552 {
3553 free(mem);
3554 }
3555 #endif
3556
3557 \f
3558
3559 /*
3560
3561 Malloc_trim gives memory back to the system (via negative
3562 arguments to sbrk) if there is unused memory at the `high' end of
3563 the malloc pool. You can call this after freeing large blocks of
3564 memory to potentially reduce the system-level memory requirements
3565 of a program. However, it cannot guarantee to reduce memory usage. Under
3566 some allocation patterns, some large free blocks of memory will be
3567 locked between two used chunks, so they cannot be given back to
3568 the system.
3569
3570 The `pad' argument to malloc_trim represents the amount of free
3571 trailing space to leave untrimmed. If this argument is zero,
3572 only the minimum amount of memory to maintain internal data
3573 structures will be left (one page or less). Non-zero arguments
3574 can be supplied to maintain enough trailing space to service
3575 future expected allocations without having to re-obtain memory
3576 from the system.
3577
3578 Malloc_trim returns 1 if it actually released any memory, else 0.
3579
3580 */
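/*
  Illustrative usage sketch: a typical use is to hand memory back to the
  system after releasing a large working set, while keeping some slack for
  allocations expected soon. The name big_buffer and the 64k pad are
  arbitrary placeholders for the example:

    free (big_buffer);
    if (malloc_trim (64 * 1024))
      ;  at least one page was returned to the system via sbrk
*/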
3581
3582 #if __STD_C
3583 int malloc_trim(size_t pad)
3584 #else
3585 int malloc_trim(pad) size_t pad;
3586 #endif
3587 {
3588 long top_size; /* Amount of top-most memory */
3589 long extra; /* Amount to release */
3590 char* current_lim; /* address returned by pre-check sbrk call */
3591 char* new_lim; /* address returned by negative sbrk call */
3592
3593 unsigned long pagesz = malloc_getpagesize;
3594
3595 top_size = chunksize(top);
3596 extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
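  /* Worked example of the line above (illustrative figures only): with
     pagesz = 4096, pad = 0, MINSIZE = 16 and top_size = 20000,
     (20000 - 0 - 16 + 4095) / 4096 = 5, so extra = (5 - 1) * 4096 = 16384.
     Releasing 16384 bytes leaves 3616 bytes in top, still at least
     pad + MINSIZE; the trailing "- 1" is what guarantees that margin.  */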
3597
3598 if (extra < (long)pagesz) /* Not enough memory to release */
3599 return 0;
3600
3601 else
3602 {
3603 #ifdef OTHER_SBRKS
3604 /* Test to make sure no one else called sbrk */
3605 current_lim = (char*)(MORECORE (0));
3606 if (current_lim != (char*)(top) + top_size)
3607 return 0; /* Apparently we don't own memory; must fail */
3608
3609 else
3610 #endif
3611 {
3612 new_lim = (char*)(MORECORE (-extra));
3613
3614 if (new_lim == (char*)(MORECORE_FAILURE)) /* sbrk failed? */
3615 {
3616 /* Try to figure out what we have */
3617 current_lim = (char*)(MORECORE (0));
3618 top_size = current_lim - (char*)top;
3619 if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
3620 {
3621 sbrked_mem = current_lim - sbrk_base;
3622 set_head(top, top_size | PREV_INUSE);
3623 init_freed_chunk(top, top_size, 0);
3624 }
3625 check_chunk(top);
3626 return 0;
3627 }
3628
3629 else
3630 {
3631 /* Success. Adjust top accordingly. */
3632 set_head(top, (top_size - extra) | PREV_INUSE);
3633 sbrked_mem -= extra;
3634 init_freed_chunk(top, top_size - extra, 0);
3635 check_chunk(top);
3636 return 1;
3637 }
3638 }
3639 }
3640 }
3641
3642 \f
3643
3644 /*
3645 malloc_usable_size:
3646
3647 This routine tells you how many bytes you can actually use in an
3648 allocated chunk, which may be more than you requested (although
3649 often not). You can use this many bytes without worrying about
3650 overwriting other allocated objects. Not a particularly great
3651 programming practice, but still sometimes useful.
3652
3653 */
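/*
  Illustrative sketch (the 100-byte request is an arbitrary figure):

    char *p = (char *) malloc (100);
    size_t n = malloc_usable_size (p);    n >= 100 whenever p != 0
    if (p) memset (p, 0xff, n);           writing all n bytes is safe
    free (p);
*/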
3654
3655 #if __STD_C
3656 size_t malloc_usable_size(Void_t* mem)
3657 #else
3658 size_t malloc_usable_size(mem) Void_t* mem;
3659 #endif
3660 {
3661 mchunkptr p;
3662 if (mem == 0)
3663 return 0;
3664 else
3665 {
3666 p = mem2chunk(mem);
3667 check_inuse_chunk(p);
3668 maximize_chunk(p);
3669 if(!chunk_is_mmapped(p))
3670 {
3671 if (!inuse(p)) return 0;
3672 return chunksize(p) - OVERHEAD;
3673 }
3674 return chunksize(p) - OVERHEAD - MMAP_EXTRA;
3675 }
3676 }
3677
3678
3679 \f
3680
3681 /* Utility to update current_mallinfo for malloc_stats and mallinfo() */
3682
3683 static void malloc_update_mallinfo(void)
3684 {
3685 int i;
3686 mbinptr b;
3687 mchunkptr p;
3688 #if DEBUG
3689 mchunkptr q;
3690 #endif
3691
3692 INTERNAL_SIZE_T avail = chunksize(top);
3693 int navail = avail >= MINSIZE ? 1 : 0;
3694 check_freefill(top, avail, avail);
3695
3696 #if DEBUG
3697 if (lowest_chunk)
3698 for (p = lowest_chunk;
3699 p < top && inuse(p) && chunksize(p) >= MINSIZE;
3700 p = next_chunk(p))
3701 check_inuse_chunk(p);
3702 #endif
3703
3704 for (i = 1; i < NAV; ++i)
3705 {
3706 b = bin_at(i);
3707 for (p = last(b); p != b; p = p->bk)
3708 {
3709 #if DEBUG
3710 check_free_chunk(p);
3711 check_freefill(p, chunksize(p), chunksize(p));
3712 for (q = next_chunk(p);
3713 q < top && inuse(q) && chunksize(q) >= MINSIZE;
3714 q = next_chunk(q))
3715 check_inuse_chunk(q);
3716 #endif
3717 avail += chunksize(p);
3718 navail++;
3719 }
3720 }
3721
3722 current_mallinfo.ordblks = navail;
3723 current_mallinfo.uordblks = sbrked_mem - avail;
3724 current_mallinfo.fordblks = avail;
3725 current_mallinfo.hblks = n_mmaps;
3726 current_mallinfo.hblkhd = mmapped_mem;
3727 current_mallinfo.keepcost = chunksize(top);
3728
3729 }
3730
3731 \f
3732
3733 /*
3734
3735 malloc_stats:
3736
3737   Prints on stderr the amount of space obtained from the system (both
3738 via sbrk and mmap), the maximum amount (which may be more than
3739 current if malloc_trim and/or munmap got called), the maximum
3740 number of simultaneous mmap regions used, and the current number
3741 of bytes allocated via malloc (or realloc, etc) but not yet
3742 freed. (Note that this is the number of bytes allocated, not the
3743 number requested. It will be larger than the number requested
3744 because of alignment and bookkeeping overhead.)
3745
3746 */
3747
3748 void malloc_stats(void)
3749 {
3750 malloc_update_mallinfo();
3751 fprintf(stderr, "max system bytes = %10u\n",
3752 (unsigned int)(max_total_mem));
3753 fprintf(stderr, "system bytes = %10u\n",
3754 (unsigned int)(sbrked_mem + mmapped_mem));
3755 fprintf(stderr, "in use bytes = %10u\n",
3756 (unsigned int)(current_mallinfo.uordblks + mmapped_mem));
3757 #if HAVE_MMAP
3758 fprintf(stderr, "max mmap regions = %10u\n",
3759 (unsigned int)max_n_mmaps);
3760 #endif
3761 }
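/*
  Illustrative sketch: the report produced by malloc_stats has the shape
  below; the figures are made-up placeholders and the mmap line is printed
  only when HAVE_MMAP is set.

    max system bytes =     266240
    system bytes =     266240
    in use bytes =     131072
    max mmap regions =          2
*/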
3762
3763 /*
3764   mallinfo returns a copy of the updated current mallinfo structure.
3765 */
3766
3767 struct mallinfo mALLINFo(void)
3768 {
3769 malloc_update_mallinfo();
3770 return current_mallinfo;
3771 }
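/*
  Illustrative sketch, reflecting the fields filled in by
  malloc_update_mallinfo() above:

    struct mallinfo mi = mallinfo ();
    mi.ordblks     number of free chunks (top counts when >= MINSIZE)
    mi.uordblks    sbrked bytes currently in use
    mi.fordblks    sbrked bytes currently free
    mi.hblks       number of mmapped regions
    mi.hblkhd      bytes held in mmapped regions
    mi.keepcost    size of top, roughly what malloc_trim could release
*/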
3772
3773
3774 \f
3775
3776 /*
3777 mallopt:
3778
3779 mallopt is the general SVID/XPG interface to tunable parameters.
3780 The format is to provide a (parameter-number, parameter-value) pair.
3781 mallopt then sets the corresponding parameter to the argument
3782 value if it can (i.e., so long as the value is meaningful),
3783 and returns 1 if successful else 0.
3784
3785 See descriptions of tunable parameters above.
3786
3787 */
3788
3789 #if __STD_C
3790 int mALLOPt(int param_number, int value)
3791 #else
3792 int mALLOPt(param_number, value) int param_number; int value;
3793 #endif
3794 {
3795 switch(param_number)
3796 {
3797 case M_TRIM_THRESHOLD:
3798 trim_threshold = value; return 1;
3799 case M_TOP_PAD:
3800 top_pad = value; return 1;
3801 case M_MMAP_THRESHOLD:
3802 mmap_threshold = value; return 1;
3803 case M_MMAP_MAX:
3804 #if HAVE_MMAP
3805 n_mmaps_max = value; return 1;
3806 #else
3807 if (value != 0) return 0; else n_mmaps_max = value; return 1;
3808 #endif
3809 case M_SCANHEAP:
3810 #ifdef DEBUG2
3811 scanheap = value;
3812 #endif
3813 return 1;
3814
3815 default:
3816 return 0;
3817 }
3818 }
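/*
  Illustrative sketch (the threshold values are arbitrary examples, using
  the public name the mALLOPt macro maps to):

    mallopt (M_TRIM_THRESHOLD, 256 * 1024);   only trim when ~256k is free
    mallopt (M_TOP_PAD, 64 * 1024);           ask sbrk for 64k of slack
    mallopt (M_MMAP_MAX, 0);                  disable mmap-backed chunks

  Each call returns 1 on success and 0 for an unrecognized parameter or a
  value that cannot be honoured (e.g. a nonzero M_MMAP_MAX without mmap
  support).
*/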
3819
3820 /*
3821
3822 History:
3823
3824 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
3825 * Added pvalloc, as recommended by H.J. Liu
3826 * Added 64bit pointer support mainly from Wolfram Gloger
3827 * Added anonymously donated WIN32 sbrk emulation
3828 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
3829 * malloc_extend_top: fix mask error that caused wastage after
3830 foreign sbrks
3831 * Add linux mremap support code from HJ Liu
3832
3833 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
3834 * Integrated most documentation with the code.
3835 * Add support for mmap, with help from
3836 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3837 * Use last_remainder in more cases.
3838 * Pack bins using idea from colin@nyx10.cs.du.edu
3839      * Use ordered bins instead of best-fit threshold
3840 * Eliminate block-local decls to simplify tracing and debugging.
3841 * Support another case of realloc via move into top
3842      * Fix error occurring when initial sbrk_base not word-aligned.
3843 * Rely on page size for units instead of SBRK_UNIT to
3844 avoid surprises about sbrk alignment conventions.
3845 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
3846 (raymond@es.ele.tue.nl) for the suggestion.
3847 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
3848 * More precautions for cases where other routines call sbrk,
3849 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3850 * Added macros etc., allowing use in linux libc from
3851 H.J. Lu (hjl@gnu.ai.mit.edu)
3852 * Inverted this history list
3853
3854 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
3855 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
3856 * Removed all preallocation code since under current scheme
3857 the work required to undo bad preallocations exceeds
3858 the work saved in good cases for most test programs.
3859 * No longer use return list or unconsolidated bins since
3860 no scheme using them consistently outperforms those that don't
3861 given above changes.
3862 * Use best fit for very large chunks to prevent some worst-cases.
3863 * Added some support for debugging
3864
3865 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
3866 * Removed footers when chunks are in use. Thanks to
3867 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
3868
3869 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
3870 * Added malloc_trim, with help from Wolfram Gloger
3871 (wmglo@Dent.MED.Uni-Muenchen.DE).
3872
3873 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
3874
3875 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
3876 * realloc: try to expand in both directions
3877 * malloc: swap order of clean-bin strategy;
3878 * realloc: only conditionally expand backwards
3879 * Try not to scavenge used bins
3880 * Use bin counts as a guide to preallocation
3881 * Occasionally bin return list chunks in first scan
3882 * Add a few optimizations from colin@nyx10.cs.du.edu
3883
3884 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
3885 * faster bin computation & slightly different binning
3886 * merged all consolidations to one part of malloc proper
3887 (eliminating old malloc_find_space & malloc_clean_bin)
3888 * Scan 2 returns chunks (not just 1)
3889 * Propagate failure in realloc if malloc returns 0
3890 * Add stuff to allow compilation on non-ANSI compilers
3891 from kpv@research.att.com
3892
3893 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
3894 * removed potential for odd address access in prev_chunk
3895 * removed dependency on getpagesize.h
3896 * misc cosmetics and a bit more internal documentation
3897 * anticosmetics: mangled names in macros to evade debugger strangeness
3898 * tested on sparc, hp-700, dec-mips, rs6000
3899 with gcc & native cc (hp, dec only) allowing
3900 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
3901
3902 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
3903 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
3904 structure of old version, but most details differ.)
3905
3906 */