1/*
2 This is a version (aka dlmalloc) of malloc/free/realloc written by
3 Doug Lea and released to the public domain. Use, modify, and
4 redistribute this code without permission or acknowledgement in any
5 way you wish. Send questions, comments, complaints, performance
6 data, etc to dl@cs.oswego.edu
7
 8* VERSION 2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
9
10 Note: There may be an updated version of this malloc obtainable at
11 ftp://gee.cs.oswego.edu/pub/misc/malloc.c
12 Check before installing!
13
14* Quickstart
15
16 This library is all in one file to simplify the most common usage:
17 ftp it, compile it (-O), and link it into another program. All
18 of the compile-time options default to reasonable values for use on
19 most unix platforms. Compile -DWIN32 for reasonable defaults on windows.
20 You might later want to step through various compile-time and dynamic
21 tuning options.
22
23 For convenience, an include file for code using this malloc is at:
24 ftp://gee.cs.oswego.edu/pub/misc/malloc-2.7.1.h
25 You don't really need this .h file unless you call functions not
26 defined in your system include files. The .h file contains only the
27 excerpts from this file needed for using this malloc on ANSI C/C++
28 systems, so long as you haven't changed compile-time options about
29 naming and tuning parameters. If you do, then you can create your
30 own malloc.h that does include all settings by cutting at the point
31 indicated below.
32
33* Why use this malloc?
34
35 This is not the fastest, most space-conserving, most portable, or
36 most tunable malloc ever written. However it is among the fastest
37 while also being among the most space-conserving, portable and tunable.
38 Consistent balance across these factors results in a good general-purpose
39 allocator for malloc-intensive programs.
40
41 The main properties of the algorithms are:
42 * For large (>= 512 bytes) requests, it is a pure best-fit allocator,
43 with ties normally decided via FIFO (i.e. least recently used).
44 * For small (<= 64 bytes by default) requests, it is a caching
45 allocator, that maintains pools of quickly recycled chunks.
46 * In between, and for combinations of large and small requests, it does
47 the best it can trying to meet both goals at once.
48 * For very large requests (>= 128KB by default), it relies on system
49 memory mapping facilities, if supported.
50
51 For a longer but slightly out of date high-level description, see
52 http://gee.cs.oswego.edu/dl/html/malloc.html
53
54 You may already by default be using a C library containing a malloc
55 that is based on some version of this malloc (for example in
56 linux). You might still want to use the one in this file in order to
57 customize settings or to avoid overheads associated with library
58 versions.
59
60* Contents, described in more detail in "description of public routines" below.
61
62 Standard (ANSI/SVID/...) functions:
63 malloc(size_t n);
64 calloc(size_t n_elements, size_t element_size);
65 free(Void_t* p);
66 realloc(Void_t* p, size_t n);
67 memalign(size_t alignment, size_t n);
68 valloc(size_t n);
69 mallinfo()
70 mallopt(int parameter_number, int parameter_value)
71
72 Additional functions:
73 independent_calloc(size_t n_elements, size_t size, Void_t* chunks[]);
74 independent_comalloc(size_t n_elements, size_t sizes[], Void_t* chunks[]);
75 pvalloc(size_t n);
76 cfree(Void_t* p);
77 malloc_trim(size_t pad);
78 malloc_usable_size(Void_t* p);
79 malloc_stats();
80
81* Vital statistics:
82
83 Supported pointer representation: 4 or 8 bytes
84 Supported size_t representation: 4 or 8 bytes
85 Note that size_t is allowed to be 4 bytes even if pointers are 8.
86 You can adjust this by defining INTERNAL_SIZE_T
87
88 Alignment: 2 * sizeof(size_t) (default)
89 (i.e., 8 byte alignment with 4byte size_t). This suffices for
90 nearly all current machines and C compilers. However, you can
91 define MALLOC_ALIGNMENT to be wider than this if necessary.
92
93 Minimum overhead per allocated chunk: 4 or 8 bytes
94 Each malloced chunk has a hidden word of overhead holding size
95 and status information.
96
97 Minimum allocated size: 4-byte ptrs: 16 bytes (including 4 overhead)
 98 8-byte ptrs: 24/32 bytes (including 4/8 overhead)
99
100 When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
101 ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
102 needed; 4 (8) for a trailing size field and 8 (16) bytes for
103 free list pointers. Thus, the minimum allocatable size is
104 16/24/32 bytes.
105
106 Even a request for zero bytes (i.e., malloc(0)) returns a
107 pointer to something of the minimum allocatable size.
108
 109 The maximum overhead wastage (i.e., number of extra bytes
 110 allocated beyond those requested in malloc) is less than or equal
111 to the minimum size, except for requests >= mmap_threshold that
112 are serviced via mmap(), where the worst case wastage is 2 *
113 sizeof(size_t) bytes plus the remainder from a system page (the
114 minimal mmap unit); typically 4096 or 8192 bytes.
115
116 Maximum allocated size: 4-byte size_t: 2^32 minus about two pages
117 8-byte size_t: 2^64 minus about two pages
118
119 It is assumed that (possibly signed) size_t values suffice to
120 represent chunk sizes. `Possibly signed' is due to the fact
121 that `size_t' may be defined on a system as either a signed or
122 an unsigned type. The ISO C standard says that it must be
123 unsigned, but a few systems are known not to adhere to this.
124 Additionally, even when size_t is unsigned, sbrk (which is by
125 default used to obtain memory from system) accepts signed
126 arguments, and may not be able to handle size_t-wide arguments
127 with negative sign bit. Generally, values that would
128 appear as negative after accounting for overhead and alignment
129 are supported only via mmap(), which does not have this
130 limitation.
131
132 Requests for sizes outside the allowed range will perform an optional
 133 failure action and then return null. (Requests may also
 134 fail because a system is out of memory.)
135
136 Thread-safety: NOT thread-safe unless USE_MALLOC_LOCK defined
137
138 When USE_MALLOC_LOCK is defined, wrappers are created to
139 surround every public call with either a pthread mutex or
140 a win32 spinlock (depending on WIN32). This is not
141 especially fast, and can be a major bottleneck.
142 It is designed only to provide minimal protection
143 in concurrent environments, and to provide a basis for
144 extensions. If you are using malloc in a concurrent program,
145 you would be far better off obtaining ptmalloc, which is
146 derived from a version of this malloc, and is well-tuned for
147 concurrent programs. (See http://www.malloc.de) Note that
 148 even when USE_MALLOC_LOCK is defined, you can guarantee
149 full thread-safety only if no threads acquire memory through
150 direct calls to MORECORE or other system-level allocators.
151
152 Compliance: I believe it is compliant with the 1997 Single Unix Specification
153 (See http://www.opennc.org). Also SVID/XPG, ANSI C, and probably
154 others as well.
155
156* Synopsis of compile-time options:
157
158 People have reported using previous versions of this malloc on all
159 versions of Unix, sometimes by tweaking some of the defines
160 below. It has been tested most extensively on Solaris and
161 Linux. It is also reported to work on WIN32 platforms.
162 People also report using it in stand-alone embedded systems.
163
164 The implementation is in straight, hand-tuned ANSI C. It is not
165 at all modular. (Sorry!) It uses a lot of macros. To be at all
166 usable, this code should be compiled using an optimizing compiler
167 (for example gcc -O3) that can simplify expressions and control
168 paths. (FAQ: some macros import variables as arguments rather than
169 declare locals because people reported that some debuggers
170 otherwise get confused.)
171
172 OPTION DEFAULT VALUE
173
174 Compilation Environment options:
175
176 __STD_C derived from C compiler defines
177 WIN32 NOT defined
178 HAVE_MEMCPY defined
179 USE_MEMCPY 1 if HAVE_MEMCPY is defined
180 HAVE_MMAP defined as 1
181 MMAP_CLEARS 1
182 HAVE_MREMAP 0 unless linux defined
183 malloc_getpagesize derived from system #includes, or 4096 if not
184 HAVE_USR_INCLUDE_MALLOC_H NOT defined
185 LACKS_UNISTD_H NOT defined unless WIN32
186 LACKS_SYS_PARAM_H NOT defined unless WIN32
187 LACKS_SYS_MMAN_H NOT defined unless WIN32
188 LACKS_FCNTL_H NOT defined
189
190 Changing default word sizes:
191
192 INTERNAL_SIZE_T size_t
193 MALLOC_ALIGNMENT 2 * sizeof(INTERNAL_SIZE_T)
194 PTR_UINT unsigned long
195 CHUNK_SIZE_T unsigned long
196
197 Configuration and functionality options:
198
199 USE_DL_PREFIX NOT defined
200 USE_PUBLIC_MALLOC_WRAPPERS NOT defined
201 USE_MALLOC_LOCK NOT defined
202 DEBUG NOT defined
203 REALLOC_ZERO_BYTES_FREES NOT defined
204 MALLOC_FAILURE_ACTION errno = ENOMEM, if __STD_C defined, else no-op
205 TRIM_FASTBINS 0
206 FIRST_SORTED_BIN_SIZE 512
207
208 Options for customizing MORECORE:
209
210 MORECORE sbrk
211 MORECORE_CONTIGUOUS 1
212 MORECORE_CANNOT_TRIM NOT defined
213 MMAP_AS_MORECORE_SIZE (1024 * 1024)
214
215 Tuning options that are also dynamically changeable via mallopt:
216
217 DEFAULT_MXFAST 64
218 DEFAULT_TRIM_THRESHOLD 256 * 1024
219 DEFAULT_TOP_PAD 0
220 DEFAULT_MMAP_THRESHOLD 256 * 1024
221 DEFAULT_MMAP_MAX 65536
222
223 There are several other #defined constants and macros that you
224 probably don't want to touch unless you are extending or adapting malloc.
225*/
226
227/*
228 WIN32 sets up defaults for MS environment and compilers.
229 Otherwise defaults are for unix.
230*/
231
232/* #define WIN32 */
233
234#ifdef WIN32
235
236#define WIN32_LEAN_AND_MEAN
237#include <windows.h>
238
239/* Win32 doesn't supply or need the following headers */
240#define LACKS_UNISTD_H
241#define LACKS_SYS_PARAM_H
242#define LACKS_SYS_MMAN_H
243
244/* Use the supplied emulation of sbrk */
245#define MORECORE sbrk
246#define MORECORE_CONTIGUOUS 1
247#define MORECORE_FAILURE ((void*)(-1))
248
249/* Use the supplied emulation of mmap and munmap */
250#define HAVE_MMAP 1
251#define MUNMAP_FAILURE (-1)
252#define MMAP_CLEARS 1
253
254/* These values don't really matter in windows mmap emulation */
255#define MAP_PRIVATE 1
256#define MAP_ANONYMOUS 2
257#define PROT_READ 1
258#define PROT_WRITE 2
259
260/* Emulation functions defined at the end of this file */
261
262/* If USE_MALLOC_LOCK, use supplied critical-section-based lock functions */
263#ifdef USE_MALLOC_LOCK
264static int slwait(int *sl);
265static int slrelease(int *sl);
266#endif
267
268static long getpagesize(void);
269static long getregionsize(void);
270static void *sbrk(long size);
271static void *mmap(void *ptr, long size, long prot, long type, long handle, long arg);
272static long munmap(void *ptr, long size);
273
274static void vminfo (unsigned long*free, unsigned long*reserved, unsigned long*committed);
275static int cpuinfo (int whole, unsigned long*kernel, unsigned long*user);
276
277#endif
278
279/*
280 __STD_C should be nonzero if using ANSI-standard C compiler, a C++
281 compiler, or a C compiler sufficiently close to ANSI to get away
282 with it.
283*/
284
285#ifndef __STD_C
286#if defined(__STDC__) || defined(__cplusplus)
287#define __STD_C 1
288#else
289#define __STD_C 0
290#endif
291#endif /*__STD_C*/
292
293
294/*
295 Void_t* is the pointer type that malloc should say it returns
296*/
297
298#ifndef Void_t
299#if (__STD_C || defined(WIN32))
300#define Void_t void
301#else
302#define Void_t char
303#endif
304#endif /*Void_t*/
305
306#if __STD_C
307#include <stddef.h> /* for size_t */
308#else
309#include <sys/types.h>
310#endif
311
312#include "cygmalloc.h"
313
314#ifdef __cplusplus
315extern "C" {
316#endif
317
318/* define LACKS_UNISTD_H if your system does not have a <unistd.h>. */
319
320/* #define LACKS_UNISTD_H */
321
322#ifndef LACKS_UNISTD_H
323#include <unistd.h>
324#endif
325
326/* define LACKS_SYS_PARAM_H if your system does not have a <sys/param.h>. */
327
328/* #define LACKS_SYS_PARAM_H */
329
330
331#include <stdio.h> /* needed for malloc_stats */
332#include <errno.h> /* needed for optional MALLOC_FAILURE_ACTION */
333
334
335/*
336 Debugging:
337
338 Because freed chunks may be overwritten with bookkeeping fields, this
339 malloc will often die when freed memory is overwritten by user
340 programs. This can be very effective (albeit in an annoying way)
341 in helping track down dangling pointers.
342
343 If you compile with -DDEBUG, a number of assertion checks are
344 enabled that will catch more memory errors. You probably won't be
345 able to make much sense of the actual assertion errors, but they
346 should help you locate incorrectly overwritten memory. The
347 checking is fairly extensive, and will slow down execution
348 noticeably. Calling malloc_stats or mallinfo with DEBUG set will
349 attempt to check every non-mmapped allocated and free chunk in the
 350 course of computing the summaries. (By nature, mmapped regions
351 cannot be checked very much automatically.)
352
353 Setting DEBUG may also be helpful if you are trying to modify
354 this code. The assertions in the check routines spell out in more
355 detail the assumptions and invariants underlying the algorithms.
356
357 Setting DEBUG does NOT provide an automated mechanism for checking
358 that all accesses to malloced memory stay within their
359 bounds. However, there are several add-ons and adaptations of this
360 or other mallocs available that do this.
361*/
362
363#if DEBUG
364#include <assert.h>
365#else
366#define assert(x) ((void)0)
367#endif
368
369/*
370 The unsigned integer type used for comparing any two chunk sizes.
371 This should be at least as wide as size_t, but should not be signed.
372*/
373
374#ifndef CHUNK_SIZE_T
375#define CHUNK_SIZE_T unsigned long
376#endif
377
378/*
 379 The unsigned integer type used to hold addresses when they are
380 manipulated as integers. Except that it is not defined on all
381 systems, intptr_t would suffice.
382*/
383#ifndef PTR_UINT
384#define PTR_UINT unsigned long
385#endif
386
387
388/*
389 INTERNAL_SIZE_T is the word-size used for internal bookkeeping
390 of chunk sizes.
391
392 The default version is the same as size_t.
393
394 While not strictly necessary, it is best to define this as an
395 unsigned type, even if size_t is a signed type. This may avoid some
396 artificial size limitations on some systems.
397
398 On a 64-bit machine, you may be able to reduce malloc overhead by
399 defining INTERNAL_SIZE_T to be a 32 bit `unsigned int' at the
400 expense of not being able to handle more than 2^32 of malloced
401 space. If this limitation is acceptable, you are encouraged to set
402 this unless you are on a platform requiring 16byte alignments. In
403 this case the alignment requirements turn out to negate any
404 potential advantages of decreasing size_t word size.
405
406 Implementors: Beware of the possible combinations of:
407 - INTERNAL_SIZE_T might be signed or unsigned, might be 32 or 64 bits,
408 and might be the same width as int or as long
 409 - size_t might have different width and signedness than INTERNAL_SIZE_T
410 - int and long might be 32 or 64 bits, and might be the same width
411 To deal with this, most comparisons and difference computations
412 among INTERNAL_SIZE_Ts should cast them to CHUNK_SIZE_T, being
413 aware of the fact that casting an unsigned int to a wider long does
414 not sign-extend. (This also makes checking for negative numbers
415 awkward.) Some of these casts result in harmless compiler warnings
416 on some systems.
417*/
418
419#ifndef INTERNAL_SIZE_T
420#define INTERNAL_SIZE_T size_t
421#endif
422
423/* The corresponding word size */
424#define SIZE_SZ (sizeof(INTERNAL_SIZE_T))
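
/*
  A small sketch of the casting convention described above (the variable
  names are illustrative only): when comparing two INTERNAL_SIZE_T values,
  cast both to CHUNK_SIZE_T so the comparison is unsigned and full-width:

    INTERNAL_SIZE_T chunk_size;     // e.g. a size taken from a chunk header
    INTERNAL_SIZE_T request_size;   // e.g. a padded request
    ...
    if ((CHUNK_SIZE_T)(chunk_size) >= (CHUNK_SIZE_T)(request_size)) {
      // chunk is large enough to satisfy the request
    }
*/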
425
426
427
428/*
429 MALLOC_ALIGNMENT is the minimum alignment for malloc'ed chunks.
430 It must be a power of two at least 2 * SIZE_SZ, even on machines
431 for which smaller alignments would suffice. It may be defined as
432 larger than this though. Note however that code and data structures
433 are optimized for the case of 8-byte alignment.
434*/
435
436
437#ifndef MALLOC_ALIGNMENT
438#define MALLOC_ALIGNMENT (2 * SIZE_SZ)
439#endif
440
441/* The corresponding bit mask value */
442#define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
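
/*
  For illustration, a sketch of how the alignment guarantee can be checked
  on any pointer returned by this malloc, using the macros defined above:

    void* p = malloc(100);
    assert(((PTR_UINT)(p) & MALLOC_ALIGN_MASK) == 0);  // MALLOC_ALIGNMENT-aligned
*/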
443
444
445
446/*
447 REALLOC_ZERO_BYTES_FREES should be set if a call to
448 realloc with zero bytes should be the same as a call to free.
449 Some people think it should. Otherwise, since this malloc
450 returns a unique pointer for malloc(0), so does realloc(p, 0).
451*/
452
453/* #define REALLOC_ZERO_BYTES_FREES */
454
455/*
456 TRIM_FASTBINS controls whether free() of a very small chunk can
457 immediately lead to trimming. Setting to true (1) can reduce memory
458 footprint, but will almost always slow down programs that use a lot
459 of small chunks.
460
461 Define this only if you are willing to give up some speed to more
462 aggressively reduce system-level memory footprint when releasing
463 memory in programs that use many small chunks. You can get
464 essentially the same effect by setting MXFAST to 0, but this can
465 lead to even greater slowdowns in programs using many small chunks.
 466 TRIM_FASTBINS is an in-between compile-time option that prevents
 467 only those chunks bordering topmost memory from being placed in
 468 fastbins.
469*/
470
471#ifndef TRIM_FASTBINS
472#define TRIM_FASTBINS 0
473#endif
474
475
476/*
477 USE_DL_PREFIX will prefix all public routines with the string 'dl'.
478 This is necessary when you only want to use this malloc in one part
479 of a program, using your regular system malloc elsewhere.
480*/
481
482/* #define USE_DL_PREFIX */
483
484
485/*
486 USE_MALLOC_LOCK causes wrapper functions to surround each
487 callable routine with pthread mutex lock/unlock.
488
489 USE_MALLOC_LOCK forces USE_PUBLIC_MALLOC_WRAPPERS to be defined
490*/
491
492
493/* #define USE_MALLOC_LOCK */
494
495
496/*
497 If USE_PUBLIC_MALLOC_WRAPPERS is defined, every public routine is
498 actually a wrapper function that first calls MALLOC_PREACTION, then
499 calls the internal routine, and follows it with
500 MALLOC_POSTACTION. This is needed for locking, but you can also use
501 this, without USE_MALLOC_LOCK, for purposes of interception,
502 instrumentation, etc. It is a sad fact that using wrappers often
503 noticeably degrades performance of malloc-intensive programs.
504*/
505
506#ifdef USE_MALLOC_LOCK
507#define USE_PUBLIC_MALLOC_WRAPPERS
508#else
509/* #define USE_PUBLIC_MALLOC_WRAPPERS */
510#endif
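
/*
  A sketch of the interception use mentioned above (no locking involved):
  define USE_PUBLIC_MALLOC_WRAPPERS and substitute your own pre/post
  actions at the point further below marked "Substitute anything you like
  for these", for example to count calls (malloc_call_count is a
  hypothetical name, not part of this file):

    static unsigned long malloc_call_count;
    #define MALLOC_PREACTION  (malloc_call_count++, 0)  // must evaluate to 0 (success)
    #define MALLOC_POSTACTION (0)
*/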
511
512
513/*
514 Two-phase name translation.
515 All of the actual routines are given mangled names.
516 When wrappers are used, they become the public callable versions.
517 When DL_PREFIX is used, the callable names are prefixed.
518*/
519
520#ifndef USE_PUBLIC_MALLOC_WRAPPERS
521#define cALLOc public_cALLOc
522#define fREe public_fREe
523#define cFREe public_cFREe
524#define mALLOc public_mALLOc
525#define mEMALIGn public_mEMALIGn
526#define rEALLOc public_rEALLOc
527#define vALLOc public_vALLOc
528#define pVALLOc public_pVALLOc
529#define mALLINFo public_mALLINFo
530#define mALLOPt public_mALLOPt
531#define mTRIm public_mTRIm
532#define mSTATs public_mSTATs
533#define mUSABLe public_mUSABLe
534#define iCALLOc public_iCALLOc
535#define iCOMALLOc public_iCOMALLOc
536#endif
537
538#ifdef USE_DL_PREFIX
539#define public_cALLOc dlcalloc
540#define public_fREe dlfree
541#define public_cFREe dlcfree
542#define public_mALLOc dlmalloc
543#define public_mEMALIGn dlmemalign
544#define public_rEALLOc dlrealloc
545#define public_vALLOc dlvalloc
546#define public_pVALLOc dlpvalloc
547#define public_mALLINFo dlmallinfo
548#define public_mALLOPt dlmallopt
549#define public_mTRIm dlmalloc_trim
550#define public_mSTATs dlmalloc_stats
551#define public_mUSABLe dlmalloc_usable_size
552#define public_iCALLOc dlindependent_calloc
553#define public_iCOMALLOc dlindependent_comalloc
554#else /* USE_DL_PREFIX */
555#define public_cALLOc calloc
556#define public_fREe free
557#define public_cFREe cfree
558#define public_mALLOc malloc
559#define public_mEMALIGn memalign
560#define public_rEALLOc realloc
561#define public_vALLOc valloc
562#define public_pVALLOc pvalloc
563#define public_mALLINFo mallinfo
564#define public_mALLOPt mallopt
565#define public_mTRIm malloc_trim
566#define public_mSTATs malloc_stats
567#define public_mUSABLe malloc_usable_size
568#define public_iCALLOc independent_calloc
569#define public_iCOMALLOc independent_comalloc
570#endif /* USE_DL_PREFIX */
571
572
573/*
574 HAVE_MEMCPY should be defined if you are not otherwise using
575 ANSI STD C, but still have memcpy and memset in your C library
576 and want to use them in calloc and realloc. Otherwise simple
577 macro versions are defined below.
578
579 USE_MEMCPY should be defined as 1 if you actually want to
580 have memset and memcpy called. People report that the macro
581 versions are faster than libc versions on some systems.
582
583 Even if USE_MEMCPY is set to 1, loops to copy/clear small chunks
584 (of <= 36 bytes) are manually unrolled in realloc and calloc.
585*/
586
587#define HAVE_MEMCPY
588
589#ifndef USE_MEMCPY
590#ifdef HAVE_MEMCPY
591#define USE_MEMCPY 1
592#else
593#define USE_MEMCPY 0
594#endif
595#endif
596
597
598#if (__STD_C || defined(HAVE_MEMCPY))
599
600#ifdef WIN32
601/* On Win32 memset and memcpy are already declared in windows.h */
602#else
603#if __STD_C
604void* memset(void*, int, size_t);
605void* memcpy(void*, const void*, size_t);
606#else
607Void_t* memset();
608Void_t* memcpy();
609#endif
610#endif
611#endif
612
613/*
614 MALLOC_FAILURE_ACTION is the action to take before "return 0" when
615 malloc fails to be able to return memory, either because memory is
616 exhausted or because of illegal arguments.
617
618 By default, sets errno if running on STD_C platform, else does nothing.
619*/
620
621#ifndef MALLOC_FAILURE_ACTION
622#if __STD_C
623#define MALLOC_FAILURE_ACTION \
624 errno = ENOMEM;
625
626#else
627#define MALLOC_FAILURE_ACTION
628#endif
629#endif
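
/*
  A hedged example of what an alternative failure action might look like
  (the message text is illustrative; <stdio.h> and <errno.h> are included
  below, and the #ifndef above means a definition supplied earlier takes
  precedence over the default):

    #define MALLOC_FAILURE_ACTION \
      do { errno = ENOMEM; fprintf(stderr, "malloc: request failed\n"); } while (0)
*/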
630
631/*
632 MORECORE-related declarations. By default, rely on sbrk
633*/
634
635
636#ifdef LACKS_UNISTD_H
637#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
638#if __STD_C
639extern Void_t* sbrk(ptrdiff_t);
640#else
641extern Void_t* sbrk();
642#endif
643#endif
644#endif
645
646/*
647 MORECORE is the name of the routine to call to obtain more memory
648 from the system. See below for general guidance on writing
649 alternative MORECORE functions, as well as a version for WIN32 and a
650 sample version for pre-OSX macos.
651*/
652
653#ifndef MORECORE
654#define MORECORE sbrk
655#endif
656
657/*
658 MORECORE_FAILURE is the value returned upon failure of MORECORE
659 as well as mmap. Since it cannot be an otherwise valid memory address,
660 and must reflect values of standard sys calls, you probably ought not
661 try to redefine it.
662*/
663
664#ifndef MORECORE_FAILURE
665#define MORECORE_FAILURE (-1)
666#endif
667
668/*
669 If MORECORE_CONTIGUOUS is true, take advantage of fact that
670 consecutive calls to MORECORE with positive arguments always return
671 contiguous increasing addresses. This is true of unix sbrk. Even
672 if not defined, when regions happen to be contiguous, malloc will
673 permit allocations spanning regions obtained from different
674 calls. But defining this when applicable enables some stronger
675 consistency checks and space efficiencies.
676*/
677
678#ifndef MORECORE_CONTIGUOUS
679#define MORECORE_CONTIGUOUS 1
680#endif
681
682/*
683 Define MORECORE_CANNOT_TRIM if your version of MORECORE
684 cannot release space back to the system when given negative
685 arguments. This is generally necessary only if you are using
686 a hand-crafted MORECORE function that cannot handle negative arguments.
687*/
688
689/* #define MORECORE_CANNOT_TRIM */
690
691
692/*
693 Define HAVE_MMAP as true to optionally make malloc() use mmap() to
694 allocate very large blocks. These will be returned to the
695 operating system immediately after a free(). Also, if mmap
696 is available, it is used as a backup strategy in cases where
697 MORECORE fails to provide space from system.
698
699 This malloc is best tuned to work with mmap for large requests.
700 If you do not have mmap, operations involving very large chunks (1MB
701 or so) may be slower than you'd like.
702*/
703
704#ifndef HAVE_MMAP
705#define HAVE_MMAP 1
706#endif
707
708#if HAVE_MMAP
709/*
710 Standard unix mmap using /dev/zero clears memory so calloc doesn't
711 need to.
712*/
713
714#ifndef MMAP_CLEARS
715#define MMAP_CLEARS 1
716#endif
717
718#else /* no mmap */
719#ifndef MMAP_CLEARS
720#define MMAP_CLEARS 0
721#endif
722#endif
723
724
725/*
726 MMAP_AS_MORECORE_SIZE is the minimum mmap size argument to use if
727 sbrk fails, and mmap is used as a backup (which is done only if
728 HAVE_MMAP). The value must be a multiple of page size. This
729 backup strategy generally applies only when systems have "holes" in
730 address space, so sbrk cannot perform contiguous expansion, but
731 there is still space available on system. On systems for which
732 this is known to be useful (i.e. most linux kernels), this occurs
733 only when programs allocate huge amounts of memory. Between this,
734 and the fact that mmap regions tend to be limited, the size should
735 be large, to avoid too many mmap calls and thus avoid running out
736 of kernel resources.
737*/
738
739#ifndef MMAP_AS_MORECORE_SIZE
740#define MMAP_AS_MORECORE_SIZE (1024 * 1024)
741#endif
742
743/*
744 Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
745 large blocks. This is currently only possible on Linux with
746 kernel versions newer than 1.3.77.
747*/
748
749#ifndef HAVE_MREMAP
750#ifdef linux
751#define HAVE_MREMAP 1
752#else
753#define HAVE_MREMAP 0
754#endif
755
 756#endif /* HAVE_MREMAP */
757
758
759/*
760 The system page size. To the extent possible, this malloc manages
761 memory from the system in page-size units. Note that this value is
762 cached during initialization into a field of malloc_state. So even
763 if malloc_getpagesize is a function, it is only called once.
764
765 The following mechanics for getpagesize were adapted from bsd/gnu
766 getpagesize.h. If none of the system-probes here apply, a value of
767 4096 is used, which should be OK: If they don't apply, then using
768 the actual value probably doesn't impact performance.
769*/
770
771
772#ifndef malloc_getpagesize
773
774#ifndef LACKS_UNISTD_H
775# include <unistd.h>
776#endif
777
778# ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
779# ifndef _SC_PAGE_SIZE
780# define _SC_PAGE_SIZE _SC_PAGESIZE
781# endif
782# endif
783
784# ifdef _SC_PAGE_SIZE
785# define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
786# else
787# if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
788 extern size_t getpagesize();
789# define malloc_getpagesize getpagesize()
790# else
791# ifdef WIN32 /* use supplied emulation of getpagesize */
792# define malloc_getpagesize getpagesize()
793# else
794# ifndef LACKS_SYS_PARAM_H
795# include <sys/param.h>
796# endif
797# ifdef EXEC_PAGESIZE
798# define malloc_getpagesize EXEC_PAGESIZE
799# else
800# ifdef NBPG
801# ifndef CLSIZE
802# define malloc_getpagesize NBPG
803# else
804# define malloc_getpagesize (NBPG * CLSIZE)
805# endif
806# else
807# ifdef NBPC
808# define malloc_getpagesize NBPC
809# else
810# ifdef PAGESIZE
811# define malloc_getpagesize PAGESIZE
812# else /* just guess */
813# define malloc_getpagesize (4096)
814# endif
815# endif
816# endif
817# endif
818# endif
819# endif
820# endif
821#endif
822
823/*
824 This version of malloc supports the standard SVID/XPG mallinfo
825 routine that returns a struct containing usage properties and
826 statistics. It should work on any SVID/XPG compliant system that has
827 a /usr/include/malloc.h defining struct mallinfo. (If you'd like to
828 install such a thing yourself, cut out the preliminary declarations
829 as described above and below and save them in a malloc.h file. But
830 there's no compelling reason to bother to do this.)
831
832 The main declaration needed is the mallinfo struct that is returned
 833 (by-copy) by mallinfo(). The SVID/XPG mallinfo struct contains a
834 bunch of fields that are not even meaningful in this version of
 835 malloc. These fields are instead filled by mallinfo() with
836 other numbers that might be of interest.
837
838 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
839 /usr/include/malloc.h file that includes a declaration of struct
840 mallinfo. If so, it is included; else an SVID2/XPG2 compliant
841 version is declared below. These must be precisely the same for
842 mallinfo() to work. The original SVID version of this struct,
843 defined on most systems with mallinfo, declares all fields as
 844 ints. But some others define them as unsigned long. If your system
845 defines the fields using a type of different width than listed here,
846 you must #include your system version and #define
847 HAVE_USR_INCLUDE_MALLOC_H.
848*/
849
850/* #define HAVE_USR_INCLUDE_MALLOC_H */
851
852#ifdef HAVE_USR_INCLUDE_MALLOC_H
853#include "/usr/include/malloc.h"
854#else
855
856/* SVID2/XPG mallinfo structure */
857
858struct mallinfo {
859 int arena; /* non-mmapped space allocated from system */
860 int ordblks; /* number of free chunks */
861 int smblks; /* number of fastbin blocks */
862 int hblks; /* number of mmapped regions */
863 int hblkhd; /* space in mmapped regions */
864 int usmblks; /* maximum total allocated space */
865 int fsmblks; /* space available in freed fastbin blocks */
866 int uordblks; /* total allocated space */
867 int fordblks; /* total free space */
868 int keepcost; /* top-most, releasable (via malloc_trim) space */
869};
870
871/*
872 SVID/XPG defines four standard parameter numbers for mallopt,
873 normally defined in malloc.h. Only one of these (M_MXFAST) is used
874 in this malloc. The others (M_NLBLKS, M_GRAIN, M_KEEP) don't apply,
875 so setting them has no effect. But this malloc also supports other
876 options in mallopt described below.
877*/
878#endif
879
880
881/* ---------- description of public routines ------------ */
882
883/*
884 malloc(size_t n)
885 Returns a pointer to a newly allocated chunk of at least n bytes, or null
886 if no space is available. Additionally, on failure, errno is
887 set to ENOMEM on ANSI C systems.
888
 889 If n is zero, malloc returns a minimum-sized chunk. (The minimum
890 size is 16 bytes on most 32bit systems, and 24 or 32 bytes on 64bit
891 systems.) On most systems, size_t is an unsigned type, so calls
892 with negative arguments are interpreted as requests for huge amounts
893 of space, which will often fail. The maximum supported value of n
894 differs across systems, but is in all cases less than the maximum
895 representable value of a size_t.
896*/
897#if __STD_C
898Void_t* public_mALLOc(size_t);
899#else
900Void_t* public_mALLOc();
901#endif
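
/*
  Usage sketch (assuming an ANSI C environment, so that a failed call sets
  errno to ENOMEM as described above):

    char* buf = (char*) malloc(1024);
    if (buf == 0) {
      perror("malloc");     // reports the errno set by the failure action
      return -1;            // ...or take any other failure handling
    }
    // ... use buf ...
    free(buf);
*/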
902
903/*
904 free(Void_t* p)
905 Releases the chunk of memory pointed to by p, that had been previously
906 allocated using malloc or a related routine such as realloc.
907 It has no effect if p is null. It can have arbitrary (i.e., bad!)
908 effects if p has already been freed.
909
 910 Unless disabled (using mallopt), freeing very large spaces will,
 911 when possible, automatically trigger operations that give
912 back unused memory to the system, thus reducing program footprint.
913*/
914#if __STD_C
915void public_fREe(Void_t*);
916#else
917void public_fREe();
918#endif
919
920/*
921 calloc(size_t n_elements, size_t element_size);
922 Returns a pointer to n_elements * element_size bytes, with all locations
923 set to zero.
924*/
925#if __STD_C
926Void_t* public_cALLOc(size_t, size_t);
927#else
928Void_t* public_cALLOc();
929#endif
930
931/*
932 realloc(Void_t* p, size_t n)
933 Returns a pointer to a chunk of size n that contains the same data
934 as does chunk p up to the minimum of (n, p's size) bytes, or null
935 if no space is available.
936
937 The returned pointer may or may not be the same as p. The algorithm
938 prefers extending p when possible, otherwise it employs the
939 equivalent of a malloc-copy-free sequence.
940
941 If p is null, realloc is equivalent to malloc.
942
943 If space is not available, realloc returns null, errno is set (if on
944 ANSI) and p is NOT freed.
945
 946 If n is for fewer bytes than already held by p, the newly unused
947 space is lopped off and freed if possible. Unless the #define
948 REALLOC_ZERO_BYTES_FREES is set, realloc with a size argument of
949 zero (re)allocates a minimum-sized chunk.
950
951 Large chunks that were internally obtained via mmap will always
952 be reallocated using malloc-copy-free sequences unless
953 the system supports MREMAP (currently only linux).
954
955 The old unix realloc convention of allowing the last-free'd chunk
956 to be used as an argument to realloc is not supported.
957*/
958#if __STD_C
959Void_t* public_rEALLOc(Void_t*, size_t);
960#else
961Void_t* public_rEALLOc();
962#endif
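
/*
  Because a failed realloc returns null but does NOT free p, the usual safe
  idiom (a sketch; buf and new_size are illustrative) keeps the old pointer
  until the call succeeds:

    char* tmp = (char*) realloc(buf, new_size);
    if (tmp == 0) {
      // buf is still valid here; handle the failure, e.g.:
      free(buf);
      return -1;
    }
    buf = tmp;              // only overwrite buf on success
*/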
963
964/*
965 memalign(size_t alignment, size_t n);
966 Returns a pointer to a newly allocated chunk of n bytes, aligned
967 in accord with the alignment argument.
968
969 The alignment argument should be a power of two. If the argument is
970 not a power of two, the nearest greater power is used.
971 8-byte alignment is guaranteed by normal malloc calls, so don't
972 bother calling memalign with an argument of 8 or less.
973
974 Overreliance on memalign is a sure way to fragment space.
975*/
976#if __STD_C
977Void_t* public_mEMALIGn(size_t, size_t);
978#else
979Void_t* public_mEMALIGn();
980#endif
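
/*
  Example (a sketch): obtaining a 64-byte-aligned block, e.g. for a
  cache-line-sized structure. The block is released with free() as usual:

    void* p = memalign(64, 1024);   // 64 is a power of two > MALLOC_ALIGNMENT
    if (p != 0) {
      // ... use p ...
      free(p);
    }
*/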
981
982/*
983 valloc(size_t n);
984 Equivalent to memalign(pagesize, n), where pagesize is the page
985 size of the system. If the pagesize is unknown, 4096 is used.
986*/
987#if __STD_C
988Void_t* public_vALLOc(size_t);
989#else
990Void_t* public_vALLOc();
991#endif
992
993
994
995/*
996 mallopt(int parameter_number, int parameter_value)
 997 Sets tunable parameters. The format is to provide a
998 (parameter-number, parameter-value) pair. mallopt then sets the
999 corresponding parameter to the argument value if it can (i.e., so
1000 long as the value is meaningful), and returns 1 if successful else
1001 0. SVID/XPG/ANSI defines four standard param numbers for mallopt,
1002 normally defined in malloc.h. Only one of these (M_MXFAST) is used
1003 in this malloc. The others (M_NLBLKS, M_GRAIN, M_KEEP) don't apply,
1004 so setting them has no effect. But this malloc also supports four
1005 other options in mallopt. See below for details. Briefly, supported
1006 parameters are as follows (listed defaults are for "typical"
1007 configurations).
1008
1009 Symbol param # default allowed param values
1010 M_MXFAST 1 64 0-80 (0 disables fastbins)
1011 M_TRIM_THRESHOLD -1 256*1024 any (-1U disables trimming)
1012 M_TOP_PAD -2 0 any
1013 M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support)
1014 M_MMAP_MAX -4 65536 any (0 disables use of mmap)
1015*/
1016#if __STD_C
1017int public_mALLOPt(int, int);
1018#else
1019int public_mALLOPt();
1020#endif
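
/*
  Example calls (a sketch using the parameter numbers tabulated above):

    mallopt(M_MXFAST, 0);                     // disable fastbins entirely
    mallopt(M_TRIM_THRESHOLD, 128 * 1024);    // trim more aggressively
    if (mallopt(M_MMAP_THRESHOLD, 1024 * 1024) == 0) {
      // the value was not accepted
    }
*/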
1021
1022
1023/*
1024 mallinfo()
1025 Returns (by copy) a struct containing various summary statistics:
1026
1027 arena: current total non-mmapped bytes allocated from system
1028 ordblks: the number of free chunks
1029 smblks: the number of fastbin blocks (i.e., small chunks that
 1030 have been freed but not yet reused or consolidated)
1031 hblks: current number of mmapped regions
1032 hblkhd: total bytes held in mmapped regions
1033 usmblks: the maximum total allocated space. This will be greater
1034 than current total if trimming has occurred.
1035 fsmblks: total bytes held in fastbin blocks
1036 uordblks: current total allocated space (normal or mmapped)
1037 fordblks: total free space
1038 keepcost: the maximum number of bytes that could ideally be released
1039 back to system via malloc_trim. ("ideally" means that
1040 it ignores page restrictions etc.)
1041
1042 Because these fields are ints, but internal bookkeeping may
1043 be kept as longs, the reported values may wrap around zero and
1044 thus be inaccurate.
1045*/
1046#if __STD_C
1047struct mallinfo public_mALLINFo(void);
1048#else
1049struct mallinfo public_mALLINFo();
1050#endif
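
/*
  Example (a sketch) of reporting a few of the fields described above:

    struct mallinfo mi = mallinfo();
    fprintf(stderr, "arena=%d inuse=%d free=%d trimmable=%d\n",
            mi.arena, mi.uordblks, mi.fordblks, mi.keepcost);
*/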
1051
1052/*
1053 independent_calloc(size_t n_elements, size_t element_size, Void_t* chunks[]);
1054
1055 independent_calloc is similar to calloc, but instead of returning a
1056 single cleared space, it returns an array of pointers to n_elements
1057 independent elements that can hold contents of size elem_size, each
1058 of which starts out cleared, and can be independently freed,
1059 realloc'ed etc. The elements are guaranteed to be adjacently
1060 allocated (this is not guaranteed to occur with multiple callocs or
1061 mallocs), which may also improve cache locality in some
1062 applications.
1063
1064 The "chunks" argument is optional (i.e., may be null, which is
1065 probably the most typical usage). If it is null, the returned array
1066 is itself dynamically allocated and should also be freed when it is
1067 no longer needed. Otherwise, the chunks array must be of at least
1068 n_elements in length. It is filled in with the pointers to the
1069 chunks.
1070
1071 In either case, independent_calloc returns this pointer array, or
1072 null if the allocation failed. If n_elements is zero and "chunks"
1073 is null, it returns a chunk representing an array with zero elements
1074 (which should be freed if not wanted).
1075
1076 Each element must be individually freed when it is no longer
1077 needed. If you'd like to instead be able to free all at once, you
1078 should instead use regular calloc and assign pointers into this
1079 space to represent elements. (In this case though, you cannot
1080 independently free elements.)
1081
1082 independent_calloc simplifies and speeds up implementations of many
1083 kinds of pools. It may also be useful when constructing large data
1084 structures that initially have a fixed number of fixed-sized nodes,
1085 but the number is not known at compile time, and some of the nodes
1086 may later need to be freed. For example:
1087
1088 struct Node { int item; struct Node* next; };
1089
1090 struct Node* build_list() {
1091 struct Node** pool;
1092 int n = read_number_of_nodes_needed();
1093 if (n <= 0) return 0;
 1094 pool = (struct Node**) independent_calloc(n, sizeof(struct Node), 0);
1095 if (pool == 0) die();
1096 // organize into a linked list...
1097 struct Node* first = pool[0];
 1098 for (int i = 0; i < n-1; ++i)
1099 pool[i]->next = pool[i+1];
1100 free(pool); // Can now free the array (or not, if it is needed later)
1101 return first;
1102 }
1103*/
1104#if __STD_C
1105Void_t** public_iCALLOc(size_t, size_t, Void_t**);
1106#else
1107Void_t** public_iCALLOc();
1108#endif
1109
1110/*
1111 independent_comalloc(size_t n_elements, size_t sizes[], Void_t* chunks[]);
1112
1113 independent_comalloc allocates, all at once, a set of n_elements
1114 chunks with sizes indicated in the "sizes" array. It returns
1115 an array of pointers to these elements, each of which can be
1116 independently freed, realloc'ed etc. The elements are guaranteed to
1117 be adjacently allocated (this is not guaranteed to occur with
1118 multiple callocs or mallocs), which may also improve cache locality
1119 in some applications.
1120
1121 The "chunks" argument is optional (i.e., may be null). If it is null
1122 the returned array is itself dynamically allocated and should also
1123 be freed when it is no longer needed. Otherwise, the chunks array
1124 must be of at least n_elements in length. It is filled in with the
1125 pointers to the chunks.
1126
1127 In either case, independent_comalloc returns this pointer array, or
1128 null if the allocation failed. If n_elements is zero and chunks is
1129 null, it returns a chunk representing an array with zero elements
1130 (which should be freed if not wanted).
1131
1132 Each element must be individually freed when it is no longer
1133 needed. If you'd like to instead be able to free all at once, you
1134 should instead use a single regular malloc, and assign pointers at
1135 particular offsets in the aggregate space. (In this case though, you
1136 cannot independently free elements.)
1137
 1138 independent_comalloc differs from independent_calloc in that each
1139 element may have a different size, and also that it does not
1140 automatically clear elements.
1141
1142 independent_comalloc can be used to speed up allocation in cases
1143 where several structs or objects must always be allocated at the
1144 same time. For example:
1145
 1146 struct Head { ... };
 1147 struct Foot { ... };
1148
1149 void send_message(char* msg) {
1150 int msglen = strlen(msg);
1151 size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
1152 void* chunks[3];
1153 if (independent_comalloc(3, sizes, chunks) == 0)
1154 die();
1155 struct Head* head = (struct Head*)(chunks[0]);
1156 char* body = (char*)(chunks[1]);
1157 struct Foot* foot = (struct Foot*)(chunks[2]);
1158 // ...
1159 }
1160
1161 In general though, independent_comalloc is worth using only for
1162 larger values of n_elements. For small values, you probably won't
1163 detect enough difference from series of malloc calls to bother.
1164
1165 Overuse of independent_comalloc can increase overall memory usage,
1166 since it cannot reuse existing noncontiguous small chunks that
1167 might be available for some of the elements.
1168*/
1169#if __STD_C
1170Void_t** public_iCOMALLOc(size_t, size_t*, Void_t**);
1171#else
1172Void_t** public_iCOMALLOc();
1173#endif
1174
1175
1176/*
1177 pvalloc(size_t n);
1178 Equivalent to valloc(minimum-page-that-holds(n)), that is,
1179 round up n to nearest pagesize.
1180 */
1181#if __STD_C
1182Void_t* public_pVALLOc(size_t);
1183#else
1184Void_t* public_pVALLOc();
1185#endif
1186
1187/*
1188 cfree(Void_t* p);
1189 Equivalent to free(p).
1190
1191 cfree is needed/defined on some systems that pair it with calloc,
1192 for odd historical reasons (such as: cfree is used in example
1193 code in the first edition of K&R).
1194*/
1195#if __STD_C
1196void public_cFREe(Void_t*);
1197#else
1198void public_cFREe();
1199#endif
1200
1201/*
1202 malloc_trim(size_t pad);
1203
1204 If possible, gives memory back to the system (via negative
1205 arguments to sbrk) if there is unused memory at the `high' end of
1206 the malloc pool. You can call this after freeing large blocks of
1207 memory to potentially reduce the system-level memory requirements
1208 of a program. However, it cannot guarantee to reduce memory. Under
1209 some allocation patterns, some large free blocks of memory will be
1210 locked between two used chunks, so they cannot be given back to
1211 the system.
1212
1213 The `pad' argument to malloc_trim represents the amount of free
1214 trailing space to leave untrimmed. If this argument is zero,
1215 only the minimum amount of memory to maintain internal data
1216 structures will be left (one page or less). Non-zero arguments
1217 can be supplied to maintain enough trailing space to service
1218 future expected allocations without having to re-obtain memory
1219 from the system.
1220
1221 Malloc_trim returns 1 if it actually released any memory, else 0.
1222 On systems that do not support "negative sbrks", it will always
 1223 return 0.
1224*/
1225#if __STD_C
1226int public_mTRIm(size_t);
1227#else
1228int public_mTRIm();
1229#endif
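
/*
  Example (a sketch): after releasing a large working set, ask malloc to
  give trailing free memory back to the system, keeping 64KB of slack for
  upcoming allocations (big_buffer is illustrative):

    free(big_buffer);
    if (malloc_trim(64 * 1024)) {
      // some memory was actually returned to the system
    }
*/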
1230
1231/*
1232 malloc_usable_size(Void_t* p);
1233
1234 Returns the number of bytes you can actually use in
1235 an allocated chunk, which may be more than you requested (although
1236 often not) due to alignment and minimum size constraints.
1237 You can use this many bytes without worrying about
1238 overwriting other allocated objects. This is not a particularly great
1239 programming practice. malloc_usable_size can be more useful in
1240 debugging and assertions, for example:
1241
1242 p = malloc(n);
1243 assert(malloc_usable_size(p) >= 256);
1244
1245*/
1246#if __STD_C
1247size_t public_mUSABLe(Void_t*);
1248#else
1249size_t public_mUSABLe();
1250#endif
1251
1252/*
1253 malloc_stats();
1254 Prints on stderr the amount of space obtained from the system (both
1255 via sbrk and mmap), the maximum amount (which may be more than
1256 current if malloc_trim and/or munmap got called), and the current
1257 number of bytes allocated via malloc (or realloc, etc) but not yet
1258 freed. Note that this is the number of bytes allocated, not the
1259 number requested. It will be larger than the number requested
1260 because of alignment and bookkeeping overhead. Because it includes
1261 alignment wastage as being in use, this figure may be greater than
1262 zero even when no user-level chunks are allocated.
1263
1264 The reported current and maximum system memory can be inaccurate if
1265 a program makes other calls to system memory allocation functions
1266 (normally sbrk) outside of malloc.
1267
1268 malloc_stats prints only the most commonly interesting statistics.
1269 More information can be obtained by calling mallinfo.
1270
1271*/
1272#if __STD_C
1273void public_mSTATs();
1274#else
1275void public_mSTATs();
1276#endif
1277
1278/* mallopt tuning options */
1279
1280/*
1281 M_MXFAST is the maximum request size used for "fastbins", special bins
1282 that hold returned chunks without consolidating their spaces. This
1283 enables future requests for chunks of the same size to be handled
1284 very quickly, but can increase fragmentation, and thus increase the
1285 overall memory footprint of a program.
1286
1287 This malloc manages fastbins very conservatively yet still
1288 efficiently, so fragmentation is rarely a problem for values less
1289 than or equal to the default. The maximum supported value of MXFAST
1290 is 80. You wouldn't want it any higher than this anyway. Fastbins
1291 are designed especially for use with many small structs, objects or
1292 strings -- the default handles structs/objects/arrays with sizes up
1293 to 16 4byte fields, or small strings representing words, tokens,
1294 etc. Using fastbins for larger objects normally worsens
1295 fragmentation without improving speed.
1296
1297 M_MXFAST is set in REQUEST size units. It is internally used in
1298 chunksize units, which adds padding and alignment. You can reduce
1299 M_MXFAST to 0 to disable all use of fastbins. This causes the malloc
1300 algorithm to be a closer approximation of fifo-best-fit in all cases,
1301 not just for larger requests, but will generally cause it to be
1302 slower.
1303*/
1304
1305
1306/* M_MXFAST is a standard SVID/XPG tuning option, usually listed in malloc.h */
1307#ifndef M_MXFAST
1308#define M_MXFAST 1
1309#endif
1310
1311#ifndef DEFAULT_MXFAST
1312#define DEFAULT_MXFAST 64
1313#endif
1314
1315
1316/*
1317 M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
1318 to keep before releasing via malloc_trim in free().
1319
1320 Automatic trimming is mainly useful in long-lived programs.
1321 Because trimming via sbrk can be slow on some systems, and can
1322 sometimes be wasteful (in cases where programs immediately
1323 afterward allocate more large chunks) the value should be high
1324 enough so that your overall system performance would improve by
1325 releasing this much memory.
1326
1327 The trim threshold and the mmap control parameters (see below)
1328 can be traded off with one another. Trimming and mmapping are
1329 two different ways of releasing unused memory back to the
1330 system. Between these two, it is often possible to keep
1331 system-level demands of a long-lived program down to a bare
1332 minimum. For example, in one test suite of sessions measuring
1333 the XF86 X server on Linux, using a trim threshold of 128K and a
1334 mmap threshold of 192K led to near-minimal long term resource
1335 consumption.
1336
1337 If you are using this malloc in a long-lived program, it should
1338 pay to experiment with these values. As a rough guide, you
1339 might set to a value close to the average size of a process
1340 (program) running on your system. Releasing this much memory
1341 would allow such a process to run in memory. Generally, it's
 1342 worth it to tune for trimming rather than memory mapping when a
1343 program undergoes phases where several large chunks are
1344 allocated and released in ways that can reuse each other's
1345 storage, perhaps mixed with phases where there are no such
1346 chunks at all. And in well-behaved long-lived programs,
1347 controlling release of large blocks via trimming versus mapping
1348 is usually faster.
1349
1350 However, in most programs, these parameters serve mainly as
1351 protection against the system-level effects of carrying around
1352 massive amounts of unneeded memory. Since frequent calls to
1353 sbrk, mmap, and munmap otherwise degrade performance, the default
1354 parameters are set to relatively high values that serve only as
1355 safeguards.
1356
1357 The trim value must be greater than page size to have any useful
1358 effect. To disable trimming completely, you can set to
1359 (unsigned long)(-1)
1360
1361 Trim settings interact with fastbin (MXFAST) settings: Unless
1362 TRIM_FASTBINS is defined, automatic trimming never takes place upon
1363 freeing a chunk with size less than or equal to MXFAST. Trimming is
1364 instead delayed until subsequent freeing of larger chunks. However,
1365 you can still force an attempted trim by calling malloc_trim.
1366
1367 Also, trimming is not generally possible in cases where
1368 the main arena is obtained via mmap.
1369
1370 Note that the trick some people use of mallocing a huge space and
1371 then freeing it at program startup, in an attempt to reserve system
1372 memory, doesn't have the intended effect under automatic trimming,
1373 since that memory will immediately be returned to the system.
1374*/
1375
1376#define M_TRIM_THRESHOLD -1
1377
1378#ifndef DEFAULT_TRIM_THRESHOLD
1379#define DEFAULT_TRIM_THRESHOLD (256 * 1024)
1380#endif
1381
1382/*
1383 M_TOP_PAD is the amount of extra `padding' space to allocate or
1384 retain whenever sbrk is called. It is used in two ways internally:
1385
1386 * When sbrk is called to extend the top of the arena to satisfy
1387 a new malloc request, this much padding is added to the sbrk
1388 request.
1389
1390 * When malloc_trim is called automatically from free(),
1391 it is used as the `pad' argument.
1392
1393 In both cases, the actual amount of padding is rounded
1394 so that the end of the arena is always a system page boundary.
1395
1396 The main reason for using padding is to avoid calling sbrk so
1397 often. Having even a small pad greatly reduces the likelihood
1398 that nearly every malloc request during program start-up (or
1399 after trimming) will invoke sbrk, which needlessly wastes
1400 time.
1401
1402 Automatic rounding-up to page-size units is normally sufficient
1403 to avoid measurable overhead, so the default is 0. However, in
1404 systems where sbrk is relatively slow, it can pay to increase
1405 this value, at the expense of carrying around more memory than
1406 the program needs.
1407*/
1408
1409#define M_TOP_PAD -2
1410
1411#ifndef DEFAULT_TOP_PAD
1412#define DEFAULT_TOP_PAD (0)
1413#endif
1414
1415/*
1416 M_MMAP_THRESHOLD is the request size threshold for using mmap()
1417 to service a request. Requests of at least this size that cannot
1418 be allocated using already-existing space will be serviced via mmap.
1419 (If enough normal freed space already exists it is used instead.)
1420
1421 Using mmap segregates relatively large chunks of memory so that
1422 they can be individually obtained and released from the host
1423 system. A request serviced through mmap is never reused by any
1424 other request (at least not directly; the system may just so
1425 happen to remap successive requests to the same locations).
1426
1427 Segregating space in this way has the benefits that:
1428
1429 1. Mmapped space can ALWAYS be individually released back
1430 to the system, which helps keep the system level memory
1431 demands of a long-lived program low.
1432 2. Mapped memory can never become `locked' between
1433 other chunks, as can happen with normally allocated chunks, which
1434 means that even trimming via malloc_trim would not release them.
1435 3. On some systems with "holes" in address spaces, mmap can obtain
1436 memory that sbrk cannot.
1437
1438 However, it has the disadvantages that:
1439
1440 1. The space cannot be reclaimed, consolidated, and then
1441 used to service later requests, as happens with normal chunks.
1442 2. It can lead to more wastage because of mmap page alignment
1443 requirements
1444 3. It causes malloc performance to be more dependent on host
1445 system memory management support routines which may vary in
1446 implementation quality and may impose arbitrary
1447 limitations. Generally, servicing a request via normal
1448 malloc steps is faster than going through a system's mmap.
1449
1450 The advantages of mmap nearly always outweigh disadvantages for
1451 "large" chunks, but the value of "large" varies across systems. The
1452 default is an empirically derived value that works well in most
1453 systems.
1454*/
1455
1456#define M_MMAP_THRESHOLD -3
1457
 1458#define DEFAULT_MMAP_THRESHOLD (16 * 1024 * 1024)
 1459
1460#ifndef DEFAULT_MMAP_THRESHOLD
1461#define DEFAULT_MMAP_THRESHOLD (256 * 1024)
1462#endif
1463
1464/*
1465 M_MMAP_MAX is the maximum number of requests to simultaneously
 1466 service using mmap. This parameter exists because some
 1467 systems have a limited number of internal tables for
 1468 use by mmap, and using more than a few of them may degrade
 1469 performance.
1470
1471 The default is set to a value that serves only as a safeguard.
1472 Setting to 0 disables use of mmap for servicing large requests. If
1473 HAVE_MMAP is not set, the default value is 0, and attempts to set it
1474 to non-zero values in mallopt will fail.
1475*/
1476
1477#define M_MMAP_MAX -4
1478
1479#ifndef DEFAULT_MMAP_MAX
1480#if HAVE_MMAP
1481#define DEFAULT_MMAP_MAX (65536)
1482#else
1483#define DEFAULT_MMAP_MAX (0)
1484#endif
1485#endif
1486
1487#ifdef __cplusplus
1488}; /* end of extern "C" */
1489#endif
1490
1491/*
1492 ========================================================================
1493 To make a fully customizable malloc.h header file, cut everything
1494 above this line, put into file malloc.h, edit to suit, and #include it
1495 on the next line, as well as in programs that use this malloc.
1496 ========================================================================
1497*/
1498
1499/* #include "malloc.h" */
1500
1501/* --------------------- public wrappers ---------------------- */
1502
1503#ifdef USE_PUBLIC_MALLOC_WRAPPERS
1504
1505/* Declare all routines as internal */
1506#if __STD_C
1507static Void_t* mALLOc(size_t);
1508static void fREe(Void_t*);
1509static Void_t* rEALLOc(Void_t*, size_t);
1510static Void_t* mEMALIGn(size_t, size_t);
1511static Void_t* vALLOc(size_t);
1512static Void_t* pVALLOc(size_t);
1513static Void_t* cALLOc(size_t, size_t);
1514static Void_t** iCALLOc(size_t, size_t, Void_t**);
1515static Void_t** iCOMALLOc(size_t, size_t*, Void_t**);
1516static void cFREe(Void_t*);
1517static int mTRIm(size_t);
1518static size_t mUSABLe(Void_t*);
1519static void mSTATs();
1520static int mALLOPt(int, int);
1521static struct mallinfo mALLINFo(void);
1522#else
1523static Void_t* mALLOc();
1524static void fREe();
1525static Void_t* rEALLOc();
1526static Void_t* mEMALIGn();
1527static Void_t* vALLOc();
1528static Void_t* pVALLOc();
1529static Void_t* cALLOc();
1530static Void_t** iCALLOc();
1531static Void_t** iCOMALLOc();
1532static void cFREe();
1533static int mTRIm();
1534static size_t mUSABLe();
1535static void mSTATs();
1536static int mALLOPt();
1537static struct mallinfo mALLINFo();
1538#endif
1539
1540/*
1541 MALLOC_PREACTION and MALLOC_POSTACTION should be
1542 defined to return 0 on success, and nonzero on failure.
1543 The return value of MALLOC_POSTACTION is currently ignored
1544 in wrapper functions since there is no reasonable default
1545 action to take on failure.
1546*/
1547
1548
1549#ifdef USE_MALLOC_LOCK
1550
1551#ifdef WIN32
1552
1553static int mALLOC_MUTEx;
1554#define MALLOC_PREACTION slwait(&mALLOC_MUTEx)
1555#define MALLOC_POSTACTION slrelease(&mALLOC_MUTEx)
1556
1557#else
1558
1559#include <pthread.h>
1560
1561static pthread_mutex_t mALLOC_MUTEx = PTHREAD_MUTEX_INITIALIZER;
1562
1563#define MALLOC_PREACTION pthread_mutex_lock(&mALLOC_MUTEx)
1564#define MALLOC_POSTACTION pthread_mutex_unlock(&mALLOC_MUTEx)
1565
1566#endif /* WIN32 */
1567
1568#else
1569
1570/* Substitute anything you like for these */
1571
1572#define MALLOC_PREACTION (0)
1573#define MALLOC_POSTACTION (0)
1574
1575#endif /* USE_MALLOC_LOCK */
1576
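/*
  As a sketch of "anything you like": a hypothetical user-supplied
  spinlock built on GCC's __sync builtins (an assumption -- any locking
  primitive with the same 0-on-success convention works).  Note that
  MALLOC_PREACTION must return 0 on success; a nonzero value makes the
  wrappers below fail the call.
*/

#if 0   /* illustrative only -- not compiled into the allocator */
static volatile int malloc_user_lock;            /* hypothetical lock word */

static int malloc_user_acquire(void) {
  while (__sync_lock_test_and_set(&malloc_user_lock, 1))
    ;                                            /* spin until the lock is free */
  return 0;                                      /* 0 == success                */
}

static int malloc_user_release(void) {
  __sync_lock_release(&malloc_user_lock);
  return 0;
}

#define MALLOC_PREACTION  malloc_user_acquire()
#define MALLOC_POSTACTION malloc_user_release()
#endif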
1577Void_t* public_mALLOc(size_t bytes) {
1578 Void_t* m;
1579 if (MALLOC_PREACTION != 0) {
1580 return 0;
1581 }
1582 m = mALLOc(bytes);
1583 if (MALLOC_POSTACTION != 0) {
1584 }
1585 return m;
1586}
1587
1588void public_fREe(Void_t* m) {
1589 if (MALLOC_PREACTION != 0) {
1590 return;
1591 }
1592 fREe(m);
1593 if (MALLOC_POSTACTION != 0) {
1594 }
1595}
1596
1597Void_t* public_rEALLOc(Void_t* m, size_t bytes) {
1598 if (MALLOC_PREACTION != 0) {
1599 return 0;
1600 }
1601 m = rEALLOc(m, bytes);
1602 if (MALLOC_POSTACTION != 0) {
1603 }
1604 return m;
1605}
1606
1607Void_t* public_mEMALIGn(size_t alignment, size_t bytes) {
1608 Void_t* m;
1609 if (MALLOC_PREACTION != 0) {
1610 return 0;
1611 }
1612 m = mEMALIGn(alignment, bytes);
1613 if (MALLOC_POSTACTION != 0) {
1614 }
1615 return m;
1616}
1617
1618Void_t* public_vALLOc(size_t bytes) {
1619 Void_t* m;
1620 if (MALLOC_PREACTION != 0) {
1621 return 0;
1622 }
1623 m = vALLOc(bytes);
1624 if (MALLOC_POSTACTION != 0) {
1625 }
1626 return m;
1627}
1628
1629#ifdef NEED_PVALLOC
1630Void_t* public_pVALLOc(size_t bytes) {
1631 Void_t* m;
1632 if (MALLOC_PREACTION != 0) {
1633 return 0;
1634 }
1635 m = pVALLOc(bytes);
1636 if (MALLOC_POSTACTION != 0) {
1637 }
1638 return m;
1639}
1640#endif
1641
1642Void_t* public_cALLOc(size_t n, size_t elem_size) {
1643 Void_t* m;
1644 if (MALLOC_PREACTION != 0) {
1645 return 0;
1646 }
1647 m = cALLOc(n, elem_size);
1648 if (MALLOC_POSTACTION != 0) {
1649 }
1650 return m;
1651}
1652
1653
1654Void_t** public_iCALLOc(size_t n, size_t elem_size, Void_t** chunks) {
1655 Void_t** m;
1656 if (MALLOC_PREACTION != 0) {
1657 return 0;
1658 }
1659 m = iCALLOc(n, elem_size, chunks);
1660 if (MALLOC_POSTACTION != 0) {
1661 }
1662 return m;
1663}
1664
1665Void_t** public_iCOMALLOc(size_t n, size_t sizes[], Void_t** chunks) {
1666 Void_t** m;
1667 if (MALLOC_PREACTION != 0) {
1668 return 0;
1669 }
1670 m = iCOMALLOc(n, sizes, chunks);
1671 if (MALLOC_POSTACTION != 0) {
1672 }
1673 return m;
1674}
1675
1676void public_cFREe(Void_t* m) {
1677 if (MALLOC_PREACTION != 0) {
1678 return;
1679 }
1680 cFREe(m);
1681 if (MALLOC_POSTACTION != 0) {
1682 }
1683}
1684
1685int public_mTRIm(size_t s) {
1686 int result;
1687 if (MALLOC_PREACTION != 0) {
1688 return 0;
1689 }
1690 result = mTRIm(s);
1691 if (MALLOC_POSTACTION != 0) {
1692 }
1693 return result;
1694}
1695
1696size_t public_mUSABLe(Void_t* m) {
1697 size_t result;
1698 if (MALLOC_PREACTION != 0) {
1699 return 0;
1700 }
1701 result = mUSABLe(m);
1702 if (MALLOC_POSTACTION != 0) {
1703 }
1704 return result;
1705}
1706
1707void public_mSTATs() {
1708 if (MALLOC_PREACTION != 0) {
1709 return;
1710 }
1711 mSTATs();
1712 if (MALLOC_POSTACTION != 0) {
1713 }
1714}
1715
1716struct mallinfo public_mALLINFo() {
1717 struct mallinfo m;
1718 if (MALLOC_PREACTION != 0) {
1719 struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
1720 return nm;
1721 }
1722 m = mALLINFo();
1723 if (MALLOC_POSTACTION != 0) {
1724 }
1725 return m;
1726}
1727
1728int public_mALLOPt(int p, int v) {
1729 int result;
1730 if (MALLOC_PREACTION != 0) {
1731 return 0;
1732 }
1733 result = mALLOPt(p, v);
1734 if (MALLOC_POSTACTION != 0) {
1735 }
1736 return result;
1737}
1738
1739#endif
1740
1741
1742
1743/* ------------- Optional versions of memcopy ---------------- */
1744
1745
1746#if USE_MEMCPY
1747
1748/*
1749 Note: memcpy is ONLY invoked with non-overlapping regions,
1750 so the (usually slower) memmove is not needed.
1751*/
1752
1753#define MALLOC_COPY(dest, src, nbytes) memcpy(dest, src, nbytes)
1754#define MALLOC_ZERO(dest, nbytes) memset(dest, 0, nbytes)
1755
1756#else /* !USE_MEMCPY */
1757
1758/* Use Duff's device for good zeroing/copying performance. */
1759
1760#define MALLOC_ZERO(charp, nbytes) \
1761do { \
1762 INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp); \
1763 CHUNK_SIZE_T mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T); \
1764 long mcn; \
1765 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
1766 switch (mctmp) { \
1767 case 0: for(;;) { *mzp++ = 0; \
1768 case 7: *mzp++ = 0; \
1769 case 6: *mzp++ = 0; \
1770 case 5: *mzp++ = 0; \
1771 case 4: *mzp++ = 0; \
1772 case 3: *mzp++ = 0; \
1773 case 2: *mzp++ = 0; \
1774 case 1: *mzp++ = 0; if(mcn <= 0) break; mcn--; } \
1775 } \
1776} while(0)
1777
1778#define MALLOC_COPY(dest,src,nbytes) \
1779do { \
1780 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src; \
1781 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest; \
1782 CHUNK_SIZE_T mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T); \
1783 long mcn; \
1784 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
1785 switch (mctmp) { \
1786 case 0: for(;;) { *mcdst++ = *mcsrc++; \
1787 case 7: *mcdst++ = *mcsrc++; \
1788 case 6: *mcdst++ = *mcsrc++; \
1789 case 5: *mcdst++ = *mcsrc++; \
1790 case 4: *mcdst++ = *mcsrc++; \
1791 case 3: *mcdst++ = *mcsrc++; \
1792 case 2: *mcdst++ = *mcsrc++; \
1793 case 1: *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; } \
1794 } \
1795} while(0)
1796
1797#endif
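/*
  The Duff's device versions above behave like the plain word-at-a-time
  loops below, operating on nbytes/sizeof(INTERNAL_SIZE_T) words of
  word-aligned, non-overlapping memory; the unrolled switch just performs
  the same stores eight per loop iteration.  Shown only as a readability
  aid.
*/

#if 0   /* illustrative only -- not compiled into the allocator */
static void reference_zero(INTERNAL_SIZE_T* dst, size_t nwords) {
  while (nwords-- > 0) *dst++ = 0;
}

static void reference_copy(INTERNAL_SIZE_T* dst, const INTERNAL_SIZE_T* src,
                           size_t nwords) {
  while (nwords-- > 0) *dst++ = *src++;
}
#endif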
1798
1799/* ------------------ MMAP support ------------------ */
1800
1801
1802#if HAVE_MMAP
1803
1804#ifndef LACKS_FCNTL_H
1805#include <fcntl.h>
1806#endif
1807
1808#ifndef LACKS_SYS_MMAN_H
1809#include <sys/mman.h>
1810#endif
1811
1812#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1813#define MAP_ANONYMOUS MAP_ANON
1814#endif
1815
1816/*
1817 Nearly all versions of mmap support MAP_ANONYMOUS,
1818 so the following is unlikely to be needed, but is
1819 supplied just in case.
1820*/
1821
1822#ifndef MAP_ANONYMOUS
1823
1824static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1825
1826#define MMAP(addr, size, prot, flags) ((dev_zero_fd < 0) ? \
1827 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1828 mmap((addr), (size), (prot), (flags), dev_zero_fd, 0)) : \
1829 mmap((addr), (size), (prot), (flags), dev_zero_fd, 0))
1830
1831#else
1832
1833#define MMAP(addr, size, prot, flags) \
1834 (mmap((addr), (size), (prot), (flags)|MAP_ANONYMOUS, -1, 0))
1835
1836#endif
1837
1838
1839#endif /* HAVE_MMAP */
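/*
  A sketch of how the rest of this file uses the MMAP macro (see
  sYSMALLOc below): round the request up to a page multiple, ask for an
  anonymous read/write private mapping, and compare the result against
  MORECORE_FAILURE.  Illustrative only.
*/

#if 0   /* not compiled into the allocator */
static char* example_map_pages(size_t nb, size_t pagesize)
{
  size_t pagemask = pagesize - 1;
  size_t size = (nb + pagemask) & ~pagemask;   /* round up to whole pages */
  char* mm = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));
  if (mm == (char*)(MORECORE_FAILURE))
    return 0;                                  /* mapping failed */
  return mm;                                   /* page-aligned, zero-filled */
}
#endif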
1840
1841
1842/*
1843 ----------------------- Chunk representations -----------------------
1844*/
1845
1846
1847/*
1848 This struct declaration is misleading (but accurate and necessary).
1849 It declares a "view" into memory allowing access to necessary
1850 fields at known offsets from a given base. See explanation below.
1851*/
1852
1853struct malloc_chunk {
1854
1855 INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
1856 INTERNAL_SIZE_T size; /* Size in bytes, including overhead. */
1857
1858 struct malloc_chunk* fd; /* double links -- used only if free. */
1859 struct malloc_chunk* bk;
1860};
1861
1862
1863typedef struct malloc_chunk* mchunkptr;
1864
1865/*
1866 malloc_chunk details:
1867
1868 (The following includes lightly edited explanations by Colin Plumb.)
1869
1870 Chunks of memory are maintained using a `boundary tag' method as
1871 described in e.g., Knuth or Standish. (See the paper by Paul
1872 Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
1873 survey of such techniques.) Sizes of free chunks are stored both
1874 in the front of each chunk and at the end. This makes
1875 consolidating fragmented chunks into bigger chunks very fast. The
1876 size fields also hold bits representing whether chunks are free or
1877 in use.
1878
1879 An allocated chunk looks like this:
1880
1881
1882 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1883 | Size of previous chunk, if allocated | |
1884 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1885 | Size of chunk, in bytes |P|
1886 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1887 | User data starts here... .
1888 . .
1889 . (malloc_usable_space() bytes) .
1890 . |
1891nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1892 | Size of chunk |
1893 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1894
1895
1896 Where "chunk" is the front of the chunk for the purpose of most of
1897 the malloc code, but "mem" is the pointer that is returned to the
1898 user. "Nextchunk" is the beginning of the next contiguous chunk.
1899
1900 Chunks always begin on even word boundaries, so the mem portion
1901 (which is returned to the user) is also on an even word boundary, and
1902 thus at least double-word aligned.
1903
1904 Free chunks are stored in circular doubly-linked lists, and look like this:
1905
1906 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1907 | Size of previous chunk |
1908 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1909 `head:' | Size of chunk, in bytes |P|
1910 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1911 | Forward pointer to next chunk in list |
1912 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1913 | Back pointer to previous chunk in list |
1914 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1915 | Unused space (may be 0 bytes long) .
1916 . .
1917 . |
1918nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1919 `foot:' | Size of chunk, in bytes |
1920 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1921
1922 The P (PREV_INUSE) bit, stored in the unused low-order bit of the
1923 chunk size (which is always a multiple of two words), is an in-use
1924 bit for the *previous* chunk. If that bit is *clear*, then the
1925 word before the current chunk size contains the previous chunk
1926 size, and can be used to find the front of the previous chunk.
1927 The very first chunk allocated always has this bit set,
1928 preventing access to non-existent (or non-owned) memory. If
1929 prev_inuse is set for any given chunk, then you CANNOT determine
1930 the size of the previous chunk, and might even get a memory
1931 addressing fault when trying to do so.
1932
1933 Note that the `foot' of the current chunk is actually represented
1934 as the prev_size of the NEXT chunk. This makes it easier to
1935 deal with alignments etc but can be very confusing when trying
1936 to extend or adapt this code.
1937
1938 The two exceptions to all this are
1939
1940 1. The special chunk `top' doesn't bother using the
1941 trailing size field since there is no next contiguous chunk
1942 that would have to index off it. After initialization, `top'
1943 is forced to always exist. If it would become less than
1944 MINSIZE bytes long, it is replenished.
1945
1946 2. Chunks allocated via mmap, which have the second-lowest-order
1947 bit (IS_MMAPPED) set in their size fields. Because they are
1948 allocated one-by-one, each must contain its own trailing size field.
1949
1950*/
1951
1952/*
1953 ---------- Size and alignment checks and conversions ----------
1954*/
1955
1956/* conversion from malloc headers to user pointers, and back */
1957
1958#define chunk2mem(p) ((Void_t*)((char*)(p) + 2*SIZE_SZ))
1959#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
1960
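/*
  For illustration: the conversion is plain pointer arithmetic over the
  two leading size fields pictured above.  A minimal sketch, not compiled
  into the allocator:
*/

#if 0
static void example_mem_conversion(mchunkptr p)
{
  Void_t* mem = chunk2mem(p);      /* skip prev_size and size: p + 2*SIZE_SZ */
  assert(mem2chunk(mem) == p);     /* and back again */
}
#endif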
1961/* The smallest possible chunk */
1962#define MIN_CHUNK_SIZE (sizeof(struct malloc_chunk))
1963
1964/* The smallest size we can malloc is an aligned minimal chunk */
1965
1966#define MINSIZE \
1967 (CHUNK_SIZE_T)(((MIN_CHUNK_SIZE+MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK))
1968
1969/* Check if m has acceptable alignment */
1970
1971#define aligned_OK(m) (((PTR_UINT)((m)) & (MALLOC_ALIGN_MASK)) == 0)
1972
1973
1974/*
1975 Check if a request is so large that it would wrap around zero when
1976 padded and aligned. To simplify some other code, the bound is made
1977 low enough so that adding MINSIZE will also not wrap around zero.
1978*/
1979
1980#define REQUEST_OUT_OF_RANGE(req) \
1981 ((CHUNK_SIZE_T)(req) >= \
1982 (CHUNK_SIZE_T)(INTERNAL_SIZE_T)(-2 * MINSIZE))
1983
1984/* pad request bytes into a usable size -- internal version */
1985
1986#define request2size(req) \
1987 (((req) + SIZE_SZ + MALLOC_ALIGN_MASK < MINSIZE) ? \
1988 MINSIZE : \
1989 ((req) + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK)
1990
1991/* Same, except also perform argument check */
1992
1993#define checked_request2size(req, sz) \
1994 if (REQUEST_OUT_OF_RANGE(req)) { \
1995 MALLOC_FAILURE_ACTION; \
1996 return 0; \
1997 } \
1998 (sz) = request2size(req);
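/*
  Concretely, assuming 4-byte INTERNAL_SIZE_T (so SIZE_SZ == 4), 8-byte
  MALLOC_ALIGNMENT, and therefore MINSIZE == 16, the padding works out as
  in the sketch below.  Illustrative only.
*/

#if 0   /* not compiled into the allocator */
static void example_request2size(void)
{
  assert(request2size(1)   == 16);   /* too small: bumped up to MINSIZE        */
  assert(request2size(12)  == 16);   /* 12 + 4 overhead = 16, already aligned  */
  assert(request2size(13)  == 24);   /* 13 + 4 = 17, rounded up to 8 bytes     */
  assert(request2size(100) == 104);  /* 100 + 4 = 104, already a multiple of 8 */
}
#endif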
1999
2000/*
2001 --------------- Physical chunk operations ---------------
2002*/
2003
2004
2005/* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
2006#define PREV_INUSE 0x1
2007
2008/* extract inuse bit of previous chunk */
2009#define prev_inuse(p) ((p)->size & PREV_INUSE)
2010
2011
2012/* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
2013#define IS_MMAPPED 0x2
2014
2015/* check for mmap()'ed chunk */
2016#define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
2017
2018/*
2019 Bits to mask off when extracting size
2020
2021 Note: IS_MMAPPED is intentionally not masked off from size field in
2022 macros for which mmapped chunks should never be seen. This should
2023 cause helpful core dumps to occur if it is tried by accident by
2024 people extending or adapting this malloc.
2025*/
2026#define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
2027
2028/* Get size, ignoring use bits */
2029#define chunksize(p) ((p)->size & ~(SIZE_BITS))
2030
2031
2032/* Ptr to next physical malloc_chunk. */
2033#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
2034
2035/* Ptr to previous physical malloc_chunk */
2036#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
2037
2038/* Treat space at ptr + offset as a chunk */
2039#define chunk_at_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
2040
2041/* extract p's inuse bit */
2042#define inuse(p)\
2043((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
2044
2045/* set/clear chunk as being inuse without otherwise disturbing */
2046#define set_inuse(p)\
2047((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
2048
2049#define clear_inuse(p)\
2050((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
2051
2052
2053/* check/set/clear inuse bits in known places */
2054#define inuse_bit_at_offset(p, s)\
2055 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
2056
2057#define set_inuse_bit_at_offset(p, s)\
2058 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
2059
2060#define clear_inuse_bit_at_offset(p, s)\
2061 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
2062
2063
2064/* Set size at head, without disturbing its use bit */
2065#define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s)))
2066
2067/* Set size/use field */
2068#define set_head(p, s) ((p)->size = (s))
2069
2070/* Set size at footer (only when chunk is not in use) */
2071#define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
2072
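/*
  A sketch of how these macros combine when malloc splits a free chunk
  (the same pattern appears in sYSMALLOc and mALLOc below): the leading
  piece is handed out, and the trailing remainder gets a fresh head and,
  because it is free, a matching foot.  Illustrative only.
*/

#if 0   /* not compiled into the allocator */
static void example_split(mchunkptr p, CHUNK_SIZE_T nb)
{
  CHUNK_SIZE_T size = chunksize(p);
  if (size >= nb + MINSIZE) {
    mchunkptr remainder = chunk_at_offset(p, nb);
    set_head(p, nb | PREV_INUSE);                   /* piece handed to the user  */
    set_head(remainder, (size - nb) | PREV_INUSE);  /* it follows an inuse chunk */
    set_foot(remainder, size - nb);                 /* trailing size copy        */
  }
}
#endif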
2073
2074/*
2075 -------------------- Internal data structures --------------------
2076
2077 All internal state is held in an instance of malloc_state defined
2078 below. There are no other static variables, except in two optional
2079 cases:
2080 * If USE_MALLOC_LOCK is defined, the mALLOC_MUTEx declared above.
2081 * If HAVE_MMAP is true, but mmap doesn't support
2082 MAP_ANONYMOUS, a dummy file descriptor for mmap.
2083
2084 Beware of lots of tricks that minimize the total bookkeeping space
2085 requirements. The result is a little over 1K bytes (for 4-byte
2086 pointers and size_t).
2087*/
2088
2089/*
2090 Bins
2091
2092 An array of bin headers for free chunks. Each bin is doubly
2093 linked. The bins are approximately proportionally (log) spaced.
2094 There are a lot of these bins (96). This may look excessive, but
2095 works very well in practice. Most bins hold sizes that are
2096 unusual as malloc request sizes, but are more usual for fragments
2097 and consolidated sets of chunks, which is what these bins hold, so
2098 they can be found quickly. All procedures maintain the invariant
2099 that no consolidated chunk physically borders another one, so each
2100 chunk in a list is known to be preceded and followed by either
2101 inuse chunks or the ends of memory.
2102
2103 Chunks in bins are kept in size order, with ties going to the
2104 approximately least recently used chunk. Ordering isn't needed
2105 for the small bins, which all contain the same-sized chunks, but
2106 facilitates best-fit allocation for larger chunks. These lists
2107 are just sequential. Keeping them in order almost never requires
2108 enough traversal to warrant using fancier ordered data
2109 structures.
2110
2111 Chunks of the same size are linked with the most
2112 recently freed at the front, and allocations are taken from the
2113 back. This results in LRU (FIFO) allocation order, which tends
2114 to give each chunk an equal opportunity to be consolidated with
2115 adjacent freed chunks, resulting in larger free chunks and less
2116 fragmentation.
2117
2118 To simplify use in double-linked lists, each bin header acts
2119 as a malloc_chunk. This avoids special-casing for headers.
2120 But to conserve space and improve locality, we allocate
2121 only the fd/bk pointers of bins, and then use repositioning tricks
2122 to treat these as the fields of a malloc_chunk*.
2123*/
2124
2125typedef struct malloc_chunk* mbinptr;
2126
2127/* addressing -- note that bin_at(0) does not exist */
2128#define bin_at(m, i) ((mbinptr)((char*)&((m)->bins[(i)<<1]) - (SIZE_SZ<<1)))
2129
2130/* analog of ++bin */
2131#define next_bin(b) ((mbinptr)((char*)(b) + (sizeof(mchunkptr)<<1)))
2132
2133/* Reminders about list directionality within bins */
2134#define first(b) ((b)->fd)
2135#define last(b) ((b)->bk)
2136
2137/* Take a chunk off a bin list */
2138#define unlink(P, BK, FD) { \
2139 FD = P->fd; \
2140 BK = P->bk; \
2141 FD->bk = BK; \
2142 BK->fd = FD; \
2143}
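/*
  For contrast with unlink, this is the insertion idiom used later in
  fREe and mALLOc: a chunk always goes in at the front of a bin (the bin
  header doubles as a chunk), and unlink splices it back out from either
  end.  A minimal sketch, not compiled into the allocator:
*/

#if 0
static void example_bin_insert(mbinptr bin, mchunkptr p)
{
  mchunkptr fwd = bin->fd;     /* current first chunk, or the header itself */
  p->bk   = bin;
  p->fd   = fwd;
  fwd->bk = p;
  bin->fd = p;
}
#endif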
2144
2145/*
2146 Indexing
2147
2148 Bins for sizes < 256 bytes contain chunks of all the same size, spaced
2149 8 bytes apart. Larger bins are approximately logarithmically spaced:
2150
2151 32 bins of size 8 (the small bins)
2152 4 bins of size 64
2153 4 bins of size 128
2154 4 bins of size 256
2155 ... (4 bins per doubling) ...
2156 4 bins of size 2097152
2157 1 bin of size what's left
2158
2159 The bins top out around 1MB because we expect to service large
2160 requests via mmap.
2161*/
2162
2163#define NBINS 96
2164#define NSMALLBINS 32
2165#define SMALLBIN_WIDTH 8
2166#define MIN_LARGE_SIZE 256
2167
2168#define in_smallbin_range(sz) \
2169 ((CHUNK_SIZE_T)(sz) < (CHUNK_SIZE_T)MIN_LARGE_SIZE)
2170
2171#define smallbin_index(sz) (((unsigned)(sz)) >> 3)
2172
2173/*
2174 Compute index for size. We expect this to be inlined when
2175 compiled with optimization, else not, which works out well.
2176*/
2177static int largebin_index(unsigned int sz) {
2178 unsigned int x = sz >> SMALLBIN_WIDTH;
2179 unsigned int m; /* bit position of highest set bit of x */
2180
2181 if (x >= 0x10000) return NBINS-1;
2182
2183 /* On intel, use BSRL instruction to find highest bit */
2184#if defined(__GNUC__) && defined(i386)
2185
2186 __asm__("bsrl %1,%0\n\t"
2187 : "=r" (m)
2188 : "g" (x));
2189
2190#else
2191 {
2192 /*
2193 Based on branch-free nlz algorithm in chapter 5 of Henry
2194 S. Warren Jr's book "Hacker's Delight".
2195 */
2196
2197 unsigned int n = ((x - 0x100) >> 16) & 8;
2198 x <<= n;
2199 m = ((x - 0x1000) >> 16) & 4;
2200 n += m;
2201 x <<= m;
2202 m = ((x - 0x4000) >> 16) & 2;
2203 n += m;
2204 x = (x << m) >> 14;
2205 m = 13 - n + (x & ~(x>>1));
2206 }
2207#endif
2208
2209 /* Use next 2 bits to create finer-granularity bins */
2210 return NSMALLBINS + (m << 2) + ((sz >> (m + 6)) & 3);
2211}
2212
2213#define bin_index(sz) \
2214 ((in_smallbin_range(sz)) ? smallbin_index(sz) : largebin_index(sz))
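/*
  A few concrete values (assuming 32-bit chunk sizes) showing how the two
  index functions divide the range: one bin per size below MIN_LARGE_SIZE,
  then four bins per power of two.  Illustrative only.
*/

#if 0   /* not compiled into the allocator */
static void example_bin_indices(void)
{
  assert(bin_index(16)      == 2);           /* smallest chunk: small bin 16>>3    */
  assert(bin_index(248)     == 31);          /* last small bin                     */
  assert(bin_index(256)     == NSMALLBINS);  /* first large bin                    */
  assert(bin_index(320)     == 33);          /* these large bins are 64 bytes wide */
  assert(bin_index(512)     == 36);          /* next power of two: width doubles   */
  assert(bin_index(1 << 24) == NBINS-1);     /* 16MB and up land in the last bin   */
}
#endif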
2215
2216/*
2217 FIRST_SORTED_BIN_SIZE is the chunk size corresponding to the
2218 first bin that is maintained in sorted order. This must
2219 be the smallest size corresponding to a given bin.
2220
2221 Normally, this should be MIN_LARGE_SIZE. But you can weaken
2222 best fit guarantees to sometimes speed up malloc by increasing this value.
2223 Doing this means that malloc may choose a chunk that is
2224 non-best-fitting by up to the width of the bin.
2225
2226 Some useful cutoff values:
2227 512 - all bins sorted
2228 2560 - leaves bins <= 64 bytes wide unsorted
2229 12288 - leaves bins <= 512 bytes wide unsorted
2230 65536 - leaves bins <= 4096 bytes wide unsorted
2231 262144 - leaves bins <= 32768 bytes wide unsorted
2232 -1 - no bins sorted (not recommended!)
2233*/
2234
2235#define FIRST_SORTED_BIN_SIZE MIN_LARGE_SIZE
2236/* #define FIRST_SORTED_BIN_SIZE 65536 */
2237
2238/*
2239 Unsorted chunks
2240
2241 All remainders from chunk splits, as well as all returned chunks,
2242 are first placed in the "unsorted" bin. They are then placed
2243 in regular bins after malloc gives them ONE chance to be used before
2244 binning. So, basically, the unsorted_chunks list acts as a queue,
2245 with chunks being placed on it in free (and malloc_consolidate),
2246 and taken off (to be either used or placed in bins) in malloc.
2247*/
2248
2249/* The otherwise unindexable 1-bin is used to hold unsorted chunks. */
2250#define unsorted_chunks(M) (bin_at(M, 1))
2251
2252/*
2253 Top
2254
2255 The top-most available chunk (i.e., the one bordering the end of
2256 available memory) is treated specially. It is never included in
2257 any bin, is used only if no other chunk is available, and is
2258 released back to the system if it is very large (see
2259 M_TRIM_THRESHOLD). Because top initially
2260 points to its own bin with initial zero size, thus forcing
2261 extension on the first malloc request, we avoid having any special
2262 code in malloc to check whether it even exists yet. But we still
2263 need to do so when getting memory from system, so we make
2264 initial_top treat the bin as a legal but unusable chunk during the
2265 interval between initialization and the first call to
2266 sYSMALLOc. (This is somewhat delicate, since it relies on
2267 the 2 preceding words to be zero during this interval as well.)
2268*/
2269
2270/* Conveniently, the unsorted bin can be used as dummy top on first call */
2271#define initial_top(M) (unsorted_chunks(M))
2272
2273/*
2274 Binmap
2275
2276 To help compensate for the large number of bins, a one-level index
2277 structure is used for bin-by-bin searching. `binmap' is a
2278 bitvector recording whether bins are definitely empty so they can
2279 be skipped over during traversals. The bits are NOT always
2280 cleared as soon as bins are empty, but instead only
2281 when they are noticed to be empty during traversal in malloc.
2282*/
2283
2284/* Conservatively use 32 bits per map word, even if on 64bit system */
2285#define BINMAPSHIFT 5
2286#define BITSPERMAP (1U << BINMAPSHIFT)
2287#define BINMAPSIZE (NBINS / BITSPERMAP)
2288
2289#define idx2block(i) ((i) >> BINMAPSHIFT)
2290#define idx2bit(i) ((1U << ((i) & ((1U << BINMAPSHIFT)-1))))
2291
2292#define mark_bin(m,i) ((m)->binmap[idx2block(i)] |= idx2bit(i))
2293#define unmark_bin(m,i) ((m)->binmap[idx2block(i)] &= ~(idx2bit(i)))
2294#define get_binmap(m,i) ((m)->binmap[idx2block(i)] & idx2bit(i))
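/*
  The map is just NBINS bits packed into 32-bit words; for example, bin
  70 lives in word 70>>5 == 2 at bit 70&31 == 6.  A clear bit means
  "definitely empty"; a set bit only means "possibly non-empty".
*/

#if 0   /* illustrative only -- not compiled into the allocator */
static void example_binmap_arithmetic(void)
{
  assert(idx2block(70) == 2);
  assert(idx2bit(70)   == (1U << 6));
}
#endif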
2295
2296/*
2297 Fastbins
2298
2299 An array of lists holding recently freed small chunks. Fastbins
2300 are not doubly linked. It is faster to single-link them, and
2301 since chunks are never removed from the middles of these lists,
2302 double linking is not necessary. Also, unlike regular bins, they
2303 are not even processed in FIFO order (they use faster LIFO) since
2304 ordering doesn't much matter in the transient contexts in which
2305 fastbins are normally used.
2306
2307 Chunks in fastbins keep their inuse bit set, so they cannot
2308 be consolidated with other free chunks. malloc_consolidate
2309 releases all chunks in fastbins and consolidates them with
2310 other free chunks.
2311*/
2312
2313typedef struct malloc_chunk* mfastbinptr;
2314
2315/* offset 2 to use otherwise unindexable first 2 bins */
2316#define fastbin_index(sz) ((((unsigned int)(sz)) >> 3) - 2)
2317
2318/* The maximum fastbin request size we support */
2319#define MAX_FAST_SIZE 80
2320
2321#define NFASTBINS (fastbin_index(request2size(MAX_FAST_SIZE))+1)
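/*
  With the 8-byte spacing assumed here, the smallest chunk size (16) maps
  to fastbin 0, 24 to fastbin 1, and so on; NFASTBINS is sized from the
  index of request2size(MAX_FAST_SIZE).  Illustrative only.
*/

#if 0   /* not compiled into the allocator */
static void example_fastbin_indices(void)
{
  assert(fastbin_index(16) == 0);
  assert(fastbin_index(24) == 1);
  assert(fastbin_index(32) == 2);
}
#endif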
2322
2323/*
2324 FASTBIN_CONSOLIDATION_THRESHOLD is the size of a chunk in free()
2325 that triggers automatic consolidation of possibly-surrounding
2326 fastbin chunks. This is a heuristic, so the exact value should not
2327 matter too much. It is defined at half the default trim threshold as a
2328 compromise heuristic to only attempt consolidation if it is likely
2329 to lead to trimming. However, it is not dynamically tunable, since
2330 consolidation reduces fragmentation surrounding large chunks even
2331 if trimming is not used.
2332*/
2333
2334#define FASTBIN_CONSOLIDATION_THRESHOLD \
2335 ((unsigned long)(DEFAULT_TRIM_THRESHOLD) >> 1)
2336
2337/*
2338 Since the lowest 2 bits in max_fast don't matter in size comparisons,
2339 they are used as flags.
2340*/
2341
2342/*
2343 ANYCHUNKS_BIT held in max_fast indicates that there may be any
2344 freed chunks at all. It is set true when entering a chunk into any
2345 bin.
2346*/
2347
2348#define ANYCHUNKS_BIT (1U)
2349
2350#define have_anychunks(M) (((M)->max_fast & ANYCHUNKS_BIT))
2351#define set_anychunks(M) ((M)->max_fast |= ANYCHUNKS_BIT)
2352#define clear_anychunks(M) ((M)->max_fast &= ~ANYCHUNKS_BIT)
2353
2354/*
2355 FASTCHUNKS_BIT held in max_fast indicates that there are probably
2356 some fastbin chunks. It is set true on entering a chunk into any
2357 fastbin, and cleared only in malloc_consolidate.
2358*/
2359
2360#define FASTCHUNKS_BIT (2U)
2361
2362#define have_fastchunks(M) (((M)->max_fast & FASTCHUNKS_BIT))
2363#define set_fastchunks(M) ((M)->max_fast |= (FASTCHUNKS_BIT|ANYCHUNKS_BIT))
2364#define clear_fastchunks(M) ((M)->max_fast &= ~(FASTCHUNKS_BIT))
2365
2366/*
2367 Set value of max_fast.
2368 Use impossibly small value if 0.
2369*/
2370
2371#define set_max_fast(M, s) \
2372 (M)->max_fast = (((s) == 0)? SMALLBIN_WIDTH: request2size(s)) | \
2373 ((M)->max_fast & (FASTCHUNKS_BIT|ANYCHUNKS_BIT))
2374
2375#define get_max_fast(M) \
2376 ((M)->max_fast & ~(FASTCHUNKS_BIT | ANYCHUNKS_BIT))
2377
2378
2379/*
2380 morecore_properties is a status word holding dynamically discovered
2381 or controlled properties of the morecore function
2382*/
2383
2384#define MORECORE_CONTIGUOUS_BIT (1U)
2385
2386#define contiguous(M) \
2387 (((M)->morecore_properties & MORECORE_CONTIGUOUS_BIT))
2388#define noncontiguous(M) \
2389 (((M)->morecore_properties & MORECORE_CONTIGUOUS_BIT) == 0)
2390#define set_contiguous(M) \
2391 ((M)->morecore_properties |= MORECORE_CONTIGUOUS_BIT)
2392#define set_noncontiguous(M) \
2393 ((M)->morecore_properties &= ~MORECORE_CONTIGUOUS_BIT)
2394
2395
2396/*
2397 ----------- Internal state representation and initialization -----------
2398*/
2399
2400struct malloc_state {
2401
2402 /* The maximum chunk size to be eligible for fastbin */
2403 INTERNAL_SIZE_T max_fast; /* low 2 bits used as flags */
2404
2405 /* Fastbins */
2406 mfastbinptr fastbins[NFASTBINS];
2407
2408 /* Base of the topmost chunk -- not otherwise kept in a bin */
2409 mchunkptr top;
2410
2411 /* The remainder from the most recent split of a small request */
2412 mchunkptr last_remainder;
2413
2414 /* Normal bins packed as described above */
2415 mchunkptr bins[NBINS * 2];
2416
2417 /* Bitmap of bins. Trailing zero map handles cases of largest binned size */
2418 unsigned int binmap[BINMAPSIZE+1];
2419
2420 /* Tunable parameters */
2421 CHUNK_SIZE_T trim_threshold;
2422 INTERNAL_SIZE_T top_pad;
2423 INTERNAL_SIZE_T mmap_threshold;
2424
2425 /* Memory map support */
2426 int n_mmaps;
2427 int n_mmaps_max;
2428 int max_n_mmaps;
2429
2430 /* Cache malloc_getpagesize */
2431 unsigned int pagesize;
2432
2433 /* Track properties of MORECORE */
2434 unsigned int morecore_properties;
2435
2436 /* Statistics */
2437 INTERNAL_SIZE_T mmapped_mem;
2438 INTERNAL_SIZE_T sbrked_mem;
2439 INTERNAL_SIZE_T max_sbrked_mem;
2440 INTERNAL_SIZE_T max_mmapped_mem;
2441 INTERNAL_SIZE_T max_total_mem;
2442};
2443
2444typedef struct malloc_state *mstate;
2445
2446/*
2447 There is exactly one instance of this struct in this malloc.
2448 If you are adapting this malloc in a way that does NOT use a static
2449 malloc_state, you MUST explicitly zero-fill it before using. This
2450 malloc relies on the property that malloc_state is initialized to
2451 all zeroes (as is true of C statics).
2452*/
2453
2454static struct malloc_state av_; /* never directly referenced */
2455
2456/*
2457 All uses of av_ are via get_malloc_state().
2458 At most one "call" to get_malloc_state is made per invocation of
2459 the public versions of malloc and free, but other routines
2460 that in turn invoke malloc and/or free may call it more than once.
2461 Also, it is called in check* routines if DEBUG is set.
2462*/
2463
2464#define get_malloc_state() (&(av_))
2465
2466/*
2467 Initialize a malloc_state struct.
2468
2469 This is called only from within malloc_consolidate, which needs
2470 to be called in the same contexts anyway. It is never called directly
2471 outside of malloc_consolidate because some optimizing compilers try
2472 to inline it at all call points, which turns out not to be an
2473 optimization at all. (Inlining it in malloc_consolidate is fine though.)
2474*/
2475
2476#if __STD_C
2477static void malloc_init_state(mstate av)
2478#else
2479static void malloc_init_state(av) mstate av;
2480#endif
2481{
2482 int i;
2483 mbinptr bin;
2484
2485 /* Establish circular links for normal bins */
2486 for (i = 1; i < NBINS; ++i) {
2487 bin = bin_at(av,i);
2488 bin->fd = bin->bk = bin;
2489 }
2490
2491 av->top_pad = DEFAULT_TOP_PAD;
2492 av->n_mmaps_max = DEFAULT_MMAP_MAX;
2493 av->mmap_threshold = DEFAULT_MMAP_THRESHOLD;
2494 av->trim_threshold = DEFAULT_TRIM_THRESHOLD;
2495
2496#if MORECORE_CONTIGUOUS
2497 set_contiguous(av);
2498#else
2499 set_noncontiguous(av);
2500#endif
2501
2502
2503 set_max_fast(av, DEFAULT_MXFAST);
2504
2505 av->top = initial_top(av);
2506 av->pagesize = malloc_getpagesize;
2507}
2508
2509/*
2510 Other internal utilities operating on mstates
2511*/
2512
2513#if __STD_C
2514static Void_t* sYSMALLOc(INTERNAL_SIZE_T, mstate);
2515#ifndef MORECORE_CANNOT_TRIM
2516static int sYSTRIm(size_t, mstate);
2517#endif
2518static void malloc_consolidate(mstate);
2519#ifdef NEED_INDEPENDENT
2520static Void_t** iALLOc(size_t, size_t*, int, Void_t**);
2521#endif
2522#else
2523static Void_t* sYSMALLOc();
2524#ifndef MORECORE_CANNOT_TRIM
2525static int sYSTRIm();
2526#endif
2527static void malloc_consolidate();
2528#ifdef NEED_INDEPENDENT
2529static Void_t** iALLOc();
2530#endif
2531#endif
2532
2533/*
2534 Debugging support
2535
2536 These routines make a number of assertions about the states
2537 of data structures that should be true at all times. If any
2538 are not true, it's very likely that a user program has somehow
2539 trashed memory. (It's also possible that there is a coding error
2540 in malloc, in which case please report it!)
2541*/
2542
2543#if ! DEBUG
2544
2545#define check_chunk(P)
2546#define check_free_chunk(P)
2547#define check_inuse_chunk(P)
2548#define check_remalloced_chunk(P,N)
2549#define check_malloced_chunk(P,N)
2550#define check_malloc_state()
2551
2552#else
2553#define check_chunk(P) do_check_chunk(P)
2554#define check_free_chunk(P) do_check_free_chunk(P)
2555#define check_inuse_chunk(P) do_check_inuse_chunk(P)
2556#define check_remalloced_chunk(P,N) do_check_remalloced_chunk(P,N)
2557#define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N)
2558#define check_malloc_state() do_check_malloc_state()
2559
2560/*
2561 Properties of all chunks
2562*/
2563
2564#if __STD_C
2565static void do_check_chunk(mchunkptr p)
2566#else
2567static void do_check_chunk(p) mchunkptr p;
2568#endif
2569{
2570 mstate av = get_malloc_state();
2571 CHUNK_SIZE_T sz = chunksize(p);
2572 /* min and max possible addresses assuming contiguous allocation */
2573 char* max_address = (char*)(av->top) + chunksize(av->top);
2574 char* min_address = max_address - av->sbrked_mem;
2575
2576 if (!chunk_is_mmapped(p)) {
2577
2578 /* Has legal address ... */
2579 if (p != av->top) {
2580 if (contiguous(av)) {
2581 assert(((char*)p) >= min_address);
2582 assert(((char*)p + sz) <= ((char*)(av->top)));
2583 }
2584 }
2585 else {
2586 /* top size is always at least MINSIZE */
2587 assert((CHUNK_SIZE_T)(sz) >= MINSIZE);
2588 /* top predecessor always marked inuse */
2589 assert(prev_inuse(p));
2590 }
2591
2592 }
2593 else {
2594#if HAVE_MMAP
2595 /* address is outside main heap */
2596 if (contiguous(av) && av->top != initial_top(av)) {
2597 assert(((char*)p) < min_address || ((char*)p) > max_address);
2598 }
2599 /* chunk is page-aligned */
2600 assert(((p->prev_size + sz) & (av->pagesize-1)) == 0);
2601 /* mem is aligned */
2602 assert(aligned_OK(chunk2mem(p)));
2603#else
2604 /* force an appropriate assert violation if debug set */
2605 assert(!chunk_is_mmapped(p));
2606#endif
2607 }
2608}
2609
2610/*
2611 Properties of free chunks
2612*/
2613
2614#if __STD_C
2615static void do_check_free_chunk(mchunkptr p)
2616#else
2617static void do_check_free_chunk(p) mchunkptr p;
2618#endif
2619{
2620 mstate av = get_malloc_state();
2621
2622 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2623 mchunkptr next = chunk_at_offset(p, sz);
2624
2625 do_check_chunk(p);
2626
2627 /* Chunk must claim to be free ... */
2628 assert(!inuse(p));
2629 assert (!chunk_is_mmapped(p));
2630
2631 /* Unless a special marker, must have OK fields */
2632 if ((CHUNK_SIZE_T)(sz) >= MINSIZE)
2633 {
2634 assert((sz & MALLOC_ALIGN_MASK) == 0);
2635 assert(aligned_OK(chunk2mem(p)));
2636 /* ... matching footer field */
2637 assert(next->prev_size == sz);
2638 /* ... and is fully consolidated */
2639 assert(prev_inuse(p));
2640 assert (next == av->top || inuse(next));
2641
2642 /* ... and has minimally sane links */
2643 assert(p->fd->bk == p);
2644 assert(p->bk->fd == p);
2645 }
2646 else /* markers are always of size SIZE_SZ */
2647 assert(sz == SIZE_SZ);
2648}
2649
2650/*
2651 Properties of inuse chunks
2652*/
2653
2654#if __STD_C
2655static void do_check_inuse_chunk(mchunkptr p)
2656#else
2657static void do_check_inuse_chunk(p) mchunkptr p;
2658#endif
2659{
2660 mstate av = get_malloc_state();
2661 mchunkptr next;
2662 do_check_chunk(p);
2663
2664 if (chunk_is_mmapped(p))
2665 return; /* mmapped chunks have no next/prev */
2666
2667 /* Check whether it claims to be in use ... */
2668 assert(inuse(p));
2669
2670 next = next_chunk(p);
2671
2672 /* ... and is surrounded by OK chunks.
2673 Since more things can be checked with free chunks than inuse ones,
2674 if an inuse chunk borders them and debug is on, it's worth doing them.
2675 */
2676 if (!prev_inuse(p)) {
2677 /* Note that we cannot even look at prev unless it is not inuse */
2678 mchunkptr prv = prev_chunk(p);
2679 assert(next_chunk(prv) == p);
2680 do_check_free_chunk(prv);
2681 }
2682
2683 if (next == av->top) {
2684 assert(prev_inuse(next));
2685 assert(chunksize(next) >= MINSIZE);
2686 }
2687 else if (!inuse(next))
2688 do_check_free_chunk(next);
2689}
2690
2691/*
2692 Properties of chunks recycled from fastbins
2693*/
2694
2695#if __STD_C
2696static void do_check_remalloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
2697#else
2698static void do_check_remalloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
2699#endif
2700{
2701 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2702
2703 do_check_inuse_chunk(p);
2704
2705 /* Legal size ... */
2706 assert((sz & MALLOC_ALIGN_MASK) == 0);
2707 assert((CHUNK_SIZE_T)(sz) >= MINSIZE);
2708 /* ... and alignment */
2709 assert(aligned_OK(chunk2mem(p)));
2710 /* chunk is less than MINSIZE more than request */
2711 assert((long)(sz) - (long)(s) >= 0);
2712 assert((long)(sz) - (long)(s + MINSIZE) < 0);
2713}
2714
2715/*
2716 Properties of nonrecycled chunks at the point they are malloced
2717*/
2718
2719#if __STD_C
2720static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
2721#else
2722static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
2723#endif
2724{
2725 /* same as recycled case ... */
2726 do_check_remalloced_chunk(p, s);
2727
2728 /*
2729 ... plus, must obey implementation invariant that prev_inuse is
2730 always true of any allocated chunk; i.e., that each allocated
2731 chunk borders either a previously allocated and still in-use
2732 chunk, or the base of its memory arena. This is ensured
2733 by making all allocations from the `lowest' part of any found
2734 chunk. This does not necessarily hold however for chunks
2735 recycled via fastbins.
2736 */
2737
2738 assert(prev_inuse(p));
2739}
2740
2741
2742/*
2743 Properties of malloc_state.
2744
2745 This may be useful for debugging malloc, as well as detecting user
2746 programmer errors that somehow write into malloc_state.
2747
2748 If you are extending or experimenting with this malloc, you can
2749 probably figure out how to hack this routine to print out or
2750 display chunk addresses, sizes, bins, and other instrumentation.
2751*/
2752
2753static void do_check_malloc_state()
2754{
2755 mstate av = get_malloc_state();
2756 int i;
2757 mchunkptr p;
2758 mchunkptr q;
2759 mbinptr b;
2760 unsigned int binbit;
2761 int empty;
2762 unsigned int idx;
2763 INTERNAL_SIZE_T size;
2764 CHUNK_SIZE_T total = 0;
2765 int max_fast_bin;
2766
2767 /* internal size_t must be no wider than pointer type */
2768 assert(sizeof(INTERNAL_SIZE_T) <= sizeof(char*));
2769
2770 /* alignment is a power of 2 */
2771 assert((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-1)) == 0);
2772
2773 /* cannot run remaining checks until fully initialized */
2774 if (av->top == 0 || av->top == initial_top(av))
2775 return;
2776
2777 /* pagesize is a power of 2 */
2778 assert((av->pagesize & (av->pagesize-1)) == 0);
2779
2780 /* properties of fastbins */
2781
2782 /* max_fast is in allowed range */
2783 assert(get_max_fast(av) <= request2size(MAX_FAST_SIZE));
2784
2785 max_fast_bin = fastbin_index(av->max_fast);
2786
2787 for (i = 0; i < NFASTBINS; ++i) {
2788 p = av->fastbins[i];
2789
2790 /* all bins past max_fast are empty */
2791 if (i > max_fast_bin)
2792 assert(p == 0);
2793
2794 while (p != 0) {
2795 /* each chunk claims to be inuse */
2796 do_check_inuse_chunk(p);
2797 total += chunksize(p);
2798 /* chunk belongs in this bin */
2799 assert(fastbin_index(chunksize(p)) == i);
2800 p = p->fd;
2801 }
2802 }
2803
2804 if (total != 0)
2805 assert(have_fastchunks(av));
2806 else if (!have_fastchunks(av))
2807 assert(total == 0);
2808
2809 /* check normal bins */
2810 for (i = 1; i < NBINS; ++i) {
2811 b = bin_at(av,i);
2812
2813 /* binmap is accurate (except for bin 1 == unsorted_chunks) */
2814 if (i >= 2) {
2815 binbit = get_binmap(av,i);
2816 empty = last(b) == b;
2817 if (!binbit)
2818 assert(empty);
2819 else if (!empty)
2820 assert(binbit);
2821 }
2822
2823 for (p = last(b); p != b; p = p->bk) {
2824 /* each chunk claims to be free */
2825 do_check_free_chunk(p);
2826 size = chunksize(p);
2827 total += size;
2828 if (i >= 2) {
2829 /* chunk belongs in bin */
2830 idx = bin_index(size);
2831 assert(idx == i);
2832 /* lists are sorted */
2833 if ((CHUNK_SIZE_T) size >= (CHUNK_SIZE_T)(FIRST_SORTED_BIN_SIZE)) {
2834 assert(p->bk == b ||
2835 (CHUNK_SIZE_T)chunksize(p->bk) >=
2836 (CHUNK_SIZE_T)chunksize(p));
2837 }
2838 }
2839 /* chunk is followed by a legal chain of inuse chunks */
2840 for (q = next_chunk(p);
2841 (q != av->top && inuse(q) &&
2842 (CHUNK_SIZE_T)(chunksize(q)) >= MINSIZE);
2843 q = next_chunk(q))
2844 do_check_inuse_chunk(q);
2845 }
2846 }
2847
2848 /* top chunk is OK */
2849 check_chunk(av->top);
2850
2851 /* sanity checks for statistics */
2852
2853 assert(total <= (CHUNK_SIZE_T)(av->max_total_mem));
2854 assert(av->n_mmaps >= 0);
2855 assert(av->n_mmaps <= av->max_n_mmaps);
2856
2857 assert((CHUNK_SIZE_T)(av->sbrked_mem) <=
2858 (CHUNK_SIZE_T)(av->max_sbrked_mem));
2859
2860 assert((CHUNK_SIZE_T)(av->mmapped_mem) <=
2861 (CHUNK_SIZE_T)(av->max_mmapped_mem));
2862
2863 assert((CHUNK_SIZE_T)(av->max_total_mem) >=
2864 (CHUNK_SIZE_T)(av->mmapped_mem) + (CHUNK_SIZE_T)(av->sbrked_mem));
2865}
2866#endif
2867
2868
2869/* ----------- Routines dealing with system allocation -------------- */
2870
2871/*
2872 sysmalloc handles malloc cases requiring more memory from the system.
2873 On entry, it is assumed that av->top does not have enough
2874 space to service request for nb bytes, thus requiring that av->top
2875 be extended or replaced.
2876*/
2877
2878#if __STD_C
2879static Void_t* sYSMALLOc(INTERNAL_SIZE_T nb, mstate av)
2880#else
2881static Void_t* sYSMALLOc(nb, av) INTERNAL_SIZE_T nb; mstate av;
2882#endif
2883{
2884 mchunkptr old_top; /* incoming value of av->top */
2885 INTERNAL_SIZE_T old_size; /* its size */
2886 char* old_end; /* its end address */
2887
2888 long size; /* arg to first MORECORE or mmap call */
2889 char* brk; /* return value from MORECORE */
2890
2891 long correction; /* arg to 2nd MORECORE call */
2892 char* snd_brk; /* 2nd return val */
2893
2894 INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of new space */
2895 INTERNAL_SIZE_T end_misalign; /* partial page left at end of new space */
2896 char* aligned_brk; /* aligned offset into brk */
2897
2898 mchunkptr p; /* the allocated/returned chunk */
2899 mchunkptr remainder; /* remainder from allocation */
2900 CHUNK_SIZE_T remainder_size; /* its size */
2901
2902 CHUNK_SIZE_T sum; /* for updating stats */
2903
2904 size_t pagemask = av->pagesize - 1;
2905
2906 /*
2907 If there is space available in fastbins, consolidate and retry
2908 malloc from scratch rather than getting memory from system. This
2909 can occur only if nb is in smallbin range so we didn't consolidate
2910 upon entry to malloc. It is much easier to handle this case here
2911 than in malloc proper.
2912 */
2913
2914 if (have_fastchunks(av)) {
2915 assert(in_smallbin_range(nb));
2916 malloc_consolidate(av);
2917 return mALLOc(nb - MALLOC_ALIGN_MASK);
2918 }
2919
2920
2921#if HAVE_MMAP
2922
2923 /*
2924 If have mmap, and the request size meets the mmap threshold, and
2925 the system supports mmap, and there are few enough currently
2926 allocated mmapped regions, try to directly map this request
2927 rather than expanding top.
2928 */
2929
2930 if ((CHUNK_SIZE_T)(nb) >= (CHUNK_SIZE_T)(av->mmap_threshold) &&
2931 (av->n_mmaps < av->n_mmaps_max)) {
2932
2933 char* mm; /* return value from mmap call*/
2934
2935 /*
2936 Round up size to nearest page. For mmapped chunks, the overhead
2937 is one SIZE_SZ unit larger than for normal chunks, because there
2938 is no following chunk whose prev_size field could be used.
2939 */
2940 size = (nb + SIZE_SZ + MALLOC_ALIGN_MASK + pagemask) & ~pagemask;
2941
2942 /* Don't try if size wraps around 0 */
2943 if ((CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb)) {
2944
2945 mm = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));
2946
2947 if (mm != (char*)(MORECORE_FAILURE)) {
2948
2949 /*
2950 The offset to the start of the mmapped region is stored
2951 in the prev_size field of the chunk. This allows us to adjust
2952 returned start address to meet alignment requirements here
2953 and in memalign(), and still be able to compute proper
2954 address argument for later munmap in free() and realloc().
2955 */
2956
2957 front_misalign = (INTERNAL_SIZE_T)chunk2mem(mm) & MALLOC_ALIGN_MASK;
2958 if (front_misalign > 0) {
2959 correction = MALLOC_ALIGNMENT - front_misalign;
2960 p = (mchunkptr)(mm + correction);
2961 p->prev_size = correction;
2962 set_head(p, (size - correction) |IS_MMAPPED);
2963 }
2964 else {
2965 p = (mchunkptr)mm;
2966 p->prev_size = 0;
2967 set_head(p, size|IS_MMAPPED);
2968 }
2969
2970 /* update statistics */
2971
2972 if (++av->n_mmaps > av->max_n_mmaps)
2973 av->max_n_mmaps = av->n_mmaps;
2974
2975 sum = av->mmapped_mem += size;
2976 if (sum > (CHUNK_SIZE_T)(av->max_mmapped_mem))
2977 av->max_mmapped_mem = sum;
2978 sum += av->sbrked_mem;
2979 if (sum > (CHUNK_SIZE_T)(av->max_total_mem))
2980 av->max_total_mem = sum;
2981
2982 check_chunk(p);
2983
2984 return chunk2mem(p);
2985 }
2986 }
2987 }
2988#endif
2989
2990 /* Record incoming configuration of top */
2991
2992 old_top = av->top;
2993 old_size = chunksize(old_top);
2994 old_end = (char*)(chunk_at_offset(old_top, old_size));
2995
2996 brk = snd_brk = (char*)(MORECORE_FAILURE);
2997
2998 /*
2999 If not the first time through, we require old_size to be
3000 at least MINSIZE and to have prev_inuse set.
3001 */
3002
3003 assert((old_top == initial_top(av) && old_size == 0) ||
3004 ((CHUNK_SIZE_T) (old_size) >= MINSIZE &&
3005 prev_inuse(old_top)));
3006
3007 /* Precondition: not enough current space to satisfy nb request */
3008 assert((CHUNK_SIZE_T)(old_size) < (CHUNK_SIZE_T)(nb + MINSIZE));
3009
3010 /* Precondition: all fastbins are consolidated */
3011 assert(!have_fastchunks(av));
3012
3013
3014 /* Request enough space for nb + pad + overhead */
3015
3016 size = nb + av->top_pad + MINSIZE;
3017
3018 /*
3019 If contiguous, we can subtract out existing space that we hope to
3020 combine with new space. We add it back later only if
3021 we don't actually get contiguous space.
3022 */
3023
3024 if (contiguous(av))
3025 size -= old_size;
3026
3027 /*
3028 Round to a multiple of page size.
3029 If MORECORE is not contiguous, this ensures that we only call it
3030 with whole-page arguments. And if MORECORE is contiguous and
3031 this is not first time through, this preserves page-alignment of
3032 previous calls. Otherwise, we correct to page-align below.
3033 */
3034
3035 size = (size + pagemask) & ~pagemask;
3036
3037 /*
3038 Don't try to call MORECORE if argument is so big as to appear
3039 negative. Note that since mmap takes size_t arg, it may succeed
3040 below even if we cannot call MORECORE.
3041 */
3042
3043 if (size > 0)
3044 brk = (char*)(MORECORE(size));
3045
3046 /*
3047 If have mmap, try using it as a backup when MORECORE fails or
3048 cannot be used. This is worth doing on systems that have "holes" in
3049 address space, so sbrk cannot extend to give contiguous space, but
3050 space is available elsewhere. Note that we ignore mmap max count
3051 and threshold limits, since the space will not be used as a
3052 segregated mmap region.
3053 */
3054
3055#if HAVE_MMAP
3056 if (brk == (char*)(MORECORE_FAILURE)) {
3057
3058 /* Cannot merge with old top, so add its size back in */
3059 if (contiguous(av))
3060 size = (size + old_size + pagemask) & ~pagemask;
3061
3062 /* If we are relying on mmap as backup, then use larger units */
3063 if ((CHUNK_SIZE_T)(size) < (CHUNK_SIZE_T)(MMAP_AS_MORECORE_SIZE))
3064 size = MMAP_AS_MORECORE_SIZE;
3065
3066 /* Don't try if size wraps around 0 */
3067 if ((CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb)) {
3068
3069 brk = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));
3070
3071 if (brk != (char*)(MORECORE_FAILURE)) {
3072
3073 /* We do not need, and cannot use, another sbrk call to find end */
3074 snd_brk = brk + size;
3075
3076 /*
3077 Record that we no longer have a contiguous sbrk region.
3078 After the first time mmap is used as backup, we do not
3079 ever rely on contiguous space since this could incorrectly
3080 bridge regions.
3081 */
3082 set_noncontiguous(av);
3083 }
3084 }
3085 }
3086#endif
3087
3088 if (brk != (char*)(MORECORE_FAILURE)) {
3089 av->sbrked_mem += size;
3090
3091 /*
3092 If MORECORE extends previous space, we can likewise extend top size.
3093 */
3094
3095 if (brk == old_end && snd_brk == (char*)(MORECORE_FAILURE)) {
3096 set_head(old_top, (size + old_size) | PREV_INUSE);
3097 }
3098
3099 /*
3100 Otherwise, make adjustments:
3101
3102 * If the first time through or noncontiguous, we need to call sbrk
3103 just to find out where the end of memory lies.
3104
3105 * We need to ensure that all returned chunks from malloc will meet
3106 MALLOC_ALIGNMENT
3107
3108 * If there was an intervening foreign sbrk, we need to adjust sbrk
3109 request size to account for the fact that we will not be able to
3110 combine new space with existing space in old_top.
3111
3112 * Almost all systems internally allocate whole pages at a time, in
3113 which case we might as well use the whole last page of request.
3114 So we allocate enough more memory to hit a page boundary now,
3115 which in turn causes future contiguous calls to page-align.
3116 */
3117
3118 else {
3119 front_misalign = 0;
3120 end_misalign = 0;
3121 correction = 0;
3122 aligned_brk = brk;
3123
3124 /*
3125 If MORECORE returns an address lower than we have seen before,
3126 we know it isn't really contiguous. This and some subsequent
3127 checks help cope with non-conforming MORECORE functions and
3128 the presence of "foreign" calls to MORECORE from outside of
3129 malloc or by other threads. We cannot guarantee to detect
3130 these in all cases, but cope with the ones we do detect.
3131 */
3132 if (contiguous(av) && old_size != 0 && brk < old_end) {
3133 set_noncontiguous(av);
3134 }
3135
3136 /* handle contiguous cases */
3137 if (contiguous(av)) {
3138
3139 /*
3140 We can tolerate forward non-contiguities here (usually due
3141 to foreign calls) but treat them as part of our space for
3142 stats reporting.
3143 */
3144 if (old_size != 0)
3145 av->sbrked_mem += brk - old_end;
3146
3147 /* Guarantee alignment of first new chunk made from this space */
3148
3149 front_misalign = (INTERNAL_SIZE_T)chunk2mem(brk) & MALLOC_ALIGN_MASK;
3150 if (front_misalign > 0) {
3151
3152 /*
3153 Skip over some bytes to arrive at an aligned position.
3154 We don't need to specially mark these wasted front bytes.
3155 They will never be accessed anyway because
3156 prev_inuse of av->top (and any chunk created from its start)
3157 is always true after initialization.
3158 */
3159
3160 correction = MALLOC_ALIGNMENT - front_misalign;
3161 aligned_brk += correction;
3162 }
3163
3164 /*
3165 If this isn't adjacent to existing space, then we will not
3166 be able to merge with old_top space, so must add to 2nd request.
3167 */
3168
3169 correction += old_size;
3170
3171 /* Extend the end address to hit a page boundary */
3172 end_misalign = (INTERNAL_SIZE_T)(brk + size + correction);
3173 correction += ((end_misalign + pagemask) & ~pagemask) - end_misalign;
3174
3175 assert(correction >= 0);
3176 snd_brk = (char*)(MORECORE(correction));
3177
3178 if (snd_brk == (char*)(MORECORE_FAILURE)) {
3179 /*
3180 If can't allocate correction, try to at least find out current
3181 brk. It might be enough to proceed without failing.
3182 */
3183 correction = 0;
3184 snd_brk = (char*)(MORECORE(0));
3185 }
3186 else if (snd_brk < brk) {
3187 /*
3188 If the second call gives noncontiguous space even though
3189 it says it won't, the only course of action is to ignore
3190 results of second call, and conservatively estimate where
3191 the first call left us. Also set noncontiguous, so this
3192 won't happen again, leaving at most one hole.
3193
3194 Note that this check is intrinsically incomplete. Because
3195 MORECORE is allowed to give more space than we ask for,
3196 there is no reliable way to detect a noncontiguity
3197 producing a forward gap for the second call.
3198 */
3199 snd_brk = brk + size;
3200 correction = 0;
3201 set_noncontiguous(av);
3202 }
3203
3204 }
3205
3206 /* handle non-contiguous cases */
3207 else {
3208 /* MORECORE/mmap must correctly align */
3209 assert(aligned_OK(chunk2mem(brk)));
3210
3211 /* Find out current end of memory */
3212 if (snd_brk == (char*)(MORECORE_FAILURE)) {
3213 snd_brk = (char*)(MORECORE(0));
3214 av->sbrked_mem += snd_brk - brk - size;
3215 }
3216 }
3217
3218 /* Adjust top based on results of second sbrk */
3219 if (snd_brk != (char*)(MORECORE_FAILURE)) {
3220 av->top = (mchunkptr)aligned_brk;
3221 set_head(av->top, (snd_brk - aligned_brk + correction) | PREV_INUSE);
3222 av->sbrked_mem += correction;
3223
3224 /*
3225 If not the first time through, we either have a
3226 gap due to foreign sbrk or a non-contiguous region. Insert a
3227 double fencepost at old_top to prevent consolidation with space
3228 we don't own. These fenceposts are artificial chunks that are
3229 marked as inuse and are in any case too small to use. We need
3230 two to make sizes and alignments work out.
3231 */
3232
3233 if (old_size != 0) {
3234 /*
3235 Shrink old_top to insert fenceposts, keeping size a
3236 multiple of MALLOC_ALIGNMENT. We know there is at least
3237 enough space in old_top to do this.
3238 */
3239 old_size = (old_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
3240 set_head(old_top, old_size | PREV_INUSE);
3241
3242 /*
3243 Note that the following assignments completely overwrite
3244 old_top when old_size was previously MINSIZE. This is
3245 intentional. We need the fencepost, even if old_top otherwise gets
3246 lost.
3247 */
3248 chunk_at_offset(old_top, old_size )->size =
3249 SIZE_SZ|PREV_INUSE;
3250
3251 chunk_at_offset(old_top, old_size + SIZE_SZ)->size =
3252 SIZE_SZ|PREV_INUSE;
3253
3254 /*
3255 If possible, release the rest, suppressing trimming.
3256 */
3257 if (old_size >= MINSIZE) {
3258 INTERNAL_SIZE_T tt = av->trim_threshold;
3259 av->trim_threshold = (INTERNAL_SIZE_T)(-1);
3260 fREe(chunk2mem(old_top));
3261 av->trim_threshold = tt;
3262 }
3263 }
3264 }
3265 }
3266
3267 /* Update statistics */
3268 sum = av->sbrked_mem;
3269 if (sum > (CHUNK_SIZE_T)(av->max_sbrked_mem))
3270 av->max_sbrked_mem = sum;
3271
3272 sum += av->mmapped_mem;
3273 if (sum > (CHUNK_SIZE_T)(av->max_total_mem))
3274 av->max_total_mem = sum;
3275
3276 check_malloc_state();
3277
3278 /* finally, do the allocation */
3279
3280 p = av->top;
3281 size = chunksize(p);
3282
3283 /* check that one of the above allocation paths succeeded */
3284 if ((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb + MINSIZE)) {
3285 remainder_size = size - nb;
3286 remainder = chunk_at_offset(p, nb);
3287 av->top = remainder;
3288 set_head(p, nb | PREV_INUSE);
3289 set_head(remainder, remainder_size | PREV_INUSE);
3290 check_malloced_chunk(p, nb);
3291 return chunk2mem(p);
3292 }
3293
3294 }
3295
3296 /* catch all failure paths */
3297 MALLOC_FAILURE_ACTION;
3298 return 0;
3299}
3300
3301
3302
3303#ifndef MORECORE_CANNOT_TRIM
3304/*
3305 sYSTRIm is an inverse of sorts to sYSMALLOc. It gives memory back
3306 to the system (via negative arguments to sbrk) if there is unused
3307 memory at the `high' end of the malloc pool. It is called
3308 automatically by free() when top space exceeds the trim
3309 threshold. It is also called by the public malloc_trim routine. It
3310 returns 1 if it actually released any memory, else 0.
3311*/
3312
3313#if __STD_C
3314static int sYSTRIm(size_t pad, mstate av)
3315#else
3316static int sYSTRIm(pad, av) size_t pad; mstate av;
3317#endif
3318{
3319 long top_size; /* Amount of top-most memory */
3320 long extra; /* Amount to release */
3321 long released; /* Amount actually released */
3322 char* current_brk; /* address returned by pre-check sbrk call */
3323 char* new_brk; /* address returned by post-check sbrk call */
3324 size_t pagesz;
3325
3326 pagesz = av->pagesize;
3327 top_size = chunksize(av->top);
3328
3329 /* Release in pagesize units, keeping at least one page */
3330 extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
3331
3332 if (extra > 0) {
3333
3334 /*
3335 Only proceed if end of memory is where we last set it.
3336 This avoids problems if there were foreign sbrk calls.
3337 */
3338 current_brk = (char*)(MORECORE(0));
3339 if (current_brk == (char*)(av->top) + top_size) {
3340
3341 /*
3342 Attempt to release memory. We ignore MORECORE return value,
3343 and instead call again to find out where new end of memory is.
3344 This avoids problems if first call releases less than we asked,
3345 or if failure somehow altered brk value. (We could still
3346 encounter problems if it altered brk in some very bad way,
3347 but the only thing we can do is adjust anyway, which will cause
3348 some downstream failure.)
3349 */
3350
3351 MORECORE(-extra);
3352 new_brk = (char*)(MORECORE(0));
3353
3354 if (new_brk != (char*)MORECORE_FAILURE) {
3355 released = (long)(current_brk - new_brk);
3356
3357 if (released != 0) {
3358 /* Success. Adjust top. */
3359 av->sbrked_mem -= released;
3360 set_head(av->top, (top_size - released) | PREV_INUSE);
3361 check_malloc_state();
3362 return 1;
3363 }
3364 }
3365 }
3366 }
3367 return 0;
3368}
3369#endif /*MORECORE_CANNOT_TRIM*/
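
/*
  As an illustration of the trimming machinery above -- assuming the
  default public names (malloc, free, malloc_trim) and default
  thresholds, with the sizes purely illustrative -- a program can hand
  unused top space back to the system after releasing a run of small
  (non-mmapped) blocks:

    #include <stdlib.h>
    #include <malloc.h>

    void release_cache (void)
    {
      void *buf[512];
      int   i;
      for (i = 0; i < 512; i++)      // grow the heap with many small
        buf[i] = malloc (4096);      // blocks, each below the mmap threshold
      for (i = 0; i < 512; i++)
        free (buf[i]);               // release them all ...
      malloc_trim (0);               // ... and return unused top space
    }
*/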
3370
3371/*
3372 ------------------------------ malloc ------------------------------
3373*/
3374
3375
3376#if __STD_C
3377Void_t* mALLOc(size_t bytes)
3378#else
3379 Void_t* mALLOc(bytes) size_t bytes;
3380#endif
3381{
3382 mstate av = get_malloc_state();
3383
3384 INTERNAL_SIZE_T nb; /* normalized request size */
3385 unsigned int idx; /* associated bin index */
3386 mbinptr bin; /* associated bin */
3387 mfastbinptr* fb; /* associated fastbin */
3388
3389 mchunkptr victim; /* inspected/selected chunk */
3390 INTERNAL_SIZE_T size; /* its size */
3391 int victim_index; /* its bin index */
3392
3393 mchunkptr remainder; /* remainder from a split */
3394 CHUNK_SIZE_T remainder_size; /* its size */
3395
3396 unsigned int block; /* bit map traverser */
3397 unsigned int bit; /* bit map traverser */
3398 unsigned int map; /* current word of binmap */
3399
3400 mchunkptr fwd; /* misc temp for linking */
3401 mchunkptr bck; /* misc temp for linking */
3402
3403 /*
3404 Convert request size to internal form by adding SIZE_SZ bytes
3405 overhead plus possibly more to obtain necessary alignment and/or
3406 to obtain a size of at least MINSIZE, the smallest allocatable
3407 size. Also, checked_request2size traps (returning 0) request sizes
3408 that are so large that they wrap around zero when padded and
3409 aligned.
3410 */
3411
3412 checked_request2size(bytes, nb);
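  /*
    As a worked example of this normalization (assuming the usual 32-bit
    defaults SIZE_SZ == 4, MALLOC_ALIGNMENT == 8, MINSIZE == 16): a
    request of 20 bytes becomes nb = (20 + 4 + 7) & ~7 = 24, while a
    request of 5 bytes is padded up to MINSIZE, giving nb = 16.
  */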
3413
3414 /*
3415 Bypass search if no frees yet
3416 */
3417 if (!have_anychunks(av)) {
3418 if (av->max_fast == 0) /* initialization check */
3419 malloc_consolidate(av);
3420 goto use_top;
3421 }
3422
3423 /*
3424 If the size qualifies as a fastbin, first check corresponding bin.
3425 */
3426
3427 if ((CHUNK_SIZE_T)(nb) <= (CHUNK_SIZE_T)(av->max_fast)) {
3428 fb = &(av->fastbins[(fastbin_index(nb))]);
3429 if ( (victim = *fb) != 0) {
3430 *fb = victim->fd;
3431 check_remalloced_chunk(victim, nb);
3432 return chunk2mem(victim);
3433 }
3434 }
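  /*
    As an illustration (assuming the usual definition
    fastbin_index(sz) == ((unsigned)(sz) >> 3) - 2): normalized sizes
    16, 24, 32, ... map to fastbins 0, 1, 2, ..., so each fastbin holds
    chunks of exactly one size and the LIFO pop above needs no search.
  */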
3435
3436 /*
3437 If a small request, check regular bin. Since these "smallbins"
3438 hold one size each, no searching within bins is necessary.
3439 (For a large request, we need to wait until unsorted chunks are
3440 processed to find best fit. But for small ones, fits are exact
3441 anyway, so we can check now, which is faster.)
3442 */
3443
3444 if (in_smallbin_range(nb)) {
3445 idx = smallbin_index(nb);
3446 bin = bin_at(av,idx);
3447
3448 if ( (victim = last(bin)) != bin) {
3449 bck = victim->bk;
3450 set_inuse_bit_at_offset(victim, nb);
3451 bin->bk = bck;
3452 bck->fd = bin;
3453
3454 check_malloced_chunk(victim, nb);
3455 return chunk2mem(victim);
3456 }
3457 }
3458
3459 /*
3460 If this is a large request, consolidate fastbins before continuing.
3461 While it might look excessive to kill all fastbins before
3462 even seeing if there is space available, this avoids
3463 fragmentation problems normally associated with fastbins.
3464 Also, in practice, programs tend to have runs of either small or
3465 large requests, but less often mixtures, so consolidation is not
3466   invoked all that often in most programs. And the programs in which
3467   it is called frequently would tend to fragment anyway.
3468 */
3469
3470 else {
3471 idx = largebin_index(nb);
3472 if (have_fastchunks(av))
3473 malloc_consolidate(av);
3474 }
3475
3476 /*
3477 Process recently freed or remaindered chunks, taking one only if
3478   it is an exact fit or, if this is a small request, the chunk is the remainder from
3479 the most recent non-exact fit. Place other traversed chunks in
3480 bins. Note that this step is the only place in any routine where
3481 chunks are placed in bins.
3482 */
3483
3484 while ( (victim = unsorted_chunks(av)->bk) != unsorted_chunks(av)) {
3485 bck = victim->bk;
3486 size = chunksize(victim);
3487
3488 /*
3489 If a small request, try to use last remainder if it is the
3490 only chunk in unsorted bin. This helps promote locality for
3491 runs of consecutive small requests. This is the only
3492 exception to best-fit, and applies only when there is
3493 no exact fit for a small chunk.
3494 */
3495
3496 if (in_smallbin_range(nb) &&
3497 bck == unsorted_chunks(av) &&
3498 victim == av->last_remainder &&
3499 (CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb + MINSIZE)) {
3500
3501 /* split and reattach remainder */
3502 remainder_size = size - nb;
3503 remainder = chunk_at_offset(victim, nb);
3504 unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
3505 av->last_remainder = remainder;
3506 remainder->bk = remainder->fd = unsorted_chunks(av);
3507
3508 set_head(victim, nb | PREV_INUSE);
3509 set_head(remainder, remainder_size | PREV_INUSE);
3510 set_foot(remainder, remainder_size);
3511
3512 check_malloced_chunk(victim, nb);
3513 return chunk2mem(victim);
3514 }
3515
3516 /* remove from unsorted list */
3517 unsorted_chunks(av)->bk = bck;
3518 bck->fd = unsorted_chunks(av);
3519
3520 /* Take now instead of binning if exact fit */
3521
3522 if (size == nb) {
3523 set_inuse_bit_at_offset(victim, size);
3524 check_malloced_chunk(victim, nb);
3525 return chunk2mem(victim);
3526 }
3527
3528 /* place chunk in bin */
3529
3530 if (in_smallbin_range(size)) {
3531 victim_index = smallbin_index(size);
3532 bck = bin_at(av, victim_index);
3533 fwd = bck->fd;
3534 }
3535 else {
3536 victim_index = largebin_index(size);
3537 bck = bin_at(av, victim_index);
3538 fwd = bck->fd;
3539
3540 if (fwd != bck) {
3541 /* if smaller than smallest, place first */
3542 if ((CHUNK_SIZE_T)(size) < (CHUNK_SIZE_T)(bck->bk->size)) {
3543 fwd = bck;
3544 bck = bck->bk;
3545 }
3546 else if ((CHUNK_SIZE_T)(size) >=
3547 (CHUNK_SIZE_T)(FIRST_SORTED_BIN_SIZE)) {
3548
3549 /* maintain large bins in sorted order */
3550 size |= PREV_INUSE; /* Or with inuse bit to speed comparisons */
3551 while ((CHUNK_SIZE_T)(size) < (CHUNK_SIZE_T)(fwd->size))
3552 fwd = fwd->fd;
3553 bck = fwd->bk;
3554 }
3555 }
3556 }
3557
3558 mark_bin(av, victim_index);
3559 victim->bk = bck;
3560 victim->fd = fwd;
3561 fwd->bk = victim;
3562 bck->fd = victim;
3563 }
3564
3565 /*
3566 If a large request, scan through the chunks of current bin to
3567 find one that fits. (This will be the smallest that fits unless
3568 FIRST_SORTED_BIN_SIZE has been changed from default.) This is
3569 the only step where an unbounded number of chunks might be
3570 scanned without doing anything useful with them. However the
3571 lists tend to be short.
3572 */
3573
3574 if (!in_smallbin_range(nb)) {
3575 bin = bin_at(av, idx);
3576
3577 for (victim = last(bin); victim != bin; victim = victim->bk) {
3578 size = chunksize(victim);
3579
3580 if ((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb)) {
3581 remainder_size = size - nb;
3582 unlink(victim, bck, fwd);
3583
3584 /* Exhaust */
3585 if (remainder_size < MINSIZE) {
3586 set_inuse_bit_at_offset(victim, size);
3587 check_malloced_chunk(victim, nb);
3588 return chunk2mem(victim);
3589 }
3590 /* Split */
3591 else {
3592 remainder = chunk_at_offset(victim, nb);
3593 unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
3594 remainder->bk = remainder->fd = unsorted_chunks(av);
3595 set_head(victim, nb | PREV_INUSE);
3596 set_head(remainder, remainder_size | PREV_INUSE);
3597 set_foot(remainder, remainder_size);
3598 check_malloced_chunk(victim, nb);
3599 return chunk2mem(victim);
3600 }
3601 }
3602 }
3603 }
3604
3605 /*
3606 Search for a chunk by scanning bins, starting with next largest
3607 bin. This search is strictly by best-fit; i.e., the smallest
3608 (with ties going to approximately the least recently used) chunk
3609 that fits is selected.
3610
3611 The bitmap avoids needing to check that most blocks are nonempty.
3612 */
3613
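  /*
    As a worked example of the bitmap bookkeeping (assuming the usual
    BINMAPSHIFT of 5, i.e. 32 bins per map word): bin index 70 lives in
    binmap word 70 >> 5 == 2 under bit 1 << (70 & 31) == 1 << 6, so
    whole words of empty bins can be skipped with a single test below.
  */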
3614 ++idx;
3615 bin = bin_at(av,idx);
3616 block = idx2block(idx);
3617 map = av->binmap[block];
3618 bit = idx2bit(idx);
3619
3620 for (;;) {
3621
3622 /* Skip rest of block if there are no more set bits in this block. */
3623 if (bit > map || bit == 0) {
3624 do {
3625 if (++block >= BINMAPSIZE) /* out of bins */
3626 goto use_top;
3627 } while ( (map = av->binmap[block]) == 0);
3628
3629 bin = bin_at(av, (block << BINMAPSHIFT));
3630 bit = 1;
3631 }
3632
3633 /* Advance to bin with set bit. There must be one. */
3634 while ((bit & map) == 0) {
3635 bin = next_bin(bin);
3636 bit <<= 1;
3637 assert(bit != 0);
3638 }
3639
3640 /* Inspect the bin. It is likely to be non-empty */
3641 victim = last(bin);
3642
3643 /* If a false alarm (empty bin), clear the bit. */
3644 if (victim == bin) {
3645 av->binmap[block] = map &= ~bit; /* Write through */
3646 bin = next_bin(bin);
3647 bit <<= 1;
3648 }
3649
3650 else {
3651 size = chunksize(victim);
3652
3653 /* We know the first chunk in this bin is big enough to use. */
3654 assert((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb));
3655
3656 remainder_size = size - nb;
3657
3658 /* unlink */
3659 bck = victim->bk;
3660 bin->bk = bck;
3661 bck->fd = bin;
3662
3663 /* Exhaust */
3664 if (remainder_size < MINSIZE) {
3665 set_inuse_bit_at_offset(victim, size);
3666 check_malloced_chunk(victim, nb);
3667 return chunk2mem(victim);
3668 }
3669
3670 /* Split */
3671 else {
3672 remainder = chunk_at_offset(victim, nb);
3673
3674 unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
3675 remainder->bk = remainder->fd = unsorted_chunks(av);
3676 /* advertise as last remainder */
3677 if (in_smallbin_range(nb))
3678 av->last_remainder = remainder;
3679
3680 set_head(victim, nb | PREV_INUSE);
3681 set_head(remainder, remainder_size | PREV_INUSE);
3682 set_foot(remainder, remainder_size);
3683 check_malloced_chunk(victim, nb);
3684 return chunk2mem(victim);
3685 }
3686 }
3687 }
3688
3689 use_top:
3690 /*
3691 If large enough, split off the chunk bordering the end of memory
3692 (held in av->top). Note that this is in accord with the best-fit
3693 search rule. In effect, av->top is treated as larger (and thus
3694 less well fitting) than any other available chunk since it can
3695 be extended to be as large as necessary (up to system
3696 limitations).
3697
3698 We require that av->top always exists (i.e., has size >=
3699 MINSIZE) after initialization, so if it would otherwise be
3700   exhausted by the current request, it is replenished. (The main
3701 reason for ensuring it exists is that we may need MINSIZE space
3702 to put in fenceposts in sysmalloc.)
3703 */
3704
3705 victim = av->top;
3706 size = chunksize(victim);
3707
3708 if ((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb + MINSIZE)) {
3709 remainder_size = size - nb;
3710 remainder = chunk_at_offset(victim, nb);
3711 av->top = remainder;
3712 set_head(victim, nb | PREV_INUSE);
3713 set_head(remainder, remainder_size | PREV_INUSE);
3714
3715 check_malloced_chunk(victim, nb);
3716 return chunk2mem(victim);
3717 }
3718
3719 /*
3720 If no space in top, relay to handle system-dependent cases
3721 */
3722 return sYSMALLOc(nb, av);
3723}
3724
3725/*
3726 ------------------------------ free ------------------------------
3727*/
3728
3729#if __STD_C
3730void fREe(Void_t* mem)
3731#else
3732void fREe(mem) Void_t* mem;
3733#endif
3734{
3735 mstate av = get_malloc_state();
3736
3737 mchunkptr p; /* chunk corresponding to mem */
3738 INTERNAL_SIZE_T size; /* its size */
3739 mfastbinptr* fb; /* associated fastbin */
3740 mchunkptr nextchunk; /* next contiguous chunk */
3741 INTERNAL_SIZE_T nextsize; /* its size */
3742 int nextinuse; /* true if nextchunk is used */
3743 INTERNAL_SIZE_T prevsize; /* size of previous contiguous chunk */
3744 mchunkptr bck; /* misc temp for linking */
3745 mchunkptr fwd; /* misc temp for linking */
3746
3747 /* free(0) has no effect */
3748 if (mem != 0) {
3749 p = mem2chunk(mem);
3750 size = chunksize(p);
3751
3752 check_inuse_chunk(p);
3753
3754 /*
3755 If eligible, place chunk on a fastbin so it can be found
3756 and used quickly in malloc.
3757 */
3758
3759 if ((CHUNK_SIZE_T)(size) <= (CHUNK_SIZE_T)(av->max_fast)
3760
3761#if TRIM_FASTBINS
3762 /*
3763 If TRIM_FASTBINS set, don't place chunks
3764 bordering top into fastbins
3765 */
3766 && (chunk_at_offset(p, size) != av->top)
3767#endif
3768 ) {
3769
3770 set_fastchunks(av);
3771 fb = &(av->fastbins[fastbin_index(size)]);
3772 p->fd = *fb;
3773 *fb = p;
3774 }
3775
3776 /*
3777 Consolidate other non-mmapped chunks as they arrive.
3778 */
3779
3780 else if (!chunk_is_mmapped(p)) {
3781 set_anychunks(av);
3782
3783 nextchunk = chunk_at_offset(p, size);
3784 nextsize = chunksize(nextchunk);
3785
3786 /* consolidate backward */
3787 if (!prev_inuse(p)) {
3788 prevsize = p->prev_size;
3789 size += prevsize;
3790 p = chunk_at_offset(p, -((long) prevsize));
3791 unlink(p, bck, fwd);
3792 }
3793
3794 if (nextchunk != av->top) {
3795 /* get and clear inuse bit */
3796 nextinuse = inuse_bit_at_offset(nextchunk, nextsize);
3797 set_head(nextchunk, nextsize);
3798
3799 /* consolidate forward */
3800 if (!nextinuse) {
3801 unlink(nextchunk, bck, fwd);
3802 size += nextsize;
3803 }
3804
3805 /*
3806 Place the chunk in unsorted chunk list. Chunks are
3807 not placed into regular bins until after they have
3808 been given one chance to be used in malloc.
3809 */
3810
3811 bck = unsorted_chunks(av);
3812 fwd = bck->fd;
3813 p->bk = bck;
3814 p->fd = fwd;
3815 bck->fd = p;
3816 fwd->bk = p;
3817
3818 set_head(p, size | PREV_INUSE);
3819 set_foot(p, size);
3820
3821 check_free_chunk(p);
3822 }
3823
3824 /*
3825 If the chunk borders the current high end of memory,
3826 consolidate into top
3827 */
3828
3829 else {
3830 size += nextsize;
3831 set_head(p, size | PREV_INUSE);
3832 av->top = p;
3833 check_chunk(p);
3834 }
3835
3836 /*
3837 If freeing a large space, consolidate possibly-surrounding
3838 chunks. Then, if the total unused topmost memory exceeds trim
3839 threshold, ask malloc_trim to reduce top.
3840
3841 Unless max_fast is 0, we don't know if there are fastbins
3842 bordering top, so we cannot tell for sure whether threshold
3843 has been reached unless fastbins are consolidated. But we
3844 don't want to consolidate on each free. As a compromise,
3845 consolidation is performed if FASTBIN_CONSOLIDATION_THRESHOLD
3846 is reached.
3847 */
3848
3849 if ((CHUNK_SIZE_T)(size) >= FASTBIN_CONSOLIDATION_THRESHOLD) {
3850 if (have_fastchunks(av))
3851 malloc_consolidate(av);
3852
3853#ifndef MORECORE_CANNOT_TRIM
3854 if ((CHUNK_SIZE_T)(chunksize(av->top)) >=
3855 (CHUNK_SIZE_T)(av->trim_threshold))
3856 sYSTRIm(av->top_pad, av);
3857#endif
3858 }
3859
3860 }
3861 /*
3862 If the chunk was allocated via mmap, release via munmap()
3863 Note that if HAVE_MMAP is false but chunk_is_mmapped is
3864     true, then the user must have overwritten memory. There's nothing
3865     we can do to catch this error unless DEBUG is set, in which case
3866     check_inuse_chunk (above) will have triggered an error.
3867 */
3868
3869 else {
3870#if HAVE_MMAP
3871 int ret;
3872 INTERNAL_SIZE_T offset = p->prev_size;
3873 av->n_mmaps--;
3874 av->mmapped_mem -= (size + offset);
3875 ret = munmap((char*)p - offset, size + offset);
3876 /* munmap returns non-zero on failure */
3877 assert(ret == 0);
3878#endif
3879 }
3880 }
3881}
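
/*
  As an illustration of the fastbin policy above -- assuming the default
  public names (mallopt, M_MXFAST) -- a caller that prefers immediate
  coalescing over caching can disable fastbins entirely:

    #include <malloc.h>

    void prefer_low_fragmentation (void)
    {
      mallopt (M_MXFAST, 0);  // with max_fast == 0, freed chunks skip the
                              // fastbins and are consolidated right away
    }
*/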
3882
3883/*
3884 ------------------------- malloc_consolidate -------------------------
3885
3886 malloc_consolidate is a specialized version of free() that tears
3887 down chunks held in fastbins. Free itself cannot be used for this
3888 purpose since, among other things, it might place chunks back onto
3889 fastbins. So, instead, we need to use a minor variant of the same
3890 code.
3891
3892 Also, because this routine needs to be called the first time through
3893 malloc anyway, it turns out to be the perfect place to trigger
3894 initialization code.
3895*/
3896
3897#if __STD_C
3898static void malloc_consolidate(mstate av)
3899#else
3900static void malloc_consolidate(av) mstate av;
3901#endif
3902{
3903 mfastbinptr* fb; /* current fastbin being consolidated */
3904 mfastbinptr* maxfb; /* last fastbin (for loop control) */
3905 mchunkptr p; /* current chunk being consolidated */
3906 mchunkptr nextp; /* next chunk to consolidate */
3907 mchunkptr unsorted_bin; /* bin header */
3908 mchunkptr first_unsorted; /* chunk to link to */
3909
3910 /* These have same use as in free() */
3911 mchunkptr nextchunk;
3912 INTERNAL_SIZE_T size;
3913 INTERNAL_SIZE_T nextsize;
3914 INTERNAL_SIZE_T prevsize;
3915 int nextinuse;
3916 mchunkptr bck;
3917 mchunkptr fwd;
3918
3919 /*
3920 If max_fast is 0, we know that av hasn't
3921 yet been initialized, in which case do so below
3922 */
3923
3924 if (av->max_fast != 0) {
3925 clear_fastchunks(av);
3926
3927 unsorted_bin = unsorted_chunks(av);
3928
3929 /*
3930 Remove each chunk from fast bin and consolidate it, placing it
3931 then in unsorted bin. Among other reasons for doing this,
3932 placing in unsorted bin avoids needing to calculate actual bins
3933 until malloc is sure that chunks aren't immediately going to be
3934 reused anyway.
3935 */
3936
3937 maxfb = &(av->fastbins[fastbin_index(av->max_fast)]);
3938 fb = &(av->fastbins[0]);
3939 do {
3940 if ( (p = *fb) != 0) {
3941 *fb = 0;
3942
3943 do {
3944 check_inuse_chunk(p);
3945 nextp = p->fd;
3946
3947 /* Slightly streamlined version of consolidation code in free() */
3948 size = p->size & ~PREV_INUSE;
3949 nextchunk = chunk_at_offset(p, size);
3950 nextsize = chunksize(nextchunk);
3951
3952 if (!prev_inuse(p)) {
3953 prevsize = p->prev_size;
3954 size += prevsize;
3955 p = chunk_at_offset(p, -((long) prevsize));
3956 unlink(p, bck, fwd);
3957 }
3958
3959 if (nextchunk != av->top) {
3960 nextinuse = inuse_bit_at_offset(nextchunk, nextsize);
3961 set_head(nextchunk, nextsize);
3962
3963 if (!nextinuse) {
3964 size += nextsize;
3965 unlink(nextchunk, bck, fwd);
3966 }
3967
3968 first_unsorted = unsorted_bin->fd;
3969 unsorted_bin->fd = p;
3970 first_unsorted->bk = p;
3971
3972 set_head(p, size | PREV_INUSE);
3973 p->bk = unsorted_bin;
3974 p->fd = first_unsorted;
3975 set_foot(p, size);
3976 }
3977
3978 else {
3979 size += nextsize;
3980 set_head(p, size | PREV_INUSE);
3981 av->top = p;
3982 }
3983
3984 } while ( (p = nextp) != 0);
3985
3986 }
3987 } while (fb++ != maxfb);
3988 }
3989 else {
3990 malloc_init_state(av);
3991 check_malloc_state();
3992 }
3993}
3994
3995/*
3996 ------------------------------ realloc ------------------------------
3997*/
3998
3999
4000#if __STD_C
4001Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
4002#else
4003Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
4004#endif
4005{
4006 mstate av = get_malloc_state();
4007
4008 INTERNAL_SIZE_T nb; /* padded request size */
4009
4010 mchunkptr oldp; /* chunk corresponding to oldmem */
4011 INTERNAL_SIZE_T oldsize; /* its size */
4012
4013 mchunkptr newp; /* chunk to return */
4014 INTERNAL_SIZE_T newsize; /* its size */
4015 Void_t* newmem; /* corresponding user mem */
4016
4017 mchunkptr next; /* next contiguous chunk after oldp */
4018
4019 mchunkptr remainder; /* extra space at end of newp */
4020 CHUNK_SIZE_T remainder_size; /* its size */
4021
4022 mchunkptr bck; /* misc temp for linking */
4023 mchunkptr fwd; /* misc temp for linking */
4024
4025 CHUNK_SIZE_T copysize; /* bytes to copy */
4026 unsigned int ncopies; /* INTERNAL_SIZE_T words to copy */
4027 INTERNAL_SIZE_T* s; /* copy source */
4028 INTERNAL_SIZE_T* d; /* copy destination */
4029
4030
4031#ifdef REALLOC_ZERO_BYTES_FREES
4032 if (bytes == 0) {
4033 fREe(oldmem);
4034 return 0;
4035 }
4036#endif
4037
4038 /* realloc of null is supposed to be same as malloc */
4039 if (oldmem == 0) return mALLOc(bytes);
4040
4041 checked_request2size(bytes, nb);
4042
4043 oldp = mem2chunk(oldmem);
4044 oldsize = chunksize(oldp);
4045
4046 check_inuse_chunk(oldp);
4047
4048 if (!chunk_is_mmapped(oldp)) {
4049
4050 if ((CHUNK_SIZE_T)(oldsize) >= (CHUNK_SIZE_T)(nb)) {
4051 /* already big enough; split below */
4052 newp = oldp;
4053 newsize = oldsize;
4054 }
4055
4056 else {
4057 next = chunk_at_offset(oldp, oldsize);
4058
4059 /* Try to expand forward into top */
4060 if (next == av->top &&
4061 (CHUNK_SIZE_T)(newsize = oldsize + chunksize(next)) >=
4062 (CHUNK_SIZE_T)(nb + MINSIZE)) {
4063 set_head_size(oldp, nb);
4064 av->top = chunk_at_offset(oldp, nb);
4065 set_head(av->top, (newsize - nb) | PREV_INUSE);
4066 return chunk2mem(oldp);
4067 }
4068
4069 /* Try to expand forward into next chunk; split off remainder below */
4070 else if (next != av->top &&
4071 !inuse(next) &&
4072 (CHUNK_SIZE_T)(newsize = oldsize + chunksize(next)) >=
4073 (CHUNK_SIZE_T)(nb)) {
4074 newp = oldp;
4075 unlink(next, bck, fwd);
4076 }
4077
4078 /* allocate, copy, free */
4079 else {
4080 newmem = mALLOc(nb - MALLOC_ALIGN_MASK);
4081 if (newmem == 0)
4082 return 0; /* propagate failure */
4083
4084 newp = mem2chunk(newmem);
4085 newsize = chunksize(newp);
4086
4087 /*
4088 Avoid copy if newp is next chunk after oldp.
4089 */
4090 if (newp == next) {
4091 newsize += oldsize;
4092 newp = oldp;
4093 }
4094 else {
4095 /*
4096           Unroll copy of <= 36 bytes (72 if 8-byte sizes). Chunk sizes
4097           are multiples of 2*SIZE_SZ and one SIZE_SZ of overhead is excluded,
4098           so contents span an odd number of INTERNAL_SIZE_T-sized words; minimally 3.
4099 */
4100
4101 copysize = oldsize - SIZE_SZ;
4102 s = (INTERNAL_SIZE_T*)(oldmem);
4103 d = (INTERNAL_SIZE_T*)(newmem);
4104 ncopies = copysize / sizeof(INTERNAL_SIZE_T);
4105 assert(ncopies >= 3);
4106
4107 if (ncopies > 9)
4108 MALLOC_COPY(d, s, copysize);
4109
4110 else {
4111 *(d+0) = *(s+0);
4112 *(d+1) = *(s+1);
4113 *(d+2) = *(s+2);
4114 if (ncopies > 4) {
4115 *(d+3) = *(s+3);
4116 *(d+4) = *(s+4);
4117 if (ncopies > 6) {
4118 *(d+5) = *(s+5);
4119 *(d+6) = *(s+6);
4120 if (ncopies > 8) {
4121 *(d+7) = *(s+7);
4122 *(d+8) = *(s+8);
4123 }
4124 }
4125 }
4126 }
4127
4128 fREe(oldmem);
4129 check_inuse_chunk(newp);
4130 return chunk2mem(newp);
4131 }
4132 }
4133 }
4134
4135 /* If possible, free extra space in old or extended chunk */
4136
4137 assert((CHUNK_SIZE_T)(newsize) >= (CHUNK_SIZE_T)(nb));
4138
4139 remainder_size = newsize - nb;
4140
4141 if (remainder_size < MINSIZE) { /* not enough extra to split off */
4142 set_head_size(newp, newsize);
4143 set_inuse_bit_at_offset(newp, newsize);
4144 }
4145 else { /* split remainder */
4146 remainder = chunk_at_offset(newp, nb);
4147 set_head_size(newp, nb);
4148 set_head(remainder, remainder_size | PREV_INUSE);
4149 /* Mark remainder as inuse so free() won't complain */
4150 set_inuse_bit_at_offset(remainder, remainder_size);
4151 fREe(chunk2mem(remainder));
4152 }
4153
4154 check_inuse_chunk(newp);
4155 return chunk2mem(newp);
4156 }
4157
4158 /*
4159 Handle mmap cases
4160 */
4161
4162 else {
4163#if HAVE_MMAP
4164
4165#if HAVE_MREMAP
4166 INTERNAL_SIZE_T offset = oldp->prev_size;
4167 size_t pagemask = av->pagesize - 1;
4168 char *cp;
4169 CHUNK_SIZE_T sum;
4170
4171 /* Note the extra SIZE_SZ overhead */
4172 newsize = (nb + offset + SIZE_SZ + pagemask) & ~pagemask;
4173
4174 /* don't need to remap if still within same page */
4175 if (oldsize == newsize - offset)
4176 return oldmem;
4177
4178 cp = (char*)mremap((char*)oldp - offset, oldsize + offset, newsize, 1);
4179
4180 if (cp != (char*)MORECORE_FAILURE) {
4181
4182 newp = (mchunkptr)(cp + offset);
4183 set_head(newp, (newsize - offset)|IS_MMAPPED);
4184
4185 assert(aligned_OK(chunk2mem(newp)));
4186 assert((newp->prev_size == offset));
4187
4188 /* update statistics */
4189 sum = av->mmapped_mem += newsize - oldsize;
4190 if (sum > (CHUNK_SIZE_T)(av->max_mmapped_mem))
4191 av->max_mmapped_mem = sum;
4192 sum += av->sbrked_mem;
4193 if (sum > (CHUNK_SIZE_T)(av->max_total_mem))
4194 av->max_total_mem = sum;
4195
4196 return chunk2mem(newp);
4197 }
4198#endif
4199
4200 /* Note the extra SIZE_SZ overhead. */
4201 if ((CHUNK_SIZE_T)(oldsize) >= (CHUNK_SIZE_T)(nb + SIZE_SZ))
4202 newmem = oldmem; /* do nothing */
4203 else {
4204 /* Must alloc, copy, free. */
4205 newmem = mALLOc(nb - MALLOC_ALIGN_MASK);
4206 if (newmem != 0) {
4207 MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
4208 fREe(oldmem);
4209 }
4210 }
4211 return newmem;
4212
4213#else
4214 /* If !HAVE_MMAP, but chunk_is_mmapped, user must have overwritten mem */
4215 check_malloc_state();
4216 MALLOC_FAILURE_ACTION;
4217 return 0;
4218#endif
4219 }
4220}
4221
4222/*
4223 ------------------------------ memalign ------------------------------
4224*/
4225
4226#if __STD_C
4227Void_t* mEMALIGn(size_t alignment, size_t bytes)
4228#else
4229Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
4230#endif
4231{
4232 INTERNAL_SIZE_T nb; /* padded request size */
4233 char* m; /* memory returned by malloc call */
4234 mchunkptr p; /* corresponding chunk */
4235 char* brk; /* alignment point within p */
4236 mchunkptr newp; /* chunk to return */
4237 INTERNAL_SIZE_T newsize; /* its size */
4238 INTERNAL_SIZE_T leadsize; /* leading space before alignment point */
4239 mchunkptr remainder; /* spare room at end to split off */
4240 CHUNK_SIZE_T remainder_size; /* its size */
4241 INTERNAL_SIZE_T size;
4242
4243 /* If need less alignment than we give anyway, just relay to malloc */
4244
4245 if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
4246
4247 /* Otherwise, ensure that it is at least a minimum chunk size */
4248
4249 if (alignment < MINSIZE) alignment = MINSIZE;
4250
4251 /* Make sure alignment is power of 2 (in case MINSIZE is not). */
4252 if ((alignment & (alignment - 1)) != 0) {
4253 size_t a = MALLOC_ALIGNMENT * 2;
4254 while ((CHUNK_SIZE_T)a < (CHUNK_SIZE_T)alignment) a <<= 1;
4255 alignment = a;
4256 }
4257
4258 checked_request2size(bytes, nb);
4259
4260 /*
4261 Strategy: find a spot within that chunk that meets the alignment
4262 request, and then possibly free the leading and trailing space.
4263 */
4264
4265
4266 /* Call malloc with worst case padding to hit alignment. */
4267
4268 m = (char*)(mALLOc(nb + alignment + MINSIZE));
4269
4270 if (m == 0) return 0; /* propagate failure */
4271
4272 p = mem2chunk(m);
4273
4274 if ((((PTR_UINT)(m)) % alignment) != 0) { /* misaligned */
4275
4276 /*
4277 Find an aligned spot inside chunk. Since we need to give back
4278 leading space in a chunk of at least MINSIZE, if the first
4279 calculation places us at a spot with less than MINSIZE leader,
4280 we can move to the next aligned spot -- we've allocated enough
4281 total room so that this is always possible.
4282 */
4283
4284 brk = (char*)mem2chunk((PTR_UINT)(((PTR_UINT)(m + alignment - 1)) &
4285 -((signed long) alignment)));
4286 if ((CHUNK_SIZE_T)(brk - (char*)(p)) < MINSIZE)
4287 brk += alignment;
4288
4289 newp = (mchunkptr)brk;
4290 leadsize = brk - (char*)(p);
4291 newsize = chunksize(p) - leadsize;
4292
4293 /* For mmapped chunks, just adjust offset */
4294 if (chunk_is_mmapped(p)) {
4295 newp->prev_size = p->prev_size + leadsize;
4296 set_head(newp, newsize|IS_MMAPPED);
4297 return chunk2mem(newp);
4298 }
4299
4300 /* Otherwise, give back leader, use the rest */
4301 set_head(newp, newsize | PREV_INUSE);
4302 set_inuse_bit_at_offset(newp, newsize);
4303 set_head_size(p, leadsize);
4304 fREe(chunk2mem(p));
4305 p = newp;
4306
4307 assert (newsize >= nb &&
4308 (((PTR_UINT)(chunk2mem(p))) % alignment) == 0);
4309 }
4310
4311 /* Also give back spare room at the end */
4312 if (!chunk_is_mmapped(p)) {
4313 size = chunksize(p);
4314 if ((CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb + MINSIZE)) {
4315 remainder_size = size - nb;
4316 remainder = chunk_at_offset(p, nb);
4317 set_head(remainder, remainder_size | PREV_INUSE);
4318 set_head_size(p, nb);
4319 fREe(chunk2mem(remainder));
4320 }
4321 }
4322
4323 check_inuse_chunk(p);
4324 return chunk2mem(p);
4325}
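
/*
  As a usage illustration -- assuming the default public names
  (memalign, free), with the sizes purely illustrative:

    #include <malloc.h>

    void *aligned_io_buffer (size_t len)
    {
      // 4096 is a power of two, so it is used as given; a request for,
      // say, 48-byte alignment would be rounded up to 64 as above.
      return memalign (4096, len);   // release later with free()
    }
*/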
4326
4327/*
4328 ------------------------------ calloc ------------------------------
4329*/
4330
4331#if __STD_C
4332Void_t* cALLOc(size_t n_elements, size_t elem_size)
4333#else
4334Void_t* cALLOc(n_elements, elem_size) size_t n_elements; size_t elem_size;
4335#endif
4336{
4337 mchunkptr p;
4338 CHUNK_SIZE_T clearsize;
4339 CHUNK_SIZE_T nclears;
4340 INTERNAL_SIZE_T* d;
4341
4342 Void_t* mem = mALLOc(n_elements * elem_size);
4343
4344 if (mem != 0) {
4345 p = mem2chunk(mem);
4346
4347 if (!chunk_is_mmapped(p))
4348 {
4349 /*
4350         Unroll clear of <= 36 bytes (72 if 8-byte sizes)
4351 We know that contents have an odd number of
4352 INTERNAL_SIZE_T-sized words; minimally 3.
4353 */
4354
4355 d = (INTERNAL_SIZE_T*)mem;
4356 clearsize = chunksize(p) - SIZE_SZ;
4357 nclears = clearsize / sizeof(INTERNAL_SIZE_T);
4358 assert(nclears >= 3);
4359
4360 if (nclears > 9)
4361 MALLOC_ZERO(d, clearsize);
4362
4363 else {
4364 *(d+0) = 0;
4365 *(d+1) = 0;
4366 *(d+2) = 0;
4367 if (nclears > 4) {
4368 *(d+3) = 0;
4369 *(d+4) = 0;
4370 if (nclears > 6) {
4371 *(d+5) = 0;
4372 *(d+6) = 0;
4373 if (nclears > 8) {
4374 *(d+7) = 0;
4375 *(d+8) = 0;
4376 }
4377 }
4378 }
4379 }
4380 }
4381#if ! MMAP_CLEARS
4382 else
4383 {
4384 d = (INTERNAL_SIZE_T*)mem;
4385 /*
4386 Note the additional SIZE_SZ
4387 */
4388 clearsize = chunksize(p) - 2*SIZE_SZ;
4389 MALLOC_ZERO(d, clearsize);
4390 }
4391#endif
4392 }
4393 return mem;
4394}
4395
4396/*
4397 ------------------------------ cfree ------------------------------
4398*/
4399
4400#if __STD_C
4401void cFREe(Void_t *mem)
4402#else
4403void cFREe(mem) Void_t *mem;
4404#endif
4405{
4406 fREe(mem);
4407}
4408
4409#ifdef NEED_INDEPENDENT
4410/*
4411 ------------------------- independent_calloc -------------------------
4412*/
4413
4414#if __STD_C
4415Void_t** iCALLOc(size_t n_elements, size_t elem_size, Void_t* chunks[])
4416#else
4417Void_t** iCALLOc(n_elements, elem_size, chunks) size_t n_elements; size_t elem_size; Void_t* chunks[];
4418#endif
4419{
4420 size_t sz = elem_size; /* serves as 1-element array */
4421 /* opts arg of 3 means all elements are same size, and should be cleared */
4422 return iALLOc(n_elements, &sz, 3, chunks);
4423}
4424
4425/*
4426 ------------------------- independent_comalloc -------------------------
4427*/
4428
4429#if __STD_C
4430Void_t** iCOMALLOc(size_t n_elements, size_t sizes[], Void_t* chunks[])
4431#else
4432Void_t** iCOMALLOc(n_elements, sizes, chunks) size_t n_elements; size_t sizes[]; Void_t* chunks[];
4433#endif
4434{
4435 return iALLOc(n_elements, sizes, 0, chunks);
4436}
4437
4438/*
4439 ------------------------------ ialloc ------------------------------
4440 ialloc provides common support for independent_X routines, handling all of
4441 the combinations that can result.
4442
4443 The opts arg has:
4444 bit 0 set if all elements are same size (using sizes[0])
4445 bit 1 set if elements should be zeroed
4446*/
4447
4448
4449#if __STD_C
4450static Void_t** iALLOc(size_t n_elements,
4451 size_t* sizes,
4452 int opts,
4453 Void_t* chunks[])
4454#else
4455static Void_t** iALLOc(n_elements, sizes, opts, chunks) size_t n_elements; size_t* sizes; int opts; Void_t* chunks[];
4456#endif
4457{
4458 mstate av = get_malloc_state();
4459 INTERNAL_SIZE_T element_size; /* chunksize of each element, if all same */
4460 INTERNAL_SIZE_T contents_size; /* total size of elements */
4461 INTERNAL_SIZE_T array_size; /* request size of pointer array */
4462 Void_t* mem; /* malloced aggregate space */
4463 mchunkptr p; /* corresponding chunk */
4464 INTERNAL_SIZE_T remainder_size; /* remaining bytes while splitting */
4465 Void_t** marray; /* either "chunks" or malloced ptr array */
4466 mchunkptr array_chunk; /* chunk for malloced ptr array */
4467 int mmx; /* to disable mmap */
4468 INTERNAL_SIZE_T size;
4469 size_t i;
4470
4471 /* Ensure initialization */
4472 if (av->max_fast == 0) malloc_consolidate(av);
4473
4474 /* compute array length, if needed */
4475 if (chunks != 0) {
4476 if (n_elements == 0)
4477 return chunks; /* nothing to do */
4478 marray = chunks;
4479 array_size = 0;
4480 }
4481 else {
4482 /* if empty req, must still return chunk representing empty array */
4483 if (n_elements == 0)
4484 return (Void_t**) mALLOc(0);
4485 marray = 0;
4486 array_size = request2size(n_elements * (sizeof(Void_t*)));
4487 }
4488
4489 /* compute total element size */
4490 if (opts & 0x1) { /* all-same-size */
4491 element_size = request2size(*sizes);
4492 contents_size = n_elements * element_size;
4493 }
4494 else { /* add up all the sizes */
4495 element_size = 0;
4496 contents_size = 0;
4497 for (i = 0; i != n_elements; ++i)
4498 contents_size += request2size(sizes[i]);
4499 }
4500
4501 /* subtract out alignment bytes from total to minimize overallocation */
4502 size = contents_size + array_size - MALLOC_ALIGN_MASK;
4503
4504 /*
4505 Allocate the aggregate chunk.
4506 But first disable mmap so malloc won't use it, since
4507 we would not be able to later free/realloc space internal
4508 to a segregated mmap region.
4509 */
4510 mmx = av->n_mmaps_max; /* disable mmap */
4511 av->n_mmaps_max = 0;
4512 mem = mALLOc(size);
4513 av->n_mmaps_max = mmx; /* reset mmap */
4514 if (mem == 0)
4515 return 0;
4516
4517 p = mem2chunk(mem);
4518 assert(!chunk_is_mmapped(p));
4519 remainder_size = chunksize(p);
4520
4521 if (opts & 0x2) { /* optionally clear the elements */
4522 MALLOC_ZERO(mem, remainder_size - SIZE_SZ - array_size);
4523 }
4524
4525 /* If not provided, allocate the pointer array as final part of chunk */
4526 if (marray == 0) {
4527 array_chunk = chunk_at_offset(p, contents_size);
4528 marray = (Void_t**) (chunk2mem(array_chunk));
4529 set_head(array_chunk, (remainder_size - contents_size) | PREV_INUSE);
4530 remainder_size = contents_size;
4531 }
4532
4533 /* split out elements */
4534 for (i = 0; ; ++i) {
4535 marray[i] = chunk2mem(p);
4536 if (i != n_elements-1) {
4537 if (element_size != 0)
4538 size = element_size;
4539 else
4540 size = request2size(sizes[i]);
4541 remainder_size -= size;
4542 set_head(p, size | PREV_INUSE);
4543 p = chunk_at_offset(p, size);
4544 }
4545 else { /* the final element absorbs any overallocation slop */
4546 set_head(p, remainder_size | PREV_INUSE);
4547 break;
4548 }
4549 }
4550
4551#if DEBUG
4552 if (marray != chunks) {
4553 /* final element must have exactly exhausted chunk */
4554 if (element_size != 0)
4555 assert(remainder_size == element_size);
4556 else
4557 assert(remainder_size == request2size(sizes[i]));
4558 check_inuse_chunk(mem2chunk(marray));
4559 }
4560
4561 for (i = 0; i != n_elements; ++i)
4562 check_inuse_chunk(mem2chunk(marray[i]));
4563#endif
4564
4565 return marray;
4566}
4567#endif /* NEED_INDEPENDENT */
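
/*
  As a usage illustration of independent_comalloc (iCOMALLOc above),
  available when NEED_INDEPENDENT is defined; the struct and element
  counts are purely illustrative:

    #include <stdlib.h>

    struct Node { int item; struct Node *next; };

    struct Node *build_list (size_t n)
    {
      void   *mem[2];
      size_t  sizes[2];
      size_t  i;
      struct Node *nodes;

      if (n == 0)
        return 0;
      sizes[0] = n * sizeof (struct Node);  // one block for all nodes
      sizes[1] = n * sizeof (int);          // one block for per-node data
      if (independent_comalloc (2, sizes, mem) == 0)
        return 0;                           // propagate failure
      nodes = (struct Node *) mem[0];
      for (i = 0; i + 1 < n; ++i)
        nodes[i].next = &nodes[i + 1];
      nodes[n - 1].next = 0;
      return nodes;      // free (mem[0]) and free (mem[1]) when done
    }
*/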
4568
4569
4570/*
4571 ------------------------------ valloc ------------------------------
4572*/
4573
4574#if __STD_C
4575Void_t* vALLOc(size_t bytes)
4576#else
4577Void_t* vALLOc(bytes) size_t bytes;
4578#endif
4579{
4580 /* Ensure initialization */
4581 mstate av = get_malloc_state();
4582 if (av->max_fast == 0) malloc_consolidate(av);
4583 return mEMALIGn(av->pagesize, bytes);
4584}
4585
4586#ifdef NEED_PVALLOC
4587/*
4588 ------------------------------ pvalloc ------------------------------
4589*/
4590
4591
4592#if __STD_C
4593Void_t* pVALLOc(size_t bytes)
4594#else
4595Void_t* pVALLOc(bytes) size_t bytes;
4596#endif
4597{
4598 mstate av = get_malloc_state();
4599 size_t pagesz;
4600
4601 /* Ensure initialization */
4602 if (av->max_fast == 0) malloc_consolidate(av);
4603 pagesz = av->pagesize;
4604 return mEMALIGn(pagesz, (bytes + pagesz - 1) & ~(pagesz - 1));
4605}
4606#endif /*NEED_PVALLOC*/
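
/*
  For example (when pvalloc is built in and the page size is 4096),
  pvalloc(5000) rounds the request to (5000 + 4095) & ~4095 == 8192
  bytes and returns it page-aligned.
*/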
4607
4608
4609/*
4610 ------------------------------ malloc_trim ------------------------------
4611*/
4612
4613#if __STD_C
4614int mTRIm(size_t pad)
4615#else
4616int mTRIm(pad) size_t pad;
4617#endif
4618{
4619 mstate av = get_malloc_state();
4620 /* Ensure initialization/consolidation */
4621 malloc_consolidate(av);
4622
4623#ifndef MORECORE_CANNOT_TRIM
4624 return sYSTRIm(pad, av);
4625#else
4626 return 0;
4627#endif
4628}
4629
4630
4631/*
4632 ------------------------- malloc_usable_size -------------------------
4633*/
4634
4635#if __STD_C
4636size_t mUSABLe(Void_t* mem)
4637#else
4638size_t mUSABLe(mem) Void_t* mem;
4639#endif
4640{
4641 mchunkptr p;
4642 if (mem != 0) {
4643 p = mem2chunk(mem);
4644 if (chunk_is_mmapped(p))
4645 return chunksize(p) - 2*SIZE_SZ;
4646 else if (inuse(p))
4647 return chunksize(p) - SIZE_SZ;
4648 }
4649 return 0;
4650}
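
/*
  As an illustration -- assuming the default public names (malloc,
  malloc_usable_size) and the usual 32-bit defaults -- the padding added
  by request2size may legitimately be used by the caller:

    #include <malloc.h>
    #include <string.h>

    void fill_whole_chunk (void)
    {
      char *p = (char *) malloc (17);   // padded to (at least) a 24-byte
      if (p != 0)                       // chunk, so malloc_usable_size
        memset (p, 0, malloc_usable_size (p));   // reports 20 or more
    }
*/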
4651
4652/*
4653 ------------------------------ mallinfo ------------------------------
4654*/
4655
4656struct mallinfo mALLINFo()
4657{
4658 mstate av = get_malloc_state();
4659 struct mallinfo mi;
4660 unsigned i;
4661 mbinptr b;
4662 mchunkptr p;
4663 INTERNAL_SIZE_T avail;
4664 INTERNAL_SIZE_T fastavail;
4665 int nblocks;
4666 int nfastblocks;
4667
4668 /* Ensure initialization */
4669 if (av->top == 0) malloc_consolidate(av);
4670
4671 check_malloc_state();
4672
4673 /* Account for top */
4674 avail = chunksize(av->top);
4675 nblocks = 1; /* top always exists */
4676
4677 /* traverse fastbins */
4678 nfastblocks = 0;
4679 fastavail = 0;
4680
4681 for (i = 0; i < NFASTBINS; ++i) {
4682 for (p = av->fastbins[i]; p != 0; p = p->fd) {
4683 ++nfastblocks;
4684 fastavail += chunksize(p);
4685 }
4686 }
4687
4688 avail += fastavail;
4689
4690 /* traverse regular bins */
4691 for (i = 1; i < NBINS; ++i) {
4692 b = bin_at(av, i);
4693 for (p = last(b); p != b; p = p->bk) {
4694 ++nblocks;
4695 avail += chunksize(p);
4696 }
4697 }
4698
4699 mi.smblks = nfastblocks;
4700 mi.ordblks = nblocks;
4701 mi.fordblks = avail;
4702 mi.uordblks = av->sbrked_mem - avail;
4703 mi.arena = av->sbrked_mem;
4704 mi.hblks = av->n_mmaps;
4705 mi.hblkhd = av->mmapped_mem;
4706 mi.fsmblks = fastavail;
4707 mi.keepcost = chunksize(av->top);
4708 mi.usmblks = av->max_total_mem;
4709 return mi;
4710}
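
/*
  As a reporting illustration built on the fields filled in above,
  assuming the default public name (mallinfo):

    #include <stdio.h>
    #include <malloc.h>

    void report_heap (void)
    {
      struct mallinfo mi = mallinfo ();
      printf ("sbrk arena:    %d bytes\n", mi.arena);
      printf ("mmapped space: %d bytes\n", mi.hblkhd);
      printf ("in use:        %d bytes\n", mi.uordblks);
      printf ("free in bins:  %d bytes\n", mi.fordblks);
      printf ("trimmable top: %d bytes\n", mi.keepcost);
    }
*/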
4711
4712/*
4713 ------------------------------ malloc_stats ------------------------------
4714*/
4715
4716void mSTATs()
4717{
4718 struct mallinfo mi = mALLINFo();
4719
4720#ifdef WIN32
4721 {
4722 CHUNK_SIZE_T free, reserved, committed;
4723 vminfo (&free, &reserved, &committed);
4724 fprintf(stderr, "free bytes = %10lu\n",
4725 free);
4726 fprintf(stderr, "reserved bytes = %10lu\n",
4727 reserved);
4728 fprintf(stderr, "committed bytes = %10lu\n",
4729 committed);
4730 }
4731#endif
4732
4733
4734 fprintf(stderr, "max system bytes = %10lu\n",
4735 (CHUNK_SIZE_T)(mi.usmblks));
4736 fprintf(stderr, "system bytes = %10lu\n",
4737 (CHUNK_SIZE_T)(mi.arena + mi.hblkhd));
4738 fprintf(stderr, "in use bytes = %10lu\n",
4739 (CHUNK_SIZE_T)(mi.uordblks + mi.hblkhd));
4740
4741#ifdef WIN32
4742 {
4743 CHUNK_SIZE_T kernel, user;
4744 if (cpuinfo (TRUE, &kernel, &user)) {
4745 fprintf(stderr, "kernel ms = %10lu\n",
4746 kernel);
4747 fprintf(stderr, "user ms = %10lu\n",
4748 user);
4749 }
4750 }
4751#endif
4752}
4753
4754
4755/*
4756 ------------------------------ mallopt ------------------------------
4757*/
4758
4759#if __STD_C
4760int mALLOPt(int param_number, int value)
4761#else
4762int mALLOPt(param_number, value) int param_number; int value;
4763#endif
4764{
4765 mstate av = get_malloc_state();
4766 /* Ensure initialization/consolidation */
4767 malloc_consolidate(av);
4768
4769 switch(param_number) {
4770 case M_MXFAST:
4771 if (value >= 0 && value <= MAX_FAST_SIZE) {
4772 set_max_fast(av, value);
4773 return 1;
4774 }
4775 else
4776 return 0;
4777
4778 case M_TRIM_THRESHOLD:
4779 av->trim_threshold = value;
4780 return 1;
4781
4782 case M_TOP_PAD:
4783 av->top_pad = value;
4784 return 1;
4785
4786 case M_MMAP_THRESHOLD:
4787 av->mmap_threshold = value;
4788 return 1;
4789
4790 case M_MMAP_MAX:
4791#if !HAVE_MMAP
4792 if (value != 0)
4793 return 0;
4794#endif
4795 av->n_mmaps_max = value;
4796 return 1;
4797
4798 default:
4799 return 0;
4800 }
4801}
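
/*
  As a tuning illustration using only the parameters handled above,
  with purely illustrative values:

    #include <malloc.h>

    void tune_allocator (void)
    {
      mallopt (M_MXFAST, 64);                  // cache requests up to 64 bytes
      mallopt (M_TRIM_THRESHOLD, 256 * 1024);  // trim top beyond 256KB unused
      mallopt (M_TOP_PAD, 64 * 1024);          // pad each sbrk request
      mallopt (M_MMAP_THRESHOLD, 512 * 1024);  // mmap very large requests
      mallopt (M_MMAP_MAX, 65536);             // cap simultaneous mmapped chunks
    }
*/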
4802
4803/*
4804 -------------------- Alternative MORECORE functions --------------------
4805*/
4806
4807
4808/*
4809 General Requirements for MORECORE.
4810
4811 The MORECORE function must have the following properties:
4812
4813 If MORECORE_CONTIGUOUS is false:
4814
4815 * MORECORE must allocate in multiples of pagesize. It will
4816 only be called with arguments that are multiples of pagesize.
4817
4818 * MORECORE(0) must return an address that is at least
4819 MALLOC_ALIGNMENT aligned. (Page-aligning always suffices.)
4820
4821 else (i.e. If MORECORE_CONTIGUOUS is true):
4822
4823 * Consecutive calls to MORECORE with positive arguments
4824 return increasing addresses, indicating that space has been
4825 contiguously extended.
4826
4827 * MORECORE need not allocate in multiples of pagesize.
4828 Calls to MORECORE need not have args of multiples of pagesize.
4829
4830 * MORECORE need not page-align.
4831
4832 In either case:
4833
4834 * MORECORE may allocate more memory than requested. (Or even less,
4835 but this will generally result in a malloc failure.)
4836
4837 * MORECORE must not allocate memory when given argument zero, but
4838 instead return one past the end address of memory from previous
4839 nonzero call. This malloc does NOT call MORECORE(0)
4840 until at least one call with positive arguments is made, so
4841 the initial value returned is not important.
4842
4843 * Even though consecutive calls to MORECORE need not return contiguous
4844 addresses, it must be OK for malloc'ed chunks to span multiple
4845 regions in those cases where they do happen to be contiguous.
4846
4847 * MORECORE need not handle negative arguments -- it may instead
4848 just return MORECORE_FAILURE when given negative arguments.
4849 Negative arguments are always multiples of pagesize. MORECORE
4850 must not misinterpret negative args as large positive unsigned
4851 args. You can suppress all such calls from even occurring by defining
4852   MORECORE_CANNOT_TRIM.
4853
4854 There is some variation across systems about the type of the
4855 argument to sbrk/MORECORE. If size_t is unsigned, then it cannot
4856 actually be size_t, because sbrk supports negative args, so it is
4857 normally the signed type of the same width as size_t (sometimes
4858 declared as "intptr_t", and sometimes "ptrdiff_t"). It doesn't much
4859 matter though. Internally, we use "long" as arguments, which should
4860 work across all reasonable possibilities.
4861
4862 Additionally, if MORECORE ever returns failure for a positive
4863 request, and HAVE_MMAP is true, then mmap is used as a noncontiguous
4864 system allocator. This is a useful backup strategy for systems with
4865 holes in address spaces -- in this case sbrk cannot contiguously
4866 expand the heap, but mmap may be able to map noncontiguous space.
4867
4868 If you'd like mmap to ALWAYS be used, you can define MORECORE to be
4869 a function that always returns MORECORE_FAILURE.
4870
4871 Malloc only has limited ability to detect failures of MORECORE
4872 to supply contiguous space when it says it can. In particular,
4873   multithreaded programs that do not use locks may suffer race
4874   conditions across calls to MORECORE that leave gaps that cannot
4875   be detected as such, leading to subsequent corruption.
4876
4877 If you are using this malloc with something other than sbrk (or its
4878 emulation) to supply memory regions, you probably want to set
4879 MORECORE_CONTIGUOUS as false. As an example, here is a custom
4880 allocator kindly contributed for pre-OSX macOS. It uses virtually
4881 but not necessarily physically contiguous non-paged memory (locked
4882 in, present and won't get swapped out). You can use it by
4883 uncommenting this section, adding some #includes, and setting up the
4884 appropriate defines above:
4885
4886 #define MORECORE osMoreCore
4887 #define MORECORE_CONTIGUOUS 0
4888
4889 There is also a shutdown routine that should somehow be called for
4890 cleanup upon program exit.
4891
4892 #define MAX_POOL_ENTRIES 100
4893 #define MINIMUM_MORECORE_SIZE (64 * 1024)
4894 static int next_os_pool;
4895 void *our_os_pools[MAX_POOL_ENTRIES];
4896
4897 void *osMoreCore(int size)
4898 {
4899 void *ptr = 0;
4900 static void *sbrk_top = 0;
4901
4902 if (size > 0)
4903 {
4904 if (size < MINIMUM_MORECORE_SIZE)
4905 size = MINIMUM_MORECORE_SIZE;
4906 if (CurrentExecutionLevel() == kTaskLevel)
4907 ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
4908 if (ptr == 0)
4909 {
4910 return (void *) MORECORE_FAILURE;
4911 }
4912 // save ptrs so they can be freed during cleanup
4913 our_os_pools[next_os_pool] = ptr;
4914 next_os_pool++;
4915 ptr = (void *) ((((CHUNK_SIZE_T) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
4916 sbrk_top = (char *) ptr + size;
4917 return ptr;
4918 }
4919 else if (size < 0)
4920 {
4921 // we don't currently support shrink behavior
4922 return (void *) MORECORE_FAILURE;
4923 }
4924 else
4925 {
4926 return sbrk_top;
4927 }
4928 }
4929
4930 // cleanup any allocated memory pools
4931 // called as last thing before shutting down driver
4932
4933 void osCleanupMem(void)
4934 {
4935 void **ptr;
4936
4937 for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
4938 if (*ptr)
4939 {
4940 PoolDeallocate(*ptr);
4941 *ptr = 0;
4942 }
4943 }
4944
4945*/
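
/*
  As mentioned above, mmap can be made the only system allocator by
  defining MORECORE as a function that always fails; a minimal sketch
  (the function name is illustrative):

  static void *failingMoreCore (long size)
  {
    (void) size;                      // always refuse; malloc falls back to mmap
    return (void *) MORECORE_FAILURE;
  }

  #define MORECORE failingMoreCore
  #define MORECORE_CONTIGUOUS 0
*/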
4946
4947
4948/*
4949 --------------------------------------------------------------
4950
4951 Emulation of sbrk for win32.
4952 Donated by J. Walter <Walter@GeNeSys-e.de>.
4953 For additional information about this code, and malloc on Win32, see
4954 http://www.genesys-e.de/jwalter/
4955*/
4956
4957
4958#ifdef WIN32
4959
4960#ifdef _DEBUG
4961/* #define TRACE */
4962#endif
4963
4964/* Support for USE_MALLOC_LOCK */
4965#ifdef USE_MALLOC_LOCK
4966
4967/* Wait for spin lock */
4968static int slwait (int *sl) {
4969 while (InterlockedCompareExchange ((void **) sl, (void *) 1, (void *) 0) != 0)
4970 Sleep (0);
4971 return 0;
4972}
4973
4974/* Release spin lock */
4975static int slrelease (int *sl) {
4976 InterlockedExchange (sl, 0);
4977 return 0;
4978}
4979
4980#ifdef NEEDED
4981/* Spin lock for emulation code */
4982static int g_sl;
4983#endif
4984
4985#endif /* USE_MALLOC_LOCK */
4986
4987/* getpagesize for windows */
4988static long getpagesize (void) {
4989 static long g_pagesize = 0;
4990 if (! g_pagesize) {
4991 SYSTEM_INFO system_info;
4992 GetSystemInfo (&system_info);
4993 g_pagesize = system_info.dwPageSize;
4994 }
4995 return g_pagesize;
4996}
4997static long getregionsize (void) {
4998 static long g_regionsize = 0;
4999 if (! g_regionsize) {
5000 SYSTEM_INFO system_info;
5001 GetSystemInfo (&system_info);
5002 g_regionsize = system_info.dwAllocationGranularity;
5003 }
5004 return g_regionsize;
5005}
5006
5007/* A region list entry */
5008typedef struct _region_list_entry {
5009 void *top_allocated;
5010 void *top_committed;
5011 void *top_reserved;
5012 long reserve_size;
5013 struct _region_list_entry *previous;
5014} region_list_entry;
5015
5016/* Allocate and link a region entry in the region list */
5017static int region_list_append (region_list_entry **last, void *base_reserved, long reserve_size) {
5018 region_list_entry *next = HeapAlloc (GetProcessHeap (), 0, sizeof (region_list_entry));
5019 if (! next)
5020 return FALSE;
5021 next->top_allocated = (char *) base_reserved;
5022 next->top_committed = (char *) base_reserved;
5023 next->top_reserved = (char *) base_reserved + reserve_size;
5024 next->reserve_size = reserve_size;
5025 next->previous = *last;
5026 *last = next;
5027 return TRUE;
5028}
5029/* Free and unlink the last region entry from the region list */
5030static int region_list_remove (region_list_entry **last) {
5031 region_list_entry *previous = (*last)->previous;
5032    if (! HeapFree (GetProcessHeap (), 0, *last))   /* 0: no HeapFree flags */
5033 return FALSE;
5034 *last = previous;
5035 return TRUE;
5036}
5037
5038#define CEIL(size,to) (((size)+(to)-1)&~((to)-1))
5039#define FLOOR(size,to) ((size)&~((to)-1))
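/* For example, CEIL (5000, 4096) == 8192 and FLOOR (5000, 4096) == 4096;
   both assume "to" is a power of two, as the page and region sizes are. */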
5040
5041#define SBRK_SCALE 0
5042/* #define SBRK_SCALE 1 */
5043/* #define SBRK_SCALE 2 */
5044/* #define SBRK_SCALE 4 */
5045
5046/* sbrk for windows */
5047static void *sbrk (long size) {
5048 static long g_pagesize, g_my_pagesize;
5049 static long g_regionsize, g_my_regionsize;
5050 static region_list_entry *g_last;
5051 void *result = (void *) MORECORE_FAILURE;
5052#ifdef TRACE
5053 printf ("sbrk %d\n", size);
5054#endif
5055#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
5056 /* Wait for spin lock */
5057 slwait (&g_sl);
5058#endif
5059 /* First time initialization */
5060 if (! g_pagesize) {
5061 g_pagesize = getpagesize ();
5062 g_my_pagesize = g_pagesize << SBRK_SCALE;
5063 }
5064 if (! g_regionsize) {
5065 g_regionsize = getregionsize ();
5066 g_my_regionsize = g_regionsize << SBRK_SCALE;
5067 }
5068 if (! g_last) {
5069 if (! region_list_append (&g_last, 0, 0))
5070 goto sbrk_exit;
5071 }
5072 /* Assert invariants */
5073 assert (g_last);
5074 assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_allocated &&
5075 g_last->top_allocated <= g_last->top_committed);
5076 assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_committed &&
5077 g_last->top_committed <= g_last->top_reserved &&
5078 (unsigned) g_last->top_committed % g_pagesize == 0);
5079 assert ((unsigned) g_last->top_reserved % g_regionsize == 0);
5080 assert ((unsigned) g_last->reserve_size % g_regionsize == 0);
5081 /* Allocation requested? */
5082 if (size >= 0) {
5083 /* Allocation size is the requested size */
5084 long allocate_size = size;
5085 /* Compute the size to commit */
5086 long to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
5087 /* Do we reach the commit limit? */
5088 if (to_commit > 0) {
5089 /* Round size to commit */
5090 long commit_size = CEIL (to_commit, g_my_pagesize);
5091 /* Compute the size to reserve */
5092 long to_reserve = (char *) g_last->top_committed + commit_size - (char *) g_last->top_reserved;
5093 /* Do we reach the reserve limit? */
5094 if (to_reserve > 0) {
5095 /* Compute the remaining size to commit in the current region */
5096 long remaining_commit_size = (char *) g_last->top_reserved - (char *) g_last->top_committed;
5097 if (remaining_commit_size > 0) {
5098 /* Assert preconditions */
5099 assert ((unsigned) g_last->top_committed % g_pagesize == 0);
5100 assert (0 < remaining_commit_size && remaining_commit_size % g_pagesize == 0); {
5101 /* Commit this */
5102 void *base_committed = VirtualAlloc (g_last->top_committed, remaining_commit_size,
5103 MEM_COMMIT, PAGE_READWRITE);
5104 /* Check returned pointer for consistency */
5105 if (base_committed != g_last->top_committed)
5106 goto sbrk_exit;
5107 /* Assert postconditions */
5108 assert ((unsigned) base_committed % g_pagesize == 0);
5109#ifdef TRACE
5110 printf ("Commit %p %d\n", base_committed, remaining_commit_size);
5111#endif
5112 /* Adjust the region's commit top */
5113 g_last->top_committed = (char *) base_committed + remaining_commit_size;
5114 }
5115 } {
5116 /* Now we are going to search and reserve. */
5117 int contiguous = -1;
5118 int found = FALSE;
5119 MEMORY_BASIC_INFORMATION memory_info;
5120 void *base_reserved;
5121 long reserve_size;
5122 do {
5123 /* Assume contiguous memory */
5124 contiguous = TRUE;
5125 /* Round size to reserve */
5126 reserve_size = CEIL (to_reserve, g_my_regionsize);
5127 /* Start with the current region's top */
5128 memory_info.BaseAddress = g_last->top_reserved;
5129 /* Assert preconditions */
5130 assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
5131 assert (0 < reserve_size && reserve_size % g_regionsize == 0);
5132 while (VirtualQuery (memory_info.BaseAddress, &memory_info, sizeof (memory_info))) {
5133 /* Assert postconditions */
5134 assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
5135#ifdef TRACE
5136 printf ("Query %p %d %s\n", memory_info.BaseAddress, memory_info.RegionSize,
5137 memory_info.State == MEM_FREE ? "FREE":
5138 (memory_info.State == MEM_RESERVE ? "RESERVED":
5139 (memory_info.State == MEM_COMMIT ? "COMMITTED": "?")));
5140#endif
5141 /* Region is free, well aligned and big enough: we are done */
5142 if (memory_info.State == MEM_FREE &&
5143 (unsigned) memory_info.BaseAddress % g_regionsize == 0 &&
5144 memory_info.RegionSize >= (unsigned) reserve_size) {
5145 found = TRUE;
5146 break;
5147 }
5148 /* From now on we can't get contiguous memory! */
5149 contiguous = FALSE;
5150 /* Recompute size to reserve */
5151 reserve_size = CEIL (allocate_size, g_my_regionsize);
5152 memory_info.BaseAddress = (char *) memory_info.BaseAddress + memory_info.RegionSize;
5153 /* Assert preconditions */
5154 assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
5155 assert (0 < reserve_size && reserve_size % g_regionsize == 0);
5156 }
5157 /* Search failed? */
5158 if (! found)
5159 goto sbrk_exit;
5160 /* Assert preconditions */
5161 assert ((unsigned) memory_info.BaseAddress % g_regionsize == 0);
5162 assert (0 < reserve_size && reserve_size % g_regionsize == 0);
5163 /* Try to reserve this */
5164 base_reserved = VirtualAlloc (memory_info.BaseAddress, reserve_size,
5165 MEM_RESERVE, PAGE_NOACCESS);
5166 if (! base_reserved) {
5167 int rc = GetLastError ();
5168 if (rc != ERROR_INVALID_ADDRESS)
5169 goto sbrk_exit;
5170 }
5171 /* A null pointer signals (hopefully) a race condition with another thread. */
5172 /* In this case, we try again. */
5173 } while (! base_reserved);
5174 /* Check returned pointer for consistency */
5175 if (memory_info.BaseAddress && base_reserved != memory_info.BaseAddress)
5176 goto sbrk_exit;
5177 /* Assert postconditions */
5178 assert ((unsigned) base_reserved % g_regionsize == 0);
5179#ifdef TRACE
5180 printf ("Reserve %p %d\n", base_reserved, reserve_size);
5181#endif
5182 /* Did we get contiguous memory? */
5183 if (contiguous) {
5184 long start_size = (char *) g_last->top_committed - (char *) g_last->top_allocated;
5185 /* Adjust allocation size */
5186 allocate_size -= start_size;
5187 /* Adjust the region's allocation top */
5188 g_last->top_allocated = g_last->top_committed;
5189 /* Recompute the size to commit */
5190 to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
5191 /* Round size to commit */
5192 commit_size = CEIL (to_commit, g_my_pagesize);
5193 }
5194 /* Append the new region to the list */
5195 if (! region_list_append (&g_last, base_reserved, reserve_size))
5196 goto sbrk_exit;
5197 /* Didn't we get contiguous memory? */
5198 if (! contiguous) {
5199 /* Recompute the size to commit */
5200 to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
5201 /* Round size to commit */
5202 commit_size = CEIL (to_commit, g_my_pagesize);
5203 }
5204 }
5205 }
5206 /* Assert preconditions */
5207 assert ((unsigned) g_last->top_committed % g_pagesize == 0);
5208 assert (0 < commit_size && commit_size % g_pagesize == 0); {
5209 /* Commit this */
5210 void *base_committed = VirtualAlloc (g_last->top_committed, commit_size,
5211 MEM_COMMIT, PAGE_READWRITE);
5212 /* Check returned pointer for consistency */
5213 if (base_committed != g_last->top_committed)
5214 goto sbrk_exit;
5215 /* Assert postconditions */
5216 assert ((unsigned) base_committed % g_pagesize == 0);
5217#ifdef TRACE
5218 printf ("Commit %p %d\n", base_committed, commit_size);
5219#endif
5220 /* Adjust the region's commit top */
5221 g_last->top_committed = (char *) base_committed + commit_size;
5222 }
5223 }
5224 /* Adjust the region's allocation top */
5225 g_last->top_allocated = (char *) g_last->top_allocated + allocate_size;
5226 result = (char *) g_last->top_allocated - size;
5227 /* Deallocation requested? */
5228 } else if (size < 0) {
5229 long deallocate_size = - size;
5230 /* As long as we have a region to release */
5231 while ((char *) g_last->top_allocated - deallocate_size < (char *) g_last->top_reserved - g_last->reserve_size) {
5232 /* Get the size to release */
5233 long release_size = g_last->reserve_size;
5234 /* Get the base address */
5235 void *base_reserved = (char *) g_last->top_reserved - release_size;
5236 /* Assert preconditions */
5237 assert ((unsigned) base_reserved % g_regionsize == 0);
5238 assert (0 < release_size && release_size % g_regionsize == 0); {
5239 /* Release this */
5240 int rc = VirtualFree (base_reserved, 0,
5241 MEM_RELEASE);
5242 /* Check returned code for consistency */
5243 if (! rc)
5244 goto sbrk_exit;
5245#ifdef TRACE
5246 printf ("Release %p %d\n", base_reserved, release_size);
5247#endif
5248 }
5249 /* Adjust deallocation size */
5250 deallocate_size -= (char *) g_last->top_allocated - (char *) base_reserved;
5251 /* Remove the old region from the list */
5252 if (! region_list_remove (&g_last))
5253 goto sbrk_exit;
5254 } {
5255 /* Compute the size to decommit */
5256 long to_decommit = (char *) g_last->top_committed - ((char *) g_last->top_allocated - deallocate_size);
5257 if (to_decommit >= g_my_pagesize) {
5258 /* Compute the size to decommit */
5259 long decommit_size = FLOOR (to_decommit, g_my_pagesize);
5260 /* Compute the base address */
5261 void *base_committed = (char *) g_last->top_committed - decommit_size;
5262 /* Assert preconditions */
5263 assert ((unsigned) base_committed % g_pagesize == 0);
5264 assert (0 < decommit_size && decommit_size % g_pagesize == 0); {
5265 /* Decommit this */
5266 int rc = VirtualFree ((char *) base_committed, decommit_size,
5267 MEM_DECOMMIT);
5268 /* Check returned code for consistency */
5269 if (! rc)
5270 goto sbrk_exit;
5271#ifdef TRACE
5272 printf ("Decommit %p %d\n", base_committed, decommit_size);
5273#endif
5274 }
5275 /* Adjust deallocation size and regions commit and allocate top */
5276 deallocate_size -= (char *) g_last->top_allocated - (char *) base_committed;
5277 g_last->top_committed = base_committed;
5278 g_last->top_allocated = base_committed;
5279 }
5280 }
5281 /* Adjust regions allocate top */
5282 g_last->top_allocated = (char *) g_last->top_allocated - deallocate_size;
5283 /* Check for underflow */
5284 if ((char *) g_last->top_reserved - g_last->reserve_size > (char *) g_last->top_allocated ||
5285 g_last->top_allocated > g_last->top_committed) {
5286 /* Adjust regions allocate top */
5287 g_last->top_allocated = (char *) g_last->top_reserved - g_last->reserve_size;
5288 goto sbrk_exit;
5289 }
5290 result = g_last->top_allocated;
5291 }
5292 /* Assert invariants */
5293 assert (g_last);
5294 assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_allocated &&
5295 g_last->top_allocated <= g_last->top_committed);
5296 assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_committed &&
5297 g_last->top_committed <= g_last->top_reserved &&
5298 (unsigned) g_last->top_committed % g_pagesize == 0);
5299 assert ((unsigned) g_last->top_reserved % g_regionsize == 0);
5300 assert ((unsigned) g_last->reserve_size % g_regionsize == 0);
5301
5302sbrk_exit:
5303#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
5304 /* Release spin lock */
5305 slrelease (&g_sl);
5306#endif
5307 return result;
5308}
5309
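/* Illustration only (kept out of the build with #if 0): a minimal sketch of
   the reserve-then-commit pattern the sbrk emulation above is built on.  A
   region of address space is reserved once and pages are committed on demand,
   which is what the top_reserved / top_committed / top_allocated fields track
   per region.  The demo_* names are hypothetical and handle only growth; the
   real emulation above also decommits, releases whole regions, and chains
   additional regions when contiguous space runs out. */
#if 0
static char *demo_base;        /* start of the reserved region */
static char *demo_committed;   /* end of the committed prefix */
static char *demo_top;         /* current allocation top */

static void *demo_sbrk (long size, long region_size, long page_size) {
    if (size < 0)
        return (void *) MORECORE_FAILURE;   /* shrinking not shown here */
    if (! demo_base) {
        /* Reserve address space only; no physical backing yet */
        demo_base = (char *) VirtualAlloc (NULL, region_size,
                                           MEM_RESERVE, PAGE_NOACCESS);
        if (! demo_base)
            return (void *) MORECORE_FAILURE;
        demo_committed = demo_top = demo_base;
    }
    if (demo_top + size > demo_committed) {
        /* Commit just enough whole pages to cover the request */
        long need = CEIL (demo_top + size - demo_committed, page_size);
        if (demo_committed + need > demo_base + region_size ||
            ! VirtualAlloc (demo_committed, need, MEM_COMMIT, PAGE_READWRITE))
            return (void *) MORECORE_FAILURE;
        demo_committed += need;
    }
    demo_top += size;
    /* Like sbrk, return the old break */
    return demo_top - size;
}
#endif
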
/* mmap for windows */
static void *mmap (void *ptr, long size, long prot, long type, long handle, long arg) {
    static long g_pagesize;
    static long g_regionsize;
#ifdef TRACE
    printf ("mmap %ld\n", size);
#endif
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Wait for spin lock */
    slwait (&g_sl);
#endif
    /* First time initialization */
    if (! g_pagesize)
        g_pagesize = getpagesize ();
    if (! g_regionsize)
        g_regionsize = getregionsize ();
    /* Assert preconditions */
    assert ((unsigned) ptr % g_regionsize == 0);
    assert (size % g_pagesize == 0);
    /* Allocate this */
    ptr = VirtualAlloc (ptr, size,
                        MEM_RESERVE | MEM_COMMIT | MEM_TOP_DOWN, PAGE_READWRITE);
    if (! ptr) {
        ptr = (void *) MORECORE_FAILURE;
        goto mmap_exit;
    }
    /* Assert postconditions */
    assert ((unsigned) ptr % g_regionsize == 0);
#ifdef TRACE
    printf ("Commit %p %ld\n", ptr, size);
#endif
mmap_exit:
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Release spin lock */
    slrelease (&g_sl);
#endif
    return ptr;
}

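/* Illustration only: the emulation above ignores the prot/type/handle/arg
   parameters and always produces a committed, read-write, top-down block, so
   a caller effectively gets an anonymous private mapping.  A hypothetical
   direct round trip (page-rounded size, as the asserts require) might look
   like this; demo_mmap_roundtrip is not part of the allocator. */
#if 0
static void demo_mmap_roundtrip (long page_size) {
    long size = CEIL (100000, page_size);     /* whole pages only */
    void *p = mmap (0, size, 0, 0, -1, 0);    /* prot/type/fd are ignored */
    if (p != (void *) MORECORE_FAILURE) {
        ((char *) p)[0] = 1;                  /* memory is committed and writable */
        munmap (p, size);                     /* must release the whole block */
    }
}
#endif
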
/* munmap for windows */
static long munmap (void *ptr, long size) {
    static long g_pagesize;
    static long g_regionsize;
    int rc = MUNMAP_FAILURE;
#ifdef TRACE
    printf ("munmap %p %ld\n", ptr, size);
#endif
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Wait for spin lock */
    slwait (&g_sl);
#endif
    /* First time initialization */
    if (! g_pagesize)
        g_pagesize = getpagesize ();
    if (! g_regionsize)
        g_regionsize = getregionsize ();
    /* Assert preconditions */
    assert ((unsigned) ptr % g_regionsize == 0);
    assert (size % g_pagesize == 0);
    /* Free this */
    if (! VirtualFree (ptr, 0,
                       MEM_RELEASE))
        goto munmap_exit;
    rc = 0;
#ifdef TRACE
    printf ("Release %p %ld\n", ptr, size);
#endif
munmap_exit:
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Release spin lock */
    slrelease (&g_sl);
#endif
    return rc;
}

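/* Note: VirtualFree with MEM_RELEASE can only drop an entire allocation at
   its original base address (its size argument must be 0), so this emulation,
   unlike POSIX munmap, cannot unmap a partial range; callers must always
   unmap exactly the blocks they mapped. */
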
static void vminfo (CHUNK_SIZE_T *free, CHUNK_SIZE_T *reserved, CHUNK_SIZE_T *committed) {
    MEMORY_BASIC_INFORMATION memory_info;
    memory_info.BaseAddress = 0;
    *free = *reserved = *committed = 0;
    while (VirtualQuery (memory_info.BaseAddress, &memory_info, sizeof (memory_info))) {
        switch (memory_info.State) {
        case MEM_FREE:
            *free += memory_info.RegionSize;
            break;
        case MEM_RESERVE:
            *reserved += memory_info.RegionSize;
            break;
        case MEM_COMMIT:
            *committed += memory_info.RegionSize;
            break;
        }
        memory_info.BaseAddress = (char *) memory_info.BaseAddress + memory_info.RegionSize;
    }
}

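/* Illustration only: vminfo walks the whole address space of the current
   process with VirtualQuery and sums region sizes by state, giving a coarse
   free / reserved / committed breakdown in bytes.  A hypothetical report
   (demo_vm_report is not part of the allocator) might look like this. */
#if 0
static void demo_vm_report (void) {
    CHUNK_SIZE_T vm_free, vm_reserved, vm_committed;
    vminfo (&vm_free, &vm_reserved, &vm_committed);
    printf ("vm: free=%lu reserved=%lu committed=%lu\n",
            (unsigned long) vm_free, (unsigned long) vm_reserved,
            (unsigned long) vm_committed);
}
#endif
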
static int cpuinfo (int whole, CHUNK_SIZE_T *kernel, CHUNK_SIZE_T *user) {
    if (whole) {
        __int64 creation64, exit64, kernel64, user64;
        int rc = GetProcessTimes (GetCurrentProcess (),
                                  (FILETIME *) &creation64,
                                  (FILETIME *) &exit64,
                                  (FILETIME *) &kernel64,
                                  (FILETIME *) &user64);
        if (! rc) {
            *kernel = 0;
            *user = 0;
            return FALSE;
        }
        *kernel = (CHUNK_SIZE_T) (kernel64 / 10000);
        *user = (CHUNK_SIZE_T) (user64 / 10000);
        return TRUE;
    } else {
        __int64 creation64, exit64, kernel64, user64;
        int rc = GetThreadTimes (GetCurrentThread (),
                                 (FILETIME *) &creation64,
                                 (FILETIME *) &exit64,
                                 (FILETIME *) &kernel64,
                                 (FILETIME *) &user64);
        if (! rc) {
            *kernel = 0;
            *user = 0;
            return FALSE;
        }
        *kernel = (CHUNK_SIZE_T) (kernel64 / 10000);
        *user = (CHUNK_SIZE_T) (user64 / 10000);
        return TRUE;
    }
}

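/* Note: GetProcessTimes/GetThreadTimes fill in FILETIME values counted in
   100-nanosecond units; casting to __int64 relies on FILETIME and a 64-bit
   integer sharing the same little-endian layout, and dividing by 10000 above
   converts to milliseconds (e.g. one second = 10000000 ticks -> 1000). */
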
#endif /* WIN32 */

/* ------------------------------------------------------------
History:
    V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
      * Fix malloc_state bitmap array misdeclaration

    V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)
      * Allow tuning of FIRST_SORTED_BIN_SIZE
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
      * Better detection and support for non-contiguousness of MORECORE.
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
      * Bypass most of malloc if no frees. Thanks to Emery Berger.
      * Fix freeing of old top non-contiguous chunk in sysmalloc.
      * Raised default trim and map thresholds to 256K.
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
      * Branch-free bin calculation
      * Default trim and mmap thresholds now 256K.

    V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)
      * Introduce independent_comalloc and independent_calloc.
        Thanks to Michael Pachos for motivation and help.
      * Make optional .h file available
      * Allow > 2GB requests on 32bit systems.
      * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
        and Anonymous.
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
        helping test this.)
      * memalign: check alignment arg
      * realloc: don't try to shift chunks backwards, since this
        leads to more fragmentation in some programs and doesn't
        seem to help in any others.
      * Collect all cases in malloc requiring system memory into sYSMALLOc
      * Use mmap as backup to sbrk
      * Place all internal state in malloc_state
      * Introduce fastbins (although similar to 2.5.1)
      * Many minor tunings and cosmetic improvements
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
      * Include errno.h to support default failure action.

    V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
      * return null for negative arguments
      * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
      * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
        (e.g. WIN32 platforms)
      * Cleanup header file inclusion for WIN32 platforms
      * Cleanup code to avoid Microsoft Visual C++ compiler complaints
      * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
        memory allocation routines
      * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
      * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
        usage of 'assert' in non-WIN32 code
      * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
        avoid infinite loop
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)

    V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
        (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
        from kpv@research.att.com

    V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
        with gcc & native cc (hp, dec only) allowing
        Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
        structure of old version, but most details differ.)

*/