1/*
2 This is a version (aka dlmalloc) of malloc/free/realloc written by
3 Doug Lea and released to the public domain. Use, modify, and
4 redistribute this code without permission or acknowledgement in any
5 way you wish. Send questions, comments, complaints, performance
6 data, etc to dl@cs.oswego.edu
7
8* VERSION 2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
9
10 Note: There may be an updated version of this malloc obtainable at
11 ftp://gee.cs.oswego.edu/pub/misc/malloc.c
12 Check before installing!
13
14* Quickstart
15
16 This library is all in one file to simplify the most common usage:
17 ftp it, compile it (-O), and link it into another program. All
18 of the compile-time options default to reasonable values for use on
19 most unix platforms. Compile -DWIN32 for reasonable defaults on windows.
20 You might later want to step through various compile-time and dynamic
21 tuning options.
22
23 For convenience, an include file for code using this malloc is at:
24 ftp://gee.cs.oswego.edu/pub/misc/malloc-2.7.1.h
25 You don't really need this .h file unless you call functions not
26 defined in your system include files. The .h file contains only the
27 excerpts from this file needed for using this malloc on ANSI C/C++
28 systems, so long as you haven't changed compile-time options about
29 naming and tuning parameters. If you do, then you can create your
30 own malloc.h that does include all settings by cutting at the point
31 indicated below.
32
33* Why use this malloc?
34
35 This is not the fastest, most space-conserving, most portable, or
36 most tunable malloc ever written. However it is among the fastest
37 while also being among the most space-conserving, portable and tunable.
38 Consistent balance across these factors results in a good general-purpose
39 allocator for malloc-intensive programs.
40
41 The main properties of the algorithms are:
42 * For large (>= 512 bytes) requests, it is a pure best-fit allocator,
43 with ties normally decided via FIFO (i.e. least recently used).
44 * For small (<= 64 bytes by default) requests, it is a caching
45 allocator that maintains pools of quickly recycled chunks.
46 * In between, and for combinations of large and small requests, it does
47 the best it can trying to meet both goals at once.
48 * For very large requests (>= 128KB by default), it relies on system
49 memory mapping facilities, if supported.
50
51 For a longer but slightly out of date high-level description, see
52 http://gee.cs.oswego.edu/dl/html/malloc.html
53
54 By default, you may already be using a C library containing a malloc
55 that is based on some version of this malloc (for example in
56 linux). You might still want to use the one in this file in order to
57 customize settings or to avoid overheads associated with library
58 versions.
59
60* Contents, described in more detail in "description of public routines" below.
61
62 Standard (ANSI/SVID/...) functions:
63 malloc(size_t n);
64 calloc(size_t n_elements, size_t element_size);
65 free(Void_t* p);
66 realloc(Void_t* p, size_t n);
67 memalign(size_t alignment, size_t n);
68 valloc(size_t n);
69 mallinfo()
70 mallopt(int parameter_number, int parameter_value)
71
72 Additional functions:
73 independent_calloc(size_t n_elements, size_t size, Void_t* chunks[]);
74 independent_comalloc(size_t n_elements, size_t sizes[], Void_t* chunks[]);
75 pvalloc(size_t n);
76 cfree(Void_t* p);
77 malloc_trim(size_t pad);
78 malloc_usable_size(Void_t* p);
79 malloc_stats();
80
81* Vital statistics:
82
83 Supported pointer representation: 4 or 8 bytes
84 Supported size_t representation: 4 or 8 bytes
85 Note that size_t is allowed to be 4 bytes even if pointers are 8.
86 You can adjust this by defining INTERNAL_SIZE_T
87
88 Alignment: 2 * sizeof(size_t) (default)
89 (i.e., 8 byte alignment with 4byte size_t). This suffices for
90 nearly all current machines and C compilers. However, you can
91 define MALLOC_ALIGNMENT to be wider than this if necessary.
92
93 Minimum overhead per allocated chunk: 4 or 8 bytes
94 Each malloced chunk has a hidden word of overhead holding size
95 and status information.
96
97 Minimum allocated size: 4-byte ptrs: 16 bytes (including 4 overhead)
98 8-byte ptrs: 24/32 bytes (including 4/8 overhead)
99
100 When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
101 ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
102 needed; 4 (8) for a trailing size field and 8 (16) bytes for
103 free list pointers. Thus, the minimum allocatable size is
104 16/24/32 bytes.
105
106 Even a request for zero bytes (i.e., malloc(0)) returns a
107 pointer to something of the minimum allocatable size.
108
109 The maximum overhead wastage (i.e., number of extra bytes
110 allocated than were requested in malloc) is less than or equal
111 to the minimum size, except for requests >= mmap_threshold that
112 are serviced via mmap(), where the worst case wastage is 2 *
113 sizeof(size_t) bytes plus the remainder from a system page (the
114 minimal mmap unit); typically 4096 or 8192 bytes.
115
116 Maximum allocated size: 4-byte size_t: 2^32 minus about two pages
117 8-byte size_t: 2^64 minus about two pages
118
119 It is assumed that (possibly signed) size_t values suffice to
120 represent chunk sizes. `Possibly signed' is due to the fact
121 that `size_t' may be defined on a system as either a signed or
122 an unsigned type. The ISO C standard says that it must be
123 unsigned, but a few systems are known not to adhere to this.
124 Additionally, even when size_t is unsigned, sbrk (which is by
125 default used to obtain memory from system) accepts signed
126 arguments, and may not be able to handle size_t-wide arguments
127 with negative sign bit. Generally, values that would
128 appear as negative after accounting for overhead and alignment
129 are supported only via mmap(), which does not have this
130 limitation.
131
132 Requests for sizes outside the allowed range will perform an optional
133 failure action and then return null. (Requests may also
134 fail because a system is out of memory.)
135
136 Thread-safety: NOT thread-safe unless USE_MALLOC_LOCK defined
137
138 When USE_MALLOC_LOCK is defined, wrappers are created to
139 surround every public call with either a pthread mutex or
140 a win32 spinlock (depending on WIN32). This is not
141 especially fast, and can be a major bottleneck.
142 It is designed only to provide minimal protection
143 in concurrent environments, and to provide a basis for
144 extensions. If you are using malloc in a concurrent program,
145 you would be far better off obtaining ptmalloc, which is
146 derived from a version of this malloc, and is well-tuned for
147 concurrent programs. (See http://www.malloc.de) Note that
148 even when USE_MALLOC_LOCK is defined, you can guarantee
149 full thread-safety only if no threads acquire memory through
150 direct calls to MORECORE or other system-level allocators.
151
152 Compliance: I believe it is compliant with the 1997 Single Unix Specification
153 (See http://www.opennc.org). Also SVID/XPG, ANSI C, and probably
154 others as well.
155
156* Synopsis of compile-time options:
157
158 People have reported using previous versions of this malloc on all
159 versions of Unix, sometimes by tweaking some of the defines
160 below. It has been tested most extensively on Solaris and
161 Linux. It is also reported to work on WIN32 platforms.
162 People also report using it in stand-alone embedded systems.
163
164 The implementation is in straight, hand-tuned ANSI C. It is not
165 at all modular. (Sorry!) It uses a lot of macros. To be at all
166 usable, this code should be compiled using an optimizing compiler
167 (for example gcc -O3) that can simplify expressions and control
168 paths. (FAQ: some macros import variables as arguments rather than
169 declare locals because people reported that some debuggers
170 otherwise get confused.)
171
172 OPTION DEFAULT VALUE
173
174 Compilation Environment options:
175
176 __STD_C derived from C compiler defines
177 WIN32 NOT defined
178 HAVE_MEMCPY defined
179 USE_MEMCPY 1 if HAVE_MEMCPY is defined
180 HAVE_MMAP defined as 1
181 MMAP_CLEARS 1
182 HAVE_MREMAP 0 unless linux defined
183 malloc_getpagesize derived from system #includes, or 4096 if not
184 HAVE_USR_INCLUDE_MALLOC_H NOT defined
185 LACKS_UNISTD_H NOT defined unless WIN32
186 LACKS_SYS_PARAM_H NOT defined unless WIN32
187 LACKS_SYS_MMAN_H NOT defined unless WIN32
188 LACKS_FCNTL_H NOT defined
189
190 Changing default word sizes:
191
192 INTERNAL_SIZE_T size_t
193 MALLOC_ALIGNMENT 2 * sizeof(INTERNAL_SIZE_T)
194 PTR_UINT unsigned long
195 CHUNK_SIZE_T unsigned long
196
197 Configuration and functionality options:
198
199 USE_DL_PREFIX NOT defined
200 USE_PUBLIC_MALLOC_WRAPPERS NOT defined
201 USE_MALLOC_LOCK NOT defined
202 DEBUG NOT defined
203 REALLOC_ZERO_BYTES_FREES NOT defined
204 MALLOC_FAILURE_ACTION errno = ENOMEM, if __STD_C defined, else no-op
205 TRIM_FASTBINS 0
206 FIRST_SORTED_BIN_SIZE 512
207
208 Options for customizing MORECORE:
209
210 MORECORE sbrk
211 MORECORE_CONTIGUOUS 1
212 MORECORE_CANNOT_TRIM NOT defined
213 MMAP_AS_MORECORE_SIZE (1024 * 1024)
214
215 Tuning options that are also dynamically changeable via mallopt:
216
217 DEFAULT_MXFAST 64
218 DEFAULT_TRIM_THRESHOLD 256 * 1024
219 DEFAULT_TOP_PAD 0
220 DEFAULT_MMAP_THRESHOLD 256 * 1024
221 DEFAULT_MMAP_MAX 65536
222
223 There are several other #defined constants and macros that you
224 probably don't want to touch unless you are extending or adapting malloc.
225*/
226
227/*
228 WIN32 sets up defaults for MS environment and compilers.
229 Otherwise defaults are for unix.
230*/
231
232/* #define WIN32 */
233
234#ifdef WIN32
235
236#define WIN32_LEAN_AND_MEAN
237#include <windows.h>
238
239/* Win32 doesn't supply or need the following headers */
240#define LACKS_UNISTD_H
241#define LACKS_SYS_PARAM_H
242#define LACKS_SYS_MMAN_H
243
244/* Use the supplied emulation of sbrk */
245#define MORECORE sbrk
246#define MORECORE_CONTIGUOUS 1
247#define MORECORE_FAILURE ((void*)(-1))
248
249/* Use the supplied emulation of mmap and munmap */
250#define HAVE_MMAP 1
251#define MUNMAP_FAILURE (-1)
252#define MMAP_CLEARS 1
253
254/* These values don't really matter in windows mmap emulation */
255#define MAP_PRIVATE 1
256#define MAP_ANONYMOUS 2
257#define PROT_READ 1
258#define PROT_WRITE 2
259
260/* Emulation functions defined at the end of this file */
261
262/* If USE_MALLOC_LOCK, use supplied critical-section-based lock functions */
263#ifdef USE_MALLOC_LOCK
264static int slwait(int *sl);
265static int slrelease(int *sl);
266#endif
267
268static long getpagesize(void);
269static long getregionsize(void);
270static void *sbrk(long size);
271static void *mmap(void *ptr, long size, long prot, long type, long handle, long arg);
272static long munmap(void *ptr, long size);
273
274static void vminfo (unsigned long*free, unsigned long*reserved, unsigned long*committed);
275static int cpuinfo (int whole, unsigned long*kernel, unsigned long*user);
276
277#endif
278
279/*
280 __STD_C should be nonzero if using ANSI-standard C compiler, a C++
281 compiler, or a C compiler sufficiently close to ANSI to get away
282 with it.
283*/
284
285#ifndef __STD_C
286#if defined(__STDC__) || defined(_cplusplus)
287#define __STD_C 1
288#else
289#define __STD_C 0
290#endif
291#endif /*__STD_C*/
292
293
294/*
295 Void_t* is the pointer type that malloc should say it returns
296*/
297
298#ifndef Void_t
299#if (__STD_C || defined(WIN32))
300#define Void_t void
301#else
302#define Void_t char
303#endif
304#endif /*Void_t*/
305
306#if __STD_C
307#include <stddef.h> /* for size_t */
308#else
309#include <sys/types.h>
310#endif
311
312#include "cygmalloc.h"
313
314#ifdef __cplusplus
315extern "C" {
316#endif
317
318/* define LACKS_UNISTD_H if your system does not have a <unistd.h>. */
319
320/* #define LACKS_UNISTD_H */
321
322#ifndef LACKS_UNISTD_H
323#include <unistd.h>
324#endif
325
326/* define LACKS_SYS_PARAM_H if your system does not have a <sys/param.h>. */
327
328/* #define LACKS_SYS_PARAM_H */
329
330
331#include <stdio.h> /* needed for malloc_stats */
332#include <errno.h> /* needed for optional MALLOC_FAILURE_ACTION */
333
334
335/*
336 Debugging:
337
338 Because freed chunks may be overwritten with bookkeeping fields, this
339 malloc will often die when freed memory is overwritten by user
340 programs. This can be very effective (albeit in an annoying way)
341 in helping track down dangling pointers.
342
343 If you compile with -DDEBUG, a number of assertion checks are
344 enabled that will catch more memory errors. You probably won't be
345 able to make much sense of the actual assertion errors, but they
346 should help you locate incorrectly overwritten memory. The
347 checking is fairly extensive, and will slow down execution
348 noticeably. Calling malloc_stats or mallinfo with DEBUG set will
349 attempt to check every non-mmapped allocated and free chunk in the
350 course of computing the summaries. (By nature, mmapped regions
351 cannot be checked very much automatically.)
352
353 Setting DEBUG may also be helpful if you are trying to modify
354 this code. The assertions in the check routines spell out in more
355 detail the assumptions and invariants underlying the algorithms.
356
357 Setting DEBUG does NOT provide an automated mechanism for checking
358 that all accesses to malloced memory stay within their
359 bounds. However, there are several add-ons and adaptations of this
360 or other mallocs available that do this.
361*/
362
363#if DEBUG
364#include <assert.h>
365#else
366#define assert(x) ((void)0)
367#endif
368
369/*
370 The unsigned integer type used for comparing any two chunk sizes.
371 This should be at least as wide as size_t, but should not be signed.
372*/
373
374#ifndef CHUNK_SIZE_T
375#define CHUNK_SIZE_T unsigned long
376#endif
377
378/*
379 The unsigned integer type used to hold addresses when they are
380 manipulated as integers. Except that it is not defined on all
381 systems, intptr_t would suffice.
382*/
383#ifndef PTR_UINT
384#define PTR_UINT unsigned long
385#endif
386
387
388/*
389 INTERNAL_SIZE_T is the word-size used for internal bookkeeping
390 of chunk sizes.
391
392 The default version is the same as size_t.
393
394 While not strictly necessary, it is best to define this as an
395 unsigned type, even if size_t is a signed type. This may avoid some
396 artificial size limitations on some systems.
397
398 On a 64-bit machine, you may be able to reduce malloc overhead by
399 defining INTERNAL_SIZE_T to be a 32 bit `unsigned int' at the
400 expense of not being able to handle more than 2^32 of malloced
401 space. If this limitation is acceptable, you are encouraged to set
402 this unless you are on a platform requiring 16byte alignments. In
403 this case the alignment requirements turn out to negate any
404 potential advantages of decreasing size_t word size.
405
406 Implementors: Beware of the possible combinations of:
407 - INTERNAL_SIZE_T might be signed or unsigned, might be 32 or 64 bits,
408 and might be the same width as int or as long
409 - size_t might have different width and signedness as INTERNAL_SIZE_T
410 - int and long might be 32 or 64 bits, and might be the same width
411 To deal with this, most comparisons and difference computations
412 among INTERNAL_SIZE_Ts should cast them to CHUNK_SIZE_T, being
413 aware of the fact that casting an unsigned int to a wider long does
414 not sign-extend. (This also makes checking for negative numbers
415 awkward.) Some of these casts result in harmless compiler warnings
416 on some systems.
417*/
418
419#ifndef INTERNAL_SIZE_T
420#define INTERNAL_SIZE_T size_t
421#endif
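
/*
  Illustrative sketch (not code from this file) of the casting convention
  recommended above: comparisons and differences between INTERNAL_SIZE_T
  values are done in the unsigned CHUNK_SIZE_T domain.

    static int size_not_less(INTERNAL_SIZE_T a, INTERNAL_SIZE_T b) {
      // compare as CHUNK_SIZE_T so mixed widths/signedness behave predictably
      return (CHUNK_SIZE_T)(a) >= (CHUNK_SIZE_T)(b);
    }
*/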
422
423/* The corresponding word size */
424#define SIZE_SZ (sizeof(INTERNAL_SIZE_T))
425
426
427
428/*
429 MALLOC_ALIGNMENT is the minimum alignment for malloc'ed chunks.
430 It must be a power of two at least 2 * SIZE_SZ, even on machines
431 for which smaller alignments would suffice. It may be defined as
432 larger than this though. Note however that code and data structures
433 are optimized for the case of 8-byte alignment.
434*/
435
436
437#ifndef MALLOC_ALIGNMENT
438#define MALLOC_ALIGNMENT (2 * SIZE_SZ)
439#endif
440
441/* The corresponding bit mask value */
442#define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
443
444
445
446/*
447 REALLOC_ZERO_BYTES_FREES should be set if a call to
448 realloc with zero bytes should be the same as a call to free.
449 Some people think it should. Otherwise, since this malloc
450 returns a unique pointer for malloc(0), so does realloc(p, 0).
451*/
452
453/* #define REALLOC_ZERO_BYTES_FREES */
454
455/*
456 TRIM_FASTBINS controls whether free() of a very small chunk can
457 immediately lead to trimming. Setting to true (1) can reduce memory
458 footprint, but will almost always slow down programs that use a lot
459 of small chunks.
460
461 Define this only if you are willing to give up some speed to more
462 aggressively reduce system-level memory footprint when releasing
463 memory in programs that use many small chunks. You can get
464 essentially the same effect by setting MXFAST to 0, but this can
465 lead to even greater slowdowns in programs using many small chunks.
466 TRIM_FASTBINS is an in-between compile-time option, that disables
467 only those chunks bordering topmost memory from being placed in
468 fastbins.
469*/
470
471#ifndef TRIM_FASTBINS
472#define TRIM_FASTBINS 0
473#endif
474
475
476/*
477 USE_DL_PREFIX will prefix all public routines with the string 'dl'.
478 This is necessary when you only want to use this malloc in one part
479 of a program, using your regular system malloc elsewhere.
480*/
481
482/* #define USE_DL_PREFIX */
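
/*
  Minimal usage sketch, assuming this file is compiled with -DUSE_DL_PREFIX:
  the 'dl'-prefixed entry points then coexist with the regular system malloc.

    void* p = dlmalloc(100);   // served by this allocator
    void* q = malloc(100);     // served by the system allocator
    dlfree(p);
    free(q);
*/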
483
484
485/*
486 USE_MALLOC_LOCK causes wrapper functions to surround each
487 callable routine with pthread mutex lock/unlock.
488
489 USE_MALLOC_LOCK forces USE_PUBLIC_MALLOC_WRAPPERS to be defined
490*/
491
492
493/* #define USE_MALLOC_LOCK */
494
495
496/*
497 If USE_PUBLIC_MALLOC_WRAPPERS is defined, every public routine is
498 actually a wrapper function that first calls MALLOC_PREACTION, then
499 calls the internal routine, and follows it with
500 MALLOC_POSTACTION. This is needed for locking, but you can also use
501 this, without USE_MALLOC_LOCK, for purposes of interception,
502 instrumentation, etc. It is a sad fact that using wrappers often
503 noticeably degrades performance of malloc-intensive programs.
504*/
505
506#ifdef USE_MALLOC_LOCK
507#define USE_PUBLIC_MALLOC_WRAPPERS
508#else
509/* #define USE_PUBLIC_MALLOC_WRAPPERS */
510#endif
511
512
513/*
514 Two-phase name translation.
515 All of the actual routines are given mangled names.
516 When wrappers are used, they become the public callable versions.
517 When DL_PREFIX is used, the callable names are prefixed.
518*/
519
520#ifndef USE_PUBLIC_MALLOC_WRAPPERS
521#define cALLOc public_cALLOc
522#define fREe public_fREe
523#define cFREe public_cFREe
524#define mALLOc public_mALLOc
525#define mEMALIGn public_mEMALIGn
526#define rEALLOc public_rEALLOc
527#define vALLOc public_vALLOc
528#define pVALLOc public_pVALLOc
529#define mALLINFo public_mALLINFo
530#define mALLOPt public_mALLOPt
531#define mTRIm public_mTRIm
532#define mSTATs public_mSTATs
533#define mUSABLe public_mUSABLe
534#define iCALLOc public_iCALLOc
535#define iCOMALLOc public_iCOMALLOc
536#endif
537
538#ifdef USE_DL_PREFIX
539#define public_cALLOc dlcalloc
540#define public_fREe dlfree
541#define public_cFREe dlcfree
542#define public_mALLOc dlmalloc
543#define public_mEMALIGn dlmemalign
544#define public_rEALLOc dlrealloc
545#define public_vALLOc dlvalloc
546#define public_pVALLOc dlpvalloc
547#define public_mALLINFo dlmallinfo
548#define public_mALLOPt dlmallopt
549#define public_mTRIm dlmalloc_trim
550#define public_mSTATs dlmalloc_stats
551#define public_mUSABLe dlmalloc_usable_size
552#define public_iCALLOc dlindependent_calloc
553#define public_iCOMALLOc dlindependent_comalloc
554#else /* USE_DL_PREFIX */
555#define public_cALLOc calloc
556#define public_fREe free
557#define public_cFREe cfree
558#define public_mALLOc malloc
559#define public_mEMALIGn memalign
560#define public_rEALLOc realloc
561#define public_vALLOc valloc
562#define public_pVALLOc pvalloc
563#define public_mALLINFo mallinfo
564#define public_mALLOPt mallopt
565#define public_mTRIm malloc_trim
566#define public_mSTATs malloc_stats
567#define public_mUSABLe malloc_usable_size
568#define public_iCALLOc independent_calloc
569#define public_iCOMALLOc independent_comalloc
570#endif /* USE_DL_PREFIX */
571
572
573/*
574 HAVE_MEMCPY should be defined if you are not otherwise using
575 ANSI STD C, but still have memcpy and memset in your C library
576 and want to use them in calloc and realloc. Otherwise simple
577 macro versions are defined below.
578
579 USE_MEMCPY should be defined as 1 if you actually want to
580 have memset and memcpy called. People report that the macro
581 versions are faster than libc versions on some systems.
582
583 Even if USE_MEMCPY is set to 1, loops to copy/clear small chunks
584 (of <= 36 bytes) are manually unrolled in realloc and calloc.
585*/
586
587#define HAVE_MEMCPY
588
589#ifndef USE_MEMCPY
590#ifdef HAVE_MEMCPY
591#define USE_MEMCPY 1
592#else
593#define USE_MEMCPY 0
594#endif
595#endif
596
597
598#if (__STD_C || defined(HAVE_MEMCPY))
599
600#ifdef WIN32
601/* On Win32 memset and memcpy are already declared in windows.h */
602#else
603#if __STD_C
604void* memset(void*, int, size_t);
605void* memcpy(void*, const void*, size_t);
606#else
607Void_t* memset();
608Void_t* memcpy();
609#endif
610#endif
611#endif
612
613/*
614 MALLOC_FAILURE_ACTION is the action to take before "return 0" when
615 malloc fails to be able to return memory, either because memory is
616 exhausted or because of illegal arguments.
617
618 By default, sets errno if running on STD_C platform, else does nothing.
619*/
620
621#ifndef MALLOC_FAILURE_ACTION
622#if __STD_C
623#define MALLOC_FAILURE_ACTION \
624 errno = ENOMEM;
625
626#else
627#define MALLOC_FAILURE_ACTION
628#endif
629#endif
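
/*
  Illustrative sketch (an assumption, not something this file provides) of
  overriding MALLOC_FAILURE_ACTION before the default above takes effect,
  e.g. to log failures in addition to setting errno:

    #define MALLOC_FAILURE_ACTION \
      do { errno = ENOMEM; fputs("malloc: out of memory\n", stderr); } while (0)
*/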
630
631/*
632 MORECORE-related declarations. By default, rely on sbrk
633*/
634
635
636#ifdef LACKS_UNISTD_H
637#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
638#if __STD_C
639extern Void_t* sbrk(ptrdiff_t);
640#else
641extern Void_t* sbrk();
642#endif
643#endif
644#endif
645
646/*
647 MORECORE is the name of the routine to call to obtain more memory
648 from the system. See below for general guidance on writing
649 alternative MORECORE functions, as well as a version for WIN32 and a
650 sample version for pre-OSX macos.
651*/
652
653#ifndef MORECORE
654#define MORECORE sbrk
655#endif
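
/*
  Illustrative sketch (not part of this file) of a minimal alternative
  MORECORE backed by a fixed static arena; my_morecore and MY_ARENA_SIZE are
  assumed names. It follows the conventions used here: report the current
  break for non-positive increments it cannot honor, and return
  MORECORE_FAILURE when the arena is exhausted.

    #define MY_ARENA_SIZE (1024 * 1024)
    static char my_arena[MY_ARENA_SIZE];
    static size_t my_arena_top = 0;

    static void* my_morecore(ptrdiff_t increment) {
      if (increment <= 0)
        return (void*) (my_arena + my_arena_top);  // cannot trim; report break
      if (my_arena_top + (size_t) increment > MY_ARENA_SIZE)
        return (void*) MORECORE_FAILURE;           // arena exhausted
      void* result = my_arena + my_arena_top;
      my_arena_top += (size_t) increment;
      return result;
    }

    // and, before compiling this file:
    //   #define MORECORE my_morecore
    //   #define MORECORE_CONTIGUOUS 1
    //   #define MORECORE_CANNOT_TRIM
*/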
656
657/*
658 MORECORE_FAILURE is the value returned upon failure of MORECORE
659 as well as mmap. Since it cannot be an otherwise valid memory address,
660 and must reflect values of standard sys calls, you probably ought not
661 try to redefine it.
662*/
663
664#ifndef MORECORE_FAILURE
665#define MORECORE_FAILURE (-1)
666#endif
667
668/*
669 If MORECORE_CONTIGUOUS is true, take advantage of fact that
670 consecutive calls to MORECORE with positive arguments always return
671 contiguous increasing addresses. This is true of unix sbrk. Even
672 if not defined, when regions happen to be contiguous, malloc will
673 permit allocations spanning regions obtained from different
674 calls. But defining this when applicable enables some stronger
675 consistency checks and space efficiencies.
676*/
677
678#ifndef MORECORE_CONTIGUOUS
679#define MORECORE_CONTIGUOUS 1
680#endif
681
682/*
683 Define MORECORE_CANNOT_TRIM if your version of MORECORE
684 cannot release space back to the system when given negative
685 arguments. This is generally necessary only if you are using
686 a hand-crafted MORECORE function that cannot handle negative arguments.
687*/
688
689/* #define MORECORE_CANNOT_TRIM */
690
691
692/*
693 Define HAVE_MMAP as true to optionally make malloc() use mmap() to
694 allocate very large blocks. These will be returned to the
695 operating system immediately after a free(). Also, if mmap
696 is available, it is used as a backup strategy in cases where
697 MORECORE fails to provide space from system.
698
699 This malloc is best tuned to work with mmap for large requests.
700 If you do not have mmap, operations involving very large chunks (1MB
701 or so) may be slower than you'd like.
702*/
703
704#ifndef HAVE_MMAP
705#define HAVE_MMAP 1
706#endif
707
708#if HAVE_MMAP
709/*
710 Standard unix mmap using /dev/zero clears memory so calloc doesn't
711 need to.
712*/
713
714#ifndef MMAP_CLEARS
715#define MMAP_CLEARS 1
716#endif
717
718#else /* no mmap */
719#ifndef MMAP_CLEARS
720#define MMAP_CLEARS 0
721#endif
722#endif
723
724
725/*
726 MMAP_AS_MORECORE_SIZE is the minimum mmap size argument to use if
727 sbrk fails, and mmap is used as a backup (which is done only if
728 HAVE_MMAP). The value must be a multiple of page size. This
729 backup strategy generally applies only when systems have "holes" in
730 address space, so sbrk cannot perform contiguous expansion, but
731 there is still space available on system. On systems for which
732 this is known to be useful (i.e. most linux kernels), this occurs
733 only when programs allocate huge amounts of memory. Between this,
734 and the fact that mmap regions tend to be limited, the size should
735 be large, to avoid too many mmap calls and thus avoid running out
736 of kernel resources.
737*/
738
739#ifndef MMAP_AS_MORECORE_SIZE
740#define MMAP_AS_MORECORE_SIZE (1024 * 1024)
741#endif
742
743/*
744 Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
745 large blocks. This is currently only possible on Linux with
746 kernel versions newer than 1.3.77.
747*/
748
749#ifndef HAVE_MREMAP
750#ifdef linux
751#define HAVE_MREMAP 1
752#else
753#define HAVE_MREMAP 0
754#endif
755
756#endif /* HAVE_MREMAP */
757
758
759/*
760 The system page size. To the extent possible, this malloc manages
761 memory from the system in page-size units. Note that this value is
762 cached during initialization into a field of malloc_state. So even
763 if malloc_getpagesize is a function, it is only called once.
764
765 The following mechanics for getpagesize were adapted from bsd/gnu
766 getpagesize.h. If none of the system-probes here apply, a value of
767 4096 is used, which should be OK: If they don't apply, then using
768 the actual value probably doesn't impact performance.
769*/
770
771
772#ifndef malloc_getpagesize
773
774#ifndef LACKS_UNISTD_H
775# include <unistd.h>
776#endif
777
778# ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
779# ifndef _SC_PAGE_SIZE
780# define _SC_PAGE_SIZE _SC_PAGESIZE
781# endif
782# endif
783
784# ifdef _SC_PAGE_SIZE
785# define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
786# else
787# if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
788 extern size_t getpagesize();
789# define malloc_getpagesize getpagesize()
790# else
791# ifdef WIN32 /* use supplied emulation of getpagesize */
792# define malloc_getpagesize getpagesize()
793# else
794# ifndef LACKS_SYS_PARAM_H
795# include <sys/param.h>
796# endif
797# ifdef EXEC_PAGESIZE
798# define malloc_getpagesize EXEC_PAGESIZE
799# else
800# ifdef NBPG
801# ifndef CLSIZE
802# define malloc_getpagesize NBPG
803# else
804# define malloc_getpagesize (NBPG * CLSIZE)
805# endif
806# else
807# ifdef NBPC
808# define malloc_getpagesize NBPC
809# else
810# ifdef PAGESIZE
811# define malloc_getpagesize PAGESIZE
812# else /* just guess */
813# define malloc_getpagesize (4096)
814# endif
815# endif
816# endif
817# endif
818# endif
819# endif
820# endif
821#endif
822
823/*
824 This version of malloc supports the standard SVID/XPG mallinfo
825 routine that returns a struct containing usage properties and
826 statistics. It should work on any SVID/XPG compliant system that has
827 a /usr/include/malloc.h defining struct mallinfo. (If you'd like to
828 install such a thing yourself, cut out the preliminary declarations
829 as described above and below and save them in a malloc.h file. But
830 there's no compelling reason to bother to do this.)
831
832 The main declaration needed is the mallinfo struct that is returned
833 (by-copy) by mallinfo(). The SVID/XPG mallinfo struct contains a
834 bunch of fields that are not even meaningful in this version of
835 malloc. These fields are instead filled by mallinfo() with
836 other numbers that might be of interest.
837
838 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
839 /usr/include/malloc.h file that includes a declaration of struct
840 mallinfo. If so, it is included; else an SVID2/XPG2 compliant
841 version is declared below. These must be precisely the same for
842 mallinfo() to work. The original SVID version of this struct,
843 defined on most systems with mallinfo, declares all fields as
844 ints. But some others define as unsigned long. If your system
845 defines the fields using a type of different width than listed here,
846 you must #include your system version and #define
847 HAVE_USR_INCLUDE_MALLOC_H.
848*/
849
850/* #define HAVE_USR_INCLUDE_MALLOC_H */
851
852#ifdef HAVE_USR_INCLUDE_MALLOC_H
853#include "/usr/include/malloc.h"
854#else
855
856/* SVID2/XPG mallinfo structure */
857
858struct mallinfo {
859 int arena; /* non-mmapped space allocated from system */
860 int ordblks; /* number of free chunks */
861 int smblks; /* number of fastbin blocks */
862 int hblks; /* number of mmapped regions */
863 int hblkhd; /* space in mmapped regions */
864 int usmblks; /* maximum total allocated space */
865 int fsmblks; /* space available in freed fastbin blocks */
866 int uordblks; /* total allocated space */
867 int fordblks; /* total free space */
868 int keepcost; /* top-most, releasable (via malloc_trim) space */
869};
870
871/*
872 SVID/XPG defines four standard parameter numbers for mallopt,
873 normally defined in malloc.h. Only one of these (M_MXFAST) is used
874 in this malloc. The others (M_NLBLKS, M_GRAIN, M_KEEP) don't apply,
875 so setting them has no effect. But this malloc also supports other
876 options in mallopt described below.
877*/
878#endif
879
880
881/* ---------- description of public routines ------------ */
882
883/*
884 malloc(size_t n)
885 Returns a pointer to a newly allocated chunk of at least n bytes, or null
886 if no space is available. Additionally, on failure, errno is
887 set to ENOMEM on ANSI C systems.
888
889 If n is zero, malloc returns a minimum-sized chunk. (The minimum
890 size is 16 bytes on most 32bit systems, and 24 or 32 bytes on 64bit
891 systems.) On most systems, size_t is an unsigned type, so calls
892 with negative arguments are interpreted as requests for huge amounts
893 of space, which will often fail. The maximum supported value of n
894 differs across systems, but is in all cases less than the maximum
895 representable value of a size_t.
896*/
897#if __STD_C
898Void_t* public_mALLOc(size_t);
899#else
900Void_t* public_mALLOc();
901#endif
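
/*
  Minimal usage sketch of the contract above: a null return means failure
  (with errno set to ENOMEM on ANSI C systems), while malloc(0) returns a
  valid minimum-sized chunk that must still be freed.

    char* buf = (char*) malloc(1000);
    if (buf == 0)
      perror("malloc");              // errno == ENOMEM
    char* tiny = (char*) malloc(0);  // minimum-sized chunk, not a null pointer
    free(tiny);
    free(buf);                       // free(0) would also be harmless
*/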
902
903/*
904 free(Void_t* p)
905 Releases the chunk of memory pointed to by p, that had been previously
906 allocated using malloc or a related routine such as realloc.
907 It has no effect if p is null. It can have arbitrary (i.e., bad!)
908 effects if p has already been freed.
909
910 Unless disabled (using mallopt), freeing very large spaces will,
911 when possible, automatically trigger operations that give
912 back unused memory to the system, thus reducing program footprint.
913*/
914#if __STD_C
915void public_fREe(Void_t*);
916#else
917void public_fREe();
918#endif
919
920/*
921 calloc(size_t n_elements, size_t element_size);
922 Returns a pointer to n_elements * element_size bytes, with all locations
923 set to zero.
924*/
925#if __STD_C
926Void_t* public_cALLOc(size_t, size_t);
927#else
928Void_t* public_cALLOc();
929#endif
930
931/*
932 realloc(Void_t* p, size_t n)
933 Returns a pointer to a chunk of size n that contains the same data
934 as does chunk p up to the minimum of (n, p's size) bytes, or null
935 if no space is available.
936
937 The returned pointer may or may not be the same as p. The algorithm
938 prefers extending p when possible, otherwise it employs the
939 equivalent of a malloc-copy-free sequence.
940
941 If p is null, realloc is equivalent to malloc.
942
943 If space is not available, realloc returns null, errno is set (if on
944 ANSI) and p is NOT freed.
945
946 if n is for fewer bytes than already held by p, the newly unused
947 space is lopped off and freed if possible. Unless the #define
948 REALLOC_ZERO_BYTES_FREES is set, realloc with a size argument of
949 zero (re)allocates a minimum-sized chunk.
950
951 Large chunks that were internally obtained via mmap will always
952 be reallocated using malloc-copy-free sequences unless
953 the system supports MREMAP (currently only linux).
954
955 The old unix realloc convention of allowing the last-free'd chunk
956 to be used as an argument to realloc is not supported.
957*/
958#if __STD_C
959Void_t* public_rEALLOc(Void_t*, size_t);
960#else
961Void_t* public_rEALLOc();
962#endif
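
/*
  A common usage sketch following the rules above: since a failed realloc
  leaves p allocated and untouched, assign the result to a temporary so the
  original block is neither lost nor leaked.

    char* p = (char*) malloc(100);
    char* np = (char*) realloc(p, 200);
    if (np == 0) {
      // p is still valid; handle the failure, then release p if appropriate
      free(p);
    } else {
      p = np;   // the chunk may or may not have moved
    }
*/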
963
964/*
965 memalign(size_t alignment, size_t n);
966 Returns a pointer to a newly allocated chunk of n bytes, aligned
967 in accord with the alignment argument.
968
969 The alignment argument should be a power of two. If the argument is
970 not a power of two, the nearest greater power is used.
971 8-byte alignment is guaranteed by normal malloc calls, so don't
972 bother calling memalign with an argument of 8 or less.
973
974 Overreliance on memalign is a sure way to fragment space.
975*/
976#if __STD_C
977Void_t* public_mEMALIGn(size_t, size_t);
978#else
979Void_t* public_mEMALIGn();
980#endif
981
982/*
983 valloc(size_t n);
984 Equivalent to memalign(pagesize, n), where pagesize is the page
985 size of the system. If the pagesize is unknown, 4096 is used.
986*/
987#if __STD_C
988Void_t* public_vALLOc(size_t);
989#else
990Void_t* public_vALLOc();
991#endif
992
993
994
995/*
996 mallopt(int parameter_number, int parameter_value)
997 Sets tunable parameters. The format is to provide a
998 (parameter-number, parameter-value) pair. mallopt then sets the
999 corresponding parameter to the argument value if it can (i.e., so
1000 long as the value is meaningful), and returns 1 if successful else
1001 0. SVID/XPG/ANSI defines four standard param numbers for mallopt,
1002 normally defined in malloc.h. Only one of these (M_MXFAST) is used
1003 in this malloc. The others (M_NLBLKS, M_GRAIN, M_KEEP) don't apply,
1004 so setting them has no effect. But this malloc also supports four
1005 other options in mallopt. See below for details. Briefly, supported
1006 parameters are as follows (listed defaults are for "typical"
1007 configurations).
1008
1009 Symbol param # default allowed param values
1010 M_MXFAST 1 64 0-80 (0 disables fastbins)
1011 M_TRIM_THRESHOLD -1 256*1024 any (-1U disables trimming)
1012 M_TOP_PAD -2 0 any
1013 M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support)
1014 M_MMAP_MAX -4 65536 any (0 disables use of mmap)
1015*/
1016#if __STD_C
1017int public_mALLOPt(int, int);
1018#else
1019int public_mALLOPt();
1020#endif
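
/*
  Minimal usage sketch of the interface described above; mallopt returns 1
  if the parameter was accepted and 0 otherwise.

    if (mallopt(M_MXFAST, 0) == 0)   // disable fastbins entirely
      fputs("mallopt(M_MXFAST) failed\n", stderr);
    mallopt(M_MMAP_MAX, 0);          // never service requests via mmap
*/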
1021
1022
1023/*
1024 mallinfo()
1025 Returns (by copy) a struct containing various summary statistics:
1026
1027 arena: current total non-mmapped bytes allocated from system
1028 ordblks: the number of free chunks
1029 smblks: the number of fastbin blocks (i.e., small chunks that
1030 have been freed but not reused or consolidated)
1031 hblks: current number of mmapped regions
1032 hblkhd: total bytes held in mmapped regions
1033 usmblks: the maximum total allocated space. This will be greater
1034 than current total if trimming has occurred.
1035 fsmblks: total bytes held in fastbin blocks
1036 uordblks: current total allocated space (normal or mmapped)
1037 fordblks: total free space
1038 keepcost: the maximum number of bytes that could ideally be released
1039 back to system via malloc_trim. ("ideally" means that
1040 it ignores page restrictions etc.)
1041
1042 Because these fields are ints, but internal bookkeeping may
1043 be kept as longs, the reported values may wrap around zero and
1044 thus be inaccurate.
1045*/
1046#if __STD_C
1047struct mallinfo public_mALLINFo(void);
1048#else
1049struct mallinfo public_mALLINFo();
1050#endif
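
/*
  Minimal usage sketch: the struct is returned by value and can be read
  directly.

    struct mallinfo mi = mallinfo();
    printf("non-mmapped arena bytes:  %d\n", mi.arena);
    printf("allocated (uordblks):     %d\n", mi.uordblks);
    printf("free (fordblks):          %d\n", mi.fordblks);
    printf("trimmable top (keepcost): %d\n", mi.keepcost);
*/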
1051
1052/*
1053 independent_calloc(size_t n_elements, size_t element_size, Void_t* chunks[]);
1054
1055 independent_calloc is similar to calloc, but instead of returning a
1056 single cleared space, it returns an array of pointers to n_elements
1057 independent elements that can hold contents of size elem_size, each
1058 of which starts out cleared, and can be independently freed,
1059 realloc'ed etc. The elements are guaranteed to be adjacently
1060 allocated (this is not guaranteed to occur with multiple callocs or
1061 mallocs), which may also improve cache locality in some
1062 applications.
1063
1064 The "chunks" argument is optional (i.e., may be null, which is
1065 probably the most typical usage). If it is null, the returned array
1066 is itself dynamically allocated and should also be freed when it is
1067 no longer needed. Otherwise, the chunks array must be of at least
1068 n_elements in length. It is filled in with the pointers to the
1069 chunks.
1070
1071 In either case, independent_calloc returns this pointer array, or
1072 null if the allocation failed. If n_elements is zero and "chunks"
1073 is null, it returns a chunk representing an array with zero elements
1074 (which should be freed if not wanted).
1075
1076 Each element must be individually freed when it is no longer
1077 needed. If you'd like to instead be able to free all at once, you
1078 should instead use regular calloc and assign pointers into this
1079 space to represent elements. (In this case though, you cannot
1080 independently free elements.)
1081
1082 independent_calloc simplifies and speeds up implementations of many
1083 kinds of pools. It may also be useful when constructing large data
1084 structures that initially have a fixed number of fixed-sized nodes,
1085 but the number is not known at compile time, and some of the nodes
1086 may later need to be freed. For example:
1087
1088 struct Node { int item; struct Node* next; };
1089
1090 struct Node* build_list() {
1091 struct Node** pool;
1092 int n = read_number_of_nodes_needed();
1093 if (n <= 0) return 0;
1094 pool = (struct Node**) independent_calloc(n, sizeof(struct Node), 0);
1095 if (pool == 0) die();
1096 // organize into a linked list...
1097 struct Node* first = pool[0];
1098 for (int i = 0; i < n-1; ++i)
1099 pool[i]->next = pool[i+1];
1100 free(pool); // Can now free the array (or not, if it is needed later)
1101 return first;
1102 }
1103*/
1104#if __STD_C
1105Void_t** public_iCALLOc(size_t, size_t, Void_t**);
1106#else
1107Void_t** public_iCALLOc();
1108#endif
1109
1110/*
1111 independent_comalloc(size_t n_elements, size_t sizes[], Void_t* chunks[]);
1112
1113 independent_comalloc allocates, all at once, a set of n_elements
1114 chunks with sizes indicated in the "sizes" array. It returns
1115 an array of pointers to these elements, each of which can be
1116 independently freed, realloc'ed etc. The elements are guaranteed to
1117 be adjacently allocated (this is not guaranteed to occur with
1118 multiple callocs or mallocs), which may also improve cache locality
1119 in some applications.
1120
1121 The "chunks" argument is optional (i.e., may be null). If it is null
1122 the returned array is itself dynamically allocated and should also
1123 be freed when it is no longer needed. Otherwise, the chunks array
1124 must be of at least n_elements in length. It is filled in with the
1125 pointers to the chunks.
1126
1127 In either case, independent_comalloc returns this pointer array, or
1128 null if the allocation failed. If n_elements is zero and chunks is
1129 null, it returns a chunk representing an array with zero elements
1130 (which should be freed if not wanted).
1131
1132 Each element must be individually freed when it is no longer
1133 needed. If you'd like to instead be able to free all at once, you
1134 should instead use a single regular malloc, and assign pointers at
1135 particular offsets in the aggregate space. (In this case though, you
1136 cannot independently free elements.)
1137
1138 independent_comalloc differs from independent_calloc in that each
1139 element may have a different size, and also that it does not
1140 automatically clear elements.
1141
1142 independent_comalloc can be used to speed up allocation in cases
1143 where several structs or objects must always be allocated at the
1144 same time. For example:
1145
1146 struct Head { ... }
1147 struct Foot { ... }
1148
1149 void send_message(char* msg) {
1150 int msglen = strlen(msg);
1151 size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
1152 void* chunks[3];
1153 if (independent_comalloc(3, sizes, chunks) == 0)
1154 die();
1155 struct Head* head = (struct Head*)(chunks[0]);
1156 char* body = (char*)(chunks[1]);
1157 struct Foot* foot = (struct Foot*)(chunks[2]);
1158 // ...
1159 }
1160
1161 In general though, independent_comalloc is worth using only for
1162 larger values of n_elements. For small values, you probably won't
1163 detect enough difference from series of malloc calls to bother.
1164
1165 Overuse of independent_comalloc can increase overall memory usage,
1166 since it cannot reuse existing noncontiguous small chunks that
1167 might be available for some of the elements.
1168*/
1169#if __STD_C
1170Void_t** public_iCOMALLOc(size_t, size_t*, Void_t**);
1171#else
1172Void_t** public_iCOMALLOc();
1173#endif
1174
1175
1176/*
1177 pvalloc(size_t n);
1178 Equivalent to valloc(minimum-page-that-holds(n)), that is,
1179 round up n to nearest pagesize.
1180 */
1181#if __STD_C
1182Void_t* public_pVALLOc(size_t);
1183#else
1184Void_t* public_pVALLOc();
1185#endif
1186
1187/*
1188 cfree(Void_t* p);
1189 Equivalent to free(p).
1190
1191 cfree is needed/defined on some systems that pair it with calloc,
1192 for odd historical reasons (such as: cfree is used in example
1193 code in the first edition of K&R).
1194*/
1195#if __STD_C
1196void public_cFREe(Void_t*);
1197#else
1198void public_cFREe();
1199#endif
1200
1201/*
1202 malloc_trim(size_t pad);
1203
1204 If possible, gives memory back to the system (via negative
1205 arguments to sbrk) if there is unused memory at the `high' end of
1206 the malloc pool. You can call this after freeing large blocks of
1207 memory to potentially reduce the system-level memory requirements
1208 of a program. However, it cannot guarantee to reduce memory. Under
1209 some allocation patterns, some large free blocks of memory will be
1210 locked between two used chunks, so they cannot be given back to
1211 the system.
1212
1213 The `pad' argument to malloc_trim represents the amount of free
1214 trailing space to leave untrimmed. If this argument is zero,
1215 only the minimum amount of memory to maintain internal data
1216 structures will be left (one page or less). Non-zero arguments
1217 can be supplied to maintain enough trailing space to service
1218 future expected allocations without having to re-obtain memory
1219 from the system.
1220
1221 Malloc_trim returns 1 if it actually released any memory, else 0.
1222 On systems that do not support "negative sbrks", it will always
1223 return 0.
1224*/
1225#if __STD_C
1226int public_mTRIm(size_t);
1227#else
1228int public_mTRIm();
1229#endif
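
/*
  Minimal usage sketch (big_block stands for some previously malloc'ed large
  region): after freeing it, ask for unused top-of-heap memory to be returned
  to the system, keeping no extra pad.

    free(big_block);
    if (malloc_trim(0))
      ;   // some memory was actually handed back via a negative sbrk
*/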
1230
1231/*
1232 malloc_usable_size(Void_t* p);
1233
1234 Returns the number of bytes you can actually use in
1235 an allocated chunk, which may be more than you requested (although
1236 often not) due to alignment and minimum size constraints.
1237 You can use this many bytes without worrying about
1238 overwriting other allocated objects. This is not a particularly great
1239 programming practice. malloc_usable_size can be more useful in
1240 debugging and assertions, for example:
1241
1242 p = malloc(n);
1243 assert(malloc_usable_size(p) >= 256);
1244
1245*/
1246#if __STD_C
1247size_t public_mUSABLe(Void_t*);
1248#else
1249size_t public_mUSABLe();
1250#endif
1251
1252/*
1253 malloc_stats();
1254 Prints on stderr the amount of space obtained from the system (both
1255 via sbrk and mmap), the maximum amount (which may be more than
1256 current if malloc_trim and/or munmap got called), and the current
1257 number of bytes allocated via malloc (or realloc, etc) but not yet
1258 freed. Note that this is the number of bytes allocated, not the
1259 number requested. It will be larger than the number requested
1260 because of alignment and bookkeeping overhead. Because it includes
1261 alignment wastage as being in use, this figure may be greater than
1262 zero even when no user-level chunks are allocated.
1263
1264 The reported current and maximum system memory can be inaccurate if
1265 a program makes other calls to system memory allocation functions
1266 (normally sbrk) outside of malloc.
1267
1268 malloc_stats prints only the most commonly interesting statistics.
1269 More information can be obtained by calling mallinfo.
1270
1271*/
1272#if __STD_C
1273void public_mSTATs(void);
1274#else
1275void public_mSTATs();
1276#endif
1277
1278/* mallopt tuning options */
1279
1280/*
1281 M_MXFAST is the maximum request size used for "fastbins", special bins
1282 that hold returned chunks without consolidating their spaces. This
1283 enables future requests for chunks of the same size to be handled
1284 very quickly, but can increase fragmentation, and thus increase the
1285 overall memory footprint of a program.
1286
1287 This malloc manages fastbins very conservatively yet still
1288 efficiently, so fragmentation is rarely a problem for values less
1289 than or equal to the default. The maximum supported value of MXFAST
1290 is 80. You wouldn't want it any higher than this anyway. Fastbins
1291 are designed especially for use with many small structs, objects or
1292 strings -- the default handles structs/objects/arrays with sizes up
1293 to 16 4byte fields, or small strings representing words, tokens,
1294 etc. Using fastbins for larger objects normally worsens
1295 fragmentation without improving speed.
1296
1297 M_MXFAST is set in REQUEST size units. It is internally used in
1298 chunksize units, which adds padding and alignment. You can reduce
1299 M_MXFAST to 0 to disable all use of fastbins. This causes the malloc
1300 algorithm to be a closer approximation of fifo-best-fit in all cases,
1301 not just for larger requests, but will generally cause it to be
1302 slower.
1303*/
1304
1305
1306/* M_MXFAST is a standard SVID/XPG tuning option, usually listed in malloc.h */
1307#ifndef M_MXFAST
1308#define M_MXFAST 1
1309#endif
1310
1311#ifndef DEFAULT_MXFAST
1312#define DEFAULT_MXFAST 64
1313#endif
1314
1315
1316/*
1317 M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
1318 to keep before releasing via malloc_trim in free().
1319
1320 Automatic trimming is mainly useful in long-lived programs.
1321 Because trimming via sbrk can be slow on some systems, and can
1322 sometimes be wasteful (in cases where programs immediately
1323 afterward allocate more large chunks) the value should be high
1324 enough so that your overall system performance would improve by
1325 releasing this much memory.
1326
1327 The trim threshold and the mmap control parameters (see below)
1328 can be traded off with one another. Trimming and mmapping are
1329 two different ways of releasing unused memory back to the
1330 system. Between these two, it is often possible to keep
1331 system-level demands of a long-lived program down to a bare
1332 minimum. For example, in one test suite of sessions measuring
1333 the XF86 X server on Linux, using a trim threshold of 128K and a
1334 mmap threshold of 192K led to near-minimal long term resource
1335 consumption.
1336
1337 If you are using this malloc in a long-lived program, it should
1338 pay to experiment with these values. As a rough guide, you
1339 might set to a value close to the average size of a process
1340 (program) running on your system. Releasing this much memory
1341 would allow such a process to run in memory. Generally, it's
1342 worth it to tune for trimming rather than memory mapping when a
1343 program undergoes phases where several large chunks are
1344 allocated and released in ways that can reuse each other's
1345 storage, perhaps mixed with phases where there are no such
1346 chunks at all. And in well-behaved long-lived programs,
1347 controlling release of large blocks via trimming versus mapping
1348 is usually faster.
1349
1350 However, in most programs, these parameters serve mainly as
1351 protection against the system-level effects of carrying around
1352 massive amounts of unneeded memory. Since frequent calls to
1353 sbrk, mmap, and munmap otherwise degrade performance, the default
1354 parameters are set to relatively high values that serve only as
1355 safeguards.
1356
1357 The trim value must be greater than page size to have any useful
1358 effect. To disable trimming completely, you can set to
1359 (unsigned long)(-1)
1360
1361 Trim settings interact with fastbin (MXFAST) settings: Unless
1362 TRIM_FASTBINS is defined, automatic trimming never takes place upon
1363 freeing a chunk with size less than or equal to MXFAST. Trimming is
1364 instead delayed until subsequent freeing of larger chunks. However,
1365 you can still force an attempted trim by calling malloc_trim.
1366
1367 Also, trimming is not generally possible in cases where
1368 the main arena is obtained via mmap.
1369
1370 Note that the trick some people use of mallocing a huge space and
1371 then freeing it at program startup, in an attempt to reserve system
1372 memory, doesn't have the intended effect under automatic trimming,
1373 since that memory will immediately be returned to the system.
1374*/
1375
1376#define M_TRIM_THRESHOLD -1
1377
1378#ifndef DEFAULT_TRIM_THRESHOLD
1379#define DEFAULT_TRIM_THRESHOLD (256 * 1024)
1380#endif
1381
1382/*
1383 M_TOP_PAD is the amount of extra `padding' space to allocate or
1384 retain whenever sbrk is called. It is used in two ways internally:
1385
1386 * When sbrk is called to extend the top of the arena to satisfy
1387 a new malloc request, this much padding is added to the sbrk
1388 request.
1389
1390 * When malloc_trim is called automatically from free(),
1391 it is used as the `pad' argument.
1392
1393 In both cases, the actual amount of padding is rounded
1394 so that the end of the arena is always a system page boundary.
1395
1396 The main reason for using padding is to avoid calling sbrk so
1397 often. Having even a small pad greatly reduces the likelihood
1398 that nearly every malloc request during program start-up (or
1399 after trimming) will invoke sbrk, which needlessly wastes
1400 time.
1401
1402 Automatic rounding-up to page-size units is normally sufficient
1403 to avoid measurable overhead, so the default is 0. However, in
1404 systems where sbrk is relatively slow, it can pay to increase
1405 this value, at the expense of carrying around more memory than
1406 the program needs.
1407*/
1408
1409#define M_TOP_PAD -2
1410
1411#ifndef DEFAULT_TOP_PAD
1412#define DEFAULT_TOP_PAD (0)
1413#endif
1414
1415/*
1416 M_MMAP_THRESHOLD is the request size threshold for using mmap()
1417 to service a request. Requests of at least this size that cannot
1418 be allocated using already-existing space will be serviced via mmap.
1419 (If enough normal freed space already exists it is used instead.)
1420
1421 Using mmap segregates relatively large chunks of memory so that
1422 they can be individually obtained and released from the host
1423 system. A request serviced through mmap is never reused by any
1424 other request (at least not directly; the system may just so
1425 happen to remap successive requests to the same locations).
1426
1427 Segregating space in this way has the benefits that:
1428
1429 1. Mmapped space can ALWAYS be individually released back
1430 to the system, which helps keep the system level memory
1431 demands of a long-lived program low.
1432 2. Mapped memory can never become `locked' between
1433 other chunks, as can happen with normally allocated chunks, which
1434 means that even trimming via malloc_trim would not release them.
1435 3. On some systems with "holes" in address spaces, mmap can obtain
1436 memory that sbrk cannot.
1437
1438 However, it has the disadvantages that:
1439
1440 1. The space cannot be reclaimed, consolidated, and then
1441 used to service later requests, as happens with normal chunks.
1442 2. It can lead to more wastage because of mmap page alignment
1443 requirements
1444 3. It causes malloc performance to be more dependent on host
1445 system memory management support routines which may vary in
1446 implementation quality and may impose arbitrary
1447 limitations. Generally, servicing a request via normal
1448 malloc steps is faster than going through a system's mmap.
1449
1450 The advantages of mmap nearly always outweigh disadvantages for
1451 "large" chunks, but the value of "large" varies across systems. The
1452 default is an empirically derived value that works well in most
1453 systems.
1454*/
1455
1456#define M_MMAP_THRESHOLD -3
1457
1458#ifndef DEFAULT_MMAP_THRESHOLD
1459#define DEFAULT_MMAP_THRESHOLD (256 * 1024)
1460#endif
1461
1462/*
1463 M_MMAP_MAX is the maximum number of requests to simultaneously
1464 service using mmap. This parameter exists because some
1465 systems have a limited number of internal tables for use
1466 by mmap, and using more than a few of them may degrade
1467 performance.
1468
1469 The default is set to a value that serves only as a safeguard.
1470 Setting to 0 disables use of mmap for servicing large requests. If
1471 HAVE_MMAP is not set, the default value is 0, and attempts to set it
1472 to non-zero values in mallopt will fail.
1473*/
1474
1475#define M_MMAP_MAX -4
1476
1477#ifndef DEFAULT_MMAP_MAX
1478#if HAVE_MMAP
1479#define DEFAULT_MMAP_MAX (65536)
1480#else
1481#define DEFAULT_MMAP_MAX (0)
1482#endif
1483#endif
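
/*
  Illustrative tuning sketch for a long-lived program using the mallopt
  parameters described above. The values are only an example (the trim and
  mmap thresholds echo the 128K/192K figures reported earlier for the XF86
  server measurements); suitable values are workload-dependent.

    mallopt(M_TRIM_THRESHOLD, 128 * 1024);  // release top memory more eagerly
    mallopt(M_MMAP_THRESHOLD, 192 * 1024);  // mmap (and later unmap) big chunks
    mallopt(M_TOP_PAD,         64 * 1024);  // fewer sbrk calls while growing
    mallopt(M_MMAP_MAX,        65536);      // keep the default mmap cap
*/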
1484
1485#ifdef __cplusplus
1486}; /* end of extern "C" */
1487#endif
1488
1489/*
1490 ========================================================================
1491 To make a fully customizable malloc.h header file, cut everything
1492 above this line, put into file malloc.h, edit to suit, and #include it
1493 on the next line, as well as in programs that use this malloc.
1494 ========================================================================
1495*/
1496
1497/* #include "malloc.h" */
1498
1499/* --------------------- public wrappers ---------------------- */
1500
1501#ifdef USE_PUBLIC_MALLOC_WRAPPERS
1502
1503/* Declare all routines as internal */
1504#if __STD_C
1505static Void_t* mALLOc(size_t);
1506static void fREe(Void_t*);
1507static Void_t* rEALLOc(Void_t*, size_t);
1508static Void_t* mEMALIGn(size_t, size_t);
1509static Void_t* vALLOc(size_t);
1510static Void_t* pVALLOc(size_t);
1511static Void_t* cALLOc(size_t, size_t);
1512static Void_t** iCALLOc(size_t, size_t, Void_t**);
1513static Void_t** iCOMALLOc(size_t, size_t*, Void_t**);
1514static void cFREe(Void_t*);
1515static int mTRIm(size_t);
1516static size_t mUSABLe(Void_t*);
1517static void mSTATs();
1518static int mALLOPt(int, int);
1519static struct mallinfo mALLINFo(void);
1520#else
1521static Void_t* mALLOc();
1522static void fREe();
1523static Void_t* rEALLOc();
1524static Void_t* mEMALIGn();
1525static Void_t* vALLOc();
1526static Void_t* pVALLOc();
1527static Void_t* cALLOc();
1528static Void_t** iCALLOc();
1529static Void_t** iCOMALLOc();
1530static void cFREe();
1531static int mTRIm();
1532static size_t mUSABLe();
1533static void mSTATs();
1534static int mALLOPt();
1535static struct mallinfo mALLINFo();
1536#endif
1537
1538/*
1539 MALLOC_PREACTION and MALLOC_POSTACTION should be
1540 defined to return 0 on success, and nonzero on failure.
1541 The return value of MALLOC_POSTACTION is currently ignored
1542 in wrapper functions since there is no reasonable default
1543 action to take on failure.
1544*/
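
/*
  Illustrative sketch (an assumption, not code in this file) of what could be
  substituted for the default no-op hooks below, e.g. to count calls for
  instrumentation when USE_MALLOC_LOCK is not wanted. Both hooks must
  evaluate to 0 on success.

    static int malloc_call_count = 0;
    static int count_malloc_call(void) { ++malloc_call_count; return 0; }

    #define MALLOC_PREACTION   count_malloc_call()
    #define MALLOC_POSTACTION  (0)
*/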
1545
1546
1547#ifdef USE_MALLOC_LOCK
1548
1549#ifdef WIN32
1550
1551static int mALLOC_MUTEx;
1552#define MALLOC_PREACTION slwait(&mALLOC_MUTEx)
1553#define MALLOC_POSTACTION slrelease(&mALLOC_MUTEx)
1554
1555#else
1556
1557#include <pthread.h>
1558
1559static pthread_mutex_t mALLOC_MUTEx = PTHREAD_MUTEX_INITIALIZER;
1560
1561#define MALLOC_PREACTION pthread_mutex_lock(&mALLOC_MUTEx)
1562#define MALLOC_POSTACTION pthread_mutex_unlock(&mALLOC_MUTEx)
1563
1564#endif /* WIN32 */
1565
1566#else
1567
1568/* Substitute anything you like for these */
1569
1570#define MALLOC_PREACTION (0)
1571#define MALLOC_POSTACTION (0)
1572
1573#endif
1574
1575Void_t* public_mALLOc(size_t bytes) {
1576 Void_t* m;
1577 if (MALLOC_PREACTION != 0) {
1578 return 0;
1579 }
1580 m = mALLOc(bytes);
1581 if (MALLOC_POSTACTION != 0) {
1582 }
1583 return m;
1584}
1585
1586void public_fREe(Void_t* m) {
1587 if (MALLOC_PREACTION != 0) {
1588 return;
1589 }
1590 fREe(m);
1591 if (MALLOC_POSTACTION != 0) {
1592 }
1593}
1594
1595Void_t* public_rEALLOc(Void_t* m, size_t bytes) {
1596 if (MALLOC_PREACTION != 0) {
1597 return 0;
1598 }
1599 m = rEALLOc(m, bytes);
1600 if (MALLOC_POSTACTION != 0) {
1601 }
1602 return m;
1603}
1604
1605Void_t* public_mEMALIGn(size_t alignment, size_t bytes) {
1606 Void_t* m;
1607 if (MALLOC_PREACTION != 0) {
1608 return 0;
1609 }
1610 m = mEMALIGn(alignment, bytes);
1611 if (MALLOC_POSTACTION != 0) {
1612 }
1613 return m;
1614}
1615
1616Void_t* public_vALLOc(size_t bytes) {
1617 Void_t* m;
1618 if (MALLOC_PREACTION != 0) {
1619 return 0;
1620 }
1621 m = vALLOc(bytes);
1622 if (MALLOC_POSTACTION != 0) {
1623 }
1624 return m;
1625}
1626
1627#ifdef NEED_PVALLOC
1628Void_t* public_pVALLOc(size_t bytes) {
1629 Void_t* m;
1630 if (MALLOC_PREACTION != 0) {
1631 return 0;
1632 }
1633 m = pVALLOc(bytes);
1634 if (MALLOC_POSTACTION != 0) {
1635 }
1636 return m;
1637}
1638#endif
1639
1640Void_t* public_cALLOc(size_t n, size_t elem_size) {
1641 Void_t* m;
1642 if (MALLOC_PREACTION != 0) {
1643 return 0;
1644 }
1645 m = cALLOc(n, elem_size);
1646 if (MALLOC_POSTACTION != 0) {
1647 }
1648 return m;
1649}
1650
1651
1652Void_t** public_iCALLOc(size_t n, size_t elem_size, Void_t** chunks) {
1653 Void_t** m;
1654 if (MALLOC_PREACTION != 0) {
1655 return 0;
1656 }
1657 m = iCALLOc(n, elem_size, chunks);
1658 if (MALLOC_POSTACTION != 0) {
1659 }
1660 return m;
1661}
1662
1663Void_t** public_iCOMALLOc(size_t n, size_t sizes[], Void_t** chunks) {
1664 Void_t** m;
1665 if (MALLOC_PREACTION != 0) {
1666 return 0;
1667 }
1668 m = iCOMALLOc(n, sizes, chunks);
1669 if (MALLOC_POSTACTION != 0) {
1670 }
1671 return m;
1672}
1673
1674void public_cFREe(Void_t* m) {
1675 if (MALLOC_PREACTION != 0) {
1676 return;
1677 }
1678 cFREe(m);
1679 if (MALLOC_POSTACTION != 0) {
1680 }
1681}
1682
1683int public_mTRIm(size_t s) {
1684 int result;
1685 if (MALLOC_PREACTION != 0) {
1686 return 0;
1687 }
1688 result = mTRIm(s);
1689 if (MALLOC_POSTACTION != 0) {
1690 }
1691 return result;
1692}
1693
1694size_t public_mUSABLe(Void_t* m) {
1695 size_t result;
1696 if (MALLOC_PREACTION != 0) {
1697 return 0;
1698 }
1699 result = mUSABLe(m);
1700 if (MALLOC_POSTACTION != 0) {
1701 }
1702 return result;
1703}
1704
1705void public_mSTATs() {
1706 if (MALLOC_PREACTION != 0) {
1707 return;
1708 }
1709 mSTATs();
1710 if (MALLOC_POSTACTION != 0) {
1711 }
1712}
1713
1714struct mallinfo public_mALLINFo() {
1715 struct mallinfo m;
1716 if (MALLOC_PREACTION != 0) {
1717 struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
1718 return nm;
1719 }
1720 m = mALLINFo();
1721 if (MALLOC_POSTACTION != 0) {
1722 }
1723 return m;
1724}
1725
1726int public_mALLOPt(int p, int v) {
1727 int result;
1728 if (MALLOC_PREACTION != 0) {
1729 return 0;
1730 }
1731 result = mALLOPt(p, v);
1732 if (MALLOC_POSTACTION != 0) {
1733 }
1734 return result;
1735}
1736
1737#endif
1738
1739
1740
1741/* ------------- Optional versions of memcopy ---------------- */
1742
1743
1744#if USE_MEMCPY
1745
1746/*
1747 Note: memcpy is ONLY invoked with non-overlapping regions,
1748 so the (usually slower) memmove is not needed.
1749*/
1750
1751#define MALLOC_COPY(dest, src, nbytes) memcpy(dest, src, nbytes)
1752#define MALLOC_ZERO(dest, nbytes) memset(dest, 0, nbytes)
1753
1754#else /* !USE_MEMCPY */
1755
1756/* Use Duff's device for good zeroing/copying performance. */
1757
1758#define MALLOC_ZERO(charp, nbytes) \
1759do { \
1760 INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp); \
1761 CHUNK_SIZE_T mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T); \
1762 long mcn; \
1763 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
1764 switch (mctmp) { \
1765 case 0: for(;;) { *mzp++ = 0; \
1766 case 7: *mzp++ = 0; \
1767 case 6: *mzp++ = 0; \
1768 case 5: *mzp++ = 0; \
1769 case 4: *mzp++ = 0; \
1770 case 3: *mzp++ = 0; \
1771 case 2: *mzp++ = 0; \
1772 case 1: *mzp++ = 0; if(mcn <= 0) break; mcn--; } \
1773 } \
1774} while(0)
1775
1776#define MALLOC_COPY(dest,src,nbytes) \
1777do { \
1778 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src; \
1779 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest; \
1780 CHUNK_SIZE_T mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T); \
1781 long mcn; \
1782 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
1783 switch (mctmp) { \
1784 case 0: for(;;) { *mcdst++ = *mcsrc++; \
1785 case 7: *mcdst++ = *mcsrc++; \
1786 case 6: *mcdst++ = *mcsrc++; \
1787 case 5: *mcdst++ = *mcsrc++; \
1788 case 4: *mcdst++ = *mcsrc++; \
1789 case 3: *mcdst++ = *mcsrc++; \
1790 case 2: *mcdst++ = *mcsrc++; \
1791 case 1: *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; } \
1792 } \
1793} while(0)
1794
1795#endif
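/*
  Stripped of the unrolling, both macros above amount to a word-at-a-time
  loop over INTERNAL_SIZE_T units. A plain (slower but behaviorally
  equivalent) rendering of MALLOC_COPY for reference -- "copy_words" is
  only an illustrative name, not something defined or used in this file:

      static void copy_words(INTERNAL_SIZE_T* dst,
                             INTERNAL_SIZE_T* src, size_t nbytes)
      {
        size_t i, nwords = nbytes / sizeof(INTERNAL_SIZE_T);
        for (i = 0; i < nwords; ++i)     // copy whole words, never overlapping
          dst[i] = src[i];
      }
*/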
1796
1797/* ------------------ MMAP support ------------------ */
1798
1799
1800#if HAVE_MMAP
1801
1802#ifndef LACKS_FCNTL_H
1803#include <fcntl.h>
1804#endif
1805
1806#ifndef LACKS_SYS_MMAN_H
1807#include <sys/mman.h>
1808#endif
1809
1810#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1811#define MAP_ANONYMOUS MAP_ANON
1812#endif
1813
1814/*
1815 Nearly all versions of mmap support MAP_ANONYMOUS,
1816 so the following is unlikely to be needed, but is
1817 supplied just in case.
1818*/
1819
1820#ifndef MAP_ANONYMOUS
1821
1822static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1823
1824#define MMAP(addr, size, prot, flags) ((dev_zero_fd < 0) ? \
1825 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1826 mmap((addr), (size), (prot), (flags), dev_zero_fd, 0)) : \
1827 mmap((addr), (size), (prot), (flags), dev_zero_fd, 0))
1828
1829#else
1830
1831#define MMAP(addr, size, prot, flags) \
1832 (mmap((addr), (size), (prot), (flags)|MAP_ANONYMOUS, -1, 0))
1833
1834#endif
1835
1836
1837#endif /* HAVE_MMAP */
1838
1839
1840/*
1841 ----------------------- Chunk representations -----------------------
1842*/
1843
1844
1845/*
1846 This struct declaration is misleading (but accurate and necessary).
1847 It declares a "view" into memory allowing access to necessary
1848 fields at known offsets from a given base. See explanation below.
1849*/
1850
1851struct malloc_chunk {
1852
1853 INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
1854 INTERNAL_SIZE_T size; /* Size in bytes, including overhead. */
1855
1856 struct malloc_chunk* fd; /* double links -- used only if free. */
1857 struct malloc_chunk* bk;
1858};
1859
1860
1861typedef struct malloc_chunk* mchunkptr;
1862
1863/*
1864 malloc_chunk details:
1865
1866 (The following includes lightly edited explanations by Colin Plumb.)
1867
1868 Chunks of memory are maintained using a `boundary tag' method as
1869 described in e.g., Knuth or Standish. (See the paper by Paul
1870 Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
1871 survey of such techniques.) Sizes of free chunks are stored both
1872 in the front of each chunk and at the end. This makes
1873 consolidating fragmented chunks into bigger chunks very fast. The
1874 size fields also hold bits representing whether chunks are free or
1875 in use.
1876
1877 An allocated chunk looks like this:
1878
1879
1880 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1881 | Size of previous chunk, if allocated | |
1882 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1883 | Size of chunk, in bytes |P|
1884 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1885 | User data starts here... .
1886 . .
1887 . (malloc_usable_space() bytes) .
1888 . |
1889nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1890 | Size of chunk |
1891 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1892
1893
1894 Where "chunk" is the front of the chunk for the purpose of most of
1895 the malloc code, but "mem" is the pointer that is returned to the
1896 user. "Nextchunk" is the beginning of the next contiguous chunk.
1897
1898 Chunks always begin on even word boundaries, so the mem portion
1899 (which is returned to the user) is also on an even word boundary, and
1900 thus at least double-word aligned.
1901
1902 Free chunks are stored in circular doubly-linked lists, and look like this:
1903
1904 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1905 | Size of previous chunk |
1906 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1907 `head:' | Size of chunk, in bytes |P|
1908 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1909 | Forward pointer to next chunk in list |
1910 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1911 | Back pointer to previous chunk in list |
1912 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1913 | Unused space (may be 0 bytes long) .
1914 . .
1915 . |
1916nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1917 `foot:' | Size of chunk, in bytes |
1918 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1919
1920 The P (PREV_INUSE) bit, stored in the unused low-order bit of the
1921 chunk size (which is always a multiple of two words), is an in-use
1922 bit for the *previous* chunk. If that bit is *clear*, then the
1923 word before the current chunk size contains the previous chunk
1924 size, and can be used to find the front of the previous chunk.
1925 The very first chunk allocated always has this bit set,
1926 preventing access to non-existent (or non-owned) memory. If
1927 prev_inuse is set for any given chunk, then you CANNOT determine
1928 the size of the previous chunk, and might even get a memory
1929 addressing fault when trying to do so.
1930
1931 Note that the `foot' of the current chunk is actually represented
1932 as the prev_size of the NEXT chunk. This makes it easier to
1933 deal with alignments etc but can be very confusing when trying
1934 to extend or adapt this code.
1935
1936 The two exceptions to all this are
1937
1938 1. The special chunk `top' doesn't bother using the
1939 trailing size field since there is no next contiguous chunk
1940 that would have to index off it. After initialization, `top'
1941 is forced to always exist. If it would become less than
1942 MINSIZE bytes long, it is replenished.
1943
1944 2. Chunks allocated via mmap, which have the second-lowest-order
1945 bit (IS_MMAPPED) set in their size fields. Because they are
1946 allocated one-by-one, each must contain its own trailing size field.
1947
1948*/
1949
1950/*
1951 ---------- Size and alignment checks and conversions ----------
1952*/
1953
1954/* conversion from malloc headers to user pointers, and back */
1955
1956#define chunk2mem(p) ((Void_t*)((char*)(p) + 2*SIZE_SZ))
1957#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
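/*
  The two conversions are exact inverses; a typical round trip looks like
  this (the locals are only for illustration):

      mchunkptr p   = ...;              // some chunk header
      Void_t*   mem = chunk2mem(p);     // what the user sees
      assert(mem2chunk(mem) == p);      // steps back over 2*SIZE_SZ bytes
*/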
1958
1959/* The smallest possible chunk */
1960#define MIN_CHUNK_SIZE (sizeof(struct malloc_chunk))
1961
1962/* The smallest size we can malloc is an aligned minimal chunk */
1963
1964#define MINSIZE \
1965 (CHUNK_SIZE_T)(((MIN_CHUNK_SIZE+MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK))
1966
1967/* Check if m has acceptable alignment */
1968
1969#define aligned_OK(m) (((PTR_UINT)((m)) & (MALLOC_ALIGN_MASK)) == 0)
1970
1971
1972/*
1973 Check if a request is so large that it would wrap around zero when
1974 padded and aligned. To simplify some other code, the bound is made
1975 low enough so that adding MINSIZE will also not wrap around zero.
1976*/
1977
1978#define REQUEST_OUT_OF_RANGE(req) \
1979 ((CHUNK_SIZE_T)(req) >= \
1980 (CHUNK_SIZE_T)(INTERNAL_SIZE_T)(-2 * MINSIZE))
1981
1982/* pad request bytes into a usable size -- internal version */
1983
1984#define request2size(req) \
1985 (((req) + SIZE_SZ + MALLOC_ALIGN_MASK < MINSIZE) ? \
1986 MINSIZE : \
1987 ((req) + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK)
1988
1989/* Same, except also perform argument check */
1990
1991#define checked_request2size(req, sz) \
1992 if (REQUEST_OUT_OF_RANGE(req)) { \
1993 MALLOC_FAILURE_ACTION; \
1994 return 0; \
1995 } \
1996 (sz) = request2size(req);
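/*
  Worked example, assuming a 4-byte INTERNAL_SIZE_T (so SIZE_SZ == 4),
  8-byte alignment and therefore MINSIZE == 16:

      request2size(1)  == 16   // 1+4+7 < MINSIZE, so clamp to MINSIZE
      request2size(12) == 16   // (12+4+7) & ~7
      request2size(20) == 24   // (20+4+7) & ~7
      request2size(24) == 32   // (24+4+7) & ~7
*/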
1997
1998/*
1999 --------------- Physical chunk operations ---------------
2000*/
2001
2002
2003/* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
2004#define PREV_INUSE 0x1
2005
2006/* extract inuse bit of previous chunk */
2007#define prev_inuse(p) ((p)->size & PREV_INUSE)
2008
2009
2010/* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
2011#define IS_MMAPPED 0x2
2012
2013/* check for mmap()'ed chunk */
2014#define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
2015
2016/*
2017 Bits to mask off when extracting size
2018
2019 Note: IS_MMAPPED is intentionally not masked off from size field in
2020 macros for which mmapped chunks should never be seen. This should
2021 cause helpful core dumps to occur if it is tried by accident by
2022 people extending or adapting this malloc.
2023*/
2024#define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
2025
2026/* Get size, ignoring use bits */
2027#define chunksize(p) ((p)->size & ~(SIZE_BITS))
2028
2029
2030/* Ptr to next physical malloc_chunk. */
2031#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
2032
2033/* Ptr to previous physical malloc_chunk */
2034#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
2035
2036/* Treat space at ptr + offset as a chunk */
2037#define chunk_at_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
2038
2039/* extract p's inuse bit */
2040#define inuse(p)\
2041((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
2042
2043/* set/clear chunk as being inuse without otherwise disturbing */
2044#define set_inuse(p)\
2045((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
2046
2047#define clear_inuse(p)\
2048((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
2049
2050
2051/* check/set/clear inuse bits in known places */
2052#define inuse_bit_at_offset(p, s)\
2053 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
2054
2055#define set_inuse_bit_at_offset(p, s)\
2056 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
2057
2058#define clear_inuse_bit_at_offset(p, s)\
2059 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
2060
2061
2062/* Set size at head, without disturbing its use bit */
2063#define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s)))
2064
2065/* Set size/use field */
2066#define set_head(p, s) ((p)->size = (s))
2067
2068/* Set size at footer (only when chunk is not in use) */
2069#define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
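/*
  For instance, a freshly freed chunk of size S (a multiple of the
  alignment) whose lower neighbor is still in use is recorded elsewhere
  in this file with just:

      set_head(p, S | PREV_INUSE);   // size plus "previous chunk in use"
      set_foot(p, S);                // trailing copy of the size, i.e. the
                                     // prev_size field of the next chunk
*/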
2070
2071
2072/*
2073 -------------------- Internal data structures --------------------
2074
2075 All internal state is held in an instance of malloc_state defined
2076 below. There are no other static variables, except in two optional
2077 cases:
2078 * If USE_MALLOC_LOCK is defined, the mALLOC_MUTEx declared above.
2079 * If HAVE_MMAP is true, but mmap doesn't support
2080 MAP_ANONYMOUS, a dummy file descriptor for mmap.
2081
2082 Beware of lots of tricks that minimize the total bookkeeping space
2083 requirements. The result is a little over 1K bytes (for 4-byte
2084 pointers and size_t).
2085*/
2086
2087/*
2088 Bins
2089
2090 An array of bin headers for free chunks. Each bin is doubly
2091 linked. The bins are approximately proportionally (log) spaced.
2092 There are a lot of these bins (96). This may look excessive, but
2093 works very well in practice. Most bins hold sizes that are
2094 unusual as malloc request sizes, but are more usual for fragments
2095 and consolidated sets of chunks, which is what these bins hold, so
2096 they can be found quickly. All procedures maintain the invariant
2097 that no consolidated chunk physically borders another one, so each
2098 chunk in a list is known to be preceded and followed by either
2099 inuse chunks or the ends of memory.
2100
2101 Chunks in bins are kept in size order, with ties going to the
2102 approximately least recently used chunk. Ordering isn't needed
2103 for the small bins, which all contain the same-sized chunks, but
2104 facilitates best-fit allocation for larger chunks. These lists
2105 are just sequential. Keeping them in order almost never requires
2106 enough traversal to warrant using fancier ordered data
2107 structures.
2108
2109 Chunks of the same size are linked with the most
2110 recently freed at the front, and allocations are taken from the
2111 back. This results in LRU (FIFO) allocation order, which tends
2112 to give each chunk an equal opportunity to be consolidated with
2113 adjacent freed chunks, resulting in larger free chunks and less
2114 fragmentation.
2115
2116 To simplify use in double-linked lists, each bin header acts
2117 as a malloc_chunk. This avoids special-casing for headers.
2118 But to conserve space and improve locality, we allocate
2119 only the fd/bk pointers of bins, and then use repositioning tricks
2120 to treat these as the fields of a malloc_chunk*.
2121*/
2122
2123typedef struct malloc_chunk* mbinptr;
2124
2125/* addressing -- note that bin_at(0) does not exist */
2126#define bin_at(m, i) ((mbinptr)((char*)&((m)->bins[(i)<<1]) - (SIZE_SZ<<1)))
2127
2128/* analog of ++bin */
2129#define next_bin(b) ((mbinptr)((char*)(b) + (sizeof(mchunkptr)<<1)))
2130
2131/* Reminders about list directionality within bins */
2132#define first(b) ((b)->fd)
2133#define last(b) ((b)->bk)
2134
2135/* Take a chunk off a bin list */
2136#define unlink(P, BK, FD) { \
2137 FD = P->fd; \
2138 BK = P->bk; \
2139 FD->bk = BK; \
2140 BK->fd = FD; \
2141}
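/*
  Typical use later in this file, with two scratch chunk pointers as the
  BK/FD arguments:

      mchunkptr bck, fwd;
      unlink(victim, bck, fwd);   // splice victim out of its bin list
*/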
2142
2143/*
2144 Indexing
2145
2146 Bins for sizes < 256 bytes (MIN_LARGE_SIZE) contain chunks of all
2147 the same size, spaced 8 bytes apart. Larger bins are approximately
2148 logarithmically spaced: each power-of-two range of sizes is split
2149 into 4 bins, so, for example, there are
2150
2151 4 bins of width 64 (sizes 256-511)
2152 4 bins of width 128 (sizes 512-1023)
2153 4 bins of width 256 (sizes 1024-2047)
2154 ... and so on, doubling the bin width for each range ...
2155 1 final bin for whatever is left
2156
2157 Requests anywhere near the widest bins are normally serviced via
2158 mmap rather than from these bins.
2159*/
2160
2161#define NBINS 96
2162#define NSMALLBINS 32
2163#define SMALLBIN_WIDTH 8
2164#define MIN_LARGE_SIZE 256
2165
2166#define in_smallbin_range(sz) \
2167 ((CHUNK_SIZE_T)(sz) < (CHUNK_SIZE_T)MIN_LARGE_SIZE)
2168
2169#define smallbin_index(sz) (((unsigned)(sz)) >> 3)
2170
2171/*
2172 Compute index for size. We expect this to be inlined when
2173 compiled with optimization, else not, which works out well.
2174*/
2175static int largebin_index(unsigned int sz) {
2176 unsigned int x = sz >> SMALLBIN_WIDTH;
2177 unsigned int m; /* bit position of highest set bit of m */
2178
2179 if (x >= 0x10000) return NBINS-1;
2180
2181 /* On intel, use BSRL instruction to find highest bit */
2182#if defined(__GNUC__) && defined(i386)
2183
2184 __asm__("bsrl %1,%0\n\t"
2185 : "=r" (m)
2186 : "g" (x));
2187
2188#else
2189 {
2190 /*
2191 Based on branch-free nlz algorithm in chapter 5 of Henry
2192 S. Warren Jr's book "Hacker's Delight".
2193 */
2194
2195 unsigned int n = ((x - 0x100) >> 16) & 8;
2196 x <<= n;
2197 m = ((x - 0x1000) >> 16) & 4;
2198 n += m;
2199 x <<= m;
2200 m = ((x - 0x4000) >> 16) & 2;
2201 n += m;
2202 x = (x << m) >> 14;
2203 m = 13 - n + (x & ~(x>>1));
2204 }
2205#endif
2206
2207 /* Use next 2 bits to create finer-granularity bins */
2208 return NSMALLBINS + (m << 2) + ((sz >> (m + 6)) & 3);
2209}
2210
2211#define bin_index(sz) \
2212 ((in_smallbin_range(sz)) ? smallbin_index(sz) : largebin_index(sz))
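/*
  Worked examples with the settings above (SMALLBIN_WIDTH 8,
  MIN_LARGE_SIZE 256, NSMALLBINS 32):

      bin_index(16)  == 2    // smallest real chunk size
      bin_index(248) == 31   // last small bin
      bin_index(256) == 32   // first large bin (covers 256-319)
      bin_index(320) == 33   // next large bin (covers 320-383)
*/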
2213
2214/*
2215 FIRST_SORTED_BIN_SIZE is the chunk size corresponding to the
2216 first bin that is maintained in sorted order. This must
2217 be the smallest size corresponding to a given bin.
2218
2219 Normally, this should be MIN_LARGE_SIZE. But you can weaken
2220 best fit guarantees to sometimes speed up malloc by increasing value.
2221 Doing this means that malloc may choose a chunk that is
2222 non-best-fitting by up to the width of the bin.
2223
2224 Some useful cutoff values:
2225 512 - all bins sorted
2226 2560 - leaves bins <= 64 bytes wide unsorted
2227 12288 - leaves bins <= 512 bytes wide unsorted
2228 65536 - leaves bins <= 4096 bytes wide unsorted
2229 262144 - leaves bins <= 32768 bytes wide unsorted
2230 -1 - no bins sorted (not recommended!)
2231*/
2232
2233#define FIRST_SORTED_BIN_SIZE MIN_LARGE_SIZE
2234/* #define FIRST_SORTED_BIN_SIZE 65536 */
2235
2236/*
2237 Unsorted chunks
2238
2239 All remainders from chunk splits, as well as all returned chunks,
2240 are first placed in the "unsorted" bin. They are then placed
2241 in regular bins after malloc gives them ONE chance to be used before
2242 binning. So, basically, the unsorted_chunks list acts as a queue,
2243 with chunks being placed on it in free (and malloc_consolidate),
2244 and taken off (to be either used or placed in bins) in malloc.
2245*/
2246
2247/* The otherwise unindexable 1-bin is used to hold unsorted chunks. */
2248#define unsorted_chunks(M) (bin_at(M, 1))
2249
2250/*
2251 Top
2252
2253 The top-most available chunk (i.e., the one bordering the end of
2254 available memory) is treated specially. It is never included in
2255 any bin, is used only if no other chunk is available, and is
2256 released back to the system if it is very large (see
2257 M_TRIM_THRESHOLD). Because top initially
2258 points to its own bin with initial zero size, thus forcing
2259 extension on the first malloc request, we avoid having any special
2260 code in malloc to check whether it even exists yet. But we still
2261 need to do so when getting memory from system, so we make
2262 initial_top treat the bin as a legal but unusable chunk during the
2263 interval between initialization and the first call to
2264 sYSMALLOc. (This is somewhat delicate, since it relies on
2265 the 2 preceding words to be zero during this interval as well.)
2266*/
2267
2268/* Conveniently, the unsorted bin can be used as dummy top on first call */
2269#define initial_top(M) (unsorted_chunks(M))
2270
2271/*
2272 Binmap
2273
2274 To help compensate for the large number of bins, a one-level index
2275 structure is used for bin-by-bin searching. `binmap' is a
2276 bitvector recording whether bins are definitely empty so they can
2277 be skipped over during traversals. The bits are NOT always
2278 cleared as soon as bins are empty, but instead only
2279 when they are noticed to be empty during traversal in malloc.
2280*/
2281
2282/* Conservatively use 32 bits per map word, even if on 64bit system */
2283#define BINMAPSHIFT 5
2284#define BITSPERMAP (1U << BINMAPSHIFT)
2285#define BINMAPSIZE (NBINS / BITSPERMAP)
2286
2287#define idx2block(i) ((i) >> BINMAPSHIFT)
2288#define idx2bit(i) ((1U << ((i) & ((1U << BINMAPSHIFT)-1))))
2289
2290#define mark_bin(m,i) ((m)->binmap[idx2block(i)] |= idx2bit(i))
2291#define unmark_bin(m,i) ((m)->binmap[idx2block(i)] &= ~(idx2bit(i)))
2292#define get_binmap(m,i) ((m)->binmap[idx2block(i)] & idx2bit(i))
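/*
  For example, bin 40 lives at bit 8 of map word 1:

      idx2block(40) == 1           // 40 >> 5
      idx2bit(40)   == (1U << 8)   // 40 & 31 == 8
      mark_bin(av, 40);            // sets that bit in av->binmap[1]
*/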
2293
2294/*
2295 Fastbins
2296
2297 An array of lists holding recently freed small chunks. Fastbins
2298 are not doubly linked. It is faster to single-link them, and
2299 since chunks are never removed from the middles of these lists,
2300 double linking is not necessary. Also, unlike regular bins, they
2301 are not even processed in FIFO order (they use faster LIFO) since
2302 ordering doesn't much matter in the transient contexts in which
2303 fastbins are normally used.
2304
2305 Chunks in fastbins keep their inuse bit set, so they cannot
2306 be consolidated with other free chunks. malloc_consolidate
2307 releases all chunks in fastbins and consolidates them with
2308 other free chunks.
2309*/
2310
2311typedef struct malloc_chunk* mfastbinptr;
2312
2313/* offset 2 to use otherwise unindexable first 2 bins */
2314#define fastbin_index(sz) ((((unsigned int)(sz)) >> 3) - 2)
2315
2316/* The maximum fastbin request size we support */
2317#define MAX_FAST_SIZE 80
2318
2319#define NFASTBINS (fastbin_index(request2size(MAX_FAST_SIZE))+1)
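/*
  With a 4-byte INTERNAL_SIZE_T and 8-byte alignment this works out to
  NFASTBINS == 10: chunk sizes 16, 24, ..., 88 map to indices 0..9, e.g.

      fastbin_index(16) == 0
      fastbin_index(request2size(MAX_FAST_SIZE)) == 9   // 88-byte chunk
*/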
2320
2321/*
2322 FASTBIN_CONSOLIDATION_THRESHOLD is the size of a chunk in free()
2323 that triggers automatic consolidation of possibly-surrounding
2324 fastbin chunks. This is a heuristic, so the exact value should not
2325 matter too much. It is defined at half the default trim threshold as a
2326 compromise heuristic to only attempt consolidation if it is likely
2327 to lead to trimming. However, it is not dynamically tunable, since
2328 consolidation reduces fragmentation surrounding large chunks even
2329 if trimming is not used.
2330*/
2331
2332#define FASTBIN_CONSOLIDATION_THRESHOLD \
2333 ((unsigned long)(DEFAULT_TRIM_THRESHOLD) >> 1)
2334
2335/*
2336 Since the lowest 2 bits in max_fast don't matter in size comparisons,
2337 they are used as flags.
2338*/
2339
2340/*
2341 ANYCHUNKS_BIT held in max_fast indicates that there may be any
2342 freed chunks at all. It is set true when entering a chunk into any
2343 bin.
2344*/
2345
2346#define ANYCHUNKS_BIT (1U)
2347
2348#define have_anychunks(M) (((M)->max_fast & ANYCHUNKS_BIT))
2349#define set_anychunks(M) ((M)->max_fast |= ANYCHUNKS_BIT)
2350#define clear_anychunks(M) ((M)->max_fast &= ~ANYCHUNKS_BIT)
2351
2352/*
2353 FASTCHUNKS_BIT held in max_fast indicates that there are probably
2354 some fastbin chunks. It is set true on entering a chunk into any
2355 fastbin, and cleared only in malloc_consolidate.
2356*/
2357
2358#define FASTCHUNKS_BIT (2U)
2359
2360#define have_fastchunks(M) (((M)->max_fast & FASTCHUNKS_BIT))
2361#define set_fastchunks(M) ((M)->max_fast |= (FASTCHUNKS_BIT|ANYCHUNKS_BIT))
2362#define clear_fastchunks(M) ((M)->max_fast &= ~(FASTCHUNKS_BIT))
2363
2364/*
2365 Set value of max_fast.
2366 Use impossibly small value if 0.
2367*/
2368
2369#define set_max_fast(M, s) \
2370 (M)->max_fast = (((s) == 0)? SMALLBIN_WIDTH: request2size(s)) | \
2371 ((M)->max_fast & (FASTCHUNKS_BIT|ANYCHUNKS_BIT))
2372
2373#define get_max_fast(M) \
2374 ((M)->max_fast & ~(FASTCHUNKS_BIT | ANYCHUNKS_BIT))
2375
2376
2377/*
2378 morecore_properties is a status word holding dynamically discovered
2379 or controlled properties of the morecore function
2380*/
2381
2382#define MORECORE_CONTIGUOUS_BIT (1U)
2383
2384#define contiguous(M) \
2385 (((M)->morecore_properties & MORECORE_CONTIGUOUS_BIT))
2386#define noncontiguous(M) \
2387 (((M)->morecore_properties & MORECORE_CONTIGUOUS_BIT) == 0)
2388#define set_contiguous(M) \
2389 ((M)->morecore_properties |= MORECORE_CONTIGUOUS_BIT)
2390#define set_noncontiguous(M) \
2391 ((M)->morecore_properties &= ~MORECORE_CONTIGUOUS_BIT)
2392
2393
2394/*
2395 ----------- Internal state representation and initialization -----------
2396*/
2397
2398struct malloc_state {
2399
2400 /* The maximum chunk size to be eligible for fastbin */
2401 INTERNAL_SIZE_T max_fast; /* low 2 bits used as flags */
2402
2403 /* Fastbins */
2404 mfastbinptr fastbins[NFASTBINS];
2405
2406 /* Base of the topmost chunk -- not otherwise kept in a bin */
2407 mchunkptr top;
2408
2409 /* The remainder from the most recent split of a small request */
2410 mchunkptr last_remainder;
2411
2412 /* Normal bins packed as described above */
2413 mchunkptr bins[NBINS * 2];
2414
2415 /* Bitmap of bins. Trailing zero map handles cases of largest binned size */
2416 unsigned int binmap[BINMAPSIZE+1];
2417
2418 /* Tunable parameters */
2419 CHUNK_SIZE_T trim_threshold;
2420 INTERNAL_SIZE_T top_pad;
2421 INTERNAL_SIZE_T mmap_threshold;
2422
2423 /* Memory map support */
2424 int n_mmaps;
2425 int n_mmaps_max;
2426 int max_n_mmaps;
2427
2428 /* Cache malloc_getpagesize */
2429 unsigned int pagesize;
2430
2431 /* Track properties of MORECORE */
2432 unsigned int morecore_properties;
2433
2434 /* Statistics */
2435 INTERNAL_SIZE_T mmapped_mem;
2436 INTERNAL_SIZE_T sbrked_mem;
2437 INTERNAL_SIZE_T max_sbrked_mem;
2438 INTERNAL_SIZE_T max_mmapped_mem;
2439 INTERNAL_SIZE_T max_total_mem;
2440};
2441
2442typedef struct malloc_state *mstate;
2443
2444/*
2445 There is exactly one instance of this struct in this malloc.
2446 If you are adapting this malloc in a way that does NOT use a static
2447 malloc_state, you MUST explicitly zero-fill it before using. This
2448 malloc relies on the property that malloc_state is initialized to
2449 all zeroes (as is true of C statics).
2450*/
2451
2452static struct malloc_state av_; /* never directly referenced */
2453
2454/*
2455 All uses of av_ are via get_malloc_state().
2456 At most one "call" to get_malloc_state is made per invocation of
2457 the public versions of malloc and free, but other routines
2458 that in turn invoke malloc and/or free may call it more than once.
2459 Also, it is called in check* routines if DEBUG is set.
2460*/
2461
2462#define get_malloc_state() (&(av_))
2463
2464/*
2465 Initialize a malloc_state struct.
2466
2467 This is called only from within malloc_consolidate, which needs
2468 to be called in the same contexts anyway. It is never called directly
2469 outside of malloc_consolidate because some optimizing compilers try
2470 to inline it at all call points, which turns out not to be an
2471 optimization at all. (Inlining it in malloc_consolidate is fine though.)
2472*/
2473
2474#if __STD_C
2475static void malloc_init_state(mstate av)
2476#else
2477static void malloc_init_state(av) mstate av;
2478#endif
2479{
2480 int i;
2481 mbinptr bin;
2482
2483 /* Establish circular links for normal bins */
2484 for (i = 1; i < NBINS; ++i) {
2485 bin = bin_at(av,i);
2486 bin->fd = bin->bk = bin;
2487 }
2488
2489 av->top_pad = DEFAULT_TOP_PAD;
2490 av->n_mmaps_max = DEFAULT_MMAP_MAX;
2491 av->mmap_threshold = DEFAULT_MMAP_THRESHOLD;
2492 av->trim_threshold = DEFAULT_TRIM_THRESHOLD;
2493
2494#if MORECORE_CONTIGUOUS
2495 set_contiguous(av);
2496#else
2497 set_noncontiguous(av);
2498#endif
2499
2500
2501 set_max_fast(av, DEFAULT_MXFAST);
2502
2503 av->top = initial_top(av);
2504 av->pagesize = malloc_getpagesize;
2505}
2506
2507/*
2508 Other internal utilities operating on mstates
2509*/
2510
2511#if __STD_C
2512static Void_t* sYSMALLOc(INTERNAL_SIZE_T, mstate);
2513static int sYSTRIm(size_t, mstate);
2514static void malloc_consolidate(mstate);
2515#ifdef NEED_INDEPENDENT
2516static Void_t** iALLOc(size_t, size_t*, int, Void_t**);
2517#endif
2518#else
2519static Void_t* sYSMALLOc();
2520static int sYSTRIm();
2521static void malloc_consolidate();
2522#ifdef NEED_INDEPENDENT
2523static Void_t** iALLOc();
2524#endif
2525#endif
2526
2527/*
2528 Debugging support
2529
2530 These routines make a number of assertions about the states
2531 of data structures that should be true at all times. If any
2532 are not true, it's very likely that a user program has somehow
2533 trashed memory. (It's also possible that there is a coding error
2534 in malloc. In which case, please report it!)
2535*/
2536
2537#if ! DEBUG
2538
2539#define check_chunk(P)
2540#define check_free_chunk(P)
2541#define check_inuse_chunk(P)
2542#define check_remalloced_chunk(P,N)
2543#define check_malloced_chunk(P,N)
2544#define check_malloc_state()
2545
2546#else
2547#define check_chunk(P) do_check_chunk(P)
2548#define check_free_chunk(P) do_check_free_chunk(P)
2549#define check_inuse_chunk(P) do_check_inuse_chunk(P)
2550#define check_remalloced_chunk(P,N) do_check_remalloced_chunk(P,N)
2551#define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N)
2552#define check_malloc_state() do_check_malloc_state()
2553
2554/*
2555 Properties of all chunks
2556*/
2557
2558#if __STD_C
2559static void do_check_chunk(mchunkptr p)
2560#else
2561static void do_check_chunk(p) mchunkptr p;
2562#endif
2563{
2564 mstate av = get_malloc_state();
2565 CHUNK_SIZE_T sz = chunksize(p);
2566 /* min and max possible addresses assuming contiguous allocation */
2567 char* max_address = (char*)(av->top) + chunksize(av->top);
2568 char* min_address = max_address - av->sbrked_mem;
2569
2570 if (!chunk_is_mmapped(p)) {
2571
2572 /* Has legal address ... */
2573 if (p != av->top) {
2574 if (contiguous(av)) {
2575 assert(((char*)p) >= min_address);
2576 assert(((char*)p + sz) <= ((char*)(av->top)));
2577 }
2578 }
2579 else {
2580 /* top size is always at least MINSIZE */
2581 assert((CHUNK_SIZE_T)(sz) >= MINSIZE);
2582 /* top predecessor always marked inuse */
2583 assert(prev_inuse(p));
2584 }
2585
2586 }
2587 else {
2588#if HAVE_MMAP
2589 /* address is outside main heap */
2590 if (contiguous(av) && av->top != initial_top(av)) {
2591 assert(((char*)p) < min_address || ((char*)p) > max_address);
2592 }
2593 /* chunk is page-aligned */
2594 assert(((p->prev_size + sz) & (av->pagesize-1)) == 0);
2595 /* mem is aligned */
2596 assert(aligned_OK(chunk2mem(p)));
2597#else
2598 /* force an appropriate assert violation if debug set */
2599 assert(!chunk_is_mmapped(p));
2600#endif
2601 }
2602}
2603
2604/*
2605 Properties of free chunks
2606*/
2607
2608#if __STD_C
2609static void do_check_free_chunk(mchunkptr p)
2610#else
2611static void do_check_free_chunk(p) mchunkptr p;
2612#endif
2613{
2614 mstate av = get_malloc_state();
2615
2616 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2617 mchunkptr next = chunk_at_offset(p, sz);
2618
2619 do_check_chunk(p);
2620
2621 /* Chunk must claim to be free ... */
2622 assert(!inuse(p));
2623 assert (!chunk_is_mmapped(p));
2624
2625 /* Unless a special marker, must have OK fields */
2626 if ((CHUNK_SIZE_T)(sz) >= MINSIZE)
2627 {
2628 assert((sz & MALLOC_ALIGN_MASK) == 0);
2629 assert(aligned_OK(chunk2mem(p)));
2630 /* ... matching footer field */
2631 assert(next->prev_size == sz);
2632 /* ... and is fully consolidated */
2633 assert(prev_inuse(p));
2634 assert (next == av->top || inuse(next));
2635
2636 /* ... and has minimally sane links */
2637 assert(p->fd->bk == p);
2638 assert(p->bk->fd == p);
2639 }
2640 else /* markers are always of size SIZE_SZ */
2641 assert(sz == SIZE_SZ);
2642}
2643
2644/*
2645 Properties of inuse chunks
2646*/
2647
2648#if __STD_C
2649static void do_check_inuse_chunk(mchunkptr p)
2650#else
2651static void do_check_inuse_chunk(p) mchunkptr p;
2652#endif
2653{
2654 mstate av = get_malloc_state();
2655 mchunkptr next;
2656 do_check_chunk(p);
2657
2658 if (chunk_is_mmapped(p))
2659 return; /* mmapped chunks have no next/prev */
2660
2661 /* Check whether it claims to be in use ... */
2662 assert(inuse(p));
2663
2664 next = next_chunk(p);
2665
2666 /* ... and is surrounded by OK chunks.
2667 Since more things can be checked with free chunks than inuse ones,
2668 if an inuse chunk borders them and debug is on, it's worth checking them.
2669 */
2670 if (!prev_inuse(p)) {
2671 /* Note that we cannot even look at prev unless it is not inuse */
2672 mchunkptr prv = prev_chunk(p);
2673 assert(next_chunk(prv) == p);
2674 do_check_free_chunk(prv);
2675 }
2676
2677 if (next == av->top) {
2678 assert(prev_inuse(next));
2679 assert(chunksize(next) >= MINSIZE);
2680 }
2681 else if (!inuse(next))
2682 do_check_free_chunk(next);
2683}
2684
2685/*
2686 Properties of chunks recycled from fastbins
2687*/
2688
2689#if __STD_C
2690static void do_check_remalloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
2691#else
2692static void do_check_remalloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
2693#endif
2694{
2695 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2696
2697 do_check_inuse_chunk(p);
2698
2699 /* Legal size ... */
2700 assert((sz & MALLOC_ALIGN_MASK) == 0);
2701 assert((CHUNK_SIZE_T)(sz) >= MINSIZE);
2702 /* ... and alignment */
2703 assert(aligned_OK(chunk2mem(p)));
2704 /* chunk is less than MINSIZE more than request */
2705 assert((long)(sz) - (long)(s) >= 0);
2706 assert((long)(sz) - (long)(s + MINSIZE) < 0);
2707}
2708
2709/*
2710 Properties of nonrecycled chunks at the point they are malloced
2711*/
2712
2713#if __STD_C
2714static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
2715#else
2716static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
2717#endif
2718{
2719 /* same as recycled case ... */
2720 do_check_remalloced_chunk(p, s);
2721
2722 /*
2723 ... plus, must obey implementation invariant that prev_inuse is
2724 always true of any allocated chunk; i.e., that each allocated
2725 chunk borders either a previously allocated and still in-use
2726 chunk, or the base of its memory arena. This is ensured
2727 by making all allocations from the `lowest' part of any found
2728 chunk. This does not necessarily hold however for chunks
2729 recycled via fastbins.
2730 */
2731
2732 assert(prev_inuse(p));
2733}
2734
2735
2736/*
2737 Properties of malloc_state.
2738
2739 This may be useful for debugging malloc, as well as detecting user
2740 programmer errors that somehow write into malloc_state.
2741
2742 If you are extending or experimenting with this malloc, you can
2743 probably figure out how to hack this routine to print out or
2744 display chunk addresses, sizes, bins, and other instrumentation.
2745*/
2746
2747static void do_check_malloc_state()
2748{
2749 mstate av = get_malloc_state();
2750 int i;
2751 mchunkptr p;
2752 mchunkptr q;
2753 mbinptr b;
2754 unsigned int binbit;
2755 int empty;
2756 unsigned int idx;
2757 INTERNAL_SIZE_T size;
2758 CHUNK_SIZE_T total = 0;
2759 int max_fast_bin;
2760
2761 /* internal size_t must be no wider than pointer type */
2762 assert(sizeof(INTERNAL_SIZE_T) <= sizeof(char*));
2763
2764 /* alignment is a power of 2 */
2765 assert((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-1)) == 0);
2766
2767 /* cannot run remaining checks until fully initialized */
2768 if (av->top == 0 || av->top == initial_top(av))
2769 return;
2770
2771 /* pagesize is a power of 2 */
2772 assert((av->pagesize & (av->pagesize-1)) == 0);
2773
2774 /* properties of fastbins */
2775
2776 /* max_fast is in allowed range */
2777 assert(get_max_fast(av) <= request2size(MAX_FAST_SIZE));
2778
2779 max_fast_bin = fastbin_index(av->max_fast);
2780
2781 for (i = 0; i < NFASTBINS; ++i) {
2782 p = av->fastbins[i];
2783
2784 /* all bins past max_fast are empty */
2785 if (i > max_fast_bin)
2786 assert(p == 0);
2787
2788 while (p != 0) {
2789 /* each chunk claims to be inuse */
2790 do_check_inuse_chunk(p);
2791 total += chunksize(p);
2792 /* chunk belongs in this bin */
2793 assert(fastbin_index(chunksize(p)) == i);
2794 p = p->fd;
2795 }
2796 }
2797
2798 if (total != 0)
2799 assert(have_fastchunks(av));
2800 else if (!have_fastchunks(av))
2801 assert(total == 0);
2802
2803 /* check normal bins */
2804 for (i = 1; i < NBINS; ++i) {
2805 b = bin_at(av,i);
2806
2807 /* binmap is accurate (except for bin 1 == unsorted_chunks) */
2808 if (i >= 2) {
2809 binbit = get_binmap(av,i);
2810 empty = last(b) == b;
2811 if (!binbit)
2812 assert(empty);
2813 else if (!empty)
2814 assert(binbit);
2815 }
2816
2817 for (p = last(b); p != b; p = p->bk) {
2818 /* each chunk claims to be free */
2819 do_check_free_chunk(p);
2820 size = chunksize(p);
2821 total += size;
2822 if (i >= 2) {
2823 /* chunk belongs in bin */
2824 idx = bin_index(size);
2825 assert(idx == i);
2826 /* lists are sorted */
2827 if ((CHUNK_SIZE_T) size >= (CHUNK_SIZE_T)(FIRST_SORTED_BIN_SIZE)) {
2828 assert(p->bk == b ||
2829 (CHUNK_SIZE_T)chunksize(p->bk) >=
2830 (CHUNK_SIZE_T)chunksize(p));
2831 }
2832 }
2833 /* chunk is followed by a legal chain of inuse chunks */
2834 for (q = next_chunk(p);
2835 (q != av->top && inuse(q) &&
2836 (CHUNK_SIZE_T)(chunksize(q)) >= MINSIZE);
2837 q = next_chunk(q))
2838 do_check_inuse_chunk(q);
2839 }
2840 }
2841
2842 /* top chunk is OK */
2843 check_chunk(av->top);
2844
2845 /* sanity checks for statistics */
2846
2847 assert(total <= (CHUNK_SIZE_T)(av->max_total_mem));
2848 assert(av->n_mmaps >= 0);
2849 assert(av->n_mmaps <= av->max_n_mmaps);
2850
2851 assert((CHUNK_SIZE_T)(av->sbrked_mem) <=
2852 (CHUNK_SIZE_T)(av->max_sbrked_mem));
2853
2854 assert((CHUNK_SIZE_T)(av->mmapped_mem) <=
2855 (CHUNK_SIZE_T)(av->max_mmapped_mem));
2856
2857 assert((CHUNK_SIZE_T)(av->max_total_mem) >=
2858 (CHUNK_SIZE_T)(av->mmapped_mem) + (CHUNK_SIZE_T)(av->sbrked_mem));
2859}
2860#endif
2861
2862
2863/* ----------- Routines dealing with system allocation -------------- */
2864
2865/*
2866 sysmalloc handles malloc cases requiring more memory from the system.
2867 On entry, it is assumed that av->top does not have enough
2868 space to service request for nb bytes, thus requiring that av->top
2869 be extended or replaced.
2870*/
2871
2872#if __STD_C
2873static Void_t* sYSMALLOc(INTERNAL_SIZE_T nb, mstate av)
2874#else
2875static Void_t* sYSMALLOc(nb, av) INTERNAL_SIZE_T nb; mstate av;
2876#endif
2877{
2878 mchunkptr old_top; /* incoming value of av->top */
2879 INTERNAL_SIZE_T old_size; /* its size */
2880 char* old_end; /* its end address */
2881
2882 long size; /* arg to first MORECORE or mmap call */
2883 char* brk; /* return value from MORECORE */
2884
2885 long correction; /* arg to 2nd MORECORE call */
2886 char* snd_brk; /* 2nd return val */
2887
2888 INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of new space */
2889 INTERNAL_SIZE_T end_misalign; /* partial page left at end of new space */
2890 char* aligned_brk; /* aligned offset into brk */
2891
2892 mchunkptr p; /* the allocated/returned chunk */
2893 mchunkptr remainder; /* remainder from allocation */
2894 CHUNK_SIZE_T remainder_size; /* its size */
2895
2896 CHUNK_SIZE_T sum; /* for updating stats */
2897
2898 size_t pagemask = av->pagesize - 1;
2899
2900 /*
2901 If there is space available in fastbins, consolidate and retry
2902 malloc from scratch rather than getting memory from system. This
2903 can occur only if nb is in smallbin range so we didn't consolidate
2904 upon entry to malloc. It is much easier to handle this case here
2905 than in malloc proper.
2906 */
2907
2908 if (have_fastchunks(av)) {
2909 assert(in_smallbin_range(nb));
2910 malloc_consolidate(av);
2911 return mALLOc(nb - MALLOC_ALIGN_MASK);
2912 }
2913
2914
2915#if HAVE_MMAP
2916
2917 /*
2918 If mmap is available, the request size meets the mmap threshold,
2919 and there are few enough currently allocated mmapped regions,
2920 try to directly map this request
2921 rather than expanding top.
2922 */
2923
2924 if ((CHUNK_SIZE_T)(nb) >= (CHUNK_SIZE_T)(av->mmap_threshold) &&
2925 (av->n_mmaps < av->n_mmaps_max)) {
2926
2927 char* mm; /* return value from mmap call*/
2928
2929 /*
2930 Round up size to nearest page. For mmapped chunks, the overhead
2931 is one SIZE_SZ unit larger than for normal chunks, because there
2932 is no following chunk whose prev_size field could be used.
2933 */
2934 size = (nb + SIZE_SZ + MALLOC_ALIGN_MASK + pagemask) & ~pagemask;
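    /*
      Illustrative numbers (assuming 4K pages and 4-byte size fields):
      nb == 300000 becomes size == 303104, i.e. 74 whole pages.
    */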
2935
2936 /* Don't try if size wraps around 0 */
2937 if ((CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb)) {
2938
2939 mm = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));
2940
2941 if (mm != (char*)(MORECORE_FAILURE)) {
2942
2943 /*
2944 The offset to the start of the mmapped region is stored
2945 in the prev_size field of the chunk. This allows us to adjust
2946 returned start address to meet alignment requirements here
2947 and in memalign(), and still be able to compute proper
2948 address argument for later munmap in free() and realloc().
2949 */
2950
2951 front_misalign = (INTERNAL_SIZE_T)chunk2mem(mm) & MALLOC_ALIGN_MASK;
2952 if (front_misalign > 0) {
2953 correction = MALLOC_ALIGNMENT - front_misalign;
2954 p = (mchunkptr)(mm + correction);
2955 p->prev_size = correction;
2956 set_head(p, (size - correction) |IS_MMAPPED);
2957 }
2958 else {
2959 p = (mchunkptr)mm;
2960 p->prev_size = 0;
2961 set_head(p, size|IS_MMAPPED);
2962 }
2963
2964 /* update statistics */
2965
2966 if (++av->n_mmaps > av->max_n_mmaps)
2967 av->max_n_mmaps = av->n_mmaps;
2968
2969 sum = av->mmapped_mem += size;
2970 if (sum > (CHUNK_SIZE_T)(av->max_mmapped_mem))
2971 av->max_mmapped_mem = sum;
2972 sum += av->sbrked_mem;
2973 if (sum > (CHUNK_SIZE_T)(av->max_total_mem))
2974 av->max_total_mem = sum;
2975
2976 check_chunk(p);
2977
2978 return chunk2mem(p);
2979 }
2980 }
2981 }
2982#endif
2983
2984 /* Record incoming configuration of top */
2985
2986 old_top = av->top;
2987 old_size = chunksize(old_top);
2988 old_end = (char*)(chunk_at_offset(old_top, old_size));
2989
2990 brk = snd_brk = (char*)(MORECORE_FAILURE);
2991
2992 /*
2993 If not the first time through, we require old_size to be
2994 at least MINSIZE and to have prev_inuse set.
2995 */
2996
2997 assert((old_top == initial_top(av) && old_size == 0) ||
2998 ((CHUNK_SIZE_T) (old_size) >= MINSIZE &&
2999 prev_inuse(old_top)));
3000
3001 /* Precondition: not enough current space to satisfy nb request */
3002 assert((CHUNK_SIZE_T)(old_size) < (CHUNK_SIZE_T)(nb + MINSIZE));
3003
3004 /* Precondition: all fastbins are consolidated */
3005 assert(!have_fastchunks(av));
3006
3007
3008 /* Request enough space for nb + pad + overhead */
3009
3010 size = nb + av->top_pad + MINSIZE;
3011
3012 /*
3013 If contiguous, we can subtract out existing space that we hope to
3014 combine with new space. We add it back later only if
3015 we don't actually get contiguous space.
3016 */
3017
3018 if (contiguous(av))
3019 size -= old_size;
3020
3021 /*
3022 Round to a multiple of page size.
3023 If MORECORE is not contiguous, this ensures that we only call it
3024 with whole-page arguments. And if MORECORE is contiguous and
3025 this is not first time through, this preserves page-alignment of
3026 previous calls. Otherwise, we correct to page-align below.
3027 */
3028
3029 size = (size + pagemask) & ~pagemask;
3030
3031 /*
3032 Don't try to call MORECORE if argument is so big as to appear
3033 negative. Note that since mmap takes size_t arg, it may succeed
3034 below even if we cannot call MORECORE.
3035 */
3036
3037 if (size > 0)
3038 brk = (char*)(MORECORE(size));
3039
3040 /*
3041 If we have mmap, try using it as a backup when MORECORE fails or
3042 cannot be used. This is worth doing on systems that have "holes" in
3043 address space, so sbrk cannot extend to give contiguous space, but
3044 space is available elsewhere. Note that we ignore mmap max count
3045 and threshold limits, since the space will not be used as a
3046 segregated mmap region.
3047 */
3048
3049#if HAVE_MMAP
3050 if (brk == (char*)(MORECORE_FAILURE)) {
3051
3052 /* Cannot merge with old top, so add its size back in */
3053 if (contiguous(av))
3054 size = (size + old_size + pagemask) & ~pagemask;
3055
3056 /* If we are relying on mmap as backup, then use larger units */
3057 if ((CHUNK_SIZE_T)(size) < (CHUNK_SIZE_T)(MMAP_AS_MORECORE_SIZE))
3058 size = MMAP_AS_MORECORE_SIZE;
3059
3060 /* Don't try if size wraps around 0 */
3061 if ((CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb)) {
3062
3063 brk = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));
3064
3065 if (brk != (char*)(MORECORE_FAILURE)) {
3066
3067 /* We do not need, and cannot use, another sbrk call to find end */
3068 snd_brk = brk + size;
3069
3070 /*
3071 Record that we no longer have a contiguous sbrk region.
3072 After the first time mmap is used as backup, we do not
3073 ever rely on contiguous space since this could incorrectly
3074 bridge regions.
3075 */
3076 set_noncontiguous(av);
3077 }
3078 }
3079 }
3080#endif
3081
3082 if (brk != (char*)(MORECORE_FAILURE)) {
3083 av->sbrked_mem += size;
3084
3085 /*
3086 If MORECORE extends previous space, we can likewise extend top size.
3087 */
3088
3089 if (brk == old_end && snd_brk == (char*)(MORECORE_FAILURE)) {
3090 set_head(old_top, (size + old_size) | PREV_INUSE);
3091 }
3092
3093 /*
3094 Otherwise, make adjustments:
3095
3096 * If the first time through or noncontiguous, we need to call sbrk
3097 just to find out where the end of memory lies.
3098
3099 * We need to ensure that all returned chunks from malloc will meet
3100 MALLOC_ALIGNMENT
3101
3102 * If there was an intervening foreign sbrk, we need to adjust sbrk
3103 request size to account for fact that we will not be able to
3104 combine new space with existing space in old_top.
3105
3106 * Almost all systems internally allocate whole pages at a time, in
3107 which case we might as well use the whole last page of request.
3108 So we allocate enough more memory to hit a page boundary now,
3109 which in turn causes future contiguous calls to page-align.
3110 */
3111
3112 else {
3113 front_misalign = 0;
3114 end_misalign = 0;
3115 correction = 0;
3116 aligned_brk = brk;
3117
3118 /*
3119 If MORECORE returns an address lower than we have seen before,
3120 we know it isn't really contiguous. This and some subsequent
3121 checks help cope with non-conforming MORECORE functions and
3122 the presence of "foreign" calls to MORECORE from outside of
3123 malloc or by other threads. We cannot guarantee to detect
3124 these in all cases, but cope with the ones we do detect.
3125 */
3126 if (contiguous(av) && old_size != 0 && brk < old_end) {
3127 set_noncontiguous(av);
3128 }
3129
3130 /* handle contiguous cases */
3131 if (contiguous(av)) {
3132
3133 /*
3134 We can tolerate forward non-contiguities here (usually due
3135 to foreign calls) but treat them as part of our space for
3136 stats reporting.
3137 */
3138 if (old_size != 0)
3139 av->sbrked_mem += brk - old_end;
3140
3141 /* Guarantee alignment of first new chunk made from this space */
3142
3143 front_misalign = (INTERNAL_SIZE_T)chunk2mem(brk) & MALLOC_ALIGN_MASK;
3144 if (front_misalign > 0) {
3145
3146 /*
3147 Skip over some bytes to arrive at an aligned position.
3148 We don't need to specially mark these wasted front bytes.
3149 They will never be accessed anyway because
3150 prev_inuse of av->top (and any chunk created from its start)
3151 is always true after initialization.
3152 */
3153
3154 correction = MALLOC_ALIGNMENT - front_misalign;
3155 aligned_brk += correction;
3156 }
3157
3158 /*
3159 If this isn't adjacent to existing space, then we will not
3160 be able to merge with old_top space, so must add to 2nd request.
3161 */
3162
3163 correction += old_size;
3164
3165 /* Extend the end address to hit a page boundary */
3166 end_misalign = (INTERNAL_SIZE_T)(brk + size + correction);
3167 correction += ((end_misalign + pagemask) & ~pagemask) - end_misalign;
3168
3169 assert(correction >= 0);
3170 snd_brk = (char*)(MORECORE(correction));
3171
3172 if (snd_brk == (char*)(MORECORE_FAILURE)) {
3173 /*
3174 If can't allocate correction, try to at least find out current
3175 brk. It might be enough to proceed without failing.
3176 */
3177 correction = 0;
3178 snd_brk = (char*)(MORECORE(0));
3179 }
3180 else if (snd_brk < brk) {
3181 /*
3182 If the second call gives noncontiguous space even though
3183 it says it won't, the only course of action is to ignore
3184 results of second call, and conservatively estimate where
3185 the first call left us. Also set noncontiguous, so this
3186 won't happen again, leaving at most one hole.
3187
3188 Note that this check is intrinsically incomplete. Because
3189 MORECORE is allowed to give more space than we ask for,
3190 there is no reliable way to detect a noncontiguity
3191 producing a forward gap for the second call.
3192 */
3193 snd_brk = brk + size;
3194 correction = 0;
3195 set_noncontiguous(av);
3196 }
3197
3198 }
3199
3200 /* handle non-contiguous cases */
3201 else {
3202 /* MORECORE/mmap must correctly align */
3203 assert(aligned_OK(chunk2mem(brk)));
3204
3205 /* Find out current end of memory */
3206 if (snd_brk == (char*)(MORECORE_FAILURE)) {
3207 snd_brk = (char*)(MORECORE(0));
3208 av->sbrked_mem += snd_brk - brk - size;
3209 }
3210 }
3211
3212 /* Adjust top based on results of second sbrk */
3213 if (snd_brk != (char*)(MORECORE_FAILURE)) {
3214 av->top = (mchunkptr)aligned_brk;
3215 set_head(av->top, (snd_brk - aligned_brk + correction) | PREV_INUSE);
3216 av->sbrked_mem += correction;
3217
3218 /*
3219 If not the first time through, we either have a
3220 gap due to foreign sbrk or a non-contiguous region. Insert a
3221 double fencepost at old_top to prevent consolidation with space
3222 we don't own. These fenceposts are artificial chunks that are
3223 marked as inuse and are in any case too small to use. We need
3224 two to make sizes and alignments work out.
3225 */
3226
3227 if (old_size != 0) {
3228 /*
3229 Shrink old_top to insert fenceposts, keeping size a
3230 multiple of MALLOC_ALIGNMENT. We know there is at least
3231 enough space in old_top to do this.
3232 */
3233 old_size = (old_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
3234 set_head(old_top, old_size | PREV_INUSE);
3235
3236 /*
3237 Note that the following assignments completely overwrite
3238 old_top when old_size was previously MINSIZE. This is
3239 intentional. We need the fencepost, even if old_top otherwise gets
3240 lost.
3241 */
3242 chunk_at_offset(old_top, old_size )->size =
3243 SIZE_SZ|PREV_INUSE;
3244
3245 chunk_at_offset(old_top, old_size + SIZE_SZ)->size =
3246 SIZE_SZ|PREV_INUSE;
3247
3248 /*
3249 If possible, release the rest, suppressing trimming.
3250 */
3251 if (old_size >= MINSIZE) {
3252 INTERNAL_SIZE_T tt = av->trim_threshold;
3253 av->trim_threshold = (INTERNAL_SIZE_T)(-1);
3254 fREe(chunk2mem(old_top));
3255 av->trim_threshold = tt;
3256 }
3257 }
3258 }
3259 }
3260
3261 /* Update statistics */
3262 sum = av->sbrked_mem;
3263 if (sum > (CHUNK_SIZE_T)(av->max_sbrked_mem))
3264 av->max_sbrked_mem = sum;
3265
3266 sum += av->mmapped_mem;
3267 if (sum > (CHUNK_SIZE_T)(av->max_total_mem))
3268 av->max_total_mem = sum;
3269
3270 check_malloc_state();
3271
3272 /* finally, do the allocation */
3273
3274 p = av->top;
3275 size = chunksize(p);
3276
3277 /* check that one of the above allocation paths succeeded */
3278 if ((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb + MINSIZE)) {
3279 remainder_size = size - nb;
3280 remainder = chunk_at_offset(p, nb);
3281 av->top = remainder;
3282 set_head(p, nb | PREV_INUSE);
3283 set_head(remainder, remainder_size | PREV_INUSE);
3284 check_malloced_chunk(p, nb);
3285 return chunk2mem(p);
3286 }
3287
3288 }
3289
3290 /* catch all failure paths */
3291 MALLOC_FAILURE_ACTION;
3292 return 0;
3293}
3294
3295
3296
3297#ifndef MORECORE_CANNOT_TRIM
3298/*
3299 sYSTRIm is an inverse of sorts to sYSMALLOc. It gives memory back
3300 to the system (via negative arguments to sbrk) if there is unused
3301 memory at the `high' end of the malloc pool. It is called
3302 automatically by free() when top space exceeds the trim
3303 threshold. It is also called by the public malloc_trim routine. It
3304 returns 1 if it actually released any memory, else 0.
3305*/
3306
3307#if __STD_C
3308static int sYSTRIm(size_t pad, mstate av)
3309#else
3310static int sYSTRIm(pad, av) size_t pad; mstate av;
3311#endif
3312{
3313 long top_size; /* Amount of top-most memory */
3314 long extra; /* Amount to release */
3315 long released; /* Amount actually released */
3316 char* current_brk; /* address returned by pre-check sbrk call */
3317 char* new_brk; /* address returned by post-check sbrk call */
3318 size_t pagesz;
3319
3320 pagesz = av->pagesize;
3321 top_size = chunksize(av->top);
3322
3323 /* Release in pagesize units, keeping at least one page */
3324 extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
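  /*
    Illustrative numbers (4K pages, pad == 0): with top_size == 40960
    (10 pages), extra == 36864 (9 pages), leaving one page of top.
  */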
3325
3326 if (extra > 0) {
3327
3328 /*
3329 Only proceed if end of memory is where we last set it.
3330 This avoids problems if there were foreign sbrk calls.
3331 */
3332 current_brk = (char*)(MORECORE(0));
3333 if (current_brk == (char*)(av->top) + top_size) {
3334
3335 /*
3336 Attempt to release memory. We ignore MORECORE return value,
3337 and instead call again to find out where new end of memory is.
3338 This avoids problems if first call releases less than we asked,
3339 or if failure somehow altered the brk value. (We could still
3340 encounter problems if it altered brk in some very bad way,
3341 but the only thing we can do is adjust anyway, which will cause
3342 some downstream failure.)
3343 */
3344
3345 MORECORE(-extra);
3346 new_brk = (char*)(MORECORE(0));
3347
3348 if (new_brk != (char*)MORECORE_FAILURE) {
3349 released = (long)(current_brk - new_brk);
3350
3351 if (released != 0) {
3352 /* Success. Adjust top. */
3353 av->sbrked_mem -= released;
3354 set_head(av->top, (top_size - released) | PREV_INUSE);
3355 check_malloc_state();
3356 return 1;
3357 }
3358 }
3359 }
3360 }
3361 return 0;
3362}
3363#endif /*MORECORE_CANNOT_TRIM*/
3364
3365/*
3366 ------------------------------ malloc ------------------------------
3367*/
3368
3369
3370#if __STD_C
3371Void_t* mALLOc(size_t bytes)
3372#else
3373 Void_t* mALLOc(bytes) size_t bytes;
3374#endif
3375{
3376 mstate av = get_malloc_state();
3377
3378 INTERNAL_SIZE_T nb; /* normalized request size */
3379 unsigned int idx; /* associated bin index */
3380 mbinptr bin; /* associated bin */
3381 mfastbinptr* fb; /* associated fastbin */
3382
3383 mchunkptr victim; /* inspected/selected chunk */
3384 INTERNAL_SIZE_T size; /* its size */
3385 int victim_index; /* its bin index */
3386
3387 mchunkptr remainder; /* remainder from a split */
3388 CHUNK_SIZE_T remainder_size; /* its size */
3389
3390 unsigned int block; /* bit map traverser */
3391 unsigned int bit; /* bit map traverser */
3392 unsigned int map; /* current word of binmap */
3393
3394 mchunkptr fwd; /* misc temp for linking */
3395 mchunkptr bck; /* misc temp for linking */
3396
3397 /*
3398 Convert request size to internal form by adding SIZE_SZ bytes
3399 overhead plus possibly more to obtain necessary alignment and/or
3400 to obtain a size of at least MINSIZE, the smallest allocatable
3401 size. Also, checked_request2size traps (returning 0) request sizes
3402 that are so large that they wrap around zero when padded and
3403 aligned.
3404 */
3405
3406 checked_request2size(bytes, nb);
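  /*
    Illustrative sizing example (assumes 4-byte INTERNAL_SIZE_T and
    8-byte alignment, so SIZE_SZ == 4 and MINSIZE == 16): a request of
    13 bytes becomes nb == 24 (13 + 4 bytes of overhead, rounded up to
    a multiple of 8), while a request of 1 byte becomes nb == 16, the
    minimum chunk size.
  */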
3407
3408 /*
3409 Bypass search if no frees yet
3410 */
3411 if (!have_anychunks(av)) {
3412 if (av->max_fast == 0) /* initialization check */
3413 malloc_consolidate(av);
3414 goto use_top;
3415 }
3416
3417 /*
3418 If the size qualifies as a fastbin, first check corresponding bin.
3419 */
3420
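  /*
    Fastbins are simple LIFO singly-linked lists, so a hit below is a
    constant-time pop of the most recently freed chunk of this size;
    no bin bookkeeping needs to be updated.
  */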
3421 if ((CHUNK_SIZE_T)(nb) <= (CHUNK_SIZE_T)(av->max_fast)) {
3422 fb = &(av->fastbins[(fastbin_index(nb))]);
3423 if ( (victim = *fb) != 0) {
3424 *fb = victim->fd;
3425 check_remalloced_chunk(victim, nb);
3426 return chunk2mem(victim);
3427 }
3428 }
3429
3430 /*
3431 If a small request, check regular bin. Since these "smallbins"
3432 hold one size each, no searching within bins is necessary.
3433 (For a large request, we need to wait until unsorted chunks are
3434 processed to find best fit. But for small ones, fits are exact
3435 anyway, so we can check now, which is faster.)
3436 */
3437
3438 if (in_smallbin_range(nb)) {
3439 idx = smallbin_index(nb);
3440 bin = bin_at(av,idx);
3441
3442 if ( (victim = last(bin)) != bin) {
3443 bck = victim->bk;
3444 set_inuse_bit_at_offset(victim, nb);
3445 bin->bk = bck;
3446 bck->fd = bin;
3447
3448 check_malloced_chunk(victim, nb);
3449 return chunk2mem(victim);
3450 }
3451 }
3452
3453 /*
3454 If this is a large request, consolidate fastbins before continuing.
3455 While it might look excessive to kill all fastbins before
3456 even seeing if there is space available, this avoids
3457 fragmentation problems normally associated with fastbins.
3458 Also, in practice, programs tend to have runs of either small or
3459 large requests, but less often mixtures, so consolidation is not
3460 invoked all that often in most programs. And the programs that
3461 it is called frequently in otherwise tend to fragment.
3462 */
3463
3464 else {
3465 idx = largebin_index(nb);
3466 if (have_fastchunks(av))
3467 malloc_consolidate(av);
3468 }
3469
3470 /*
3471 Process recently freed or remaindered chunks, taking one only if
3472 it is an exact fit, or, if this is a small request, the chunk is the remainder from
3473 the most recent non-exact fit. Place other traversed chunks in
3474 bins. Note that this step is the only place in any routine where
3475 chunks are placed in bins.
3476 */
3477
3478 while ( (victim = unsorted_chunks(av)->bk) != unsorted_chunks(av)) {
3479 bck = victim->bk;
3480 size = chunksize(victim);
3481
3482 /*
3483 If a small request, try to use last remainder if it is the
3484 only chunk in unsorted bin. This helps promote locality for
3485 runs of consecutive small requests. This is the only
3486 exception to best-fit, and applies only when there is
3487 no exact fit for a small chunk.
3488 */
3489
3490 if (in_smallbin_range(nb) &&
3491 bck == unsorted_chunks(av) &&
3492 victim == av->last_remainder &&
3493 (CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb + MINSIZE)) {
3494
3495 /* split and reattach remainder */
3496 remainder_size = size - nb;
3497 remainder = chunk_at_offset(victim, nb);
3498 unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
3499 av->last_remainder = remainder;
3500 remainder->bk = remainder->fd = unsorted_chunks(av);
3501
3502 set_head(victim, nb | PREV_INUSE);
3503 set_head(remainder, remainder_size | PREV_INUSE);
3504 set_foot(remainder, remainder_size);
3505
3506 check_malloced_chunk(victim, nb);
3507 return chunk2mem(victim);
3508 }
3509
3510 /* remove from unsorted list */
3511 unsorted_chunks(av)->bk = bck;
3512 bck->fd = unsorted_chunks(av);
3513
3514 /* Take now instead of binning if exact fit */
3515
3516 if (size == nb) {
3517 set_inuse_bit_at_offset(victim, size);
3518 check_malloced_chunk(victim, nb);
3519 return chunk2mem(victim);
3520 }
3521
3522 /* place chunk in bin */
3523
3524 if (in_smallbin_range(size)) {
3525 victim_index = smallbin_index(size);
3526 bck = bin_at(av, victim_index);
3527 fwd = bck->fd;
3528 }
3529 else {
3530 victim_index = largebin_index(size);
3531 bck = bin_at(av, victim_index);
3532 fwd = bck->fd;
3533
3534 if (fwd != bck) {
3535 /* if smaller than smallest, place first */
3536 if ((CHUNK_SIZE_T)(size) < (CHUNK_SIZE_T)(bck->bk->size)) {
3537 fwd = bck;
3538 bck = bck->bk;
3539 }
3540 else if ((CHUNK_SIZE_T)(size) >=
3541 (CHUNK_SIZE_T)(FIRST_SORTED_BIN_SIZE)) {
3542
3543 /* maintain large bins in sorted order */
3544 size |= PREV_INUSE; /* Or with inuse bit to speed comparisons */
3545 while ((CHUNK_SIZE_T)(size) < (CHUNK_SIZE_T)(fwd->size))
3546 fwd = fwd->fd;
3547 bck = fwd->bk;
3548 }
3549 }
3550 }
3551
3552 mark_bin(av, victim_index);
3553 victim->bk = bck;
3554 victim->fd = fwd;
3555 fwd->bk = victim;
3556 bck->fd = victim;
3557 }
3558
3559 /*
3560 If a large request, scan through the chunks of current bin to
3561 find one that fits. (This will be the smallest that fits unless
3562 FIRST_SORTED_BIN_SIZE has been changed from default.) This is
3563 the only step where an unbounded number of chunks might be
3564 scanned without doing anything useful with them. However the
3565 lists tend to be short.
3566 */
3567
3568 if (!in_smallbin_range(nb)) {
3569 bin = bin_at(av, idx);
3570
3571 for (victim = last(bin); victim != bin; victim = victim->bk) {
3572 size = chunksize(victim);
3573
3574 if ((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb)) {
3575 remainder_size = size - nb;
3576 unlink(victim, bck, fwd);
3577
3578 /* Exhaust */
3579 if (remainder_size < MINSIZE) {
3580 set_inuse_bit_at_offset(victim, size);
3581 check_malloced_chunk(victim, nb);
3582 return chunk2mem(victim);
3583 }
3584 /* Split */
3585 else {
3586 remainder = chunk_at_offset(victim, nb);
3587 unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
3588 remainder->bk = remainder->fd = unsorted_chunks(av);
3589 set_head(victim, nb | PREV_INUSE);
3590 set_head(remainder, remainder_size | PREV_INUSE);
3591 set_foot(remainder, remainder_size);
3592 check_malloced_chunk(victim, nb);
3593 return chunk2mem(victim);
3594 }
3595 }
3596 }
3597 }
3598
3599 /*
3600 Search for a chunk by scanning bins, starting with next largest
3601 bin. This search is strictly by best-fit; i.e., the smallest
3602 (with ties going to approximately the least recently used) chunk
3603 that fits is selected.
3604
3605 The bitmap avoids needing to check that most blocks are nonempty.
3606 */
3607
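  /*
    Sketch of the bitmap indexing, assuming the usual dlmalloc
    definitions (BINMAPSHIFT == 5, so each binmap word covers 32 bins):
    idx2block(i) is i >> 5 and idx2bit(i) is 1u << (i & 31). For
    example, bin index 70 lives in binmap word 2 as bit (1u << 6).
  */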
3608 ++idx;
3609 bin = bin_at(av,idx);
3610 block = idx2block(idx);
3611 map = av->binmap[block];
3612 bit = idx2bit(idx);
3613
3614 for (;;) {
3615
3616 /* Skip rest of block if there are no more set bits in this block. */
3617 if (bit > map || bit == 0) {
3618 do {
3619 if (++block >= BINMAPSIZE) /* out of bins */
3620 goto use_top;
3621 } while ( (map = av->binmap[block]) == 0);
3622
3623 bin = bin_at(av, (block << BINMAPSHIFT));
3624 bit = 1;
3625 }
3626
3627 /* Advance to bin with set bit. There must be one. */
3628 while ((bit & map) == 0) {
3629 bin = next_bin(bin);
3630 bit <<= 1;
3631 assert(bit != 0);
3632 }
3633
3634 /* Inspect the bin. It is likely to be non-empty */
3635 victim = last(bin);
3636
3637 /* If a false alarm (empty bin), clear the bit. */
3638 if (victim == bin) {
3639 av->binmap[block] = map &= ~bit; /* Write through */
3640 bin = next_bin(bin);
3641 bit <<= 1;
3642 }
3643
3644 else {
3645 size = chunksize(victim);
3646
3647 /* We know the first chunk in this bin is big enough to use. */
3648 assert((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb));
3649
3650 remainder_size = size - nb;
3651
3652 /* unlink */
3653 bck = victim->bk;
3654 bin->bk = bck;
3655 bck->fd = bin;
3656
3657 /* Exhaust */
3658 if (remainder_size < MINSIZE) {
3659 set_inuse_bit_at_offset(victim, size);
3660 check_malloced_chunk(victim, nb);
3661 return chunk2mem(victim);
3662 }
3663
3664 /* Split */
3665 else {
3666 remainder = chunk_at_offset(victim, nb);
3667
3668 unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
3669 remainder->bk = remainder->fd = unsorted_chunks(av);
3670 /* advertise as last remainder */
3671 if (in_smallbin_range(nb))
3672 av->last_remainder = remainder;
3673
3674 set_head(victim, nb | PREV_INUSE);
3675 set_head(remainder, remainder_size | PREV_INUSE);
3676 set_foot(remainder, remainder_size);
3677 check_malloced_chunk(victim, nb);
3678 return chunk2mem(victim);
3679 }
3680 }
3681 }
3682
3683 use_top:
3684 /*
3685 If large enough, split off the chunk bordering the end of memory
3686 (held in av->top). Note that this is in accord with the best-fit
3687 search rule. In effect, av->top is treated as larger (and thus
3688 less well fitting) than any other available chunk since it can
3689 be extended to be as large as necessary (up to system
3690 limitations).
3691
3692 We require that av->top always exists (i.e., has size >=
3693 MINSIZE) after initialization, so if it would otherwise be
3694 exhausted by the current request, it is replenished. (The main
3695 reason for ensuring it exists is that we may need MINSIZE space
3696 to put in fenceposts in sysmalloc.)
3697 */
3698
3699 victim = av->top;
3700 size = chunksize(victim);
3701
3702 if ((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb + MINSIZE)) {
3703 remainder_size = size - nb;
3704 remainder = chunk_at_offset(victim, nb);
3705 av->top = remainder;
3706 set_head(victim, nb | PREV_INUSE);
3707 set_head(remainder, remainder_size | PREV_INUSE);
3708
3709 check_malloced_chunk(victim, nb);
3710 return chunk2mem(victim);
3711 }
3712
3713 /*
3714 If no space in top, relay to handle system-dependent cases
3715 */
3716 return sYSMALLOc(nb, av);
3717}
3718
3719/*
3720 ------------------------------ free ------------------------------
3721*/
3722
3723#if __STD_C
3724void fREe(Void_t* mem)
3725#else
3726void fREe(mem) Void_t* mem;
3727#endif
3728{
3729 mstate av = get_malloc_state();
3730
3731 mchunkptr p; /* chunk corresponding to mem */
3732 INTERNAL_SIZE_T size; /* its size */
3733 mfastbinptr* fb; /* associated fastbin */
3734 mchunkptr nextchunk; /* next contiguous chunk */
3735 INTERNAL_SIZE_T nextsize; /* its size */
3736 int nextinuse; /* true if nextchunk is used */
3737 INTERNAL_SIZE_T prevsize; /* size of previous contiguous chunk */
3738 mchunkptr bck; /* misc temp for linking */
3739 mchunkptr fwd; /* misc temp for linking */
3740
3741 /* free(0) has no effect */
3742 if (mem != 0) {
3743 p = mem2chunk(mem);
3744 size = chunksize(p);
3745
3746 check_inuse_chunk(p);
3747
3748 /*
3749 If eligible, place chunk on a fastbin so it can be found
3750 and used quickly in malloc.
3751 */
3752
3753 if ((CHUNK_SIZE_T)(size) <= (CHUNK_SIZE_T)(av->max_fast)
3754
3755#if TRIM_FASTBINS
3756 /*
3757 If TRIM_FASTBINS set, don't place chunks
3758 bordering top into fastbins
3759 */
3760 && (chunk_at_offset(p, size) != av->top)
3761#endif
3762 ) {
3763
3764 set_fastchunks(av);
3765 fb = &(av->fastbins[fastbin_index(size)]);
3766 p->fd = *fb;
3767 *fb = p;
3768 }
3769
3770 /*
3771 Consolidate other non-mmapped chunks as they arrive.
3772 */
3773
3774 else if (!chunk_is_mmapped(p)) {
3775 set_anychunks(av);
3776
3777 nextchunk = chunk_at_offset(p, size);
3778 nextsize = chunksize(nextchunk);
3779
3780 /* consolidate backward */
3781 if (!prev_inuse(p)) {
3782 prevsize = p->prev_size;
3783 size += prevsize;
3784 p = chunk_at_offset(p, -((long) prevsize));
3785 unlink(p, bck, fwd);
3786 }
3787
3788 if (nextchunk != av->top) {
3789 /* get and clear inuse bit */
3790 nextinuse = inuse_bit_at_offset(nextchunk, nextsize);
3791 set_head(nextchunk, nextsize);
3792
3793 /* consolidate forward */
3794 if (!nextinuse) {
3795 unlink(nextchunk, bck, fwd);
3796 size += nextsize;
3797 }
3798
3799 /*
3800 Place the chunk in unsorted chunk list. Chunks are
3801 not placed into regular bins until after they have
3802 been given one chance to be used in malloc.
3803 */
3804
3805 bck = unsorted_chunks(av);
3806 fwd = bck->fd;
3807 p->bk = bck;
3808 p->fd = fwd;
3809 bck->fd = p;
3810 fwd->bk = p;
3811
3812 set_head(p, size | PREV_INUSE);
3813 set_foot(p, size);
3814
3815 check_free_chunk(p);
3816 }
3817
3818 /*
3819 If the chunk borders the current high end of memory,
3820 consolidate into top
3821 */
3822
3823 else {
3824 size += nextsize;
3825 set_head(p, size | PREV_INUSE);
3826 av->top = p;
3827 check_chunk(p);
3828 }
3829
3830 /*
3831 If freeing a large space, consolidate possibly-surrounding
3832 chunks. Then, if the total unused topmost memory exceeds trim
3833 threshold, ask malloc_trim to reduce top.
3834
3835 Unless max_fast is 0, we don't know if there are fastbins
3836 bordering top, so we cannot tell for sure whether threshold
3837 has been reached unless fastbins are consolidated. But we
3838 don't want to consolidate on each free. As a compromise,
3839 consolidation is performed if FASTBIN_CONSOLIDATION_THRESHOLD
3840 is reached.
3841 */
3842
3843 if ((CHUNK_SIZE_T)(size) >= FASTBIN_CONSOLIDATION_THRESHOLD) {
3844 if (have_fastchunks(av))
3845 malloc_consolidate(av);
3846
3847#ifndef MORECORE_CANNOT_TRIM
3848 if ((CHUNK_SIZE_T)(chunksize(av->top)) >=
3849 (CHUNK_SIZE_T)(av->trim_threshold))
3850 sYSTRIm(av->top_pad, av);
3851#endif
3852 }
3853
3854 }
3855 /*
3856 If the chunk was allocated via mmap, release via munmap()
3857 Note that if HAVE_MMAP is false but chunk_is_mmapped is
3858 true, then the user must have overwritten memory. There's nothing
3859 we can do to catch this error unless DEBUG is set, in which case
3860 check_inuse_chunk (above) will have triggered an error.
3861 */
3862
3863 else {
3864#if HAVE_MMAP
3865 int ret;
3866 INTERNAL_SIZE_T offset = p->prev_size;
3867 av->n_mmaps--;
3868 av->mmapped_mem -= (size + offset);
3869 ret = munmap((char*)p - offset, size + offset);
3870 /* munmap returns non-zero on failure */
3871 assert(ret == 0);
3872#endif
3873 }
3874 }
3875}
3876
3877/*
3878 ------------------------- malloc_consolidate -------------------------
3879
3880 malloc_consolidate is a specialized version of free() that tears
3881 down chunks held in fastbins. Free itself cannot be used for this
3882 purpose since, among other things, it might place chunks back onto
3883 fastbins. So, instead, we need to use a minor variant of the same
3884 code.
3885
3886 Also, because this routine needs to be called the first time through
3887 malloc anyway, it turns out to be the perfect place to trigger
3888 initialization code.
3889*/
3890
3891#if __STD_C
3892static void malloc_consolidate(mstate av)
3893#else
3894static void malloc_consolidate(av) mstate av;
3895#endif
3896{
3897 mfastbinptr* fb; /* current fastbin being consolidated */
3898 mfastbinptr* maxfb; /* last fastbin (for loop control) */
3899 mchunkptr p; /* current chunk being consolidated */
3900 mchunkptr nextp; /* next chunk to consolidate */
3901 mchunkptr unsorted_bin; /* bin header */
3902 mchunkptr first_unsorted; /* chunk to link to */
3903
3904 /* These have same use as in free() */
3905 mchunkptr nextchunk;
3906 INTERNAL_SIZE_T size;
3907 INTERNAL_SIZE_T nextsize;
3908 INTERNAL_SIZE_T prevsize;
3909 int nextinuse;
3910 mchunkptr bck;
3911 mchunkptr fwd;
3912
3913 /*
3914 If max_fast is 0, we know that av hasn't
3915 yet been initialized, in which case do so below
3916 */
3917
3918 if (av->max_fast != 0) {
3919 clear_fastchunks(av);
3920
3921 unsorted_bin = unsorted_chunks(av);
3922
3923 /*
3924 Remove each chunk from fast bin and consolidate it, placing it
3925 then in unsorted bin. Among other reasons for doing this,
3926 placing in unsorted bin avoids needing to calculate actual bins
3927 until malloc is sure that chunks aren't immediately going to be
3928 reused anyway.
3929 */
3930
3931 maxfb = &(av->fastbins[fastbin_index(av->max_fast)]);
3932 fb = &(av->fastbins[0]);
3933 do {
3934 if ( (p = *fb) != 0) {
3935 *fb = 0;
3936
3937 do {
3938 check_inuse_chunk(p);
3939 nextp = p->fd;
3940
3941 /* Slightly streamlined version of consolidation code in free() */
3942 size = p->size & ~PREV_INUSE;
3943 nextchunk = chunk_at_offset(p, size);
3944 nextsize = chunksize(nextchunk);
3945
3946 if (!prev_inuse(p)) {
3947 prevsize = p->prev_size;
3948 size += prevsize;
3949 p = chunk_at_offset(p, -((long) prevsize));
3950 unlink(p, bck, fwd);
3951 }
3952
3953 if (nextchunk != av->top) {
3954 nextinuse = inuse_bit_at_offset(nextchunk, nextsize);
3955 set_head(nextchunk, nextsize);
3956
3957 if (!nextinuse) {
3958 size += nextsize;
3959 unlink(nextchunk, bck, fwd);
3960 }
3961
3962 first_unsorted = unsorted_bin->fd;
3963 unsorted_bin->fd = p;
3964 first_unsorted->bk = p;
3965
3966 set_head(p, size | PREV_INUSE);
3967 p->bk = unsorted_bin;
3968 p->fd = first_unsorted;
3969 set_foot(p, size);
3970 }
3971
3972 else {
3973 size += nextsize;
3974 set_head(p, size | PREV_INUSE);
3975 av->top = p;
3976 }
3977
3978 } while ( (p = nextp) != 0);
3979
3980 }
3981 } while (fb++ != maxfb);
3982 }
3983 else {
3984 malloc_init_state(av);
3985 check_malloc_state();
3986 }
3987}
3988
3989/*
3990 ------------------------------ realloc ------------------------------
3991*/
3992
3993
3994#if __STD_C
3995Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
3996#else
3997Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
3998#endif
3999{
4000 mstate av = get_malloc_state();
4001
4002 INTERNAL_SIZE_T nb; /* padded request size */
4003
4004 mchunkptr oldp; /* chunk corresponding to oldmem */
4005 INTERNAL_SIZE_T oldsize; /* its size */
4006
4007 mchunkptr newp; /* chunk to return */
4008 INTERNAL_SIZE_T newsize; /* its size */
4009 Void_t* newmem; /* corresponding user mem */
4010
4011 mchunkptr next; /* next contiguous chunk after oldp */
4012
4013 mchunkptr remainder; /* extra space at end of newp */
4014 CHUNK_SIZE_T remainder_size; /* its size */
4015
4016 mchunkptr bck; /* misc temp for linking */
4017 mchunkptr fwd; /* misc temp for linking */
4018
4019 CHUNK_SIZE_T copysize; /* bytes to copy */
4020 unsigned int ncopies; /* INTERNAL_SIZE_T words to copy */
4021 INTERNAL_SIZE_T* s; /* copy source */
4022 INTERNAL_SIZE_T* d; /* copy destination */
4023
4024
4025#ifdef REALLOC_ZERO_BYTES_FREES
4026 if (bytes == 0) {
4027 fREe(oldmem);
4028 return 0;
4029 }
4030#endif
4031
4032 /* realloc of null is supposed to be same as malloc */
4033 if (oldmem == 0) return mALLOc(bytes);
4034
4035 checked_request2size(bytes, nb);
4036
4037 oldp = mem2chunk(oldmem);
4038 oldsize = chunksize(oldp);
4039
4040 check_inuse_chunk(oldp);
4041
4042 if (!chunk_is_mmapped(oldp)) {
4043
4044 if ((CHUNK_SIZE_T)(oldsize) >= (CHUNK_SIZE_T)(nb)) {
4045 /* already big enough; split below */
4046 newp = oldp;
4047 newsize = oldsize;
4048 }
4049
4050 else {
4051 next = chunk_at_offset(oldp, oldsize);
4052
4053 /* Try to expand forward into top */
4054 if (next == av->top &&
4055 (CHUNK_SIZE_T)(newsize = oldsize + chunksize(next)) >=
4056 (CHUNK_SIZE_T)(nb + MINSIZE)) {
4057 set_head_size(oldp, nb);
4058 av->top = chunk_at_offset(oldp, nb);
4059 set_head(av->top, (newsize - nb) | PREV_INUSE);
4060 return chunk2mem(oldp);
4061 }
4062
4063 /* Try to expand forward into next chunk; split off remainder below */
4064 else if (next != av->top &&
4065 !inuse(next) &&
4066 (CHUNK_SIZE_T)(newsize = oldsize + chunksize(next)) >=
4067 (CHUNK_SIZE_T)(nb)) {
4068 newp = oldp;
4069 unlink(next, bck, fwd);
4070 }
4071
4072 /* allocate, copy, free */
4073 else {
4074 newmem = mALLOc(nb - MALLOC_ALIGN_MASK);
4075 if (newmem == 0)
4076 return 0; /* propagate failure */
4077
4078 newp = mem2chunk(newmem);
4079 newsize = chunksize(newp);
4080
4081 /*
4082 Avoid copy if newp is next chunk after oldp.
4083 */
4084 if (newp == next) {
4085 newsize += oldsize;
4086 newp = oldp;
4087 }
4088 else {
4089 /*
4090 Unroll copy of <= 36 bytes (72 if 8-byte sizes)
4091 We know that contents have an odd number of
4092 INTERNAL_SIZE_T-sized words; minimally 3.
4093 */
4094
4095 copysize = oldsize - SIZE_SZ;
4096 s = (INTERNAL_SIZE_T*)(oldmem);
4097 d = (INTERNAL_SIZE_T*)(newmem);
4098 ncopies = copysize / sizeof(INTERNAL_SIZE_T);
4099 assert(ncopies >= 3);
4100
4101 if (ncopies > 9)
4102 MALLOC_COPY(d, s, copysize);
4103
4104 else {
4105 *(d+0) = *(s+0);
4106 *(d+1) = *(s+1);
4107 *(d+2) = *(s+2);
4108 if (ncopies > 4) {
4109 *(d+3) = *(s+3);
4110 *(d+4) = *(s+4);
4111 if (ncopies > 6) {
4112 *(d+5) = *(s+5);
4113 *(d+6) = *(s+6);
4114 if (ncopies > 8) {
4115 *(d+7) = *(s+7);
4116 *(d+8) = *(s+8);
4117 }
4118 }
4119 }
4120 }
4121
4122 fREe(oldmem);
4123 check_inuse_chunk(newp);
4124 return chunk2mem(newp);
4125 }
4126 }
4127 }
4128
4129 /* If possible, free extra space in old or extended chunk */
4130
4131 assert((CHUNK_SIZE_T)(newsize) >= (CHUNK_SIZE_T)(nb));
4132
4133 remainder_size = newsize - nb;
4134
4135 if (remainder_size < MINSIZE) { /* not enough extra to split off */
4136 set_head_size(newp, newsize);
4137 set_inuse_bit_at_offset(newp, newsize);
4138 }
4139 else { /* split remainder */
4140 remainder = chunk_at_offset(newp, nb);
4141 set_head_size(newp, nb);
4142 set_head(remainder, remainder_size | PREV_INUSE);
4143 /* Mark remainder as inuse so free() won't complain */
4144 set_inuse_bit_at_offset(remainder, remainder_size);
4145 fREe(chunk2mem(remainder));
4146 }
4147
4148 check_inuse_chunk(newp);
4149 return chunk2mem(newp);
4150 }
4151
4152 /*
4153 Handle mmap cases
4154 */
4155
4156 else {
4157#if HAVE_MMAP
4158
4159#if HAVE_MREMAP
4160 INTERNAL_SIZE_T offset = oldp->prev_size;
4161 size_t pagemask = av->pagesize - 1;
4162 char *cp;
4163 CHUNK_SIZE_T sum;
4164
4165 /* Note the extra SIZE_SZ overhead */
4166 newsize = (nb + offset + SIZE_SZ + pagemask) & ~pagemask;
4167
4168 /* don't need to remap if still within same page */
4169 if (oldsize == newsize - offset)
4170 return oldmem;
4171
4172 cp = (char*)mremap((char*)oldp - offset, oldsize + offset, newsize, 1);
4173
4174 if (cp != (char*)MORECORE_FAILURE) {
4175
4176 newp = (mchunkptr)(cp + offset);
4177 set_head(newp, (newsize - offset)|IS_MMAPPED);
4178
4179 assert(aligned_OK(chunk2mem(newp)));
4180 assert((newp->prev_size == offset));
4181
4182 /* update statistics */
4183 sum = av->mmapped_mem += newsize - oldsize;
4184 if (sum > (CHUNK_SIZE_T)(av->max_mmapped_mem))
4185 av->max_mmapped_mem = sum;
4186 sum += av->sbrked_mem;
4187 if (sum > (CHUNK_SIZE_T)(av->max_total_mem))
4188 av->max_total_mem = sum;
4189
4190 return chunk2mem(newp);
4191 }
4192#endif
4193
4194 /* Note the extra SIZE_SZ overhead. */
4195 if ((CHUNK_SIZE_T)(oldsize) >= (CHUNK_SIZE_T)(nb + SIZE_SZ))
4196 newmem = oldmem; /* do nothing */
4197 else {
4198 /* Must alloc, copy, free. */
4199 newmem = mALLOc(nb - MALLOC_ALIGN_MASK);
4200 if (newmem != 0) {
4201 MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
4202 fREe(oldmem);
4203 }
4204 }
4205 return newmem;
4206
4207#else
4208 /* If !HAVE_MMAP, but chunk_is_mmapped, user must have overwritten mem */
4209 check_malloc_state();
4210 MALLOC_FAILURE_ACTION;
4211 return 0;
4212#endif
4213 }
4214}
4215
4216/*
4217 ------------------------------ memalign ------------------------------
4218*/
4219
4220#if __STD_C
4221Void_t* mEMALIGn(size_t alignment, size_t bytes)
4222#else
4223Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
4224#endif
4225{
4226 INTERNAL_SIZE_T nb; /* padded request size */
4227 char* m; /* memory returned by malloc call */
4228 mchunkptr p; /* corresponding chunk */
4229 char* brk; /* alignment point within p */
4230 mchunkptr newp; /* chunk to return */
4231 INTERNAL_SIZE_T newsize; /* its size */
4232 INTERNAL_SIZE_T leadsize; /* leading space before alignment point */
4233 mchunkptr remainder; /* spare room at end to split off */
4234 CHUNK_SIZE_T remainder_size; /* its size */
4235 INTERNAL_SIZE_T size;
4236
4237 /* If need less alignment than we give anyway, just relay to malloc */
4238
4239 if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
4240
4241 /* Otherwise, ensure that it is at least a minimum chunk size */
4242
4243 if (alignment < MINSIZE) alignment = MINSIZE;
4244
4245 /* Make sure alignment is power of 2 (in case MINSIZE is not). */
4246 if ((alignment & (alignment - 1)) != 0) {
4247 size_t a = MALLOC_ALIGNMENT * 2;
4248 while ((CHUNK_SIZE_T)a < (CHUNK_SIZE_T)alignment) a <<= 1;
4249 alignment = a;
4250 }
4251
4252 checked_request2size(bytes, nb);
4253
4254 /*
4255 Strategy: find a spot within that chunk that meets the alignment
4256 request, and then possibly free the leading and trailing space.
4257 */
4258
4259
4260 /* Call malloc with worst case padding to hit alignment. */
4261
4262 m = (char*)(mALLOc(nb + alignment + MINSIZE));
4263
4264 if (m == 0) return 0; /* propagate failure */
4265
4266 p = mem2chunk(m);
4267
4268 if ((((PTR_UINT)(m)) % alignment) != 0) { /* misaligned */
4269
4270 /*
4271 Find an aligned spot inside chunk. Since we need to give back
4272 leading space in a chunk of at least MINSIZE, if the first
4273 calculation places us at a spot with less than MINSIZE leader,
4274 we can move to the next aligned spot -- we've allocated enough
4275 total room so that this is always possible.
4276 */
4277
4278 brk = (char*)mem2chunk((PTR_UINT)(((PTR_UINT)(m + alignment - 1)) &
4279 -((signed long) alignment)));
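    /*
      The expression above rounds m up to the next multiple of
      alignment before converting back to a chunk address; e.g. with
      alignment == 64 (illustrative), an m ending in 0x1234 rounds up
      to 0x1240.
    */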
4280 if ((CHUNK_SIZE_T)(brk - (char*)(p)) < MINSIZE)
4281 brk += alignment;
4282
4283 newp = (mchunkptr)brk;
4284 leadsize = brk - (char*)(p);
4285 newsize = chunksize(p) - leadsize;
4286
4287 /* For mmapped chunks, just adjust offset */
4288 if (chunk_is_mmapped(p)) {
4289 newp->prev_size = p->prev_size + leadsize;
4290 set_head(newp, newsize|IS_MMAPPED);
4291 return chunk2mem(newp);
4292 }
4293
4294 /* Otherwise, give back leader, use the rest */
4295 set_head(newp, newsize | PREV_INUSE);
4296 set_inuse_bit_at_offset(newp, newsize);
4297 set_head_size(p, leadsize);
4298 fREe(chunk2mem(p));
4299 p = newp;
4300
4301 assert (newsize >= nb &&
4302 (((PTR_UINT)(chunk2mem(p))) % alignment) == 0);
4303 }
4304
4305 /* Also give back spare room at the end */
4306 if (!chunk_is_mmapped(p)) {
4307 size = chunksize(p);
4308 if ((CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb + MINSIZE)) {
4309 remainder_size = size - nb;
4310 remainder = chunk_at_offset(p, nb);
4311 set_head(remainder, remainder_size | PREV_INUSE);
4312 set_head_size(p, nb);
4313 fREe(chunk2mem(remainder));
4314 }
4315 }
4316
4317 check_inuse_chunk(p);
4318 return chunk2mem(p);
4319}
4320
4321/*
4322 ------------------------------ calloc ------------------------------
4323*/
4324
4325#if __STD_C
4326Void_t* cALLOc(size_t n_elements, size_t elem_size)
4327#else
4328Void_t* cALLOc(n_elements, elem_size) size_t n_elements; size_t elem_size;
4329#endif
4330{
4331 mchunkptr p;
4332 CHUNK_SIZE_T clearsize;
4333 CHUNK_SIZE_T nclears;
4334 INTERNAL_SIZE_T* d;
4335
4336 Void_t* mem = mALLOc(n_elements * elem_size);
4337
4338 if (mem != 0) {
4339 p = mem2chunk(mem);
4340
4341 if (!chunk_is_mmapped(p))
4342 {
4343 /*
4344 Unroll clear of <= 36 bytes (72 if 8-byte sizes)
4345 We know that contents have an odd number of
4346 INTERNAL_SIZE_T-sized words; minimally 3.
4347 */
4348
4349 d = (INTERNAL_SIZE_T*)mem;
4350 clearsize = chunksize(p) - SIZE_SZ;
4351 nclears = clearsize / sizeof(INTERNAL_SIZE_T);
4352 assert(nclears >= 3);
4353
4354 if (nclears > 9)
4355 MALLOC_ZERO(d, clearsize);
4356
4357 else {
4358 *(d+0) = 0;
4359 *(d+1) = 0;
4360 *(d+2) = 0;
4361 if (nclears > 4) {
4362 *(d+3) = 0;
4363 *(d+4) = 0;
4364 if (nclears > 6) {
4365 *(d+5) = 0;
4366 *(d+6) = 0;
4367 if (nclears > 8) {
4368 *(d+7) = 0;
4369 *(d+8) = 0;
4370 }
4371 }
4372 }
4373 }
4374 }
4375#if ! MMAP_CLEARS
4376 else
4377 {
4378 d = (INTERNAL_SIZE_T*)mem;
4379 /*
4380 Note the additional SIZE_SZ
4381 */
4382 clearsize = chunksize(p) - 2*SIZE_SZ;
4383 MALLOC_ZERO(d, clearsize);
4384 }
4385#endif
4386 }
4387 return mem;
4388}
4389
4390/*
4391 ------------------------------ cfree ------------------------------
4392*/
4393
4394#if __STD_C
4395void cFREe(Void_t *mem)
4396#else
4397void cFREe(mem) Void_t *mem;
4398#endif
4399{
4400 fREe(mem);
4401}
4402
4403#ifdef NEED_INDEPENDENT
4404/*
4405 ------------------------- independent_calloc -------------------------
4406*/
4407
4408#if __STD_C
4409Void_t** iCALLOc(size_t n_elements, size_t elem_size, Void_t* chunks[])
4410#else
4411Void_t** iCALLOc(n_elements, elem_size, chunks) size_t n_elements; size_t elem_size; Void_t* chunks[];
4412#endif
4413{
4414 size_t sz = elem_size; /* serves as 1-element array */
4415 /* opts arg of 3 means all elements are same size, and should be cleared */
4416 return iALLOc(n_elements, &sz, 3, chunks);
4417}
4418
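/*
  Minimal usage sketch for independent_calloc. Illustrative only: it
  assumes the default public-name mapping (iCALLOc exported as
  independent_calloc) and ordinary calling code; the struct, function
  name and element count are made up for the example.

  struct node { int item; struct node* next; };

  void example_pool(void)
  {
    void* chunks[10];
    size_t i;
    // allocate 10 zeroed, individually freeable node-sized chunks
    if (independent_calloc(10, sizeof(struct node), chunks) == 0)
      return;                          // allocation failed
    for (i = 0; i < 10; ++i) {
      struct node* n = (struct node*) chunks[i];
      n->item = (int) i;
      n->next = (i + 1 < 10) ? (struct node*) chunks[i + 1] : 0;
    }
    for (i = 0; i < 10; ++i)           // each element is freed with free()
      free(chunks[i]);
  }
*/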
4419/*
4420 ------------------------- independent_comalloc -------------------------
4421*/
4422
4423#if __STD_C
4424Void_t** iCOMALLOc(size_t n_elements, size_t sizes[], Void_t* chunks[])
4425#else
4426Void_t** iCOMALLOc(n_elements, sizes, chunks) size_t n_elements; size_t sizes[]; Void_t* chunks[];
4427#endif
4428{
4429 return iALLOc(n_elements, sizes, 0, chunks);
4430}
4431
4432/*
4433 ------------------------------ ialloc ------------------------------
4434 ialloc provides common support for independent_X routines, handling all of
4435 the combinations that can result.
4436
4437 The opts arg has:
4438 bit 0 set if all elements are same size (using sizes[0])
4439 bit 1 set if elements should be zeroed
4440*/
4441
4442
4443#if __STD_C
4444static Void_t** iALLOc(size_t n_elements,
4445 size_t* sizes,
4446 int opts,
4447 Void_t* chunks[])
4448#else
4449static Void_t** iALLOc(n_elements, sizes, opts, chunks) size_t n_elements; size_t* sizes; int opts; Void_t* chunks[];
4450#endif
4451{
4452 mstate av = get_malloc_state();
4453 INTERNAL_SIZE_T element_size; /* chunksize of each element, if all same */
4454 INTERNAL_SIZE_T contents_size; /* total size of elements */
4455 INTERNAL_SIZE_T array_size; /* request size of pointer array */
4456 Void_t* mem; /* malloced aggregate space */
4457 mchunkptr p; /* corresponding chunk */
4458 INTERNAL_SIZE_T remainder_size; /* remaining bytes while splitting */
4459 Void_t** marray; /* either "chunks" or malloced ptr array */
4460 mchunkptr array_chunk; /* chunk for malloced ptr array */
4461 int mmx; /* to disable mmap */
4462 INTERNAL_SIZE_T size;
4463 size_t i;
4464
4465 /* Ensure initialization */
4466 if (av->max_fast == 0) malloc_consolidate(av);
4467
4468 /* compute array length, if needed */
4469 if (chunks != 0) {
4470 if (n_elements == 0)
4471 return chunks; /* nothing to do */
4472 marray = chunks;
4473 array_size = 0;
4474 }
4475 else {
4476 /* if empty req, must still return chunk representing empty array */
4477 if (n_elements == 0)
4478 return (Void_t**) mALLOc(0);
4479 marray = 0;
4480 array_size = request2size(n_elements * (sizeof(Void_t*)));
4481 }
4482
4483 /* compute total element size */
4484 if (opts & 0x1) { /* all-same-size */
4485 element_size = request2size(*sizes);
4486 contents_size = n_elements * element_size;
4487 }
4488 else { /* add up all the sizes */
4489 element_size = 0;
4490 contents_size = 0;
4491 for (i = 0; i != n_elements; ++i)
4492 contents_size += request2size(sizes[i]);
4493 }
4494
4495 /* subtract out alignment bytes from total to minimize overallocation */
4496 size = contents_size + array_size - MALLOC_ALIGN_MASK;
4497
4498 /*
4499 Allocate the aggregate chunk.
4500 But first disable mmap so malloc won't use it, since
4501 we would not be able to later free/realloc space internal
4502 to a segregated mmap region.
4503 */
4504 mmx = av->n_mmaps_max; /* disable mmap */
4505 av->n_mmaps_max = 0;
4506 mem = mALLOc(size);
4507 av->n_mmaps_max = mmx; /* reset mmap */
4508 if (mem == 0)
4509 return 0;
4510
4511 p = mem2chunk(mem);
4512 assert(!chunk_is_mmapped(p));
4513 remainder_size = chunksize(p);
4514
4515 if (opts & 0x2) { /* optionally clear the elements */
4516 MALLOC_ZERO(mem, remainder_size - SIZE_SZ - array_size);
4517 }
4518
4519 /* If not provided, allocate the pointer array as final part of chunk */
4520 if (marray == 0) {
4521 array_chunk = chunk_at_offset(p, contents_size);
4522 marray = (Void_t**) (chunk2mem(array_chunk));
4523 set_head(array_chunk, (remainder_size - contents_size) | PREV_INUSE);
4524 remainder_size = contents_size;
4525 }
4526
4527 /* split out elements */
4528 for (i = 0; ; ++i) {
4529 marray[i] = chunk2mem(p);
4530 if (i != n_elements-1) {
4531 if (element_size != 0)
4532 size = element_size;
4533 else
4534 size = request2size(sizes[i]);
4535 remainder_size -= size;
4536 set_head(p, size | PREV_INUSE);
4537 p = chunk_at_offset(p, size);
4538 }
4539 else { /* the final element absorbs any overallocation slop */
4540 set_head(p, remainder_size | PREV_INUSE);
4541 break;
4542 }
4543 }
4544
4545#if DEBUG
4546 if (marray != chunks) {
4547 /* final element must have exactly exhausted chunk */
4548 if (element_size != 0)
4549 assert(remainder_size == element_size);
4550 else
4551 assert(remainder_size == request2size(sizes[i]));
4552 check_inuse_chunk(mem2chunk(marray));
4553 }
4554
4555 for (i = 0; i != n_elements; ++i)
4556 check_inuse_chunk(mem2chunk(marray[i]));
4557#endif
4558
4559 return marray;
4560}
4561#endif /* NEED_INDEPENDENT */
4562
4563
4564/*
4565 ------------------------------ valloc ------------------------------
4566*/
4567
4568#if __STD_C
4569Void_t* vALLOc(size_t bytes)
4570#else
4571Void_t* vALLOc(bytes) size_t bytes;
4572#endif
4573{
4574 /* Ensure initialization */
4575 mstate av = get_malloc_state();
4576 if (av->max_fast == 0) malloc_consolidate(av);
4577 return mEMALIGn(av->pagesize, bytes);
4578}
4579
4580#ifdef NEED_PVALLOC
4581/*
4582 ------------------------------ pvalloc ------------------------------
4583*/
4584
4585
4586#if __STD_C
4587Void_t* pVALLOc(size_t bytes)
4588#else
4589Void_t* pVALLOc(bytes) size_t bytes;
4590#endif
4591{
4592 mstate av = get_malloc_state();
4593 size_t pagesz;
4594
4595 /* Ensure initialization */
4596 if (av->max_fast == 0) malloc_consolidate(av);
4597 pagesz = av->pagesize;
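  /*
    The request is rounded up to a whole number of pages before being
    page-aligned; e.g. with a 4096-byte page (illustrative), a request
    of 5000 bytes becomes (5000 + 4095) & ~4095 == 8192.
  */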
4598 return mEMALIGn(pagesz, (bytes + pagesz - 1) & ~(pagesz - 1));
4599}
4600#endif /*NEED_PVALLOC*/
4601
4602
4603/*
4604 ------------------------------ malloc_trim ------------------------------
4605*/
4606
4607#if __STD_C
4608int mTRIm(size_t pad)
4609#else
4610int mTRIm(pad) size_t pad;
4611#endif
4612{
4613 mstate av = get_malloc_state();
4614 /* Ensure initialization/consolidation */
4615 malloc_consolidate(av);
4616
4617#ifndef MORECORE_CANNOT_TRIM
4618 return sYSTRIm(pad, av);
4619#else
4620 return 0;
4621#endif
4622}
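/*
  Usage note (a sketch; assumes the default public-name mapping of
  mTRIm to malloc_trim): calling

    malloc_trim(0);

  asks the allocator to give back as much unused memory bordering the
  top of the heap as possible, returning 1 only if something was
  actually released. A nonzero pad argument keeps that many bytes of
  slack in av->top for future allocations.
*/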
4623
4624
4625/*
4626 ------------------------- malloc_usable_size -------------------------
4627*/
4628
4629#if __STD_C
4630size_t mUSABLe(Void_t* mem)
4631#else
4632size_t mUSABLe(mem) Void_t* mem;
4633#endif
4634{
4635 mchunkptr p;
4636 if (mem != 0) {
4637 p = mem2chunk(mem);
4638 if (chunk_is_mmapped(p))
4639 return chunksize(p) - 2*SIZE_SZ;
4640 else if (inuse(p))
4641 return chunksize(p) - SIZE_SZ;
4642 }
4643 return 0;
4644}
4645
4646/*
4647 ------------------------------ mallinfo ------------------------------
4648*/
4649
4650struct mallinfo mALLINFo()
4651{
4652 mstate av = get_malloc_state();
4653 struct mallinfo mi;
4654 unsigned i;
4655 mbinptr b;
4656 mchunkptr p;
4657 INTERNAL_SIZE_T avail;
4658 INTERNAL_SIZE_T fastavail;
4659 int nblocks;
4660 int nfastblocks;
4661
4662 /* Ensure initialization */
4663 if (av->top == 0) malloc_consolidate(av);
4664
4665 check_malloc_state();
4666
4667 /* Account for top */
4668 avail = chunksize(av->top);
4669 nblocks = 1; /* top always exists */
4670
4671 /* traverse fastbins */
4672 nfastblocks = 0;
4673 fastavail = 0;
4674
4675 for (i = 0; i < NFASTBINS; ++i) {
4676 for (p = av->fastbins[i]; p != 0; p = p->fd) {
4677 ++nfastblocks;
4678 fastavail += chunksize(p);
4679 }
4680 }
4681
4682 avail += fastavail;
4683
4684 /* traverse regular bins */
4685 for (i = 1; i < NBINS; ++i) {
4686 b = bin_at(av, i);
4687 for (p = last(b); p != b; p = p->bk) {
4688 ++nblocks;
4689 avail += chunksize(p);
4690 }
4691 }
4692
4693 mi.smblks = nfastblocks;
4694 mi.ordblks = nblocks;
4695 mi.fordblks = avail;
4696 mi.uordblks = av->sbrked_mem - avail;
4697 mi.arena = av->sbrked_mem;
4698 mi.hblks = av->n_mmaps;
4699 mi.hblkhd = av->mmapped_mem;
4700 mi.fsmblks = fastavail;
4701 mi.keepcost = chunksize(av->top);
4702 mi.usmblks = av->max_total_mem;
4703 return mi;
4704}
4705
4706/*
4707 ------------------------------ malloc_stats ------------------------------
4708*/
4709
4710void mSTATs()
4711{
4712 struct mallinfo mi = mALLINFo();
4713
4714#ifdef WIN32
4715 {
4716 CHUNK_SIZE_T free, reserved, committed;
4717 vminfo (&free, &reserved, &committed);
4718 fprintf(stderr, "free bytes = %10lu\n",
4719 free);
4720 fprintf(stderr, "reserved bytes = %10lu\n",
4721 reserved);
4722 fprintf(stderr, "committed bytes = %10lu\n",
4723 committed);
4724 }
4725#endif
4726
4727
4728 fprintf(stderr, "max system bytes = %10lu\n",
4729 (CHUNK_SIZE_T)(mi.usmblks));
4730 fprintf(stderr, "system bytes = %10lu\n",
4731 (CHUNK_SIZE_T)(mi.arena + mi.hblkhd));
4732 fprintf(stderr, "in use bytes = %10lu\n",
4733 (CHUNK_SIZE_T)(mi.uordblks + mi.hblkhd));
4734
4735#ifdef WIN32
4736 {
4737 CHUNK_SIZE_T kernel, user;
4738 if (cpuinfo (TRUE, &kernel, &user)) {
4739 fprintf(stderr, "kernel ms = %10lu\n",
4740 kernel);
4741 fprintf(stderr, "user ms = %10lu\n",
4742 user);
4743 }
4744 }
4745#endif
4746}
4747
4748
4749/*
4750 ------------------------------ mallopt ------------------------------
4751*/
4752
4753#if __STD_C
4754int mALLOPt(int param_number, int value)
4755#else
4756int mALLOPt(param_number, value) int param_number; int value;
4757#endif
4758{
4759 mstate av = get_malloc_state();
4760 /* Ensure initialization/consolidation */
4761 malloc_consolidate(av);
4762
4763 switch(param_number) {
4764 case M_MXFAST:
4765 if (value >= 0 && value <= MAX_FAST_SIZE) {
4766 set_max_fast(av, value);
4767 return 1;
4768 }
4769 else
4770 return 0;
4771
4772 case M_TRIM_THRESHOLD:
4773 av->trim_threshold = value;
4774 return 1;
4775
4776 case M_TOP_PAD:
4777 av->top_pad = value;
4778 return 1;
4779
4780 case M_MMAP_THRESHOLD:
4781 av->mmap_threshold = value;
4782 return 1;
4783
4784 case M_MMAP_MAX:
4785#if !HAVE_MMAP
4786 if (value != 0)
4787 return 0;
4788#endif
4789 av->n_mmaps_max = value;
4790 return 1;
4791
4792 default:
4793 return 0;
4794 }
4795}
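/*
  Tuning sketch (values are illustrative only, and the public name
  mallopt is assumed to be mapped onto mALLOPt by the naming macros):

    mallopt(M_TRIM_THRESHOLD, 256*1024);  // trim top when it exceeds 256KB
    mallopt(M_TOP_PAD, 64*1024);          // keep 64KB of slack when growing
    mallopt(M_MMAP_MAX, 0);               // never satisfy requests via mmap

  Each call returns 1 if the parameter was accepted and 0 otherwise.
*/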
4796
4797/*
4798 -------------------- Alternative MORECORE functions --------------------
4799*/
4800
4801
4802/*
4803 General Requirements for MORECORE.
4804
4805 The MORECORE function must have the following properties:
4806
4807 If MORECORE_CONTIGUOUS is false:
4808
4809 * MORECORE must allocate in multiples of pagesize. It will
4810 only be called with arguments that are multiples of pagesize.
4811
4812 * MORECORE(0) must return an address that is at least
4813 MALLOC_ALIGNMENT aligned. (Page-aligning always suffices.)
4814
4815 else (i.e. If MORECORE_CONTIGUOUS is true):
4816
4817 * Consecutive calls to MORECORE with positive arguments
4818 return increasing addresses, indicating that space has been
4819 contiguously extended.
4820
4821 * MORECORE need not allocate in multiples of pagesize.
4822 Calls to MORECORE need not have args of multiples of pagesize.
4823
4824 * MORECORE need not page-align.
4825
4826 In either case:
4827
4828 * MORECORE may allocate more memory than requested. (Or even less,
4829 but this will generally result in a malloc failure.)
4830
4831 * MORECORE must not allocate memory when given argument zero, but
4832 instead return one past the end address of memory from previous
4833 nonzero call. This malloc does NOT call MORECORE(0)
4834 until at least one call with positive arguments is made, so
4835 the initial value returned is not important.
4836
4837 * Even though consecutive calls to MORECORE need not return contiguous
4838 addresses, it must be OK for malloc'ed chunks to span multiple
4839 regions in those cases where they do happen to be contiguous.
4840
4841 * MORECORE need not handle negative arguments -- it may instead
4842 just return MORECORE_FAILURE when given negative arguments.
4843 Negative arguments are always multiples of pagesize. MORECORE
4844 must not misinterpret negative args as large positive unsigned
4845 args. You can suppress all such calls from even occurring by defining
4846 MORECORE_CANNOT_TRIM.
4847
4848 There is some variation across systems about the type of the
4849 argument to sbrk/MORECORE. If size_t is unsigned, then it cannot
4850 actually be size_t, because sbrk supports negative args, so it is
4851 normally the signed type of the same width as size_t (sometimes
4852 declared as "intptr_t", and sometimes "ptrdiff_t"). It doesn't much
4853 matter though. Internally, we use "long" as arguments, which should
4854 work across all reasonable possibilities.
4855
4856 Additionally, if MORECORE ever returns failure for a positive
4857 request, and HAVE_MMAP is true, then mmap is used as a noncontiguous
4858 system allocator. This is a useful backup strategy for systems with
4859 holes in address spaces -- in this case sbrk cannot contiguously
4860 expand the heap, but mmap may be able to map noncontiguous space.
4861
4862 If you'd like mmap to ALWAYS be used, you can define MORECORE to be
4863 a function that always returns MORECORE_FAILURE.
4864
4865 Malloc only has limited ability to detect failures of MORECORE
4866 to supply contiguous space when it says it can. In particular,
4867 multithreaded programs that do not use locks may result in
4868 race conditions across calls to MORECORE that result in gaps
4869 that cannot be detected as such, and subsequent corruption.
4870
4871 If you are using this malloc with something other than sbrk (or its
4872 emulation) to supply memory regions, you probably want to set
4873 MORECORE_CONTIGUOUS as false. As an example, here is a custom
4874 allocator kindly contributed for pre-OSX macOS. It uses virtually
4875 but not necessarily physically contiguous non-paged memory (locked
4876 in, present and won't get swapped out). You can use it by
4877 uncommenting this section, adding some #includes, and setting up the
4878 appropriate defines above:
4879
4880 #define MORECORE osMoreCore
4881 #define MORECORE_CONTIGUOUS 0
4882
4883 There is also a shutdown routine that should somehow be called for
4884 cleanup upon program exit.
4885
4886 #define MAX_POOL_ENTRIES 100
4887 #define MINIMUM_MORECORE_SIZE (64 * 1024)
4888 static int next_os_pool;
4889 void *our_os_pools[MAX_POOL_ENTRIES];
4890
4891 void *osMoreCore(int size)
4892 {
4893 void *ptr = 0;
4894 static void *sbrk_top = 0;
4895
4896 if (size > 0)
4897 {
4898 if (size < MINIMUM_MORECORE_SIZE)
4899 size = MINIMUM_MORECORE_SIZE;
4900 if (CurrentExecutionLevel() == kTaskLevel)
4901 ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
4902 if (ptr == 0)
4903 {
4904 return (void *) MORECORE_FAILURE;
4905 }
4906 // save ptrs so they can be freed during cleanup
4907 our_os_pools[next_os_pool] = ptr;
4908 next_os_pool++;
4909 ptr = (void *) ((((CHUNK_SIZE_T) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
4910 sbrk_top = (char *) ptr + size;
4911 return ptr;
4912 }
4913 else if (size < 0)
4914 {
4915 // we don't currently support shrink behavior
4916 return (void *) MORECORE_FAILURE;
4917 }
4918 else
4919 {
4920 return sbrk_top;
4921 }
4922 }
4923
4924 // cleanup any allocated memory pools
4925 // called as last thing before shutting down driver
4926
4927 void osCleanupMem(void)
4928 {
4929 void **ptr;
4930
4931 for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
4932 if (*ptr)
4933 {
4934 PoolDeallocate(*ptr);
4935 *ptr = 0;
4936 }
4937 }
4938
4939*/
4940
4941
4942/*
4943 --------------------------------------------------------------
4944
4945 Emulation of sbrk for win32.
4946 Donated by J. Walter <Walter@GeNeSys-e.de>.
4947 For additional information about this code, and malloc on Win32, see
4948 http://www.genesys-e.de/jwalter/
4949*/
4950
4951
4952#ifdef WIN32
4953
4954#ifdef _DEBUG
4955/* #define TRACE */
4956#endif
4957
4958/* Support for USE_MALLOC_LOCK */
4959#ifdef USE_MALLOC_LOCK
4960
4961/* Wait for spin lock */
4962static int slwait (int *sl) {
4963 while (InterlockedCompareExchange ((void **) sl, (void *) 1, (void *) 0) != 0)
4964 Sleep (0);
4965 return 0;
4966}
4967
4968/* Release spin lock */
4969static int slrelease (int *sl) {
4970 InterlockedExchange (sl, 0);
4971 return 0;
4972}
4973
4974#ifdef NEEDED
4975/* Spin lock for emulation code */
4976static int g_sl;
4977#endif
4978
4979#endif /* USE_MALLOC_LOCK */
4980
4981/* getpagesize for windows */
4982static long getpagesize (void) {
4983 static long g_pagesize = 0;
4984 if (! g_pagesize) {
4985 SYSTEM_INFO system_info;
4986 GetSystemInfo (&system_info);
4987 g_pagesize = system_info.dwPageSize;
4988 }
4989 return g_pagesize;
4990}
4991static long getregionsize (void) {
4992 static long g_regionsize = 0;
4993 if (! g_regionsize) {
4994 SYSTEM_INFO system_info;
4995 GetSystemInfo (&system_info);
4996 g_regionsize = system_info.dwAllocationGranularity;
4997 }
4998 return g_regionsize;
4999}
5000
5001/* A region list entry */
5002typedef struct _region_list_entry {
5003 void *top_allocated;
5004 void *top_committed;
5005 void *top_reserved;
5006 long reserve_size;
5007 struct _region_list_entry *previous;
5008} region_list_entry;
5009
5010/* Allocate and link a region entry in the region list */
5011static int region_list_append (region_list_entry **last, void *base_reserved, long reserve_size) {
5012 region_list_entry *next = HeapAlloc (GetProcessHeap (), 0, sizeof (region_list_entry));
5013 if (! next)
5014 return FALSE;
5015 next->top_allocated = (char *) base_reserved;
5016 next->top_committed = (char *) base_reserved;
5017 next->top_reserved = (char *) base_reserved + reserve_size;
5018 next->reserve_size = reserve_size;
5019 next->previous = *last;
5020 *last = next;
5021 return TRUE;
5022}
5023/* Free and unlink the last region entry from the region list */
5024static int region_list_remove (region_list_entry **last) {
5025 region_list_entry *previous = (*last)->previous;
5026 if (! HeapFree (GetProcessHeap (), sizeof (region_list_entry), *last))
5027 return FALSE;
5028 *last = previous;
5029 return TRUE;
5030}
5031
5032#define CEIL(size,to) (((size)+(to)-1)&~((to)-1))
5033#define FLOOR(size,to) ((size)&~((to)-1))
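/*
  For example (illustrative sizes): CEIL(5000, 4096) == 8192 and
  FLOOR(5000, 4096) == 4096; both assume "to" is a power of two.
*/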
5034
5035#define SBRK_SCALE 0
5036/* #define SBRK_SCALE 1 */
5037/* #define SBRK_SCALE 2 */
5038/* #define SBRK_SCALE 4 */
5039
5040/* sbrk for windows */
5041static void *sbrk (long size) {
5042 static long g_pagesize, g_my_pagesize;
5043 static long g_regionsize, g_my_regionsize;
5044 static region_list_entry *g_last;
5045 void *result = (void *) MORECORE_FAILURE;
5046#ifdef TRACE
5047 printf ("sbrk %d\n", size);
5048#endif
5049#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
5050 /* Wait for spin lock */
5051 slwait (&g_sl);
5052#endif
5053 /* First time initialization */
5054 if (! g_pagesize) {
5055 g_pagesize = getpagesize ();
5056 g_my_pagesize = g_pagesize << SBRK_SCALE;
5057 }
5058 if (! g_regionsize) {
5059 g_regionsize = getregionsize ();
5060 g_my_regionsize = g_regionsize << SBRK_SCALE;
5061 }
5062 if (! g_last) {
5063 if (! region_list_append (&g_last, 0, 0))
5064 goto sbrk_exit;
5065 }
5066 /* Assert invariants */
5067 assert (g_last);
5068 assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_allocated &&
5069 g_last->top_allocated <= g_last->top_committed);
5070 assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_committed &&
5071 g_last->top_committed <= g_last->top_reserved &&
5072 (unsigned) g_last->top_committed % g_pagesize == 0);
5073 assert ((unsigned) g_last->top_reserved % g_regionsize == 0);
5074 assert ((unsigned) g_last->reserve_size % g_regionsize == 0);
5075 /* Allocation requested? */
5076 if (size >= 0) {
5077 /* Allocation size is the requested size */
5078 long allocate_size = size;
5079 /* Compute the size to commit */
5080 long to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
5081 /* Do we reach the commit limit? */
5082 if (to_commit > 0) {
5083 /* Round size to commit */
5084 long commit_size = CEIL (to_commit, g_my_pagesize);
5085 /* Compute the size to reserve */
5086 long to_reserve = (char *) g_last->top_committed + commit_size - (char *) g_last->top_reserved;
5087 /* Do we reach the reserve limit? */
5088 if (to_reserve > 0) {
5089 /* Compute the remaining size to commit in the current region */
5090 long remaining_commit_size = (char *) g_last->top_reserved - (char *) g_last->top_committed;
5091 if (remaining_commit_size > 0) {
5092 /* Assert preconditions */
5093 assert ((unsigned) g_last->top_committed % g_pagesize == 0);
5094 assert (0 < remaining_commit_size && remaining_commit_size % g_pagesize == 0); {
5095 /* Commit this */
5096 void *base_committed = VirtualAlloc (g_last->top_committed, remaining_commit_size,
5097 MEM_COMMIT, PAGE_READWRITE);
5098 /* Check returned pointer for consistency */
5099 if (base_committed != g_last->top_committed)
5100 goto sbrk_exit;
5101 /* Assert postconditions */
5102 assert ((unsigned) base_committed % g_pagesize == 0);
5103#ifdef TRACE
5104 printf ("Commit %p %d\n", base_committed, remaining_commit_size);
5105#endif
5107 /* Adjust the region's commit top */
5107 g_last->top_committed = (char *) base_committed + remaining_commit_size;
5108 }
5109 } {
5110 /* Now we are going to search and reserve. */
5111 int contiguous = -1;
5112 int found = FALSE;
5113 MEMORY_BASIC_INFORMATION memory_info;
5114 void *base_reserved;
5115 long reserve_size;
5116 do {
5117 /* Assume contiguous memory */
5118 contiguous = TRUE;
5119 /* Round size to reserve */
5120 reserve_size = CEIL (to_reserve, g_my_regionsize);
5121 /* Start with the current region's top */
5122 memory_info.BaseAddress = g_last->top_reserved;
5123 /* Assert preconditions */
5124 assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
5125 assert (0 < reserve_size && reserve_size % g_regionsize == 0);
5126 while (VirtualQuery (memory_info.BaseAddress, &memory_info, sizeof (memory_info))) {
5127 /* Assert postconditions */
5128 assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
5129#ifdef TRACE
5130 printf ("Query %p %d %s\n", memory_info.BaseAddress, memory_info.RegionSize,
5131 memory_info.State == MEM_FREE ? "FREE":
5132 (memory_info.State == MEM_RESERVE ? "RESERVED":
5133 (memory_info.State == MEM_COMMIT ? "COMMITTED": "?")));
5134#endif
5135 /* Region is free, well aligned and big enough: we are done */
5136 if (memory_info.State == MEM_FREE &&
5137 (unsigned) memory_info.BaseAddress % g_regionsize == 0 &&
5138 memory_info.RegionSize >= (unsigned) reserve_size) {
5139 found = TRUE;
5140 break;
5141 }
5142 /* From now on we can't get contiguous memory! */
5143 contiguous = FALSE;
5144 /* Recompute size to reserve */
5145 reserve_size = CEIL (allocate_size, g_my_regionsize);
5146 memory_info.BaseAddress = (char *) memory_info.BaseAddress + memory_info.RegionSize;
5147 /* Assert preconditions */
5148 assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
5149 assert (0 < reserve_size && reserve_size % g_regionsize == 0);
5150 }
5151 /* Search failed? */
5152 if (! found)
5153 goto sbrk_exit;
5154 /* Assert preconditions */
5155 assert ((unsigned) memory_info.BaseAddress % g_regionsize == 0);
5156 assert (0 < reserve_size && reserve_size % g_regionsize == 0);
5157 /* Try to reserve this */
5158 base_reserved = VirtualAlloc (memory_info.BaseAddress, reserve_size,
5159 MEM_RESERVE, PAGE_NOACCESS);
5160 if (! base_reserved) {
5161 int rc = GetLastError ();
5162 if (rc != ERROR_INVALID_ADDRESS)
5163 goto sbrk_exit;
5164 }
5165 /* A null pointer signals (hopefully) a race condition with another thread. */
5166 /* In this case, we try again. */
5167 } while (! base_reserved);
5168 /* Check returned pointer for consistency */
5169 if (memory_info.BaseAddress && base_reserved != memory_info.BaseAddress)
5170 goto sbrk_exit;
5171 /* Assert postconditions */
5172 assert ((unsigned) base_reserved % g_regionsize == 0);
5173#ifdef TRACE
5174 printf ("Reserve %p %d\n", base_reserved, reserve_size);
5175#endif
5176 /* Did we get contiguous memory? */
5177 if (contiguous) {
5178 long start_size = (char *) g_last->top_committed - (char *) g_last->top_allocated;
5179 /* Adjust allocation size */
5180 allocate_size -= start_size;
5182 /* Adjust the region's allocation top */
5182 g_last->top_allocated = g_last->top_committed;
5183 /* Recompute the size to commit */
5184 to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
5185 /* Round size to commit */
5186 commit_size = CEIL (to_commit, g_my_pagesize);
5187 }
5188 /* Append the new region to the list */
5189 if (! region_list_append (&g_last, base_reserved, reserve_size))
5190 goto sbrk_exit;
5191 /* Didn't we get contiguous memory? */
5192 if (! contiguous) {
5193 /* Recompute the size to commit */
5194 to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
5195 /* Round size to commit */
5196 commit_size = CEIL (to_commit, g_my_pagesize);
5197 }
5198 }
5199 }
5200 /* Assert preconditions */
5201 assert ((unsigned) g_last->top_committed % g_pagesize == 0);
5202 assert (0 < commit_size && commit_size % g_pagesize == 0); {
5203 /* Commit this */
5204 void *base_committed = VirtualAlloc (g_last->top_committed, commit_size,
5205 MEM_COMMIT, PAGE_READWRITE);
5206 /* Check returned pointer for consistency */
5207 if (base_committed != g_last->top_committed)
5208 goto sbrk_exit;
5209 /* Assert postconditions */
5210 assert ((unsigned) base_committed % g_pagesize == 0);
5211#ifdef TRACE
5212 printf ("Commit %p %d\n", base_committed, commit_size);
5213#endif
5215 /* Adjust the region's commit top */
5215 g_last->top_committed = (char *) base_committed + commit_size;
5216 }
5217 }
5219 /* Adjust the region's allocation top */
5219 g_last->top_allocated = (char *) g_last->top_allocated + allocate_size;
5220 result = (char *) g_last->top_allocated - size;
5221 /* Deallocation requested? */
5222 } else if (size < 0) {
5223 long deallocate_size = - size;
5224 /* As long as we have a region to release */
5225 while ((char *) g_last->top_allocated - deallocate_size < (char *) g_last->top_reserved - g_last->reserve_size) {
5226 /* Get the size to release */
5227 long release_size = g_last->reserve_size;
5228 /* Get the base address */
5229 void *base_reserved = (char *) g_last->top_reserved - release_size;
5230 /* Assert preconditions */
5231 assert ((unsigned) base_reserved % g_regionsize == 0);
5232 assert (0 < release_size && release_size % g_regionsize == 0); {
5233 /* Release this */
5234 int rc = VirtualFree (base_reserved, 0,
5235 MEM_RELEASE);
5236 /* Check returned code for consistency */
5237 if (! rc)
5238 goto sbrk_exit;
5239#ifdef TRACE
5240 printf ("Release %p %d\n", base_reserved, release_size);
5241#endif
5242 }
5243 /* Adjust deallocation size */
5244 deallocate_size -= (char *) g_last->top_allocated - (char *) base_reserved;
5245 /* Remove the old region from the list */
5246 if (! region_list_remove (&g_last))
5247 goto sbrk_exit;
5248 } {
5249 /* Compute the size to decommit */
5250 long to_decommit = (char *) g_last->top_committed - ((char *) g_last->top_allocated - deallocate_size);
5251 if (to_decommit >= g_my_pagesize) {
5252 /* Compute the size to decommit */
5253 long decommit_size = FLOOR (to_decommit, g_my_pagesize);
5254 /* Compute the base address */
5255 void *base_committed = (char *) g_last->top_committed - decommit_size;
5256 /* Assert preconditions */
5257 assert ((unsigned) base_committed % g_pagesize == 0);
5258 assert (0 < decommit_size && decommit_size % g_pagesize == 0); {
5259 /* Decommit this */
5260 int rc = VirtualFree ((char *) base_committed, decommit_size,
5261 MEM_DECOMMIT);
5262 /* Check returned code for consistency */
5263 if (! rc)
5264 goto sbrk_exit;
5265#ifdef TRACE
5266 printf ("Decommit %p %d\n", base_committed, decommit_size);
5267#endif
5268 }
5270 /* Adjust deallocation size and the region's commit and allocation tops */
5270 deallocate_size -= (char *) g_last->top_allocated - (char *) base_committed;
5271 g_last->top_committed = base_committed;
5272 g_last->top_allocated = base_committed;
5273 }
5274 }
5275 /* Adjust the region's allocation top */
5276 g_last->top_allocated = (char *) g_last->top_allocated - deallocate_size;
5277 /* Check for underflow */
5278 if ((char *) g_last->top_reserved - g_last->reserve_size > (char *) g_last->top_allocated ||
5279 g_last->top_allocated > g_last->top_committed) {
5280 /* Clamp the region's allocation top to the region base */
5281 g_last->top_allocated = (char *) g_last->top_reserved - g_last->reserve_size;
5282 goto sbrk_exit;
5283 }
5284 result = g_last->top_allocated;
5285 }
5286 /* Assert invariants */
5287 assert (g_last);
5288 assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_allocated &&
5289 g_last->top_allocated <= g_last->top_committed);
5290 assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_committed &&
5291 g_last->top_committed <= g_last->top_reserved &&
5292 (unsigned) g_last->top_committed % g_pagesize == 0);
5293 assert ((unsigned) g_last->top_reserved % g_regionsize == 0);
5294 assert ((unsigned) g_last->reserve_size % g_regionsize == 0);
5295
5296sbrk_exit:
5297#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
5298 /* Release spin lock */
5299 slrelease (&g_sl);
5300#endif
5301 return result;
5302}
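/*
   Usage sketch (illustrative, assuming the default wiring in this file in
   which dlmalloc's MORECORE macro resolves to the sbrk emulation above):
   positive requests grow the emulated break, negative requests shrink it.

       void *brk = sbrk (128 * 1024);        // grow: commit pages, reserve regions as needed
       if (brk == (void *) MORECORE_FAILURE)
           ;                                 // caller falls back to mmap or fails
       sbrk (-(128 * 1024));                 // trim: decommit whole pages, release empty regions

   Positive sizes extend the current region, committing in page-size
   multiples and reserving fresh regions in region-size multiples when the
   current one is exhausted; negative sizes walk the allocation top back,
   decommitting whole pages and releasing whole regions as they empty.
*/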
5303
5304/* mmap for windows */
5305static void *mmap (void *ptr, long size, long prot, long type, long handle, long arg) {
5306 static long g_pagesize;
5307 static long g_regionsize;
5308#ifdef TRACE
5309 printf ("mmap %d\n", size);
5310#endif
5311#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
5312 /* Wait for spin lock */
5313 slwait (&g_sl);
5314#endif
5315 /* First time initialization */
5316 if (! g_pagesize)
5317 g_pagesize = getpagesize ();
5318 if (! g_regionsize)
5319 g_regionsize = getregionsize ();
5320 /* Assert preconditions */
5321 assert ((unsigned) ptr % g_regionsize == 0);
5322 assert (size % g_pagesize == 0);
5323 /* Allocate this */
5324 ptr = VirtualAlloc (ptr, size,
5325 MEM_RESERVE | MEM_COMMIT | MEM_TOP_DOWN, PAGE_READWRITE);
5326 if (! ptr) {
5327 ptr = (void *) MORECORE_FAILURE;
5328 goto mmap_exit;
5329 }
5330 /* Assert postconditions */
5331 assert ((unsigned) ptr % g_regionsize == 0);
5332#ifdef TRACE
5333 printf ("Commit %p %d\n", ptr, size);
5334#endif
5335mmap_exit:
5336#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
5337 /* Release spin lock */
5338 slrelease (&g_sl);
5339#endif
5340 return ptr;
5341}
5342
5343/* munmap for windows */
5344static long munmap (void *ptr, long size) {
5345 static long g_pagesize;
5346 static long g_regionsize;
5347 int rc = MUNMAP_FAILURE;
5348#ifdef TRACE
5349 printf ("munmap %p %d\n", ptr, size);
5350#endif
5351#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
5352 /* Wait for spin lock */
5353 slwait (&g_sl);
5354#endif
5355 /* First time initialization */
5356 if (! g_pagesize)
5357 g_pagesize = getpagesize ();
5358 if (! g_regionsize)
5359 g_regionsize = getregionsize ();
5360 /* Assert preconditions */
5361 assert ((unsigned) ptr % g_regionsize == 0);
5362 assert (size % g_pagesize == 0);
5363 /* Free this */
5364 if (! VirtualFree (ptr, 0,
5365 MEM_RELEASE))
5366 goto munmap_exit;
5367 rc = 0;
5368#ifdef TRACE
5369 printf ("Release %p %d\n", ptr, size);
5370#endif
5371munmap_exit:
5372#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
5373 /* Release spin lock */
5374 slrelease (&g_sl);
5375#endif
5376 return rc;
5377}
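/*
   Usage sketch (illustrative): only ptr and size are honored by this mmap
   emulation; prot, type, handle and arg are accepted purely for signature
   compatibility and ignored, and the memory always comes back reserved and
   committed, read/write, allocated top-down.  The size must be a multiple
   of the page size, as the asserts above require:

       long pagesize = getpagesize ();
       void *mem = mmap (0, 16 * pagesize, 0, 0, -1, 0);  // trailing args ignored
       if (mem != (void *) MORECORE_FAILURE) {
           // ... use the 16 pages ...
           munmap (mem, 16 * pagesize);                    // releases the whole block
       }

   mmap signals failure with MORECORE_FAILURE and munmap with
   MUNMAP_FAILURE, matching the checks the callers in this file are
   assumed to make.
*/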
5378
5379static void vminfo (CHUNK_SIZE_T *free, CHUNK_SIZE_T *reserved, CHUNK_SIZE_T *committed) {
5380 MEMORY_BASIC_INFORMATION memory_info;
5381 memory_info.BaseAddress = 0;
5382 *free = *reserved = *committed = 0;
5383 while (VirtualQuery (memory_info.BaseAddress, &memory_info, sizeof (memory_info))) {
5384 switch (memory_info.State) {
5385 case MEM_FREE:
5386 *free += memory_info.RegionSize;
5387 break;
5388 case MEM_RESERVE:
5389 *reserved += memory_info.RegionSize;
5390 break;
5391 case MEM_COMMIT:
5392 *committed += memory_info.RegionSize;
5393 break;
5394 }
5395 memory_info.BaseAddress = (char *) memory_info.BaseAddress + memory_info.RegionSize;
5396 }
5397}
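/*
   Usage sketch (illustrative): vminfo walks the entire process address
   space with VirtualQuery and totals region sizes by state, so the three
   outputs partition the address space into free, reserved-but-uncommitted,
   and committed bytes (VirtualQuery reports each region under exactly one
   state, so nothing is double-counted).  The statistics reporting code
   earlier in this file is assumed to use it roughly like this:

       CHUNK_SIZE_T free_bytes, reserved_bytes, committed_bytes;
       vminfo (&free_bytes, &reserved_bytes, &committed_bytes);
       printf ("address space: %lu free, %lu reserved, %lu committed\n",
               (unsigned long) free_bytes, (unsigned long) reserved_bytes,
               (unsigned long) committed_bytes);
*/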
5398
5399static int cpuinfo (int whole, CHUNK_SIZE_T *kernel, CHUNK_SIZE_T *user) {
5400 if (whole) {
5401 __int64 creation64, exit64, kernel64, user64;
5402 int rc = GetProcessTimes (GetCurrentProcess (),
5403 (FILETIME *) &creation64,
5404 (FILETIME *) &exit64,
5405 (FILETIME *) &kernel64,
5406 (FILETIME *) &user64);
5407 if (! rc) {
5408 *kernel = 0;
5409 *user = 0;
5410 return FALSE;
5411 }
5412 *kernel = (CHUNK_SIZE_T) (kernel64 / 10000);
5413 *user = (CHUNK_SIZE_T) (user64 / 10000);
5414 return TRUE;
5415 } else {
5416 __int64 creation64, exit64, kernel64, user64;
5417 int rc = GetThreadTimes (GetCurrentThread (),
5418 (FILETIME *) &creation64,
5419 (FILETIME *) &exit64,
5420 (FILETIME *) &kernel64,
5421 (FILETIME *) &user64);
5422 if (! rc) {
5423 *kernel = 0;
5424 *user = 0;
5425 return FALSE;
5426 }
5427 *kernel = (CHUNK_SIZE_T) (kernel64 / 10000);
5428 *user = (CHUNK_SIZE_T) (user64 / 10000);
5429 return TRUE;
5430 }
5431}
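/*
   Usage sketch (illustrative): GetProcessTimes and GetThreadTimes return
   FILETIME values measured in 100-nanosecond units, so dividing by 10000
   converts them to milliseconds.  A caller timing the current thread
   might do:

       CHUNK_SIZE_T kernel_ms, user_ms;
       if (cpuinfo (FALSE, &kernel_ms, &user_ms))
           printf ("thread cpu: %lu ms kernel, %lu ms user\n",
                   (unsigned long) kernel_ms, (unsigned long) user_ms);

   Passing TRUE for the first argument accounts for the whole process
   rather than just the calling thread.
*/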
5432
5433#endif /* WIN32 */
5434
5435/* ------------------------------------------------------------
5436History:
5437 V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
5438 * Fix malloc_state bitmap array misdeclaration
5439
5440 V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)
5441 * Allow tuning of FIRST_SORTED_BIN_SIZE
5442 * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
5443 * Better detection and support for non-contiguousness of MORECORE.
5444 Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
5445 * Bypass most of malloc if no frees. Thanks to Emery Berger.
5446 * Fix freeing of old top non-contiguous chunk in sysmalloc.
5447 * Raised default trim and map thresholds to 256K.
5448 * Fix mmap-related #defines. Thanks to Lubos Lunak.
5449 * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
5450 * Branch-free bin calculation
5451 * Default trim and mmap thresholds now 256K.
5452
5453 V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)
5454 * Introduce independent_comalloc and independent_calloc.
5455 Thanks to Michael Pachos for motivation and help.
5456 * Make optional .h file available
5457 * Allow > 2GB requests on 32bit systems.
5458 * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
5459 Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
5460 and Anonymous.
5461 * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
5462 helping test this.)
5463 * memalign: check alignment arg
5464 * realloc: don't try to shift chunks backwards, since this
5465 leads to more fragmentation in some programs and doesn't
5466 seem to help in any others.
5467 * Collect all cases in malloc requiring system memory into sYSMALLOc
5468 * Use mmap as backup to sbrk
5469 * Place all internal state in malloc_state
5470 * Introduce fastbins (although similar to 2.5.1)
5471 * Many minor tunings and cosmetic improvements
5472 * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
5473 * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
5474 Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
5475 * Include errno.h to support default failure action.
5476
5477 V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
5478 * return null for negative arguments
5479 * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
5480 * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
5481 (e.g. WIN32 platforms)
5482 * Cleanup header file inclusion for WIN32 platforms
5483 * Cleanup code to avoid Microsoft Visual C++ compiler complaints
5484 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
5485 memory allocation routines
5486 * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
5487 * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
5488 usage of 'assert' in non-WIN32 code
5489 * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
5490 avoid infinite loop
5491 * Always call 'fREe()' rather than 'free()'
5492
5493 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
5494 * Fixed ordering problem with boundary-stamping
5495
5496 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
5497 * Added pvalloc, as recommended by H.J. Liu
5498 * Added 64bit pointer support mainly from Wolfram Gloger
5499 * Added anonymously donated WIN32 sbrk emulation
5500 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
5501 * malloc_extend_top: fix mask error that caused wastage after
5502 foreign sbrks
5503 * Add linux mremap support code from HJ Liu
5504
5505 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
5506 * Integrated most documentation with the code.
5507 * Add support for mmap, with help from
5508 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5509 * Use last_remainder in more cases.
5510 * Pack bins using idea from colin@nyx10.cs.du.edu
5511 * Use ordered bins instead of best-fit threshold
5512 * Eliminate block-local decls to simplify tracing and debugging.
5513 * Support another case of realloc via move into top
5514 * Fix error occurring when initial sbrk_base not word-aligned.
5515 * Rely on page size for units instead of SBRK_UNIT to
5516 avoid surprises about sbrk alignment conventions.
5517 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
5518 (raymond@es.ele.tue.nl) for the suggestion.
5519 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
5520 * More precautions for cases where other routines call sbrk,
5521 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5522 * Added macros etc., allowing use in linux libc from
5523 H.J. Lu (hjl@gnu.ai.mit.edu)
5524 * Inverted this history list
5525
5526 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
5527 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
5528 * Removed all preallocation code since under current scheme
5529 the work required to undo bad preallocations exceeds
5530 the work saved in good cases for most test programs.
5531 * No longer use return list or unconsolidated bins since
5532 no scheme using them consistently outperforms those that don't
5533 given above changes.
5534 * Use best fit for very large chunks to prevent some worst-cases.
5535 * Added some support for debugging
5536
5537 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
5538 * Removed footers when chunks are in use. Thanks to
5539 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
5540
5541 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
5542 * Added malloc_trim, with help from Wolfram Gloger
5543 (wmglo@Dent.MED.Uni-Muenchen.DE).
5544
5545 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
5546
5547 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
5548 * realloc: try to expand in both directions
5549 * malloc: swap order of clean-bin strategy;
5550 * realloc: only conditionally expand backwards
5551 * Try not to scavenge used bins
5552 * Use bin counts as a guide to preallocation
5553 * Occasionally bin return list chunks in first scan
5554 * Add a few optimizations from colin@nyx10.cs.du.edu
5555
5556 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
5557 * faster bin computation & slightly different binning
5558 * merged all consolidations to one part of malloc proper
5559 (eliminating old malloc_find_space & malloc_clean_bin)
5560 * Scan 2 returns chunks (not just 1)
5561 * Propagate failure in realloc if malloc returns 0
5562 * Add stuff to allow compilation on non-ANSI compilers
5563 from kpv@research.att.com
5564
5565 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
5566 * removed potential for odd address access in prev_chunk
5567 * removed dependency on getpagesize.h
5568 * misc cosmetics and a bit more internal documentation
5569 * anticosmetics: mangled names in macros to evade debugger strangeness
5570 * tested on sparc, hp-700, dec-mips, rs6000
5571 with gcc & native cc (hp, dec only) allowing
5572 Detlefs & Zorn comparison study (in SIGPLAN Notices).
5573
5574 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
5575 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
5576 structure of old version, but most details differ.)
5577
5578*/