f65fd747 1/* Malloc implementation for multiple threads without lock contention.
5290baf0 2 Copyright (C) 1996, 1997 Free Software Foundation, Inc.
f65fd747 3 This file is part of the GNU C Library.
4 Contributed by Wolfram Gloger <wmglo@dent.med.uni-muenchen.de>
5 and Doug Lea <dl@cs.oswego.edu>, 1996.
6
7 The GNU C Library is free software; you can redistribute it and/or
8 modify it under the terms of the GNU Library General Public License as
9 published by the Free Software Foundation; either version 2 of the
10 License, or (at your option) any later version.
11
12 The GNU C Library is distributed in the hope that it will be useful,
13 but WITHOUT ANY WARRANTY; without even the implied warranty of
14 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
15 Library General Public License for more details.
16
17 You should have received a copy of the GNU Library General Public
18 License along with the GNU C Library; see the file COPYING.LIB. If not,
19 write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330,
20 Boston, MA 02111-1307, USA. */
21
10dc2a90 22/* V2.6.4-pt2 Sat Dec 14 1996
23
24 This work is mainly derived from malloc-2.6.4 by Doug Lea
25 <dl@cs.oswego.edu>, which is available from:
26
27 ftp://g.oswego.edu/pub/misc/malloc.c
28
29 Most of the original comments are reproduced in the code below.
30
31* Why use this malloc?
32
33 This is not the fastest, most space-conserving, most portable, or
34 most tunable malloc ever written. However it is among the fastest
35 while also being among the most space-conserving, portable and tunable.
36 Consistent balance across these factors results in a good general-purpose
37 allocator. For a high-level description, see
38 http://g.oswego.edu/dl/html/malloc.html
39
40 On many systems, the standard malloc implementation is by itself not
41 thread-safe, and therefore wrapped with a single global lock around
42 all malloc-related functions. In some applications, especially with
43 multiple available processors, this can lead to contention problems
44 and bad performance. This malloc version was designed with the goal
45 to avoid waiting for locks as much as possible. Statistics indicate
46 that this goal is achieved in many cases.
47
48* Synopsis of public routines
49
50 (Much fuller descriptions are contained in the program documentation below.)
51
52 ptmalloc_init();
53 Initialize global configuration. When compiled for multiple threads,
54 this function must be called once before any other function in the
 55 package. It is not required otherwise. It is called automatically
 56 in the Linux/GNU C library or when compiling with MALLOC_HOOKS.
57 malloc(size_t n);
58 Return a pointer to a newly allocated chunk of at least n bytes, or null
59 if no space is available.
60 free(Void_t* p);
61 Release the chunk of memory pointed to by p, or no effect if p is null.
62 realloc(Void_t* p, size_t n);
63 Return a pointer to a chunk of size n that contains the same data
64 as does chunk p up to the minimum of (n, p's size) bytes, or null
65 if no space is available. The returned pointer may or may not be
66 the same as p. If p is null, equivalent to malloc. Unless the
67 #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
68 size argument of zero (re)allocates a minimum-sized chunk.
69 memalign(size_t alignment, size_t n);
70 Return a pointer to a newly allocated chunk of n bytes, aligned
71 in accord with the alignment argument, which must be a power of
72 two.
73 valloc(size_t n);
74 Equivalent to memalign(pagesize, n), where pagesize is the page
75 size of the system (or as near to this as can be figured out from
76 all the includes/defines below.)
77 pvalloc(size_t n);
78 Equivalent to valloc(minimum-page-that-holds(n)), that is,
79 round up n to nearest pagesize.
80 calloc(size_t unit, size_t quantity);
81 Returns a pointer to quantity * unit bytes, with all locations
82 set to zero.
83 cfree(Void_t* p);
84 Equivalent to free(p).
85 malloc_trim(size_t pad);
86 Release all but pad bytes of freed top-most memory back
87 to the system. Return 1 if successful, else 0.
88 malloc_usable_size(Void_t* p);
 89 Report the number of usable allocated bytes associated with allocated
90 chunk p. This may or may not report more bytes than were requested,
91 due to alignment and minimum size constraints.
92 malloc_stats();
93 Prints brief summary statistics on stderr.
94 mallinfo()
95 Returns (by copy) a struct containing various summary statistics.
96 mallopt(int parameter_number, int parameter_value)
97 Changes one of the tunable parameters described below. Returns
98 1 if successful in changing the parameter, else 0.
99
100* Vital statistics:
101
102 Alignment: 8-byte
103 8 byte alignment is currently hardwired into the design. This
104 seems to suffice for all current machines and C compilers.
105
106 Assumed pointer representation: 4 or 8 bytes
 107 Code for 8-byte pointers is untested by me but is reported to
 108 work reliably by Wolfram Gloger, who contributed most of the
109 changes supporting this.
110
111 Assumed size_t representation: 4 or 8 bytes
112 Note that size_t is allowed to be 4 bytes even if pointers are 8.
113
114 Minimum overhead per allocated chunk: 4 or 8 bytes
115 Each malloced chunk has a hidden overhead of 4 bytes holding size
116 and status information.
117
118 Minimum allocated size: 4-byte ptrs: 16 bytes (including 4 overhead)
 119 8-byte ptrs: 24/32 bytes (including 4/8 overhead)
120
121 When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
122 ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
123 needed; 4 (8) for a trailing size field
124 and 8 (16) bytes for free list pointers. Thus, the minimum
125 allocatable size is 16/24/32 bytes.
126
127 Even a request for zero bytes (i.e., malloc(0)) returns a
128 pointer to something of the minimum allocatable size.
129
130 Maximum allocated size: 4-byte size_t: 2^31 - 8 bytes
131 8-byte size_t: 2^63 - 16 bytes
132
133 It is assumed that (possibly signed) size_t bit values suffice to
134 represent chunk sizes. `Possibly signed' is due to the fact
135 that `size_t' may be defined on a system as either a signed or
136 an unsigned type. To be conservative, values that would appear
137 as negative numbers are avoided.
138 Requests for sizes with a negative sign bit will return a
139 minimum-sized chunk.
140
141 Maximum overhead wastage per allocated chunk: normally 15 bytes
142
6d52618b 143 Alignment demands, plus the minimum allocatable size restriction
144 make the normal worst-case wastage 15 bytes (i.e., up to 15
145 more bytes will be allocated than were requested in malloc), with
146 two exceptions:
147 1. Because requests for zero bytes allocate non-zero space,
148 the worst case wastage for a request of zero bytes is 24 bytes.
149 2. For requests >= mmap_threshold that are serviced via
150 mmap(), the worst case wastage is 8 bytes plus the remainder
151 from a system page (the minimal mmap unit); typically 4096 bytes.
152
153* Limitations
154
155 Here are some features that are NOT currently supported
156
157 * No automated mechanism for fully checking that all accesses
158 to malloced memory stay within their bounds.
159 * No support for compaction.
160
161* Synopsis of compile-time options:
162
163 People have reported using previous versions of this malloc on all
164 versions of Unix, sometimes by tweaking some of the defines
165 below. It has been tested most extensively on Solaris and
166 Linux. People have also reported adapting this malloc for use in
167 stand-alone embedded systems.
168
169 The implementation is in straight, hand-tuned ANSI C. Among other
170 consequences, it uses a lot of macros. Because of this, to be at
171 all usable, this code should be compiled using an optimizing compiler
172 (for example gcc -O2) that can simplify expressions and control
173 paths.
174
175 __STD_C (default: derived from C compiler defines)
176 Nonzero if using ANSI-standard C compiler, a C++ compiler, or
177 a C compiler sufficiently close to ANSI to get away with it.
178 MALLOC_DEBUG (default: NOT defined)
179 Define to enable debugging. Adds fairly extensive assertion-based
180 checking to help track down memory errors, but noticeably slows down
181 execution.
7e3be507 182 MALLOC_HOOKS (default: NOT defined)
 183 Define to enable support for run-time replacement of the allocation
 184 functions through user-defined `hooks'.
185 REALLOC_ZERO_BYTES_FREES (default: NOT defined)
186 Define this if you think that realloc(p, 0) should be equivalent
187 to free(p). Otherwise, since malloc returns a unique pointer for
188 malloc(0), so does realloc(p, 0).
189 HAVE_MEMCPY (default: defined)
190 Define if you are not otherwise using ANSI STD C, but still
191 have memcpy and memset in your C library and want to use them.
192 Otherwise, simple internal versions are supplied.
193 USE_MEMCPY (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
194 Define as 1 if you want the C library versions of memset and
195 memcpy called in realloc and calloc (otherwise macro versions are used).
196 At least on some platforms, the simple macro versions usually
197 outperform libc versions.
198 HAVE_MMAP (default: defined as 1)
199 Define to non-zero to optionally make malloc() use mmap() to
200 allocate very large blocks.
201 HAVE_MREMAP (default: defined as 0 unless Linux libc set)
202 Define to non-zero to optionally make realloc() use mremap() to
203 reallocate very large blocks.
204 malloc_getpagesize (default: derived from system #includes)
205 Either a constant or routine call returning the system page size.
206 HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
207 Optionally define if you are on a system with a /usr/include/malloc.h
208 that declares struct mallinfo. It is not at all necessary to
209 define this even if you do, but will ensure consistency.
210 INTERNAL_SIZE_T (default: size_t)
211 Define to a 32-bit type (probably `unsigned int') if you are on a
212 64-bit machine, yet do not want or need to allow malloc requests of
213 greater than 2^31 to be handled. This saves space, especially for
214 very small chunks.
215 _LIBC (default: NOT defined)
216 Defined only when compiled as part of the Linux libc/glibc.
217 Also note that there is some odd internal name-mangling via defines
218 (for example, internally, `malloc' is named `mALLOc') needed
219 when compiling in this case. These look funny but don't otherwise
220 affect anything.
221 LACKS_UNISTD_H (default: undefined)
222 Define this if your system does not have a <unistd.h>.
223 MORECORE (default: sbrk)
224 The name of the routine to call to obtain more memory from the system.
225 MORECORE_FAILURE (default: -1)
226 The value returned upon failure of MORECORE.
227 MORECORE_CLEARS (default 1)
228 True (1) if the routine mapped to MORECORE zeroes out memory (which
229 holds for sbrk).
230 DEFAULT_TRIM_THRESHOLD
231 DEFAULT_TOP_PAD
232 DEFAULT_MMAP_THRESHOLD
233 DEFAULT_MMAP_MAX
234 Default values of tunable parameters (described in detail below)
235 controlling interaction with host system routines (sbrk, mmap, etc).
236 These values may also be changed dynamically via mallopt(). The
237 preset defaults are those that give best performance for typical
238 programs/systems.
239 DEFAULT_CHECK_ACTION
240 When the standard debugging hooks are in place, and a pointer is
241 detected as corrupt, do nothing (0), print an error message (1),
242 or call abort() (2).
243
244
245*/
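/* Illustrative sketch (not part of the original source and excluded from
   compilation): how an application might exercise the public routines
   summarized above.  Assumes the usual declarations from <stdlib.h> and a
   <malloc.h> that declares malloc_trim().  */
#if 0
#include <stdlib.h>
#include <string.h>
#include <malloc.h>

static int example_usage(void)
{
  char *p, *q;

  p = malloc(100);              /* at least 100 usable bytes, or NULL */
  if (p == NULL) return -1;
  memset(p, 'x', 100);

  q = realloc(p, 200);          /* may move the block; old contents are kept */
  if (q == NULL) { free(p); return -1; }

  free(q);                      /* return the chunk to the allocator */
  malloc_trim(0);               /* optionally give free top memory back to the OS */
  return 0;
}
#endif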
246
247/*
248
249* Compile-time options for multiple threads:
250
251 USE_PTHREADS, USE_THR, USE_SPROC
252 Define one of these as 1 to select the thread interface:
253 POSIX threads, Solaris threads or SGI sproc's, respectively.
254 If none of these is defined as non-zero, you get a `normal'
255 malloc implementation which is not thread-safe. Support for
256 multiple threads requires HAVE_MMAP=1. As an exception, when
257 compiling for GNU libc, i.e. when _LIBC is defined, then none of
258 the USE_... symbols have to be defined.
259
260 HEAP_MIN_SIZE
261 HEAP_MAX_SIZE
262 When thread support is enabled, additional `heap's are created
263 with mmap calls. These are limited in size; HEAP_MIN_SIZE should
264 be a multiple of the page size, while HEAP_MAX_SIZE must be a power
265 of two for alignment reasons. HEAP_MAX_SIZE should be at least
266 twice as large as the mmap threshold.
267 THREAD_STATS
268 When this is defined as non-zero, some statistics on mutex locking
269 are computed.
270
271*/
272
273\f
274
275
276/* Preliminaries */
277
278#ifndef __STD_C
279#if defined (__STDC__)
280#define __STD_C 1
281#else
282#if __cplusplus
283#define __STD_C 1
284#else
285#define __STD_C 0
286#endif /*__cplusplus*/
287#endif /*__STDC__*/
288#endif /*__STD_C*/
289
290#ifndef Void_t
291#if __STD_C
292#define Void_t void
293#else
294#define Void_t char
295#endif
296#endif /*Void_t*/
297
298#if __STD_C
299# include <stddef.h> /* for size_t */
300# if defined(_LIBC) || defined(MALLOC_HOOKS)
7e3be507 301# include <stdlib.h> /* for getenv(), abort() */
10dc2a90 302# endif
f65fd747 303#else
10dc2a90 304# include <sys/types.h>
305#endif
306
307/* Macros for handling mutexes and thread-specific data. This is
308 included early, because some thread-related header files (such as
309 pthread.h) should be included before any others. */
310#include "thread-m.h"
311
312#ifdef __cplusplus
313extern "C" {
314#endif
315
316#include <stdio.h> /* needed for malloc_stats */
317
318
319/*
320 Compile-time options
321*/
322
323
324/*
325 Debugging:
326
327 Because freed chunks may be overwritten with link fields, this
328 malloc will often die when freed memory is overwritten by user
329 programs. This can be very effective (albeit in an annoying way)
330 in helping track down dangling pointers.
331
332 If you compile with -DMALLOC_DEBUG, a number of assertion checks are
333 enabled that will catch more memory errors. You probably won't be
334 able to make much sense of the actual assertion errors, but they
335 should help you locate incorrectly overwritten memory. The
336 checking is fairly extensive, and will slow down execution
337 noticeably. Calling malloc_stats or mallinfo with MALLOC_DEBUG set will
338 attempt to check every non-mmapped allocated and free chunk in the
6d52618b 339 course of computing the summaries. (By nature, mmapped regions
340 cannot be checked very much automatically.)
341
342 Setting MALLOC_DEBUG may also be helpful if you are trying to modify
343 this code. The assertions in the check routines spell out in more
344 detail the assumptions and invariants underlying the algorithms.
345
346*/
347
348#if MALLOC_DEBUG
349#include <assert.h>
350#else
351#define assert(x) ((void)0)
352#endif
353
354
355/*
356 INTERNAL_SIZE_T is the word-size used for internal bookkeeping
357 of chunk sizes. On a 64-bit machine, you can reduce malloc
358 overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
359 at the expense of not being able to handle requests greater than
360 2^31. This limitation is hardly ever a concern; you are encouraged
361 to set this. However, the default version is the same as size_t.
362*/
363
364#ifndef INTERNAL_SIZE_T
365#define INTERNAL_SIZE_T size_t
366#endif
367
368/*
369 REALLOC_ZERO_BYTES_FREES should be set if a call to
370 realloc with zero bytes should be the same as a call to free.
371 Some people think it should. Otherwise, since this malloc
372 returns a unique pointer for malloc(0), so does realloc(p, 0).
373*/
374
375
376/* #define REALLOC_ZERO_BYTES_FREES */
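/* Illustrative sketch (not compiled): the observable difference made by
   REALLOC_ZERO_BYTES_FREES.  Without the define, realloc(p, 0) returns a
   new minimum-sized chunk that must itself be freed; with it defined,
   realloc(p, 0) acts like free(p) (and is expected to return a null
   pointer, so nothing further should be done with the result).  */
#if 0
#include <stdlib.h>

static void example_realloc_zero(void)
{
  void *p = malloc(32);
  void *q = realloc(p, 0);
#ifndef REALLOC_ZERO_BYTES_FREES
  free(q);        /* q is a live minimum-sized chunk */
#else
  (void) q;       /* p was already released by realloc */
#endif
}
#endif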
377
378
379/*
380 HAVE_MEMCPY should be defined if you are not otherwise using
381 ANSI STD C, but still have memcpy and memset in your C library
382 and want to use them in calloc and realloc. Otherwise simple
383 macro versions are defined here.
384
385 USE_MEMCPY should be defined as 1 if you actually want to
386 have memset and memcpy called. People report that the macro
387 versions are often enough faster than libc versions on many
388 systems that it is better to use them.
389
390*/
391
10dc2a90 392#define HAVE_MEMCPY 1
393
394#ifndef USE_MEMCPY
395#ifdef HAVE_MEMCPY
396#define USE_MEMCPY 1
397#else
398#define USE_MEMCPY 0
399#endif
400#endif
401
402#if (__STD_C || defined(HAVE_MEMCPY))
403
404#if __STD_C
405void* memset(void*, int, size_t);
406void* memcpy(void*, const void*, size_t);
407#else
408Void_t* memset();
409Void_t* memcpy();
410#endif
411#endif
412
413#if USE_MEMCPY
414
415/* The following macros are only invoked with (2n+1)-multiples of
416 INTERNAL_SIZE_T units, with a positive integer n. This is exploited
417 for fast inline execution when n is small. */
418
419#define MALLOC_ZERO(charp, nbytes) \
420do { \
421 INTERNAL_SIZE_T mzsz = (nbytes); \
422 if(mzsz <= 9*sizeof(mzsz)) { \
423 INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp); \
424 if(mzsz >= 5*sizeof(mzsz)) { *mz++ = 0; \
425 *mz++ = 0; \
426 if(mzsz >= 7*sizeof(mzsz)) { *mz++ = 0; \
427 *mz++ = 0; \
428 if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0; \
429 *mz++ = 0; }}} \
430 *mz++ = 0; \
431 *mz++ = 0; \
432 *mz = 0; \
433 } else memset((charp), 0, mzsz); \
434} while(0)
435
436#define MALLOC_COPY(dest,src,nbytes) \
437do { \
438 INTERNAL_SIZE_T mcsz = (nbytes); \
439 if(mcsz <= 9*sizeof(mcsz)) { \
440 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src); \
441 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest); \
442 if(mcsz >= 5*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
443 *mcdst++ = *mcsrc++; \
444 if(mcsz >= 7*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
445 *mcdst++ = *mcsrc++; \
446 if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
447 *mcdst++ = *mcsrc++; }}} \
448 *mcdst++ = *mcsrc++; \
449 *mcdst++ = *mcsrc++; \
450 *mcdst = *mcsrc ; \
451 } else memcpy(dest, src, mcsz); \
452} while(0)
453
454#else /* !USE_MEMCPY */
455
456/* Use Duff's device for good zeroing/copying performance. */
457
458#define MALLOC_ZERO(charp, nbytes) \
459do { \
460 INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp); \
461 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
462 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
463 switch (mctmp) { \
464 case 0: for(;;) { *mzp++ = 0; \
465 case 7: *mzp++ = 0; \
466 case 6: *mzp++ = 0; \
467 case 5: *mzp++ = 0; \
468 case 4: *mzp++ = 0; \
469 case 3: *mzp++ = 0; \
470 case 2: *mzp++ = 0; \
471 case 1: *mzp++ = 0; if(mcn <= 0) break; mcn--; } \
472 } \
473} while(0)
474
475#define MALLOC_COPY(dest,src,nbytes) \
476do { \
477 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src; \
478 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest; \
479 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
480 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
481 switch (mctmp) { \
482 case 0: for(;;) { *mcdst++ = *mcsrc++; \
483 case 7: *mcdst++ = *mcsrc++; \
484 case 6: *mcdst++ = *mcsrc++; \
485 case 5: *mcdst++ = *mcsrc++; \
486 case 4: *mcdst++ = *mcsrc++; \
487 case 3: *mcdst++ = *mcsrc++; \
488 case 2: *mcdst++ = *mcsrc++; \
489 case 1: *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; } \
490 } \
491} while(0)
492
493#endif
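/* Illustrative sketch (not compiled): how MALLOC_COPY/MALLOC_ZERO are meant
   to be invoked.  As noted above for the unrolled forms, the allocator only
   hands them byte counts that are (2n+1) multiples of INTERNAL_SIZE_T; five
   words is used here as an example.  */
#if 0
static void example_copy_zero(Void_t* dest, Void_t* src)
{
  MALLOC_COPY((char*)dest, (char*)src, 5*sizeof(INTERNAL_SIZE_T));
  MALLOC_ZERO((char*)src, 5*sizeof(INTERNAL_SIZE_T));
}
#endif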
494
495
496/*
497 Define HAVE_MMAP to optionally make malloc() use mmap() to
498 allocate very large blocks. These will be returned to the
499 operating system immediately after a free().
500*/
501
502#ifndef HAVE_MMAP
503#define HAVE_MMAP 1
504#endif
505
506/*
507 Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
508 large blocks. This is currently only possible on Linux with
509 kernel versions newer than 1.3.77.
510*/
511
512#ifndef HAVE_MREMAP
513#define HAVE_MREMAP defined(__linux__)
514#endif
515
516#if HAVE_MMAP
517
518#include <unistd.h>
519#include <fcntl.h>
520#include <sys/mman.h>
521
522#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
523#define MAP_ANONYMOUS MAP_ANON
524#endif
525
526#endif /* HAVE_MMAP */
527
528/*
529 Access to system page size. To the extent possible, this malloc
530 manages memory from the system in page-size units.
531
532 The following mechanics for getpagesize were adapted from
533 bsd/gnu getpagesize.h
534*/
535
536#ifndef LACKS_UNISTD_H
537# include <unistd.h>
538#endif
539
540#ifndef malloc_getpagesize
541# ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
542# ifndef _SC_PAGE_SIZE
543# define _SC_PAGE_SIZE _SC_PAGESIZE
544# endif
545# endif
546# ifdef _SC_PAGE_SIZE
547# define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
548# else
549# if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
550 extern size_t getpagesize();
551# define malloc_getpagesize getpagesize()
552# else
553# include <sys/param.h>
554# ifdef EXEC_PAGESIZE
555# define malloc_getpagesize EXEC_PAGESIZE
556# else
557# ifdef NBPG
558# ifndef CLSIZE
559# define malloc_getpagesize NBPG
560# else
561# define malloc_getpagesize (NBPG * CLSIZE)
562# endif
563# else
564# ifdef NBPC
565# define malloc_getpagesize NBPC
566# else
567# ifdef PAGESIZE
568# define malloc_getpagesize PAGESIZE
569# else
570# define malloc_getpagesize (4096) /* just guess */
571# endif
572# endif
573# endif
574# endif
575# endif
576# endif
577#endif
578
579
580
581/*
582
583 This version of malloc supports the standard SVID/XPG mallinfo
584 routine that returns a struct containing the same kind of
585 information you can get from malloc_stats. It should work on
586 any SVID/XPG compliant system that has a /usr/include/malloc.h
587 defining struct mallinfo. (If you'd like to install such a thing
588 yourself, cut out the preliminary declarations as described above
589 and below and save them in a malloc.h file. But there's no
590 compelling reason to bother to do this.)
591
592 The main declaration needed is the mallinfo struct that is returned
 593 (by-copy) by mallinfo(). The SVID/XPG mallinfo struct contains a
 594 bunch of fields, most of which are not even meaningful in this
 595 version of malloc. Some of these fields are instead filled by
596 mallinfo() with other numbers that might possibly be of interest.
597
598 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
599 /usr/include/malloc.h file that includes a declaration of struct
600 mallinfo. If so, it is included; else an SVID2/XPG2 compliant
601 version is declared below. These must be precisely the same for
602 mallinfo() to work.
603
604*/
605
606/* #define HAVE_USR_INCLUDE_MALLOC_H */
607
608#if HAVE_USR_INCLUDE_MALLOC_H
8a4b65b4 609# include "/usr/include/malloc.h"
f65fd747 610#else
611# ifdef _LIBC
612# include "malloc.h"
613# else
614# include "ptmalloc.h"
615# endif
616#endif
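/* Illustrative sketch (not compiled): reading a few struct mallinfo fields
   after some allocation activity.  The field names follow the SVID/XPG
   layout declared above: arena = bytes obtained via sbrk, fordblks = total
   free bytes, keepcost = releasable top-most space.  */
#if 0
static void example_mallinfo(void)
{
  struct mallinfo mi = mallinfo();
  fprintf(stderr, "sbrked: %d free: %d trimmable: %d\n",
          mi.arena, mi.fordblks, mi.keepcost);
}
#endif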
617
618
619
620#ifndef DEFAULT_TRIM_THRESHOLD
621#define DEFAULT_TRIM_THRESHOLD (128 * 1024)
622#endif
623
624/*
625 M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
626 to keep before releasing via malloc_trim in free().
627
628 Automatic trimming is mainly useful in long-lived programs.
629 Because trimming via sbrk can be slow on some systems, and can
630 sometimes be wasteful (in cases where programs immediately
631 afterward allocate more large chunks) the value should be high
632 enough so that your overall system performance would improve by
633 releasing.
634
635 The trim threshold and the mmap control parameters (see below)
636 can be traded off with one another. Trimming and mmapping are
637 two different ways of releasing unused memory back to the
638 system. Between these two, it is often possible to keep
639 system-level demands of a long-lived program down to a bare
640 minimum. For example, in one test suite of sessions measuring
641 the XF86 X server on Linux, using a trim threshold of 128K and a
642 mmap threshold of 192K led to near-minimal long term resource
643 consumption.
644
645 If you are using this malloc in a long-lived program, it should
646 pay to experiment with these values. As a rough guide, you
647 might set to a value close to the average size of a process
648 (program) running on your system. Releasing this much memory
649 would allow such a process to run in memory. Generally, it's
831372e7 650 worth it to tune for trimming rather than memory mapping when a
651 program undergoes phases where several large chunks are
652 allocated and released in ways that can reuse each other's
653 storage, perhaps mixed with phases where there are no such
654 chunks at all. And in well-behaved long-lived programs,
655 controlling release of large blocks via trimming versus mapping
656 is usually faster.
657
658 However, in most programs, these parameters serve mainly as
659 protection against the system-level effects of carrying around
660 massive amounts of unneeded memory. Since frequent calls to
661 sbrk, mmap, and munmap otherwise degrade performance, the default
662 parameters are set to relatively high values that serve only as
663 safeguards.
664
665 The default trim value is high enough to cause trimming only in
666 fairly extreme (by current memory consumption standards) cases.
667 It must be greater than page size to have any useful effect. To
668 disable trimming completely, you can set to (unsigned long)(-1);
669
670
671*/
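/* Illustrative sketch (not compiled): tuning the trim threshold at run time.
   Lowering it makes free() hand top memory back to the system more eagerly;
   a value of (unsigned long)(-1) effectively disables automatic trimming.  */
#if 0
static void example_trim_tuning(void)
{
  mallopt(M_TRIM_THRESHOLD, 64*1024);  /* trim once 64kB of top space is unused */
  /* ... allocation-heavy phase ... */
  mallopt(M_TRIM_THRESHOLD, -1);       /* effectively disable automatic trimming */
}
#endif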
672
673
674#ifndef DEFAULT_TOP_PAD
675#define DEFAULT_TOP_PAD (0)
676#endif
677
678/*
679 M_TOP_PAD is the amount of extra `padding' space to allocate or
680 retain whenever sbrk is called. It is used in two ways internally:
681
682 * When sbrk is called to extend the top of the arena to satisfy
683 a new malloc request, this much padding is added to the sbrk
684 request.
685
686 * When malloc_trim is called automatically from free(),
687 it is used as the `pad' argument.
688
689 In both cases, the actual amount of padding is rounded
690 so that the end of the arena is always a system page boundary.
691
692 The main reason for using padding is to avoid calling sbrk so
693 often. Having even a small pad greatly reduces the likelihood
694 that nearly every malloc request during program start-up (or
695 after trimming) will invoke sbrk, which needlessly wastes
696 time.
697
698 Automatic rounding-up to page-size units is normally sufficient
699 to avoid measurable overhead, so the default is 0. However, in
700 systems where sbrk is relatively slow, it can pay to increase
701 this value, at the expense of carrying around more memory than
702 the program needs.
703
704*/
705
706
707#ifndef DEFAULT_MMAP_THRESHOLD
708#define DEFAULT_MMAP_THRESHOLD (128 * 1024)
709#endif
710
711/*
712
713 M_MMAP_THRESHOLD is the request size threshold for using mmap()
714 to service a request. Requests of at least this size that cannot
715 be allocated using already-existing space will be serviced via mmap.
716 (If enough normal freed space already exists it is used instead.)
717
718 Using mmap segregates relatively large chunks of memory so that
719 they can be individually obtained and released from the host
720 system. A request serviced through mmap is never reused by any
721 other request (at least not directly; the system may just so
722 happen to remap successive requests to the same locations).
723
724 Segregating space in this way has the benefit that mmapped space
725 can ALWAYS be individually released back to the system, which
726 helps keep the system level memory demands of a long-lived
727 program low. Mapped memory can never become `locked' between
728 other chunks, as can happen with normally allocated chunks, which
 729 means that even trimming via malloc_trim would not release them.
730
731 However, it has the disadvantages that:
732
733 1. The space cannot be reclaimed, consolidated, and then
734 used to service later requests, as happens with normal chunks.
735 2. It can lead to more wastage because of mmap page alignment
736 requirements
737 3. It causes malloc performance to be more dependent on host
738 system memory management support routines which may vary in
739 implementation quality and may impose arbitrary
740 limitations. Generally, servicing a request via normal
741 malloc steps is faster than going through a system's mmap.
742
743 All together, these considerations should lead you to use mmap
744 only for relatively large requests.
745
746
747*/
748
749
750
751#ifndef DEFAULT_MMAP_MAX
752#if HAVE_MMAP
753#define DEFAULT_MMAP_MAX (1024)
754#else
755#define DEFAULT_MMAP_MAX (0)
756#endif
757#endif
758
759/*
760 M_MMAP_MAX is the maximum number of requests to simultaneously
761 service using mmap. This parameter exists because:
762
763 1. Some systems have a limited number of internal tables for
764 use by mmap.
765 2. In most systems, overreliance on mmap can degrade overall
766 performance.
767 3. If a program allocates many large regions, it is probably
768 better off using normal sbrk-based allocation routines that
769 can reclaim and reallocate normal heap memory. Using a
770 small value allows transition into this mode after the
771 first few allocations.
772
773 Setting to 0 disables all use of mmap. If HAVE_MMAP is not set,
774 the default value is 0, and attempts to set it to non-zero values
775 in mallopt will fail.
776*/
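/* Illustrative sketch (not compiled): adjusting the mmap-related parameters
   and M_TOP_PAD together.  The same knobs can be preset via the environment
   variables handled in ptmalloc_init() below (MALLOC_MMAP_THRESHOLD_,
   MALLOC_MMAP_MAX_, MALLOC_TOP_PAD_).  */
#if 0
static void example_mmap_tuning(void)
{
  mallopt(M_MMAP_THRESHOLD, 256*1024); /* only requests >= 256kB use mmap() */
  mallopt(M_MMAP_MAX, 64);             /* at most 64 simultaneous mmapped chunks */
  mallopt(M_TOP_PAD, 32*1024);         /* ask sbrk for 32kB of extra headroom */
}
#endif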
777
778
779
780#ifndef DEFAULT_CHECK_ACTION
781#define DEFAULT_CHECK_ACTION 1
782#endif
783
784/* What to do if the standard debugging hooks are in place and a
785 corrupt pointer is detected: do nothing (0), print an error message
786 (1), or call abort() (2). */
787
788
789
790#define HEAP_MIN_SIZE (32*1024)
791#define HEAP_MAX_SIZE (1024*1024) /* must be a power of two */
792
793/* HEAP_MIN_SIZE and HEAP_MAX_SIZE limit the size of mmap()ed heaps
794 that are dynamically created for multi-threaded programs. The
795 maximum size must be a power of two, for fast determination of
796 which heap belongs to a chunk. It should be much larger than
797 the mmap threshold, so that requests with a size just below that
798 threshold can be fulfilled without creating too many heaps.
799*/
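/* Illustrative sketch (not compiled): the address-masking trick that the
   power-of-two HEAP_MAX_SIZE requirement enables.  Because each mmap()ed
   heap starts at an address aligned to HEAP_MAX_SIZE, the heap_info header
   (defined further below) for any chunk inside it can be found by rounding
   the chunk address down; the macro actually used by the implementation may
   differ in detail.  */
#if 0
#define example_heap_for_ptr(ptr) \
  ((heap_info *)((unsigned long)(ptr) & ~(HEAP_MAX_SIZE-1)))
#endif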
800
801
802
803#ifndef THREAD_STATS
804#define THREAD_STATS 0
805#endif
806
807/* If THREAD_STATS is non-zero, some statistics on mutex locking are
808 computed. */
809
810
811/*
812
813 Special defines for the Linux/GNU C library.
814
815*/
816
817
818#ifdef _LIBC
819
820#if __STD_C
821
822Void_t * __default_morecore (ptrdiff_t);
1228ed5c 823Void_t *(*__morecore)(ptrdiff_t) = __default_morecore;
824
825#else
826
827Void_t * __default_morecore ();
1228ed5c 828Void_t *(*__morecore)() = __default_morecore;
829
830#endif
831
832#define MORECORE (*__morecore)
833#define MORECORE_FAILURE 0
834#define MORECORE_CLEARS 1
835#define mmap __mmap
836#define munmap __munmap
837#define mremap __mremap
838#undef malloc_getpagesize
839#define malloc_getpagesize __getpagesize()
840
841#else /* _LIBC */
842
843#if __STD_C
844extern Void_t* sbrk(ptrdiff_t);
845#else
846extern Void_t* sbrk();
847#endif
848
849#ifndef MORECORE
850#define MORECORE sbrk
851#endif
852
853#ifndef MORECORE_FAILURE
854#define MORECORE_FAILURE -1
855#endif
856
857#ifndef MORECORE_CLEARS
858#define MORECORE_CLEARS 1
859#endif
860
861#endif /* _LIBC */
862
10dc2a90 863#ifdef _LIBC
864
865#define cALLOc __libc_calloc
866#define fREe __libc_free
867#define mALLOc __libc_malloc
868#define mEMALIGn __libc_memalign
869#define rEALLOc __libc_realloc
870#define vALLOc __libc_valloc
871#define pvALLOc __libc_pvalloc
872#define mALLINFo __libc_mallinfo
873#define mALLOPt __libc_mallopt
874#define mALLOC_STATs __malloc_stats
875#define mALLOC_USABLE_SIZe __malloc_usable_size
876#define mALLOC_TRIm __malloc_trim
f65fd747 877
878#else
879
880#define cALLOc calloc
881#define fREe free
882#define mALLOc malloc
883#define mEMALIGn memalign
884#define rEALLOc realloc
885#define vALLOc valloc
886#define pvALLOc pvalloc
887#define mALLINFo mallinfo
888#define mALLOPt mallopt
889#define mALLOC_STATs malloc_stats
890#define mALLOC_USABLE_SIZe malloc_usable_size
891#define mALLOC_TRIm malloc_trim
892
893#endif
894
895/* Public routines */
896
897#if __STD_C
898
899#ifndef _LIBC
900void ptmalloc_init(void);
901#endif
902Void_t* mALLOc(size_t);
903void fREe(Void_t*);
904Void_t* rEALLOc(Void_t*, size_t);
905Void_t* mEMALIGn(size_t, size_t);
906Void_t* vALLOc(size_t);
907Void_t* pvALLOc(size_t);
908Void_t* cALLOc(size_t, size_t);
909void cfree(Void_t*);
910int mALLOC_TRIm(size_t);
911size_t mALLOC_USABLE_SIZe(Void_t*);
912void mALLOC_STATs(void);
913int mALLOPt(int, int);
914struct mallinfo mALLINFo(void);
915#else
916#ifndef _LIBC
917void ptmalloc_init();
918#endif
919Void_t* mALLOc();
920void fREe();
921Void_t* rEALLOc();
922Void_t* mEMALIGn();
923Void_t* vALLOc();
924Void_t* pvALLOc();
925Void_t* cALLOc();
926void cfree();
927int mALLOC_TRIm();
928size_t mALLOC_USABLE_SIZe();
929void mALLOC_STATs();
930int mALLOPt();
931struct mallinfo mALLINFo();
932#endif
933
934
935#ifdef __cplusplus
936}; /* end of extern "C" */
937#endif
938
939#if !defined(NO_THREADS) && !HAVE_MMAP
940"Can't have threads support without mmap"
941#endif
942
943
944/*
945 Type declarations
946*/
947
948
949struct malloc_chunk
950{
951 INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
952 INTERNAL_SIZE_T size; /* Size in bytes, including overhead. */
953 struct malloc_chunk* fd; /* double links -- used only if free. */
954 struct malloc_chunk* bk;
955};
956
957typedef struct malloc_chunk* mchunkptr;
958
959/*
960
961 malloc_chunk details:
962
963 (The following includes lightly edited explanations by Colin Plumb.)
964
965 Chunks of memory are maintained using a `boundary tag' method as
966 described in e.g., Knuth or Standish. (See the paper by Paul
967 Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
968 survey of such techniques.) Sizes of free chunks are stored both
969 in the front of each chunk and at the end. This makes
970 consolidating fragmented chunks into bigger chunks very fast. The
971 size fields also hold bits representing whether chunks are free or
972 in use.
973
974 An allocated chunk looks like this:
975
976
977 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
978 | Size of previous chunk, if allocated | |
979 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
980 | Size of chunk, in bytes |P|
981 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
982 | User data starts here... .
983 . .
984 . (malloc_usable_space() bytes) .
985 . |
986nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
987 | Size of chunk |
988 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
989
990
991 Where "chunk" is the front of the chunk for the purpose of most of
992 the malloc code, but "mem" is the pointer that is returned to the
993 user. "Nextchunk" is the beginning of the next contiguous chunk.
994
6d52618b 995 Chunks always begin on even word boundaries, so the mem portion
996 (which is returned to the user) is also on an even word boundary, and
997 thus double-word aligned.
998
999 Free chunks are stored in circular doubly-linked lists, and look like this:
1000
1001 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1002 | Size of previous chunk |
1003 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1004 `head:' | Size of chunk, in bytes |P|
1005 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1006 | Forward pointer to next chunk in list |
1007 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1008 | Back pointer to previous chunk in list |
1009 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1010 | Unused space (may be 0 bytes long) .
1011 . .
1012 . |
1013nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1014 `foot:' | Size of chunk, in bytes |
1015 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1016
1017 The P (PREV_INUSE) bit, stored in the unused low-order bit of the
1018 chunk size (which is always a multiple of two words), is an in-use
1019 bit for the *previous* chunk. If that bit is *clear*, then the
1020 word before the current chunk size contains the previous chunk
1021 size, and can be used to find the front of the previous chunk.
1022 (The very first chunk allocated always has this bit set,
1023 preventing access to non-existent (or non-owned) memory.)
1024
1025 Note that the `foot' of the current chunk is actually represented
1026 as the prev_size of the NEXT chunk. (This makes it easier to
1027 deal with alignments etc).
1028
1029 The two exceptions to all this are
1030
1031 1. The special chunk `top', which doesn't bother using the
1032 trailing size field since there is no
1033 next contiguous chunk that would have to index off it. (After
1034 initialization, `top' is forced to always exist. If it would
1035 become less than MINSIZE bytes long, it is replenished via
1036 malloc_extend_top.)
1037
1038 2. Chunks allocated via mmap, which have the second-lowest-order
1039 bit (IS_MMAPPED) set in their size fields. Because they are
1040 never merged or traversed from any other chunk, they have no
1041 foot size or inuse information.
1042
1043 Available chunks are kept in any of several places (all declared below):
1044
1045 * `av': An array of chunks serving as bin headers for consolidated
1046 chunks. Each bin is doubly linked. The bins are approximately
1047 proportionally (log) spaced. There are a lot of these bins
1048 (128). This may look excessive, but works very well in
1049 practice. All procedures maintain the invariant that no
1050 consolidated chunk physically borders another one. Chunks in
1051 bins are kept in size order, with ties going to the
1052 approximately least recently used chunk.
1053
1054 The chunks in each bin are maintained in decreasing sorted order by
1055 size. This is irrelevant for the small bins, which all contain
1056 the same-sized chunks, but facilitates best-fit allocation for
1057 larger chunks. (These lists are just sequential. Keeping them in
1058 order almost never requires enough traversal to warrant using
1059 fancier ordered data structures.) Chunks of the same size are
1060 linked with the most recently freed at the front, and allocations
1061 are taken from the back. This results in LRU or FIFO allocation
1062 order, which tends to give each chunk an equal opportunity to be
1063 consolidated with adjacent freed chunks, resulting in larger free
1064 chunks and less fragmentation.
1065
1066 * `top': The top-most available chunk (i.e., the one bordering the
1067 end of available memory) is treated specially. It is never
1068 included in any bin, is used only if no other chunk is
1069 available, and is released back to the system if it is very
1070 large (see M_TRIM_THRESHOLD).
1071
1072 * `last_remainder': A bin holding only the remainder of the
1073 most recently split (non-top) chunk. This bin is checked
1074 before other non-fitting chunks, so as to provide better
1075 locality for runs of sequentially allocated chunks.
1076
1077 * Implicitly, through the host system's memory mapping tables.
1078 If supported, requests greater than a threshold are usually
1079 serviced via calls to mmap, and then later released via munmap.
1080
1081*/
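/* Illustrative sketch (not compiled): the pointer arithmetic implied by the
   boundary-tag layout pictured above.  The user pointer ("mem") lies two
   size words past the chunk header; the chunk2mem()/mem2chunk() macros
   defined below encode exactly this offset.  */
#if 0
static Void_t* example_mem_from_chunk(mchunkptr p)
{
  /* Skip the prev_size and size fields to reach the user data. */
  return (Void_t*)((char*)p + 2*sizeof(INTERNAL_SIZE_T));
}

static mchunkptr example_chunk_from_mem(Void_t* mem)
{
  /* Step back over the two header words. */
  return (mchunkptr)((char*)mem - 2*sizeof(INTERNAL_SIZE_T));
}
#endif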
1082
1083/*
1084 Bins
1085
1086 The bins are an array of pairs of pointers serving as the
1087 heads of (initially empty) doubly-linked lists of chunks, laid out
1088 in a way so that each pair can be treated as if it were in a
1089 malloc_chunk. (This way, the fd/bk offsets for linking bin heads
1090 and chunks are the same).
1091
1092 Bins for sizes < 512 bytes contain chunks of all the same size, spaced
1093 8 bytes apart. Larger bins are approximately logarithmically
1094 spaced. (See the table below.)
1095
1096 Bin layout:
1097
1098 64 bins of size 8
1099 32 bins of size 64
1100 16 bins of size 512
1101 8 bins of size 4096
1102 4 bins of size 32768
1103 2 bins of size 262144
1104 1 bin of size what's left
1105
1106 There is actually a little bit of slop in the numbers in bin_index
1107 for the sake of speed. This makes no difference elsewhere.
1108
1109 The special chunks `top' and `last_remainder' get their own bins,
1110 (this is implemented via yet more trickery with the av array),
1111 although `top' is never properly linked to its bin since it is
1112 always handled specially.
1113
1114*/
1115
1116#define NAV 128 /* number of bins */
1117
1118typedef struct malloc_chunk* mbinptr;
1119
1120/* An arena is a configuration of malloc_chunks together with an array
1121 of bins. With multiple threads, it must be locked via a mutex
1122 before changing its data structures. One or more `heaps' are
1123 associated with each arena, except for the main_arena, which is
1124 associated only with the `main heap', i.e. the conventional free
1125 store obtained with calls to MORECORE() (usually sbrk). The `av'
1126 array is never mentioned directly in the code, but instead used via
1127 bin access macros. */
1128
1129typedef struct _arena {
1130 mbinptr av[2*NAV + 2];
1131 struct _arena *next;
1132 size_t size;
1133#if THREAD_STATS
1134 long stat_lock_direct, stat_lock_loop, stat_lock_wait;
1135#endif
1136 mutex_t mutex;
1137} arena;
1138
1139
6d52618b 1140/* A heap is a single contiguous memory region holding (coalesceable)
1141 malloc_chunks. It is allocated with mmap() and always starts at an
1142 address aligned to HEAP_MAX_SIZE. Not used unless compiling for
1143 multiple threads. */
1144
1145typedef struct _heap_info {
1146 arena *ar_ptr; /* Arena for this heap. */
1147 struct _heap_info *prev; /* Previous heap. */
1148 size_t size; /* Current size in bytes. */
1149 size_t pad; /* Make sure the following data is properly aligned. */
1150} heap_info;
1151
1152
1153/*
1154 Static functions (forward declarations)
1155*/
1156
1157#if __STD_C
10dc2a90 1158
1159static void chunk_free(arena *ar_ptr, mchunkptr p);
1160static mchunkptr chunk_alloc(arena *ar_ptr, INTERNAL_SIZE_T size);
10dc2a90 1161static mchunkptr chunk_realloc(arena *ar_ptr, mchunkptr oldp,
7e3be507 1162 INTERNAL_SIZE_T oldsize, INTERNAL_SIZE_T nb);
10dc2a90 1163static mchunkptr chunk_align(arena *ar_ptr, INTERNAL_SIZE_T nb,
7e3be507 1164 size_t alignment);
1165static int main_trim(size_t pad);
1166#ifndef NO_THREADS
1167static int heap_trim(heap_info *heap, size_t pad);
1168#endif
1169#if defined(_LIBC) || defined(MALLOC_HOOKS)
1170static Void_t* malloc_check(size_t sz);
1171static void free_check(Void_t* mem);
1172static Void_t* realloc_check(Void_t* oldmem, size_t bytes);
1173static Void_t* memalign_check(size_t alignment, size_t bytes);
1174static Void_t* malloc_starter(size_t sz);
1175static void free_starter(Void_t* mem);
1176#endif
1177
f65fd747 1178#else
10dc2a90 1179
1180static void chunk_free();
1181static mchunkptr chunk_alloc();
1182static mchunkptr chunk_realloc();
1183static mchunkptr chunk_align();
1184static int main_trim();
1185#ifndef NO_THREADS
1186static int heap_trim();
1187#endif
1188#if defined(_LIBC) || defined(MALLOC_HOOKS)
1189static Void_t* malloc_check();
1190static void free_check();
1191static Void_t* realloc_check();
1192static Void_t* memalign_check();
1193static Void_t* malloc_starter();
1194static void free_starter();
1195#endif
1196
1197#endif
1198
1199\f
1200
1201/* sizes, alignments */
1202
1203#define SIZE_SZ (sizeof(INTERNAL_SIZE_T))
1204#define MALLOC_ALIGNMENT (SIZE_SZ + SIZE_SZ)
1205#define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
1206#define MINSIZE (sizeof(struct malloc_chunk))
1207
1208/* conversion from malloc headers to user pointers, and back */
1209
1210#define chunk2mem(p) ((Void_t*)((char*)(p) + 2*SIZE_SZ))
1211#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
1212
1213/* pad request bytes into a usable size */
1214
1215#define request2size(req) \
1216 (((long)((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) < \
1217 (long)(MINSIZE + MALLOC_ALIGN_MASK)) ? MINSIZE : \
1218 (((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) & ~(MALLOC_ALIGN_MASK)))
1219
1220/* Check if m has acceptable alignment */
1221
1222#define aligned_OK(m) (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
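/* Illustrative worked example (not compiled): what request2size() yields
   with 4-byte INTERNAL_SIZE_T (SIZE_SZ == 4, MALLOC_ALIGNMENT == 8,
   MINSIZE == 16).  A request is padded by SIZE_SZ for the size word and
   rounded up to a multiple of the alignment, but never below MINSIZE.
   The asserts are only checked when MALLOC_DEBUG is defined.  */
#if 0
static void example_request2size(void)
{
  assert(request2size(0)  == 16);   /* minimum-sized chunk */
  assert(request2size(12) == 16);   /* 12 + 4 = 16, already aligned */
  assert(request2size(13) == 24);   /* 13 + 4 = 17, rounded up to 24 */
  assert(request2size(24) == 32);   /* 24 + 4 = 28, rounded up to 32 */
}
#endif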
1223
1224
1225\f
1226
1227/*
1228 Physical chunk operations
1229*/
1230
1231
1232/* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
1233
1234#define PREV_INUSE 0x1
1235
1236/* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
1237
1238#define IS_MMAPPED 0x2
1239
1240/* Bits to mask off when extracting size */
1241
1242#define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
1243
1244
1245/* Ptr to next physical malloc_chunk. */
1246
1247#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
1248
1249/* Ptr to previous physical malloc_chunk */
1250
1251#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
1252
1253
1254/* Treat space at ptr + offset as a chunk */
1255
1256#define chunk_at_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
1257
1258
1259\f
1260
1261/*
1262 Dealing with use bits
1263*/
1264
1265/* extract p's inuse bit */
1266
1267#define inuse(p) \
1268 ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
1269
1270/* extract inuse bit of previous chunk */
1271
1272#define prev_inuse(p) ((p)->size & PREV_INUSE)
1273
1274/* check for mmap()'ed chunk */
1275
1276#define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
1277
1278/* set/clear chunk as in use without otherwise disturbing */
1279
1280#define set_inuse(p) \
1281 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
1282
1283#define clear_inuse(p) \
1284 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
1285
1286/* check/set/clear inuse bits in known places */
1287
1288#define inuse_bit_at_offset(p, s)\
1289 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
1290
1291#define set_inuse_bit_at_offset(p, s)\
1292 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
1293
1294#define clear_inuse_bit_at_offset(p, s)\
1295 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
1296
1297
1298\f
1299
1300/*
1301 Dealing with size fields
1302*/
1303
1304/* Get size, ignoring use bits */
1305
1306#define chunksize(p) ((p)->size & ~(SIZE_BITS))
1307
1308/* Set size at head, without disturbing its use bit */
1309
1310#define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s)))
1311
1312/* Set size/use ignoring previous bits in header */
1313
1314#define set_head(p, s) ((p)->size = (s))
1315
1316/* Set size at footer (only when chunk is not in use) */
1317
1318#define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
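/* Illustrative sketch (not compiled): how the use-bit and size-field macros
   combine for a normal (non-mmapped) chunk p.  p's own inuse status lives in
   the size word of the chunk that physically follows it, and its trailing
   size tag is that chunk's prev_size field.  */
#if 0
static void example_mark_free(mchunkptr p)
{
  INTERNAL_SIZE_T s = chunksize(p);

  clear_inuse_bit_at_offset(p, s); /* next chunk's PREV_INUSE now says "p is free" */
  set_foot(p, s);                  /* trailing size tag = next chunk's prev_size */
  assert(!inuse(p));
}
#endif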
1319
1320
1321\f
1322
1323
1324/* access macros */
1325
1326#define bin_at(a, i) ((mbinptr)((char*)&(((a)->av)[2*(i) + 2]) - 2*SIZE_SZ))
1327#define init_bin(a, i) ((a)->av[2*i+2] = (a)->av[2*i+3] = bin_at((a), i))
1328#define next_bin(b) ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
1329#define prev_bin(b) ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))
1330
1331/*
1332 The first 2 bins are never indexed. The corresponding av cells are instead
1333 used for bookkeeping. This is not to save space, but to simplify
1334 indexing, maintain locality, and avoid some initialization tests.
1335*/
1336
1337#define binblocks(a) (bin_at(a,0)->size)/* bitvector of nonempty blocks */
1338#define top(a) (bin_at(a,0)->fd) /* The topmost chunk */
1339#define last_remainder(a) (bin_at(a,1)) /* remainder from last split */
1340
1341/*
1342 Because top initially points to its own bin with initial
1343 zero size, thus forcing extension on the first malloc request,
1344 we avoid having any special code in malloc to check whether
1345 it even exists yet. But we still need to in malloc_extend_top.
1346*/
1347
1348#define initial_top(a) ((mchunkptr)bin_at(a, 0))
1349
1350\f
1351
1352/* field-extraction macros */
1353
1354#define first(b) ((b)->fd)
1355#define last(b) ((b)->bk)
1356
1357/*
1358 Indexing into bins
1359*/
1360
1361#define bin_index(sz) \
1362(((((unsigned long)(sz)) >> 9) == 0) ? (((unsigned long)(sz)) >> 3): \
1363 ((((unsigned long)(sz)) >> 9) <= 4) ? 56 + (((unsigned long)(sz)) >> 6): \
1364 ((((unsigned long)(sz)) >> 9) <= 20) ? 91 + (((unsigned long)(sz)) >> 9): \
1365 ((((unsigned long)(sz)) >> 9) <= 84) ? 110 + (((unsigned long)(sz)) >> 12): \
1366 ((((unsigned long)(sz)) >> 9) <= 340) ? 119 + (((unsigned long)(sz)) >> 15): \
1367 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18): \
1368 126)
1369/*
1370 bins for chunks < 512 are all spaced 8 bytes apart, and hold
1371 identically sized chunks. This is exploited in malloc.
1372*/
1373
1374#define MAX_SMALLBIN 63
1375#define MAX_SMALLBIN_SIZE 512
1376#define SMALLBIN_WIDTH 8
1377
1378#define smallbin_index(sz) (((unsigned long)(sz)) >> 3)
1379
1380/*
1381 Requests are `small' if both the corresponding and the next bin are small
1382*/
1383
1384#define is_small_request(nb) ((nb) < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
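/* Illustrative worked example (not compiled): where bin_index() places a few
   chunk sizes.  Sizes below 512 map to the small bins spaced 8 bytes apart;
   larger sizes fall into the logarithmically spaced ranges.  */
#if 0
static void example_bin_index(void)
{
  assert(bin_index(16)   == 2);    /* small bin: 16 >> 3 */
  assert(bin_index(504)  == 63);   /* last small bin */
  assert(bin_index(1024) == 72);   /* 56 + (1024 >> 6) */
  assert(bin_index(8192) == 107);  /* 91 + (8192 >> 9) */
}
#endif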
1385
1386\f
1387
1388/*
1389 To help compensate for the large number of bins, a one-level index
1390 structure is used for bin-by-bin searching. `binblocks' is a
1391 one-word bitvector recording whether groups of BINBLOCKWIDTH bins
1392 have any (possibly) non-empty bins, so they can be skipped over
 1393 all at once during traversals. The bits are NOT always
1394 cleared as soon as all bins in a block are empty, but instead only
1395 when all are noticed to be empty during traversal in malloc.
1396*/
1397
1398#define BINBLOCKWIDTH 4 /* bins per block */
1399
1400/* bin<->block macros */
1401
1402#define idx2binblock(ix) ((unsigned)1 << ((ix) / BINBLOCKWIDTH))
1403#define mark_binblock(a, ii) (binblocks(a) |= idx2binblock(ii))
1404#define clear_binblock(a, ii) (binblocks(a) &= ~(idx2binblock(ii)))
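/* Illustrative worked example (not compiled): with BINBLOCKWIDTH == 4, bins
   52..55 share a single bit in the binblocks bitvector, so malloc can skip
   the whole group of bins when that bit is clear.  */
#if 0
static void example_binblock(void)
{
  assert(idx2binblock(52) == ((unsigned)1 << 13));
  assert(idx2binblock(55) == ((unsigned)1 << 13));
  assert(idx2binblock(56) == ((unsigned)1 << 14));
}
#endif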
1405
1406
1407\f
1408
1409/* Static bookkeeping data */
1410
1411/* Helper macro to initialize bins */
1412#define IAV(i) bin_at(&main_arena, i), bin_at(&main_arena, i)
1413
1414static arena main_arena = {
1415 {
1416 0, 0,
1417 IAV(0), IAV(1), IAV(2), IAV(3), IAV(4), IAV(5), IAV(6), IAV(7),
1418 IAV(8), IAV(9), IAV(10), IAV(11), IAV(12), IAV(13), IAV(14), IAV(15),
1419 IAV(16), IAV(17), IAV(18), IAV(19), IAV(20), IAV(21), IAV(22), IAV(23),
1420 IAV(24), IAV(25), IAV(26), IAV(27), IAV(28), IAV(29), IAV(30), IAV(31),
1421 IAV(32), IAV(33), IAV(34), IAV(35), IAV(36), IAV(37), IAV(38), IAV(39),
1422 IAV(40), IAV(41), IAV(42), IAV(43), IAV(44), IAV(45), IAV(46), IAV(47),
1423 IAV(48), IAV(49), IAV(50), IAV(51), IAV(52), IAV(53), IAV(54), IAV(55),
1424 IAV(56), IAV(57), IAV(58), IAV(59), IAV(60), IAV(61), IAV(62), IAV(63),
1425 IAV(64), IAV(65), IAV(66), IAV(67), IAV(68), IAV(69), IAV(70), IAV(71),
1426 IAV(72), IAV(73), IAV(74), IAV(75), IAV(76), IAV(77), IAV(78), IAV(79),
1427 IAV(80), IAV(81), IAV(82), IAV(83), IAV(84), IAV(85), IAV(86), IAV(87),
1428 IAV(88), IAV(89), IAV(90), IAV(91), IAV(92), IAV(93), IAV(94), IAV(95),
1429 IAV(96), IAV(97), IAV(98), IAV(99), IAV(100), IAV(101), IAV(102), IAV(103),
1430 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
1431 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
1432 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
1433 },
7e3be507 1434 &main_arena, /* next */
1435 0, /* size */
1436#if THREAD_STATS
1437 0, 0, 0, /* stat_lock_direct, stat_lock_loop, stat_lock_wait */
1438#endif
1439 MUTEX_INITIALIZER /* mutex */
1440};
1441
1442#undef IAV
1443
1444/* Thread specific data */
1445
8a4b65b4 1446#ifndef NO_THREADS
1447static tsd_key_t arena_key;
1448static mutex_t list_lock = MUTEX_INITIALIZER;
8a4b65b4 1449#endif
1450
1451#if THREAD_STATS
f65fd747 1452static int stat_n_heaps = 0;
1453#define THREAD_STAT(x) x
1454#else
1455#define THREAD_STAT(x) do ; while(0)
1456#endif
1457
1458/* variables holding tunable values */
1459
1460static unsigned long trim_threshold = DEFAULT_TRIM_THRESHOLD;
1461static unsigned long top_pad = DEFAULT_TOP_PAD;
1462static unsigned int n_mmaps_max = DEFAULT_MMAP_MAX;
1463static unsigned long mmap_threshold = DEFAULT_MMAP_THRESHOLD;
10dc2a90 1464static int check_action = DEFAULT_CHECK_ACTION;
1465
1466/* The first value returned from sbrk */
1467static char* sbrk_base = (char*)(-1);
1468
1469/* The maximum memory obtained from system via sbrk */
1470static unsigned long max_sbrked_mem = 0;
1471
1472/* The maximum via either sbrk or mmap (too difficult to track with threads) */
1473#ifdef NO_THREADS
f65fd747 1474static unsigned long max_total_mem = 0;
8a4b65b4 1475#endif
1476
1477/* The total memory obtained from system via sbrk */
8a4b65b4 1478#define sbrked_mem (main_arena.size)
1479
1480/* Tracking mmaps */
1481
1482static unsigned int n_mmaps = 0;
1483static unsigned int max_n_mmaps = 0;
1484static unsigned long mmapped_mem = 0;
1485static unsigned long max_mmapped_mem = 0;
1486
1487
1488\f
1489#ifndef _LIBC
1490#define weak_variable
1491#else
1492/* In GNU libc we want the hook variables to be weak definitions to
1493 avoid a problem with Emacs. */
1494#define weak_variable weak_function
1495#endif
1496
1497/* Already initialized? */
1498int __malloc_initialized = 0;
1499
1500
1501/* Initialization routine. */
1502#if defined(_LIBC)
10dc2a90 1503#if 0
f65fd747 1504static void ptmalloc_init __MALLOC_P ((void)) __attribute__ ((constructor));
10dc2a90 1505#endif
1506
1507static void
1508ptmalloc_init __MALLOC_P((void))
1509#else
1510void
1511ptmalloc_init __MALLOC_P((void))
1512#endif
1513{
10dc2a90 1514#if defined(_LIBC) || defined(MALLOC_HOOKS)
1515 __malloc_ptr_t (*save_malloc_hook) __MALLOC_P ((size_t __size));
1516 void (*save_free_hook) __MALLOC_P ((__malloc_ptr_t __ptr));
1517 const char* s;
1518#endif
f65fd747 1519
1520 if(__malloc_initialized) return;
1521 __malloc_initialized = 1;
1522#if defined(_LIBC) || defined(MALLOC_HOOKS)
1523 /* With some threads implementations, creating thread-specific data
1524 or initializing a mutex may call malloc() itself. Provide a
1525 simple starter version (realloc() won't work). */
1526 save_malloc_hook = __malloc_hook;
1527 save_free_hook = __free_hook;
1528 __malloc_hook = malloc_starter;
1529 __free_hook = free_starter;
1530#endif
5290baf0 1531#if defined(_LIBC) && !defined (NO_THREADS)
8a4b65b4 1532 /* Initialize the pthreads interface. */
f65fd747 1533 if (__pthread_initialize != NULL)
8a4b65b4 1534 __pthread_initialize();
f65fd747 1535#endif
8a4b65b4 1536#ifndef NO_THREADS
1537 mutex_init(&main_arena.mutex);
1538 mutex_init(&list_lock);
1539 tsd_key_create(&arena_key, NULL);
1540 tsd_setspecific(arena_key, (Void_t *)&main_arena);
1541#endif
1542#if defined(_LIBC) || defined(MALLOC_HOOKS)
1543 if((s = getenv("MALLOC_TRIM_THRESHOLD_")))
1544 mALLOPt(M_TRIM_THRESHOLD, atoi(s));
1545 if((s = getenv("MALLOC_TOP_PAD_")))
1546 mALLOPt(M_TOP_PAD, atoi(s));
1547 if((s = getenv("MALLOC_MMAP_THRESHOLD_")))
1548 mALLOPt(M_MMAP_THRESHOLD, atoi(s));
1549 if((s = getenv("MALLOC_MMAP_MAX_")))
1550 mALLOPt(M_MMAP_MAX, atoi(s));
10dc2a90 1551 s = getenv("MALLOC_CHECK_");
1552 __malloc_hook = save_malloc_hook;
1553 __free_hook = save_free_hook;
10dc2a90 1554 if(s) {
1555 if(s[0]) mALLOPt(M_CHECK_ACTION, (int)(s[0] - '0'));
1556 __malloc_check_init();
f65fd747 1557 }
1558 if(__malloc_initialize_hook != NULL)
1559 (*__malloc_initialize_hook)();
1560#endif
1561}
1562
1563#if defined(_LIBC) || defined(MALLOC_HOOKS)
1564
1565/* Hooks for debugging versions. The initial hooks just call the
1566 initialization routine, then do the normal work. */
1567
1568static Void_t*
1569#if __STD_C
1570malloc_hook_ini(size_t sz)
1571#else
1572malloc_hook_ini(sz) size_t sz;
1573#endif
1574{
1575 __malloc_hook = NULL;
1576 __realloc_hook = NULL;
1577 __memalign_hook = NULL;
1578 ptmalloc_init();
1579 return mALLOc(sz);
1580}
1581
1582static Void_t*
1583#if __STD_C
1584realloc_hook_ini(Void_t* ptr, size_t sz)
1585#else
1586realloc_hook_ini(ptr, sz) Void_t* ptr; size_t sz;
1587#endif
1588{
1589 __malloc_hook = NULL;
1590 __realloc_hook = NULL;
1591 __memalign_hook = NULL;
1592 ptmalloc_init();
1593 return rEALLOc(ptr, sz);
1594}
1595
1596static Void_t*
1597#if __STD_C
1598memalign_hook_ini(size_t sz, size_t alignment)
1599#else
1600memalign_hook_ini(sz, alignment) size_t sz; size_t alignment;
1601#endif
1602{
1603 __malloc_hook = NULL;
1604 __realloc_hook = NULL;
1605 __memalign_hook = NULL;
1606 ptmalloc_init();
1607 return mEMALIGn(sz, alignment);
1608}
1609
831372e7
UD
1610void weak_variable (*__malloc_initialize_hook) __MALLOC_P ((void)) = NULL;
1611void weak_variable (*__free_hook) __MALLOC_P ((__malloc_ptr_t __ptr)) = NULL;
1612__malloc_ptr_t weak_variable (*__malloc_hook)
10dc2a90 1613 __MALLOC_P ((size_t __size)) = malloc_hook_ini;
831372e7 1614__malloc_ptr_t weak_variable (*__realloc_hook)
10dc2a90 1615 __MALLOC_P ((__malloc_ptr_t __ptr, size_t __size)) = realloc_hook_ini;
831372e7 1616__malloc_ptr_t weak_variable (*__memalign_hook)
10dc2a90 1617 __MALLOC_P ((size_t __size, size_t __alignment)) = memalign_hook_ini;
1228ed5c 1618void weak_variable (*__after_morecore_hook) __MALLOC_P ((void)) = NULL;
10dc2a90
UD
1619
1620/* Activate a standard set of debugging hooks. */
1621void
831372e7 1622__malloc_check_init()
10dc2a90
UD
1623{
1624 __malloc_hook = malloc_check;
1625 __free_hook = free_check;
1626 __realloc_hook = realloc_check;
1627 __memalign_hook = memalign_check;
7e3be507
UD
1628 if(check_action == 1)
1629 fprintf(stderr, "malloc: using debugging hooks\n");
10dc2a90
UD
1630}
1631
1632#endif
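/* A caller-side sketch of the hook interface above (assumptions: the
   one-argument hook signature used in this version of the file, and that
   the hook variables are exported through <malloc.h>).  A tracing hook
   must restore the previous hook around its own call into malloc() so it
   does not recurse into itself: */
#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>

static void *(*old_malloc_hook) (size_t);

static void *
tracing_malloc_hook (size_t sz)
{
  void *result;
  __malloc_hook = old_malloc_hook;        /* step aside while we allocate */
  result = malloc (sz);
  fprintf (stderr, "malloc(%lu) = %p\n", (unsigned long) sz, result);
  old_malloc_hook = __malloc_hook;        /* malloc() may have changed it */
  __malloc_hook = tracing_malloc_hook;    /* re-install ourselves */
  return result;
}

static void
install_tracing_hook (void)
{
  old_malloc_hook = __malloc_hook;
  __malloc_hook = tracing_malloc_hook;
}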
1633
f65fd747
UD
1634
1635\f
1636
1637
1638/* Routines dealing with mmap(). */
1639
1640#if HAVE_MMAP
1641
1642#ifndef MAP_ANONYMOUS
1643
1644static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1645
1646#define MMAP(size, prot) ((dev_zero_fd < 0) ? \
1647 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1648 mmap(0, (size), (prot), MAP_PRIVATE, dev_zero_fd, 0)) : \
1649 mmap(0, (size), (prot), MAP_PRIVATE, dev_zero_fd, 0))
1650
1651#else
1652
1653#define MMAP(size, prot) \
1654 (mmap(0, (size), (prot), MAP_PRIVATE|MAP_ANONYMOUS, -1, 0))
1655
1656#endif
1657
1658#if __STD_C
1659static mchunkptr mmap_chunk(size_t size)
1660#else
1661static mchunkptr mmap_chunk(size) size_t size;
1662#endif
1663{
1664 size_t page_mask = malloc_getpagesize - 1;
1665 mchunkptr p;
1666
1667 if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */
1668
1669 /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
1670 * there is no following chunk whose prev_size field could be used.
1671 */
1672 size = (size + SIZE_SZ + page_mask) & ~page_mask;
1673
1674 p = (mchunkptr)MMAP(size, PROT_READ|PROT_WRITE);
1675 if(p == (mchunkptr)-1) return 0;
1676
1677 n_mmaps++;
1678 if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
1679
1680 /* We demand that eight bytes into a page must be 8-byte aligned. */
1681 assert(aligned_OK(chunk2mem(p)));
1682
1683 /* The offset to the start of the mmapped region is stored
1684 * in the prev_size field of the chunk; normally it is zero,
1685 * but that can be changed in memalign().
1686 */
1687 p->prev_size = 0;
1688 set_head(p, size|IS_MMAPPED);
1689
1690 mmapped_mem += size;
1691 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1692 max_mmapped_mem = mmapped_mem;
8a4b65b4 1693#ifdef NO_THREADS
f65fd747
UD
1694 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1695 max_total_mem = mmapped_mem + sbrked_mem;
8a4b65b4 1696#endif
f65fd747
UD
1697 return p;
1698}
1699
1700#if __STD_C
1701static void munmap_chunk(mchunkptr p)
1702#else
1703static void munmap_chunk(p) mchunkptr p;
1704#endif
1705{
1706 INTERNAL_SIZE_T size = chunksize(p);
1707 int ret;
1708
1709 assert (chunk_is_mmapped(p));
1710 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1711 assert((n_mmaps > 0));
1712 assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
1713
1714 n_mmaps--;
1715 mmapped_mem -= (size + p->prev_size);
1716
1717 ret = munmap((char *)p - p->prev_size, size + p->prev_size);
1718
1719 /* munmap returns non-zero on failure */
1720 assert(ret == 0);
1721}
1722
1723#if HAVE_MREMAP
1724
1725#if __STD_C
1726static mchunkptr mremap_chunk(mchunkptr p, size_t new_size)
1727#else
1728static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
1729#endif
1730{
1731 size_t page_mask = malloc_getpagesize - 1;
1732 INTERNAL_SIZE_T offset = p->prev_size;
1733 INTERNAL_SIZE_T size = chunksize(p);
1734 char *cp;
1735
1736 assert (chunk_is_mmapped(p));
1737 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1738 assert((n_mmaps > 0));
1739 assert(((size + offset) & (malloc_getpagesize-1)) == 0);
1740
1741 /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
1742 new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;
1743
1744 cp = (char *)mremap((char *)p - offset, size + offset, new_size,
1745 MREMAP_MAYMOVE);
1746
1747 if (cp == (char *)-1) return 0;
1748
1749 p = (mchunkptr)(cp + offset);
1750
1751 assert(aligned_OK(chunk2mem(p)));
1752
1753 assert((p->prev_size == offset));
1754 set_head(p, (new_size - offset)|IS_MMAPPED);
1755
1756 mmapped_mem -= size + offset;
1757 mmapped_mem += new_size;
1758 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1759 max_mmapped_mem = mmapped_mem;
8a4b65b4 1760#ifdef NO_THREADS
f65fd747
UD
1761 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1762 max_total_mem = mmapped_mem + sbrked_mem;
8a4b65b4 1763#endif
f65fd747
UD
1764 return p;
1765}
1766
1767#endif /* HAVE_MREMAP */
1768
1769#endif /* HAVE_MMAP */
1770
1771\f
1772
1773/* Managing heaps and arenas (for concurrent threads) */
1774
1775#ifndef NO_THREADS
1776
1777/* Create a new heap. size is automatically rounded up to a multiple
1778 of the page size. */
1779
1780static heap_info *
1781#if __STD_C
1782new_heap(size_t size)
1783#else
1784new_heap(size) size_t size;
1785#endif
1786{
1787 size_t page_mask = malloc_getpagesize - 1;
1788 char *p1, *p2;
1789 unsigned long ul;
1790 heap_info *h;
1791
1792 if(size < HEAP_MIN_SIZE)
1793 size = HEAP_MIN_SIZE;
1794 size = (size + page_mask) & ~page_mask;
1795 if(size > HEAP_MAX_SIZE)
1796 return 0;
1797 p1 = (char *)MMAP(HEAP_MAX_SIZE<<1, PROT_NONE);
1798 if(p1 == (char *)-1)
1799 return 0;
1800 p2 = (char *)(((unsigned long)p1 + HEAP_MAX_SIZE) & ~(HEAP_MAX_SIZE-1));
1801 ul = p2 - p1;
1802 munmap(p1, ul);
1803 munmap(p2 + HEAP_MAX_SIZE, HEAP_MAX_SIZE - ul);
1804 if(mprotect(p2, size, PROT_READ|PROT_WRITE) != 0) {
1805 munmap(p2, HEAP_MAX_SIZE);
1806 return 0;
1807 }
1808 h = (heap_info *)p2;
1809 h->size = size;
1810 THREAD_STAT(stat_n_heaps++);
1811 return h;
1812}
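/* A standalone sketch of the alignment trick used in new_heap() above
   (assumption: MAP_ANONYMOUS is available; the real code can also map
   /dev/zero).  To obtain an `align'-aligned region of `align' bytes,
   reserve twice that much address space and unmap the misaligned head
   and the unused tail: */
#include <sys/mman.h>

static void *
aligned_reserve_example (unsigned long align)   /* align: a power of two */
{
  char *p1, *p2;
  unsigned long head;

  p1 = (char *) mmap (0, align << 1, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (p1 == (char *) -1)
    return 0;
  p2 = (char *) (((unsigned long) p1 + align) & ~(align - 1));
  head = p2 - p1;
  if (head > 0)
    munmap (p1, head);                  /* drop the unaligned prefix */
  if (head < align)
    munmap (p2 + align, align - head);  /* drop what is left of the tail */
  return p2;                            /* align bytes, aligned to align */
}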
1813
1814/* Grow or shrink a heap. size is automatically rounded up to a
8a4b65b4 1815 multiple of the page size if it is positive. */
f65fd747
UD
1816
1817static int
1818#if __STD_C
1819grow_heap(heap_info *h, long diff)
1820#else
1821grow_heap(h, diff) heap_info *h; long diff;
1822#endif
1823{
1824 size_t page_mask = malloc_getpagesize - 1;
1825 long new_size;
1826
1827 if(diff >= 0) {
1828 diff = (diff + page_mask) & ~page_mask;
1829 new_size = (long)h->size + diff;
1830 if(new_size > HEAP_MAX_SIZE)
1831 return -1;
1832 if(mprotect((char *)h + h->size, diff, PROT_READ|PROT_WRITE) != 0)
1833 return -2;
1834 } else {
1835 new_size = (long)h->size + diff;
8a4b65b4 1836 if(new_size < (long)sizeof(*h))
f65fd747
UD
1837 return -1;
1838 if(mprotect((char *)h + new_size, -diff, PROT_NONE) != 0)
1839 return -2;
1840 }
1841 h->size = new_size;
1842 return 0;
1843}
1844
8a4b65b4
UD
1845/* Delete a heap. */
1846
1847#define delete_heap(heap) munmap((char*)(heap), HEAP_MAX_SIZE)
1848
f65fd747
UD
1849/* arena_get() acquires an arena and locks the corresponding mutex.
1850 First, try the one last locked successfully by this thread. (This
1851 is the common case and handled with a macro for speed.) Then, loop
7e3be507
UD
1852 once over the circularly linked list of arenas. If no arena is
1853 readily available, create a new one. */
f65fd747
UD
1854
1855#define arena_get(ptr, size) do { \
1856 Void_t *vptr = NULL; \
1857 ptr = (arena *)tsd_getspecific(arena_key, vptr); \
1858 if(ptr && !mutex_trylock(&ptr->mutex)) { \
8a4b65b4 1859 THREAD_STAT(++(ptr->stat_lock_direct)); \
7e3be507 1860 } else \
f65fd747 1861 ptr = arena_get2(ptr, (size)); \
f65fd747
UD
1862} while(0)
1863
1864static arena *
1865#if __STD_C
1866arena_get2(arena *a_tsd, size_t size)
1867#else
1868arena_get2(a_tsd, size) arena *a_tsd; size_t size;
1869#endif
1870{
1871 arena *a;
1872 heap_info *h;
1873 char *ptr;
1874 int i;
1875 unsigned long misalign;
1876
7e3be507
UD
1877 if(!a_tsd)
1878 a = a_tsd = &main_arena;
1879 else {
1880 a = a_tsd->next;
1881 if(!a) {
1882 /* This can only happen while initializing the new arena. */
1883 (void)mutex_lock(&main_arena.mutex);
1884 THREAD_STAT(++(main_arena.stat_lock_wait));
1885 return &main_arena;
f65fd747 1886 }
8a4b65b4 1887 }
7e3be507
UD
1888
1889 /* Check the global, circularly linked list for available arenas. */
1890 do {
1891 if(!mutex_trylock(&a->mutex)) {
1892 THREAD_STAT(++(a->stat_lock_loop));
1893 tsd_setspecific(arena_key, (Void_t *)a);
1894 return a;
1895 }
1896 a = a->next;
1897 } while(a != a_tsd);
f65fd747
UD
1898
1899 /* Nothing immediately available, so generate a new arena. */
1900 h = new_heap(size + (sizeof(*h) + sizeof(*a) + MALLOC_ALIGNMENT));
1901 if(!h)
1902 return 0;
1903 a = h->ar_ptr = (arena *)(h+1);
1904 for(i=0; i<NAV; i++)
1905 init_bin(a, i);
7e3be507 1906 a->next = NULL;
8a4b65b4 1907 a->size = h->size;
7e3be507 1908 tsd_setspecific(arena_key, (Void_t *)a);
f65fd747
UD
1909 mutex_init(&a->mutex);
1910 i = mutex_lock(&a->mutex); /* remember result */
1911
1912 /* Set up the top chunk, with proper alignment. */
1913 ptr = (char *)(a + 1);
1914 misalign = (unsigned long)chunk2mem(ptr) & MALLOC_ALIGN_MASK;
1915 if (misalign > 0)
1916 ptr += MALLOC_ALIGNMENT - misalign;
1917 top(a) = (mchunkptr)ptr;
8a4b65b4 1918 set_head(top(a), (((char*)h + h->size) - ptr) | PREV_INUSE);
f65fd747
UD
1919
1920 /* Add the new arena to the list. */
1921 (void)mutex_lock(&list_lock);
1922 a->next = main_arena.next;
1923 main_arena.next = a;
f65fd747
UD
1924 (void)mutex_unlock(&list_lock);
1925
1926 if(i) /* locking failed; keep arena for further attempts later */
1927 return 0;
1928
8a4b65b4 1929 THREAD_STAT(++(a->stat_lock_loop));
f65fd747
UD
1930 return a;
1931}
1932
1933/* find the heap and corresponding arena for a given ptr */
1934
1935#define heap_for_ptr(ptr) \
1936 ((heap_info *)((unsigned long)(ptr) & ~(HEAP_MAX_SIZE-1)))
1937#define arena_for_ptr(ptr) \
1938 (((mchunkptr)(ptr) < top(&main_arena) && (char *)(ptr) >= sbrk_base) ? \
1939 &main_arena : heap_for_ptr(ptr)->ar_ptr)
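/* Worked example (assuming HEAP_MAX_SIZE == 0x100000): because non-main
   heaps are aligned to HEAP_MAX_SIZE by new_heap(), masking off the low
   bits of any chunk address recovers its heap_info header, e.g.
   0x40123456 & ~0xfffff == 0x40100000. */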
1940
1941#else /* defined(NO_THREADS) */
1942
1943/* Without concurrent threads, there is only one arena. */
1944
1945#define arena_get(ptr, sz) (ptr = &main_arena)
1946#define arena_for_ptr(ptr) (&main_arena)
1947
1948#endif /* !defined(NO_THREADS) */
1949
1950\f
1951
1952/*
1953 Debugging support
1954*/
1955
1956#if MALLOC_DEBUG
1957
1958
1959/*
1960 These routines make a number of assertions about the states
1961 of data structures that should be true at all times. If any
1962 are not true, it's very likely that a user program has somehow
1963 trashed memory. (It's also possible that there is a coding error
1964 in malloc, in which case please report it!)
1965*/
1966
1967#if __STD_C
1968static void do_check_chunk(arena *ar_ptr, mchunkptr p)
1969#else
1970static void do_check_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
1971#endif
1972{
1973 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1974
1975 /* No checkable chunk is mmapped */
1976 assert(!chunk_is_mmapped(p));
1977
1978#ifndef NO_THREADS
1979 if(ar_ptr != &main_arena) {
1980 heap_info *heap = heap_for_ptr(p);
1981 assert(heap->ar_ptr == ar_ptr);
1982 assert((char *)p + sz <= (char *)heap + heap->size);
1983 return;
1984 }
1985#endif
1986
1987 /* Check for legal address ... */
1988 assert((char*)p >= sbrk_base);
1989 if (p != top(ar_ptr))
1990 assert((char*)p + sz <= (char*)top(ar_ptr));
1991 else
1992 assert((char*)p + sz <= sbrk_base + sbrked_mem);
1993
1994}
1995
1996
1997#if __STD_C
1998static void do_check_free_chunk(arena *ar_ptr, mchunkptr p)
1999#else
2000static void do_check_free_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
2001#endif
2002{
2003 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2004 mchunkptr next = chunk_at_offset(p, sz);
2005
2006 do_check_chunk(ar_ptr, p);
2007
2008 /* Check whether it claims to be free ... */
2009 assert(!inuse(p));
2010
8a4b65b4
UD
2011 /* Must have OK size and fields */
2012 assert((long)sz >= (long)MINSIZE);
2013 assert((sz & MALLOC_ALIGN_MASK) == 0);
2014 assert(aligned_OK(chunk2mem(p)));
2015 /* ... matching footer field */
2016 assert(next->prev_size == sz);
2017 /* ... and is fully consolidated */
2018 assert(prev_inuse(p));
2019 assert (next == top(ar_ptr) || inuse(next));
2020
2021 /* ... and has minimally sane links */
2022 assert(p->fd->bk == p);
2023 assert(p->bk->fd == p);
f65fd747
UD
2024}
2025
2026#if __STD_C
2027static void do_check_inuse_chunk(arena *ar_ptr, mchunkptr p)
2028#else
2029static void do_check_inuse_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
2030#endif
2031{
2032 mchunkptr next = next_chunk(p);
2033 do_check_chunk(ar_ptr, p);
2034
2035 /* Check whether it claims to be in use ... */
2036 assert(inuse(p));
2037
8a4b65b4
UD
2038 /* ... whether its size is OK (it might be a fencepost) ... */
2039 assert(chunksize(p) >= MINSIZE || next->size == (0|PREV_INUSE));
2040
f65fd747
UD
2041 /* ... and is surrounded by OK chunks.
2042 Since more things can be checked with free chunks than inuse ones,
2043 if an inuse chunk borders them and debug is on, it's worth doing them.
2044 */
2045 if (!prev_inuse(p))
2046 {
2047 mchunkptr prv = prev_chunk(p);
2048 assert(next_chunk(prv) == p);
2049 do_check_free_chunk(ar_ptr, prv);
2050 }
2051 if (next == top(ar_ptr))
2052 {
2053 assert(prev_inuse(next));
2054 assert(chunksize(next) >= MINSIZE);
2055 }
2056 else if (!inuse(next))
2057 do_check_free_chunk(ar_ptr, next);
2058
2059}
2060
2061#if __STD_C
2062static void do_check_malloced_chunk(arena *ar_ptr,
2063 mchunkptr p, INTERNAL_SIZE_T s)
2064#else
2065static void do_check_malloced_chunk(ar_ptr, p, s)
2066arena *ar_ptr; mchunkptr p; INTERNAL_SIZE_T s;
2067#endif
2068{
2069 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2070 long room = sz - s;
2071
2072 do_check_inuse_chunk(ar_ptr, p);
2073
2074 /* Legal size ... */
2075 assert((long)sz >= (long)MINSIZE);
2076 assert((sz & MALLOC_ALIGN_MASK) == 0);
2077 assert(room >= 0);
2078 assert(room < (long)MINSIZE);
2079
2080 /* ... and alignment */
2081 assert(aligned_OK(chunk2mem(p)));
2082
2083
2084 /* ... and was allocated at front of an available chunk */
2085 assert(prev_inuse(p));
2086
2087}
2088
2089
2090#define check_free_chunk(A,P) do_check_free_chunk(A,P)
2091#define check_inuse_chunk(A,P) do_check_inuse_chunk(A,P)
2092#define check_chunk(A,P) do_check_chunk(A,P)
2093#define check_malloced_chunk(A,P,N) do_check_malloced_chunk(A,P,N)
2094#else
2095#define check_free_chunk(A,P)
2096#define check_inuse_chunk(A,P)
2097#define check_chunk(A,P)
2098#define check_malloced_chunk(A,P,N)
2099#endif
2100
2101\f
2102
2103/*
2104 Macro-based internal utilities
2105*/
2106
2107
2108/*
2109 Linking chunks in bin lists.
2110 Call these only with variables, not arbitrary expressions, as arguments.
2111*/
2112
2113/*
2114 Place chunk p of size s in its bin, in size order,
2115 putting it ahead of others of same size.
2116*/
2117
2118
2119#define frontlink(A, P, S, IDX, BK, FD) \
2120{ \
2121 if (S < MAX_SMALLBIN_SIZE) \
2122 { \
2123 IDX = smallbin_index(S); \
2124 mark_binblock(A, IDX); \
2125 BK = bin_at(A, IDX); \
2126 FD = BK->fd; \
2127 P->bk = BK; \
2128 P->fd = FD; \
2129 FD->bk = BK->fd = P; \
2130 } \
2131 else \
2132 { \
2133 IDX = bin_index(S); \
2134 BK = bin_at(A, IDX); \
2135 FD = BK->fd; \
2136 if (FD == BK) mark_binblock(A, IDX); \
2137 else \
2138 { \
2139 while (FD != BK && S < chunksize(FD)) FD = FD->fd; \
2140 BK = FD->bk; \
2141 } \
2142 P->bk = BK; \
2143 P->fd = FD; \
2144 FD->bk = BK->fd = P; \
2145 } \
2146}
2147
2148
2149/* take a chunk off a list */
2150
2151#define unlink(P, BK, FD) \
2152{ \
2153 BK = P->bk; \
2154 FD = P->fd; \
2155 FD->bk = BK; \
2156 BK->fd = FD; \
2157} \
2158
2159/* Place p as the last remainder */
2160
2161#define link_last_remainder(A, P) \
2162{ \
2163 last_remainder(A)->fd = last_remainder(A)->bk = P; \
2164 P->fd = P->bk = last_remainder(A); \
2165}
2166
2167/* Clear the last_remainder bin */
2168
2169#define clear_last_remainder(A) \
2170 (last_remainder(A)->fd = last_remainder(A)->bk = last_remainder(A))
2171
2172
2173
2174\f
2175
2176/*
2177 Extend the top-most chunk by obtaining memory from system.
2178 Main interface to sbrk (but see also malloc_trim).
2179*/
2180
2181#if __STD_C
2182static void malloc_extend_top(arena *ar_ptr, INTERNAL_SIZE_T nb)
2183#else
2184static void malloc_extend_top(ar_ptr, nb) arena *ar_ptr; INTERNAL_SIZE_T nb;
2185#endif
2186{
2187 unsigned long pagesz = malloc_getpagesize;
2188 mchunkptr old_top = top(ar_ptr); /* Record state of old top */
2189 INTERNAL_SIZE_T old_top_size = chunksize(old_top);
2190 INTERNAL_SIZE_T top_size; /* new size of top chunk */
2191
2192#ifndef NO_THREADS
2193 if(ar_ptr == &main_arena) {
2194#endif
2195
2196 char* brk; /* return value from sbrk */
2197 INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
2198 INTERNAL_SIZE_T correction; /* bytes for 2nd sbrk call */
2199 char* new_brk; /* return of 2nd sbrk call */
2200 char* old_end = (char*)(chunk_at_offset(old_top, old_top_size));
2201
2202 /* Pad request with top_pad plus minimal overhead */
2203 INTERNAL_SIZE_T sbrk_size = nb + top_pad + MINSIZE;
2204
2205 /* If not the first time through, round to preserve page boundary */
2206 /* Otherwise, we need to correct to a page size below anyway. */
2207 /* (We also correct below if an intervening foreign sbrk call occurred.) */
2208
2209 if (sbrk_base != (char*)(-1))
2210 sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
2211
2212 brk = (char*)(MORECORE (sbrk_size));
2213
2214 /* Fail if sbrk failed or if a foreign sbrk call killed our space */
2215 if (brk == (char*)(MORECORE_FAILURE) ||
2216 (brk < old_end && old_top != initial_top(&main_arena)))
2217 return;
2218
1228ed5c
UD
2219 /* Call the `morecore' hook if necessary. */
2220 if (__after_morecore_hook)
2221 (*__after_morecore_hook) ();
2222
f65fd747
UD
2223 sbrked_mem += sbrk_size;
2224
2225 if (brk == old_end) { /* can just add bytes to current top */
2226 top_size = sbrk_size + old_top_size;
2227 set_head(old_top, top_size | PREV_INUSE);
2228 old_top = 0; /* don't free below */
2229 } else {
2230 if (sbrk_base == (char*)(-1)) /* First time through. Record base */
2231 sbrk_base = brk;
2232 else
2233 /* Someone else called sbrk(). Count those bytes as sbrked_mem. */
2234 sbrked_mem += brk - (char*)old_end;
2235
2236 /* Guarantee alignment of first new chunk made from this space */
2237 front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
2238 if (front_misalign > 0) {
2239 correction = (MALLOC_ALIGNMENT) - front_misalign;
2240 brk += correction;
2241 } else
2242 correction = 0;
2243
2244 /* Guarantee the next brk will be at a page boundary */
2245 correction += pagesz - ((unsigned long)(brk + sbrk_size) & (pagesz - 1));
2246
2247 /* Allocate correction */
2248 new_brk = (char*)(MORECORE (correction));
2249 if (new_brk == (char*)(MORECORE_FAILURE)) return;
2250
1228ed5c
UD
2251 /* Call the `morecore' hook if necessary. */
2252 if (__after_morecore_hook)
2253 (*__after_morecore_hook) ();
2254
f65fd747
UD
2255 sbrked_mem += correction;
2256
2257 top(&main_arena) = (mchunkptr)brk;
2258 top_size = new_brk - brk + correction;
2259 set_head(top(&main_arena), top_size | PREV_INUSE);
2260
2261 if (old_top == initial_top(&main_arena))
2262 old_top = 0; /* don't free below */
2263 }
2264
2265 if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
2266 max_sbrked_mem = sbrked_mem;
8a4b65b4 2267#ifdef NO_THREADS
f65fd747
UD
2268 if ((unsigned long)(mmapped_mem + sbrked_mem) >
2269 (unsigned long)max_total_mem)
2270 max_total_mem = mmapped_mem + sbrked_mem;
8a4b65b4 2271#endif
f65fd747
UD
2272
2273#ifndef NO_THREADS
2274 } else { /* ar_ptr != &main_arena */
8a4b65b4
UD
2275 heap_info *old_heap, *heap;
2276 size_t old_heap_size;
f65fd747
UD
2277
2278 if(old_top_size < MINSIZE) /* this should never happen */
2279 return;
2280
2281 /* First try to extend the current heap. */
2282 if(MINSIZE + nb <= old_top_size)
2283 return;
8a4b65b4
UD
2284 old_heap = heap_for_ptr(old_top);
2285 old_heap_size = old_heap->size;
2286 if(grow_heap(old_heap, MINSIZE + nb - old_top_size) == 0) {
2287 ar_ptr->size += old_heap->size - old_heap_size;
2288 top_size = ((char *)old_heap + old_heap->size) - (char *)old_top;
f65fd747
UD
2289 set_head(old_top, top_size | PREV_INUSE);
2290 return;
2291 }
2292
2293 /* A new heap must be created. */
2294 heap = new_heap(nb + top_pad + (MINSIZE + sizeof(*heap)));
2295 if(!heap)
2296 return;
2297 heap->ar_ptr = ar_ptr;
8a4b65b4
UD
2298 heap->prev = old_heap;
2299 ar_ptr->size += heap->size;
f65fd747
UD
2300
2301 /* Set up the new top, so we can safely use chunk_free() below. */
2302 top(ar_ptr) = chunk_at_offset(heap, sizeof(*heap));
2303 top_size = heap->size - sizeof(*heap);
2304 set_head(top(ar_ptr), top_size | PREV_INUSE);
2305 }
2306#endif /* !defined(NO_THREADS) */
2307
2308 /* We always land on a page boundary */
2309 assert(((unsigned long)((char*)top(ar_ptr) + top_size) & (pagesz-1)) == 0);
2310
2311 /* Setup fencepost and free the old top chunk. */
2312 if(old_top) {
8a4b65b4
UD
2313 /* The fencepost takes at least MINSIZE bytes, because it might
2314 become the top chunk again later. Note that a footer is set
2315 up, too, although the chunk is marked in use. */
2316 old_top_size -= MINSIZE;
2317 set_head(chunk_at_offset(old_top, old_top_size + 2*SIZE_SZ), 0|PREV_INUSE);
2318 if(old_top_size >= MINSIZE) {
2319 set_head(chunk_at_offset(old_top, old_top_size), (2*SIZE_SZ)|PREV_INUSE);
2320 set_foot(chunk_at_offset(old_top, old_top_size), (2*SIZE_SZ));
f65fd747
UD
2321 set_head_size(old_top, old_top_size);
2322 chunk_free(ar_ptr, old_top);
2323 } else {
8a4b65b4
UD
2324 set_head(old_top, (old_top_size + 2*SIZE_SZ)|PREV_INUSE);
2325 set_foot(old_top, (old_top_size + 2*SIZE_SZ));
f65fd747
UD
2326 }
2327 }
2328}
2329
2330
2331\f
2332
2333/* Main public routines */
2334
2335
2336/*
8a4b65b4 2337 Malloc Algorithm:
f65fd747
UD
2338
2339 The requested size is first converted into a usable form, `nb'.
2340 This currently means to add 4 bytes overhead plus possibly more to
2341 obtain 8-byte alignment and/or to obtain a size of at least
8a4b65b4
UD
2342 MINSIZE (currently 16, 24, or 32 bytes), the smallest allocatable
2343 size. (All fits are considered `exact' if they are within MINSIZE
2344 bytes.)
f65fd747
UD
2345
2346 From there, the first of the following steps that succeeds is taken:
2347
2348 1. The bin corresponding to the request size is scanned, and if
2349 a chunk of exactly the right size is found, it is taken.
2350
2351 2. The most recently remaindered chunk is used if it is big
2352 enough. This is a form of (roving) first fit, used only in
2353 the absence of exact fits. Runs of consecutive requests use
2354 the remainder of the chunk used for the previous such request
2355 whenever possible. This limited use of a first-fit style
2356 allocation strategy tends to give contiguous chunks
2357 coextensive lifetimes, which improves locality and can reduce
2358 fragmentation in the long run.
2359
2360 3. Other bins are scanned in increasing size order, using a
2361 chunk big enough to fulfill the request, and splitting off
2362 any remainder. This search is strictly by best-fit; i.e.,
2363 the smallest (with ties going to approximately the least
2364 recently used) chunk that fits is selected.
2365
2366 4. If large enough, the chunk bordering the end of memory
2367 (`top') is split off. (This use of `top' is in accord with
2368 the best-fit search rule. In effect, `top' is treated as
2369 larger (and thus less well fitting) than any other available
2370 chunk since it can be extended to be as large as necessary
2371 (up to system limitations).)
2372
2373 5. If the request size meets the mmap threshold and the
2374 system supports mmap, and there are few enough currently
2375 allocated mmapped regions, and a call to mmap succeeds,
2376 the request is allocated via direct memory mapping.
2377
2378 6. Otherwise, the top of memory is extended by
2379 obtaining more space from the system (normally using sbrk,
2380 but definable to anything else via the MORECORE macro).
2381 Memory is gathered from the system (in system page-sized
2382 units) in a way that allows chunks obtained across different
2383 sbrk calls to be consolidated, but does not require
2384 contiguous memory. Thus, it should be safe to intersperse
2385 mallocs with other sbrk calls.
2386
2387
2388 All allocations are made from the `lowest' part of any found
2389 chunk. (The implementation invariant is that prev_inuse is
2390 always true of any allocated chunk; i.e., that each allocated
2391 chunk borders either a previously allocated and still in-use chunk,
2392 or the base of its memory arena.)
2393
2394*/
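/* A sketch of the size normalization described at the top of this comment
   (illustration only; the real conversion is done by request2size()
   earlier in this file).  With SIZE_SZ == 4 and 8-byte alignment, a
   request for 13 bytes becomes a 24-byte chunk: */
static size_t
pad_request_example (size_t req)
{
  size_t nb = req + SIZE_SZ;                           /* size field overhead */
  nb = (nb + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK;  /* round up to alignment */
  if (nb < MINSIZE)
    nb = MINSIZE;                                      /* smallest legal chunk */
  return nb;                                           /* e.g. 13 -> 24 */
}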
2395
2396#if __STD_C
2397Void_t* mALLOc(size_t bytes)
2398#else
2399Void_t* mALLOc(bytes) size_t bytes;
2400#endif
2401{
2402 arena *ar_ptr;
10dc2a90 2403 INTERNAL_SIZE_T nb; /* padded request size */
f65fd747
UD
2404 mchunkptr victim;
2405
10dc2a90
UD
2406#if defined(_LIBC) || defined(MALLOC_HOOKS)
2407 if (__malloc_hook != NULL) {
2408 Void_t* result;
2409
2410 result = (*__malloc_hook)(bytes);
2411 return result;
2412 }
2413#endif
2414
2415 nb = request2size(bytes);
f65fd747
UD
2416 arena_get(ar_ptr, nb + top_pad);
2417 if(!ar_ptr)
2418 return 0;
2419 victim = chunk_alloc(ar_ptr, nb);
2420 (void)mutex_unlock(&ar_ptr->mutex);
2421 return victim ? chunk2mem(victim) : 0;
2422}
2423
2424static mchunkptr
2425#if __STD_C
2426chunk_alloc(arena *ar_ptr, INTERNAL_SIZE_T nb)
2427#else
2428chunk_alloc(ar_ptr, nb) arena *ar_ptr; INTERNAL_SIZE_T nb;
2429#endif
2430{
2431 mchunkptr victim; /* inspected/selected chunk */
2432 INTERNAL_SIZE_T victim_size; /* its size */
2433 int idx; /* index for bin traversal */
2434 mbinptr bin; /* associated bin */
2435 mchunkptr remainder; /* remainder from a split */
2436 long remainder_size; /* its size */
2437 int remainder_index; /* its bin index */
2438 unsigned long block; /* block traverser bit */
2439 int startidx; /* first bin of a traversed block */
2440 mchunkptr fwd; /* misc temp for linking */
2441 mchunkptr bck; /* misc temp for linking */
2442 mbinptr q; /* misc temp */
2443
2444
2445 /* Check for exact match in a bin */
2446
2447 if (is_small_request(nb)) /* Faster version for small requests */
2448 {
2449 idx = smallbin_index(nb);
2450
2451 /* No traversal or size check necessary for small bins. */
2452
2453 q = bin_at(ar_ptr, idx);
2454 victim = last(q);
2455
2456 /* Also scan the next one, since it would have a remainder < MINSIZE */
2457 if (victim == q)
2458 {
2459 q = next_bin(q);
2460 victim = last(q);
2461 }
2462 if (victim != q)
2463 {
2464 victim_size = chunksize(victim);
2465 unlink(victim, bck, fwd);
2466 set_inuse_bit_at_offset(victim, victim_size);
2467 check_malloced_chunk(ar_ptr, victim, nb);
2468 return victim;
2469 }
2470
2471 idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
2472
2473 }
2474 else
2475 {
2476 idx = bin_index(nb);
2477 bin = bin_at(ar_ptr, idx);
2478
2479 for (victim = last(bin); victim != bin; victim = victim->bk)
2480 {
2481 victim_size = chunksize(victim);
2482 remainder_size = victim_size - nb;
2483
2484 if (remainder_size >= (long)MINSIZE) /* too big */
2485 {
2486 --idx; /* adjust to rescan below after checking last remainder */
2487 break;
2488 }
2489
2490 else if (remainder_size >= 0) /* exact fit */
2491 {
2492 unlink(victim, bck, fwd);
2493 set_inuse_bit_at_offset(victim, victim_size);
2494 check_malloced_chunk(ar_ptr, victim, nb);
2495 return victim;
2496 }
2497 }
2498
2499 ++idx;
2500
2501 }
2502
2503 /* Try to use the last split-off remainder */
2504
2505 if ( (victim = last_remainder(ar_ptr)->fd) != last_remainder(ar_ptr))
2506 {
2507 victim_size = chunksize(victim);
2508 remainder_size = victim_size - nb;
2509
2510 if (remainder_size >= (long)MINSIZE) /* re-split */
2511 {
2512 remainder = chunk_at_offset(victim, nb);
2513 set_head(victim, nb | PREV_INUSE);
2514 link_last_remainder(ar_ptr, remainder);
2515 set_head(remainder, remainder_size | PREV_INUSE);
2516 set_foot(remainder, remainder_size);
2517 check_malloced_chunk(ar_ptr, victim, nb);
2518 return victim;
2519 }
2520
2521 clear_last_remainder(ar_ptr);
2522
2523 if (remainder_size >= 0) /* exhaust */
2524 {
2525 set_inuse_bit_at_offset(victim, victim_size);
2526 check_malloced_chunk(ar_ptr, victim, nb);
2527 return victim;
2528 }
2529
2530 /* Else place in bin */
2531
2532 frontlink(ar_ptr, victim, victim_size, remainder_index, bck, fwd);
2533 }
2534
2535 /*
2536 If there are any possibly nonempty big-enough blocks,
2537 search for best fitting chunk by scanning bins in blockwidth units.
2538 */
2539
2540 if ( (block = idx2binblock(idx)) <= binblocks(ar_ptr))
2541 {
2542
2543 /* Get to the first marked block */
2544
2545 if ( (block & binblocks(ar_ptr)) == 0)
2546 {
2547 /* force to an even block boundary */
2548 idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
2549 block <<= 1;
2550 while ((block & binblocks(ar_ptr)) == 0)
2551 {
2552 idx += BINBLOCKWIDTH;
2553 block <<= 1;
2554 }
2555 }
2556
2557 /* For each possibly nonempty block ... */
2558 for (;;)
2559 {
2560 startidx = idx; /* (track incomplete blocks) */
2561 q = bin = bin_at(ar_ptr, idx);
2562
2563 /* For each bin in this block ... */
2564 do
2565 {
2566 /* Find and use first big enough chunk ... */
2567
2568 for (victim = last(bin); victim != bin; victim = victim->bk)
2569 {
2570 victim_size = chunksize(victim);
2571 remainder_size = victim_size - nb;
2572
2573 if (remainder_size >= (long)MINSIZE) /* split */
2574 {
2575 remainder = chunk_at_offset(victim, nb);
2576 set_head(victim, nb | PREV_INUSE);
2577 unlink(victim, bck, fwd);
2578 link_last_remainder(ar_ptr, remainder);
2579 set_head(remainder, remainder_size | PREV_INUSE);
2580 set_foot(remainder, remainder_size);
2581 check_malloced_chunk(ar_ptr, victim, nb);
2582 return victim;
2583 }
2584
2585 else if (remainder_size >= 0) /* take */
2586 {
2587 set_inuse_bit_at_offset(victim, victim_size);
2588 unlink(victim, bck, fwd);
2589 check_malloced_chunk(ar_ptr, victim, nb);
2590 return victim;
2591 }
2592
2593 }
2594
2595 bin = next_bin(bin);
2596
2597 } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
2598
2599 /* Clear out the block bit. */
2600
2601 do /* Possibly backtrack to try to clear a partial block */
2602 {
2603 if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
2604 {
2605 binblocks(ar_ptr) &= ~block;
2606 break;
2607 }
2608 --startidx;
2609 q = prev_bin(q);
2610 } while (first(q) == q);
2611
2612 /* Get to the next possibly nonempty block */
2613
2614 if ( (block <<= 1) <= binblocks(ar_ptr) && (block != 0) )
2615 {
2616 while ((block & binblocks(ar_ptr)) == 0)
2617 {
2618 idx += BINBLOCKWIDTH;
2619 block <<= 1;
2620 }
2621 }
2622 else
2623 break;
2624 }
2625 }
2626
2627
2628 /* Try to use top chunk */
2629
2630 /* Require that there be a remainder, ensuring top always exists */
2631 if ( (remainder_size = chunksize(top(ar_ptr)) - nb) < (long)MINSIZE)
2632 {
2633
2634#if HAVE_MMAP
2635 /* If big and would otherwise need to extend, try to use mmap instead */
2636 if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
2637 (victim = mmap_chunk(nb)) != 0)
2638 return victim;
2639#endif
2640
2641 /* Try to extend */
2642 malloc_extend_top(ar_ptr, nb);
2643 if ((remainder_size = chunksize(top(ar_ptr)) - nb) < (long)MINSIZE)
2644 return 0; /* propagate failure */
2645 }
2646
2647 victim = top(ar_ptr);
2648 set_head(victim, nb | PREV_INUSE);
2649 top(ar_ptr) = chunk_at_offset(victim, nb);
2650 set_head(top(ar_ptr), remainder_size | PREV_INUSE);
2651 check_malloced_chunk(ar_ptr, victim, nb);
2652 return victim;
2653
2654}
2655
2656
2657\f
2658
2659/*
2660
2661 free() algorithm :
2662
2663 cases:
2664
2665 1. free(0) has no effect.
2666
2667 2. If the chunk was allocated via mmap, it is released via munmap().
2668
2669 3. If a returned chunk borders the current high end of memory,
2670 it is consolidated into the top, and if the total unused
2671 topmost memory exceeds the trim threshold, malloc_trim is
2672 called.
2673
2674 4. Other chunks are consolidated as they arrive, and
2675 placed in corresponding bins. (This includes the case of
2676 consolidating with the current `last_remainder').
2677
2678*/
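/* A toy illustration of the boundary tags that make the consolidation in
   cases 3 and 4 possible (assumption: simplified stand-alone types, not
   this file's chunk layout).  A chunk finds its physical neighbours from
   its own size field and from the prev_size footer that a free
   predecessor leaves behind: */
#include <stddef.h>

struct toy_chunk { size_t prev_size; size_t size; };

#define TOY_PREV_INUSE 1UL

/* next physical chunk: just past this chunk's `size' bytes */
static struct toy_chunk *
toy_next (struct toy_chunk *p)
{
  return (struct toy_chunk *) ((char *) p + (p->size & ~TOY_PREV_INUSE));
}

/* previous physical chunk: only meaningful when the predecessor is free,
   i.e. when the TOY_PREV_INUSE bit in p->size is clear */
static struct toy_chunk *
toy_prev (struct toy_chunk *p)
{
  return (struct toy_chunk *) ((char *) p - p->prev_size);
}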
2679
2680
2681#if __STD_C
2682void fREe(Void_t* mem)
2683#else
2684void fREe(mem) Void_t* mem;
2685#endif
2686{
2687 arena *ar_ptr;
2688 mchunkptr p; /* chunk corresponding to mem */
2689
10dc2a90
UD
2690#if defined(_LIBC) || defined(MALLOC_HOOKS)
2691 if (__free_hook != NULL) {
2692 (*__free_hook)(mem);
2693 return;
2694 }
2695#endif
2696
f65fd747
UD
2697 if (mem == 0) /* free(0) has no effect */
2698 return;
2699
2700 p = mem2chunk(mem);
2701
2702#if HAVE_MMAP
2703 if (chunk_is_mmapped(p)) /* release mmapped memory. */
2704 {
2705 munmap_chunk(p);
2706 return;
2707 }
2708#endif
2709
2710 ar_ptr = arena_for_ptr(p);
8a4b65b4
UD
2711#if THREAD_STATS
2712 if(!mutex_trylock(&ar_ptr->mutex))
2713 ++(ar_ptr->stat_lock_direct);
2714 else {
2715 (void)mutex_lock(&ar_ptr->mutex);
2716 ++(ar_ptr->stat_lock_wait);
2717 }
2718#else
f65fd747 2719 (void)mutex_lock(&ar_ptr->mutex);
8a4b65b4 2720#endif
f65fd747
UD
2721 chunk_free(ar_ptr, p);
2722 (void)mutex_unlock(&ar_ptr->mutex);
2723}
2724
2725static void
2726#if __STD_C
2727chunk_free(arena *ar_ptr, mchunkptr p)
2728#else
2729chunk_free(ar_ptr, p) arena *ar_ptr; mchunkptr p;
2730#endif
2731{
2732 INTERNAL_SIZE_T hd = p->size; /* its head field */
2733 INTERNAL_SIZE_T sz; /* its size */
2734 int idx; /* its bin index */
2735 mchunkptr next; /* next contiguous chunk */
2736 INTERNAL_SIZE_T nextsz; /* its size */
2737 INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
2738 mchunkptr bck; /* misc temp for linking */
2739 mchunkptr fwd; /* misc temp for linking */
2740 int islr; /* track whether merging with last_remainder */
2741
2742 check_inuse_chunk(ar_ptr, p);
2743
2744 sz = hd & ~PREV_INUSE;
2745 next = chunk_at_offset(p, sz);
2746 nextsz = chunksize(next);
2747
2748 if (next == top(ar_ptr)) /* merge with top */
2749 {
2750 sz += nextsz;
2751
2752 if (!(hd & PREV_INUSE)) /* consolidate backward */
2753 {
2754 prevsz = p->prev_size;
2755 p = chunk_at_offset(p, -prevsz);
2756 sz += prevsz;
2757 unlink(p, bck, fwd);
2758 }
2759
2760 set_head(p, sz | PREV_INUSE);
2761 top(ar_ptr) = p;
8a4b65b4
UD
2762
2763#ifndef NO_THREADS
2764 if(ar_ptr == &main_arena) {
2765#endif
2766 if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
2767 main_trim(top_pad);
2768#ifndef NO_THREADS
2769 } else {
2770 heap_info *heap = heap_for_ptr(p);
2771
2772 assert(heap->ar_ptr == ar_ptr);
2773
2774 /* Try to get rid of completely empty heaps, if possible. */
2775 if((unsigned long)(sz) >= (unsigned long)trim_threshold ||
2776 p == chunk_at_offset(heap, sizeof(*heap)))
2777 heap_trim(heap, top_pad);
2778 }
2779#endif
f65fd747
UD
2780 return;
2781 }
2782
2783 set_head(next, nextsz); /* clear inuse bit */
2784
2785 islr = 0;
2786
2787 if (!(hd & PREV_INUSE)) /* consolidate backward */
2788 {
2789 prevsz = p->prev_size;
2790 p = chunk_at_offset(p, -prevsz);
2791 sz += prevsz;
2792
2793 if (p->fd == last_remainder(ar_ptr)) /* keep as last_remainder */
2794 islr = 1;
2795 else
2796 unlink(p, bck, fwd);
2797 }
2798
2799 if (!(inuse_bit_at_offset(next, nextsz))) /* consolidate forward */
2800 {
2801 sz += nextsz;
2802
2803 if (!islr && next->fd == last_remainder(ar_ptr))
2804 /* re-insert last_remainder */
2805 {
2806 islr = 1;
2807 link_last_remainder(ar_ptr, p);
2808 }
2809 else
2810 unlink(next, bck, fwd);
2811 }
2812
2813 set_head(p, sz | PREV_INUSE);
2814 set_foot(p, sz);
2815 if (!islr)
2816 frontlink(ar_ptr, p, sz, idx, bck, fwd);
2817}
2818
2819
2820\f
2821
2822
2823/*
2824
2825 Realloc algorithm:
2826
2827 Chunks that were obtained via mmap cannot be extended or shrunk
2828 unless HAVE_MREMAP is defined, in which case mremap is used.
2829 Otherwise, if their reallocation is for additional space, they are
2830 copied. If for less, they are just left alone.
2831
2832 Otherwise, if the reallocation is for additional space, and the
2833 chunk can be extended, it is, else a malloc-copy-free sequence is
2834 taken. There are several different ways that a chunk could be
2835 extended. All are tried:
2836
2837 * Extending forward into following adjacent free chunk.
2838 * Shifting backwards, joining preceding adjacent space
2839 * Both shifting backwards and extending forward.
2840 * Extending into newly sbrked space
2841
2842 Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
2843 size argument of zero (re)allocates a minimum-sized chunk.
2844
2845 If the reallocation is for less space, and the new request is for
2846 a `small' (<512 bytes) size, then the newly unused space is lopped
2847 off and freed.
2848
2849 The old unix realloc convention of allowing the last-free'd chunk
2850 to be used as an argument to realloc is no longer supported.
2851 I don't know of any programs still relying on this feature,
2852 and allowing it would also allow too many other incorrect
2853 usages of realloc to be sensible.
2854
2855
2856*/
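/* A caller-side sketch of the contract described above (hypothetical
   helper, not part of this file): on failure realloc() returns 0 and
   leaves the old block intact, so the old pointer must not be overwritten
   until the call has succeeded: */
#include <stdlib.h>

static void *
grow_buffer_example (void *buf, size_t new_size)
{
  void *tmp = realloc (buf, new_size);   /* the block may move */
  if (tmp == 0)
    return buf;            /* old block is still valid; caller keeps it */
  return tmp;
}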
2857
2858
2859#if __STD_C
2860Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
2861#else
2862Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
2863#endif
2864{
2865 arena *ar_ptr;
2866 INTERNAL_SIZE_T nb; /* padded request size */
2867
2868 mchunkptr oldp; /* chunk corresponding to oldmem */
2869 INTERNAL_SIZE_T oldsize; /* its size */
2870
2871 mchunkptr newp; /* chunk to return */
f65fd747 2872
10dc2a90
UD
2873#if defined(_LIBC) || defined(MALLOC_HOOKS)
2874 if (__realloc_hook != NULL) {
2875 Void_t* result;
f65fd747 2876
10dc2a90
UD
2877 result = (*__realloc_hook)(oldmem, bytes);
2878 return result;
2879 }
2880#endif
f65fd747
UD
2881
2882#ifdef REALLOC_ZERO_BYTES_FREES
2883 if (bytes == 0) { fREe(oldmem); return 0; }
2884#endif
2885
f65fd747
UD
2886 /* realloc of null is supposed to be same as malloc */
2887 if (oldmem == 0) return mALLOc(bytes);
2888
10dc2a90
UD
2889 oldp = mem2chunk(oldmem);
2890 oldsize = chunksize(oldp);
f65fd747
UD
2891
2892 nb = request2size(bytes);
2893
2894#if HAVE_MMAP
2895 if (chunk_is_mmapped(oldp))
2896 {
10dc2a90
UD
2897 Void_t* newmem;
2898
f65fd747
UD
2899#if HAVE_MREMAP
2900 newp = mremap_chunk(oldp, nb);
2901 if(newp) return chunk2mem(newp);
2902#endif
2903 /* Note the extra SIZE_SZ overhead. */
2904 if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */
2905 /* Must alloc, copy, free. */
2906 newmem = mALLOc(bytes);
2907 if (newmem == 0) return 0; /* propagate failure */
2908 MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
2909 munmap_chunk(oldp);
2910 return newmem;
2911 }
2912#endif
2913
2914 ar_ptr = arena_for_ptr(oldp);
8a4b65b4
UD
2915#if THREAD_STATS
2916 if(!mutex_trylock(&ar_ptr->mutex))
2917 ++(ar_ptr->stat_lock_direct);
2918 else {
2919 (void)mutex_lock(&ar_ptr->mutex);
2920 ++(ar_ptr->stat_lock_wait);
2921 }
2922#else
f65fd747 2923 (void)mutex_lock(&ar_ptr->mutex);
8a4b65b4
UD
2924#endif
2925
1228ed5c 2926#ifndef NO_THREADS
f65fd747
UD
2927 /* As in malloc(), remember this arena for the next allocation. */
2928 tsd_setspecific(arena_key, (Void_t *)ar_ptr);
1228ed5c 2929#endif
f65fd747 2930
10dc2a90
UD
2931 newp = chunk_realloc(ar_ptr, oldp, oldsize, nb);
2932
2933 (void)mutex_unlock(&ar_ptr->mutex);
2934 return newp ? chunk2mem(newp) : NULL;
2935}
2936
2937static mchunkptr
2938#if __STD_C
2939chunk_realloc(arena* ar_ptr, mchunkptr oldp, INTERNAL_SIZE_T oldsize,
7e3be507 2940 INTERNAL_SIZE_T nb)
10dc2a90
UD
2941#else
2942chunk_realloc(ar_ptr, oldp, oldsize, nb)
2943arena* ar_ptr; mchunkptr oldp; INTERNAL_SIZE_T oldsize, nb;
2944#endif
2945{
2946 mchunkptr newp = oldp; /* chunk to return */
2947 INTERNAL_SIZE_T newsize = oldsize; /* its size */
2948
2949 mchunkptr next; /* next contiguous chunk after oldp */
2950 INTERNAL_SIZE_T nextsize; /* its size */
2951
2952 mchunkptr prev; /* previous contiguous chunk before oldp */
2953 INTERNAL_SIZE_T prevsize; /* its size */
2954
2955 mchunkptr remainder; /* holds split off extra space from newp */
2956 INTERNAL_SIZE_T remainder_size; /* its size */
2957
2958 mchunkptr bck; /* misc temp for linking */
2959 mchunkptr fwd; /* misc temp for linking */
2960
f65fd747
UD
2961 check_inuse_chunk(ar_ptr, oldp);
2962
2963 if ((long)(oldsize) < (long)(nb))
2964 {
2965
2966 /* Try expanding forward */
2967
2968 next = chunk_at_offset(oldp, oldsize);
2969 if (next == top(ar_ptr) || !inuse(next))
2970 {
2971 nextsize = chunksize(next);
2972
2973 /* Forward into top only if a remainder */
2974 if (next == top(ar_ptr))
2975 {
2976 if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
2977 {
2978 newsize += nextsize;
2979 top(ar_ptr) = chunk_at_offset(oldp, nb);
2980 set_head(top(ar_ptr), (newsize - nb) | PREV_INUSE);
2981 set_head_size(oldp, nb);
10dc2a90 2982 return oldp;
f65fd747
UD
2983 }
2984 }
2985
2986 /* Forward into next chunk */
2987 else if (((long)(nextsize + newsize) >= (long)(nb)))
2988 {
2989 unlink(next, bck, fwd);
2990 newsize += nextsize;
2991 goto split;
2992 }
2993 }
2994 else
2995 {
2996 next = 0;
2997 nextsize = 0;
2998 }
2999
3000 /* Try shifting backwards. */
3001
3002 if (!prev_inuse(oldp))
3003 {
3004 prev = prev_chunk(oldp);
3005 prevsize = chunksize(prev);
3006
3007 /* try forward + backward first to save a later consolidation */
3008
3009 if (next != 0)
3010 {
3011 /* into top */
3012 if (next == top(ar_ptr))
3013 {
3014 if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
3015 {
3016 unlink(prev, bck, fwd);
3017 newp = prev;
3018 newsize += prevsize + nextsize;
10dc2a90 3019 MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
f65fd747
UD
3020 top(ar_ptr) = chunk_at_offset(newp, nb);
3021 set_head(top(ar_ptr), (newsize - nb) | PREV_INUSE);
3022 set_head_size(newp, nb);
10dc2a90 3023 return newp;
f65fd747
UD
3024 }
3025 }
3026
3027 /* into next chunk */
3028 else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
3029 {
3030 unlink(next, bck, fwd);
3031 unlink(prev, bck, fwd);
3032 newp = prev;
3033 newsize += nextsize + prevsize;
10dc2a90 3034 MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
f65fd747
UD
3035 goto split;
3036 }
3037 }
3038
3039 /* backward only */
3040 if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)
3041 {
3042 unlink(prev, bck, fwd);
3043 newp = prev;
3044 newsize += prevsize;
10dc2a90 3045 MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
f65fd747
UD
3046 goto split;
3047 }
3048 }
3049
3050 /* Must allocate */
3051
3052 newp = chunk_alloc (ar_ptr, nb);
3053
3054 if (newp == 0) /* propagate failure */
3055 return 0;
3056
3057 /* Avoid copy if newp is next chunk after oldp. */
3058 /* (This can only happen when new chunk is sbrk'ed.) */
3059
3060 if ( newp == next_chunk(oldp))
3061 {
3062 newsize += chunksize(newp);
3063 newp = oldp;
3064 goto split;
3065 }
3066
3067 /* Otherwise copy, free, and exit */
10dc2a90 3068 MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
f65fd747 3069 chunk_free(ar_ptr, oldp);
10dc2a90 3070 return newp;
f65fd747
UD
3071 }
3072
3073
3074 split: /* split off extra room in old or expanded chunk */
3075
3076 if (newsize - nb >= MINSIZE) /* split off remainder */
3077 {
3078 remainder = chunk_at_offset(newp, nb);
3079 remainder_size = newsize - nb;
3080 set_head_size(newp, nb);
3081 set_head(remainder, remainder_size | PREV_INUSE);
3082 set_inuse_bit_at_offset(remainder, remainder_size);
3083 chunk_free(ar_ptr, remainder);
3084 }
3085 else
3086 {
3087 set_head_size(newp, newsize);
3088 set_inuse_bit_at_offset(newp, newsize);
3089 }
3090
3091 check_inuse_chunk(ar_ptr, newp);
10dc2a90 3092 return newp;
f65fd747
UD
3093}
3094
3095
3096\f
3097
3098/*
3099
3100 memalign algorithm:
3101
3102 memalign requests more than enough space from malloc, finds a spot
3103 within that chunk that meets the alignment request, and then
3104 possibly frees the leading and trailing space.
3105
3106 The alignment argument must be a power of two. This property is not
3107 checked by memalign, so misuse may result in random runtime errors.
3108
3109 8-byte alignment is guaranteed by normal malloc calls, so don't
3110 bother calling memalign with an argument of 8 or less.
3111
3112 Overreliance on memalign is a sure way to fragment space.
3113
3114*/
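/* A worked example of the address rounding used by chunk_align() below
   (hypothetical helper): rounding up to a power-of-two alignment with a
   mask.  Note that -alignment and ~(alignment - 1) are the same bit mask
   when alignment is a power of two; e.g.
   align_up_example(0x1234, 0x100) == 0x1300. */
static unsigned long
align_up_example (unsigned long addr, unsigned long alignment)
{
  return (addr + alignment - 1) & ~(alignment - 1);
}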
3115
3116
3117#if __STD_C
3118Void_t* mEMALIGn(size_t alignment, size_t bytes)
3119#else
3120Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
3121#endif
3122{
3123 arena *ar_ptr;
3124 INTERNAL_SIZE_T nb; /* padded request size */
10dc2a90
UD
3125 mchunkptr p;
3126
3127#if defined(_LIBC) || defined(MALLOC_HOOKS)
3128 if (__memalign_hook != NULL) {
3129 Void_t* result;
3130
3131 result = (*__memalign_hook)(alignment, bytes);
3132 return result;
3133 }
3134#endif
f65fd747
UD
3135
3136 /* If need less alignment than we give anyway, just relay to malloc */
3137
3138 if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
3139
3140 /* Otherwise, ensure that it is at least a minimum chunk size */
3141
3142 if (alignment < MINSIZE) alignment = MINSIZE;
3143
f65fd747
UD
3144 nb = request2size(bytes);
3145 arena_get(ar_ptr, nb + alignment + MINSIZE);
3146 if(!ar_ptr)
3147 return 0;
10dc2a90
UD
3148 p = chunk_align(ar_ptr, nb, alignment);
3149 (void)mutex_unlock(&ar_ptr->mutex);
3150 return p ? chunk2mem(p) : NULL;
3151}
f65fd747 3152
10dc2a90
UD
3153static mchunkptr
3154#if __STD_C
3155chunk_align(arena* ar_ptr, INTERNAL_SIZE_T nb, size_t alignment)
3156#else
3157chunk_align(ar_ptr, nb, alignment)
3158arena* ar_ptr; INTERNAL_SIZE_T nb; size_t alignment;
3159#endif
3160{
3161 char* m; /* memory returned by malloc call */
3162 mchunkptr p; /* corresponding chunk */
3163 char* brk; /* alignment point within p */
3164 mchunkptr newp; /* chunk to return */
3165 INTERNAL_SIZE_T newsize; /* its size */
3166 INTERNAL_SIZE_T leadsize; /* leading space before alignment point */
3167 mchunkptr remainder; /* spare room at end to split off */
3168 long remainder_size; /* its size */
3169
3170 /* Call chunk_alloc with worst case padding to hit alignment. */
3171 p = chunk_alloc(ar_ptr, nb + alignment + MINSIZE);
3172 if (p == 0)
f65fd747 3173 return 0; /* propagate failure */
f65fd747
UD
3174
3175 m = chunk2mem(p);
3176
3177 if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
3178 {
3179#if HAVE_MMAP
3180 if(chunk_is_mmapped(p)) {
10dc2a90 3181 return p; /* nothing more to do */
f65fd747
UD
3182 }
3183#endif
3184 }
3185 else /* misaligned */
3186 {
3187 /*
3188 Find an aligned spot inside chunk.
3189 Since we need to give back leading space in a chunk of at
3190 least MINSIZE, if the first calculation places us at
3191 a spot with less than MINSIZE leader, we can move to the
3192 next aligned spot -- we've allocated enough total room so that
3193 this is always possible.
3194 */
3195
3196 brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -alignment);
10dc2a90 3197 if ((long)(brk - (char*)(p)) < (long)MINSIZE) brk += alignment;
f65fd747
UD
3198
3199 newp = (mchunkptr)brk;
3200 leadsize = brk - (char*)(p);
3201 newsize = chunksize(p) - leadsize;
3202
3203#if HAVE_MMAP
3204 if(chunk_is_mmapped(p))
3205 {
3206 newp->prev_size = p->prev_size + leadsize;
3207 set_head(newp, newsize|IS_MMAPPED);
10dc2a90 3208 return newp;
f65fd747
UD
3209 }
3210#endif
3211
3212 /* give back leader, use the rest */
3213
3214 set_head(newp, newsize | PREV_INUSE);
3215 set_inuse_bit_at_offset(newp, newsize);
3216 set_head_size(p, leadsize);
3217 chunk_free(ar_ptr, p);
3218 p = newp;
3219
3220 assert (newsize>=nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
3221 }
3222
3223 /* Also give back spare room at the end */
3224
3225 remainder_size = chunksize(p) - nb;
3226
3227 if (remainder_size >= (long)MINSIZE)
3228 {
3229 remainder = chunk_at_offset(p, nb);
3230 set_head(remainder, remainder_size | PREV_INUSE);
3231 set_head_size(p, nb);
3232 chunk_free(ar_ptr, remainder);
3233 }
3234
3235 check_inuse_chunk(ar_ptr, p);
10dc2a90 3236 return p;
f65fd747
UD
3237}
3238
3239\f
3240
3241
3242/*
3243 valloc just invokes memalign with alignment argument equal
3244 to the page size of the system (or as near to this as can
3245 be figured out from all the includes/defines above.)
3246*/
3247
3248#if __STD_C
3249Void_t* vALLOc(size_t bytes)
3250#else
3251Void_t* vALLOc(bytes) size_t bytes;
3252#endif
3253{
3254 return mEMALIGn (malloc_getpagesize, bytes);
3255}
3256
3257/*
3258 pvalloc just invokes valloc for the nearest pagesize
3259 that will accommodate the request
3260*/
3261
3262
3263#if __STD_C
3264Void_t* pvALLOc(size_t bytes)
3265#else
3266Void_t* pvALLOc(bytes) size_t bytes;
3267#endif
3268{
3269 size_t pagesize = malloc_getpagesize;
3270 return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
3271}
3272
3273/*
3274
10dc2a90 3275 calloc calls chunk_alloc, then zeroes out the allocated chunk.
f65fd747
UD
3276
3277*/
3278
3279#if __STD_C
3280Void_t* cALLOc(size_t n, size_t elem_size)
3281#else
3282Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size;
3283#endif
3284{
3285 arena *ar_ptr;
3286 mchunkptr p, oldtop;
10dc2a90 3287 INTERNAL_SIZE_T sz, csz, oldtopsize;
f65fd747
UD
3288 Void_t* mem;
3289
10dc2a90
UD
3290#if defined(_LIBC) || defined(MALLOC_HOOKS)
3291 if (__malloc_hook != NULL) {
3292 sz = n * elem_size;
3293 mem = (*__malloc_hook)(sz);
831372e7
UD
3294 if(mem == 0)
3295 return 0;
10dc2a90
UD
3296#ifdef HAVE_MEMCPY
3297 memset(mem, 0, sz);
3298#else
831372e7 3299 while(sz > 0) ((char*)mem)[--sz] = 0; /* rather inefficient */
10dc2a90
UD
3300#endif
3301 return mem;
3302 }
3303#endif
f65fd747 3304
10dc2a90 3305 sz = request2size(n * elem_size);
f65fd747
UD
3306 arena_get(ar_ptr, sz);
3307 if(!ar_ptr)
3308 return 0;
3309
3310 /* check if malloc_extend_top was called, in which case we don't need to clear */
3311#if MORECORE_CLEARS
3312 oldtop = top(ar_ptr);
3313 oldtopsize = chunksize(top(ar_ptr));
3314#endif
3315 p = chunk_alloc (ar_ptr, sz);
3316
3317 /* Only clearing follows, so we can unlock early. */
3318 (void)mutex_unlock(&ar_ptr->mutex);
3319
3320 if (p == 0)
3321 return 0;
3322 else
3323 {
3324 mem = chunk2mem(p);
3325
3326 /* Two optional cases in which clearing not necessary */
3327
3328#if HAVE_MMAP
3329 if (chunk_is_mmapped(p)) return mem;
3330#endif
3331
3332 csz = chunksize(p);
3333
3334#if MORECORE_CLEARS
3335 if (p == oldtop && csz > oldtopsize)
3336 {
3337 /* clear only the bytes from non-freshly-sbrked memory */
3338 csz = oldtopsize;
3339 }
3340#endif
3341
3342 MALLOC_ZERO(mem, csz - SIZE_SZ);
3343 return mem;
3344 }
3345}
3346
3347/*
3348
3349 cfree just calls free. It is needed/defined on some systems
3350 that pair it with calloc, presumably for odd historical reasons.
3351
3352*/
3353
3354#if !defined(_LIBC)
3355#if __STD_C
3356void cfree(Void_t *mem)
3357#else
3358void cfree(mem) Void_t *mem;
3359#endif
3360{
3361 free(mem);
3362}
3363#endif
3364
3365\f
3366
3367/*
3368
3369 Malloc_trim gives memory back to the system (via negative
3370 arguments to sbrk) if there is unused memory at the `high' end of
3371 the malloc pool. You can call this after freeing large blocks of
3372 memory to potentially reduce the system-level memory requirements
3373 of a program. However, it cannot guarantee to reduce memory usage. Under
3374 some allocation patterns, some large free blocks of memory will be
3375 locked between two used chunks, so they cannot be given back to
3376 the system.
3377
3378 The `pad' argument to malloc_trim represents the amount of free
3379 trailing space to leave untrimmed. If this argument is zero,
3380 only the minimum amount of memory to maintain internal data
3381 structures will be left (one page or less). Non-zero arguments
3382 can be supplied to maintain enough trailing space to service
3383 future expected allocations without having to re-obtain memory
3384 from the system.
3385
3386 Malloc_trim returns 1 if it actually released any memory, else 0.
3387
3388*/
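/* A caller-side sketch (assumption: the glibc <malloc.h> declaration of
   malloc_trim()): after freeing large blocks, an application can ask the
   allocator to give trailing space back to the system. */
#include <malloc.h>

static void
release_cached_memory_example (void)
{
  int released = malloc_trim (0);   /* pad == 0: keep only the minimum */
  (void) released;                  /* 1 if memory was returned, else 0 */
}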
3389
3390#if __STD_C
7e3be507 3391int mALLOC_TRIm(size_t pad)
f65fd747 3392#else
7e3be507 3393int mALLOC_TRIm(pad) size_t pad;
f65fd747
UD
3394#endif
3395{
3396 int res;
3397
3398 (void)mutex_lock(&main_arena.mutex);
8a4b65b4 3399 res = main_trim(pad);
f65fd747
UD
3400 (void)mutex_unlock(&main_arena.mutex);
3401 return res;
3402}
3403
8a4b65b4
UD
3404/* Trim the main arena. */
3405
f65fd747
UD
3406static int
3407#if __STD_C
8a4b65b4 3408main_trim(size_t pad)
f65fd747 3409#else
8a4b65b4 3410main_trim(pad) size_t pad;
f65fd747
UD
3411#endif
3412{
3413 mchunkptr top_chunk; /* The current top chunk */
3414 long top_size; /* Amount of top-most memory */
3415 long extra; /* Amount to release */
3416 char* current_brk; /* address returned by pre-check sbrk call */
3417 char* new_brk; /* address returned by negative sbrk call */
3418
3419 unsigned long pagesz = malloc_getpagesize;
3420
8a4b65b4 3421 top_chunk = top(&main_arena);
f65fd747
UD
3422 top_size = chunksize(top_chunk);
3423 extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
3424
3425 if (extra < (long)pagesz) /* Not enough memory to release */
3426 return 0;
3427
8a4b65b4
UD
3428 /* Test to make sure no one else called sbrk */
3429 current_brk = (char*)(MORECORE (0));
3430 if (current_brk != (char*)(top_chunk) + top_size)
3431 return 0; /* Apparently we don't own memory; must fail */
f65fd747 3432
8a4b65b4 3433 new_brk = (char*)(MORECORE (-extra));
f65fd747 3434
1228ed5c
UD
3435 /* Call the `morecore' hook if necessary. */
3436 if (__after_morecore_hook)
3437 (*__after_morecore_hook) ();
3438
8a4b65b4
UD
3439 if (new_brk == (char*)(MORECORE_FAILURE)) { /* sbrk failed? */
3440 /* Try to figure out what we have */
3441 current_brk = (char*)(MORECORE (0));
3442 top_size = current_brk - (char*)top_chunk;
3443 if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
3444 {
3445 sbrked_mem = current_brk - sbrk_base;
3446 set_head(top_chunk, top_size | PREV_INUSE);
f65fd747 3447 }
8a4b65b4
UD
3448 check_chunk(&main_arena, top_chunk);
3449 return 0;
3450 }
3451 sbrked_mem -= extra;
3452
3453 /* Success. Adjust top accordingly. */
3454 set_head(top_chunk, (top_size - extra) | PREV_INUSE);
3455 check_chunk(&main_arena, top_chunk);
3456 return 1;
3457}
f65fd747
UD
3458
3459#ifndef NO_THREADS
8a4b65b4
UD
3460
3461static int
3462#if __STD_C
3463heap_trim(heap_info *heap, size_t pad)
3464#else
3465heap_trim(heap, pad) heap_info *heap; size_t pad;
f65fd747 3466#endif
8a4b65b4
UD
3467{
3468 unsigned long pagesz = malloc_getpagesize;
3469 arena *ar_ptr = heap->ar_ptr;
3470 mchunkptr top_chunk = top(ar_ptr), p, bck, fwd;
3471 heap_info *prev_heap;
3472 long new_size, top_size, extra;
3473
3474 /* Can this heap go away completely ? */
3475 while(top_chunk == chunk_at_offset(heap, sizeof(*heap))) {
3476 prev_heap = heap->prev;
3477 p = chunk_at_offset(prev_heap, prev_heap->size - (MINSIZE-2*SIZE_SZ));
3478 assert(p->size == (0|PREV_INUSE)); /* must be fencepost */
3479 p = prev_chunk(p);
3480 new_size = chunksize(p) + (MINSIZE-2*SIZE_SZ);
10dc2a90 3481 assert(new_size>0 && new_size<(long)(2*MINSIZE));
8a4b65b4
UD
3482 if(!prev_inuse(p))
3483 new_size += p->prev_size;
3484 assert(new_size>0 && new_size<HEAP_MAX_SIZE);
3485 if(new_size + (HEAP_MAX_SIZE - prev_heap->size) < pad + MINSIZE + pagesz)
3486 break;
3487 ar_ptr->size -= heap->size;
3488 delete_heap(heap);
3489 heap = prev_heap;
3490 if(!prev_inuse(p)) { /* consolidate backward */
3491 p = prev_chunk(p);
3492 unlink(p, bck, fwd);
3493 }
3494 assert(((unsigned long)((char*)p + new_size) & (pagesz-1)) == 0);
3495 assert( ((char*)p + new_size) == ((char*)heap + heap->size) );
3496 top(ar_ptr) = top_chunk = p;
3497 set_head(top_chunk, new_size | PREV_INUSE);
3498 check_chunk(ar_ptr, top_chunk);
3499 }
3500 top_size = chunksize(top_chunk);
3501 extra = ((top_size - pad - MINSIZE + (pagesz-1))/pagesz - 1) * pagesz;
3502 if(extra < (long)pagesz)
3503 return 0;
3504 /* Try to shrink. */
3505 if(grow_heap(heap, -extra) != 0)
3506 return 0;
3507 ar_ptr->size -= extra;
f65fd747
UD
3508
3509 /* Success. Adjust top accordingly. */
3510 set_head(top_chunk, (top_size - extra) | PREV_INUSE);
3511 check_chunk(ar_ptr, top_chunk);
3512 return 1;
3513}
3514
8a4b65b4
UD
3515#endif
3516
f65fd747
UD
3517\f
3518
3519/*
3520 malloc_usable_size:
3521
3522 This routine tells you how many bytes you can actually use in an
3523 allocated chunk, which may be more than you requested (although
3524 often not). You can use this many bytes without worrying about
3525 overwriting other allocated objects. Not a particularly great
3526 programming practice, but still sometimes useful.
3527
3528*/
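/* A minimal usage sketch (illustrative only, not compiled here; the
   variable names are hypothetical).  It shows that the reported size
   may exceed the requested size and that the whole reported region
   may safely be written. */
#if 0
  char* buf = (char*) malloc(100);
  if (buf != 0) {
    size_t usable = malloc_usable_size(buf);   /* at least 100 */
    memset(buf, 0, usable);                    /* safe: stays inside the chunk */
    free(buf);
  }
#endif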
3529
3530#if __STD_C
7e3be507 3531size_t mALLOC_USABLE_SIZe(Void_t* mem)
f65fd747 3532#else
7e3be507 3533size_t mALLOC_USABLE_SIZe(mem) Void_t* mem;
3534#endif
3535{
3536 mchunkptr p;
3537
3538 if (mem == 0)
3539 return 0;
3540 else
3541 {
3542 p = mem2chunk(mem);
3543 if(!chunk_is_mmapped(p))
3544 {
3545 if (!inuse(p)) return 0;
3546 check_inuse_chunk(arena_for_ptr(mem), p);
3547 return chunksize(p) - SIZE_SZ;
3548 }
3549 return chunksize(p) - 2*SIZE_SZ;
3550 }
3551}
3552
3553
3554\f
3555
8a4b65b4 3556/* Utility to update mallinfo for malloc_stats() and mallinfo() */
f65fd747 3557
3558static void
3559#if __STD_C
3560malloc_update_mallinfo(arena *ar_ptr, struct mallinfo *mi)
3561#else
3562malloc_update_mallinfo(ar_ptr, mi) arena *ar_ptr; struct mallinfo *mi;
3563#endif
f65fd747 3564{
3565 int i, navail;
3566 mbinptr b;
3567 mchunkptr p;
3568#if MALLOC_DEBUG
3569 mchunkptr q;
3570#endif
3571 INTERNAL_SIZE_T avail;
3572
3573 (void)mutex_lock(&ar_ptr->mutex);
3574 avail = chunksize(top(ar_ptr));
3575 navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;
3576
3577 for (i = 1; i < NAV; ++i)
3578 {
3579 b = bin_at(ar_ptr, i);
3580 for (p = last(b); p != b; p = p->bk)
3581 {
3582#if MALLOC_DEBUG
3583 check_free_chunk(ar_ptr, p);
3584 for (q = next_chunk(p);
8a4b65b4 3585 q != top(ar_ptr) && inuse(q) && (long)chunksize(q) > 0;
3586 q = next_chunk(q))
3587 check_inuse_chunk(ar_ptr, q);
3588#endif
3589 avail += chunksize(p);
3590 navail++;
3591 }
3592 }
3593
3594 mi->arena = ar_ptr->size;
3595 mi->ordblks = navail;
3596 mi->uordblks = ar_ptr->size - avail;
3597 mi->fordblks = avail;
3598 mi->hblks = n_mmaps;
3599 mi->hblkhd = mmapped_mem;
3600 mi->keepcost = chunksize(top(ar_ptr));
3601
3602 (void)mutex_unlock(&ar_ptr->mutex);
3603}
3604
3605#if !defined(NO_THREADS) && MALLOC_DEBUG > 1
3606
3607/* Print the complete contents of a single heap to stderr. */
3608
3609static void
3610#if __STD_C
3611dump_heap(heap_info *heap)
3612#else
3613dump_heap(heap) heap_info *heap;
3614#endif
3615{
3616 char *ptr;
3617 mchunkptr p;
3618
3619 fprintf(stderr, "Heap %p, size %10lx:\n", heap, (long)heap->size);
3620 ptr = (heap->ar_ptr != (arena*)(heap+1)) ?
3621 (char*)(heap + 1) : (char*)(heap + 1) + sizeof(arena);
3622 p = (mchunkptr)(((unsigned long)ptr + MALLOC_ALIGN_MASK) &
3623 ~MALLOC_ALIGN_MASK);
3624 for(;;) {
3625 fprintf(stderr, "chunk %p size %10lx", p, (long)p->size);
3626 if(p == top(heap->ar_ptr)) {
3627 fprintf(stderr, " (top)\n");
3628 break;
3629 } else if(p->size == (0|PREV_INUSE)) {
3630 fprintf(stderr, " (fence)\n");
3631 break;
3632 }
3633 fprintf(stderr, "\n");
3634 p = next_chunk(p);
3635 }
3636}
3637
3638#endif
3639
3640\f
3641
3642/*
3643
3644 malloc_stats:
3645
6d52618b 3646 For all arenas separately and in total, prints on stderr the
8a4b65b4 3647 amount of space obtained from the system, and the current number
3648 of bytes allocated via malloc (or realloc, etc) but not yet
3649 freed. (Note that this is the number of bytes allocated, not the
3650 number requested. It will be larger than the number requested
3651 because of alignment and bookkeeping overhead.) When not compiled
3652 for multiple threads, the maximum amount of allocated memory
3653 (which may be more than current if malloc_trim and/or munmap got
3654 called) is also reported. When using mmap(), prints the maximum
3655 number of simultaneous mmap regions used, too.
3656
3657*/
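/* The output on stderr has roughly the following shape; the numbers
   below are invented for illustration only (extra lines appear when
   THREAD_STATS or NO_THREADS is in effect):

       Arena 0:
       system bytes =     135168
       in use bytes =      84512
       Total (incl. mmap):
       system bytes =     135168
       in use bytes =      84512
       max mmap regions =          1
*/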
3658
7e3be507 3659void mALLOC_STATs()
f65fd747 3660{
3661 int i;
3662 arena *ar_ptr;
3663 struct mallinfo mi;
3664 unsigned int in_use_b = mmapped_mem, system_b = in_use_b;
3665#if THREAD_STATS
3666 long stat_lock_direct = 0, stat_lock_loop = 0, stat_lock_wait = 0;
3667#endif
3668
7e3be507 3669 for(i=0, ar_ptr = &main_arena;; i++) {
3670 malloc_update_mallinfo(ar_ptr, &mi);
3671 fprintf(stderr, "Arena %d:\n", i);
3672 fprintf(stderr, "system bytes = %10u\n", (unsigned int)mi.arena);
3673 fprintf(stderr, "in use bytes = %10u\n", (unsigned int)mi.uordblks);
3674 system_b += mi.arena;
3675 in_use_b += mi.uordblks;
3676#if THREAD_STATS
3677 stat_lock_direct += ar_ptr->stat_lock_direct;
3678 stat_lock_loop += ar_ptr->stat_lock_loop;
3679 stat_lock_wait += ar_ptr->stat_lock_wait;
3680#endif
3681#if !defined(NO_THREADS) && MALLOC_DEBUG > 1
3682 if(ar_ptr != &main_arena) {
7e3be507 3683 (void)mutex_lock(&ar_ptr->mutex);
3684 heap_info *heap = heap_for_ptr(top(ar_ptr));
3685 while(heap) { dump_heap(heap); heap = heap->prev; }
7e3be507 3686 (void)mutex_unlock(&ar_ptr->mutex);
3687 }
3688#endif
3689 ar_ptr = ar_ptr->next;
3690 if(ar_ptr == &main_arena) break;
3691 }
3692 fprintf(stderr, "Total (incl. mmap):\n");
3693 fprintf(stderr, "system bytes = %10u\n", system_b);
3694 fprintf(stderr, "in use bytes = %10u\n", in_use_b);
3695#ifdef NO_THREADS
3696 fprintf(stderr, "max system bytes = %10u\n", (unsigned int)max_total_mem);
3697#endif
f65fd747 3698#if HAVE_MMAP
8a4b65b4 3699 fprintf(stderr, "max mmap regions = %10u\n", (unsigned int)max_n_mmaps);
3700#endif
3701#if THREAD_STATS
8a4b65b4 3702 fprintf(stderr, "heaps created = %10d\n", stat_n_heaps);
3703 fprintf(stderr, "locked directly = %10ld\n", stat_lock_direct);
3704 fprintf(stderr, "locked in loop = %10ld\n", stat_lock_loop);
3705 fprintf(stderr, "locked waiting = %10ld\n", stat_lock_wait);
3706 fprintf(stderr, "locked total = %10ld\n",
3707 stat_lock_direct + stat_lock_loop + stat_lock_wait);
3708#endif
3709}
3710
3711/*
3712 mallinfo returns a copy of the current mallinfo structure, freshly updated.
8a4b65b4 3713 The information reported is for the arena last used by the calling thread.
3714*/
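/* Illustrative sketch (not compiled here; everything except mallinfo()
   and its fields is hypothetical user code): inspecting the statistics
   for the arena last used by this thread. */
#if 0
  struct mallinfo mi = mallinfo();
  fprintf(stderr, "bytes from system: %d\n", mi.arena);
  fprintf(stderr, "bytes in use:      %d\n", mi.uordblks);
  fprintf(stderr, "bytes free:        %d\n", mi.fordblks);
  fprintf(stderr, "top chunk size:    %d\n", mi.keepcost);
#endif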
3715
3716struct mallinfo mALLINFo()
3717{
3718 struct mallinfo mi;
3719 Void_t *vptr = NULL;
3720
1228ed5c 3721#ifndef NO_THREADS
8a4b65b4 3722 tsd_getspecific(arena_key, vptr);
1228ed5c 3723#endif
3724 malloc_update_mallinfo((vptr ? (arena*)vptr : &main_arena), &mi);
3725 return mi;
3726}
3727
3728
3729\f
3730
3731/*
3732 mallopt:
3733
3734 mallopt is the general SVID/XPG interface to tunable parameters.
3735 The format is to provide a (parameter-number, parameter-value) pair.
3736 mallopt then sets the corresponding parameter to the argument
3737 value if it can (i.e., so long as the value is meaningful),
3738 and returns 1 on success, or 0 otherwise.
3739
3740 See descriptions of tunable parameters above.
3741
3742*/
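/* Illustrative calls (not compiled here).  The parameter names match
   the cases handled below; the numeric values are arbitrary examples,
   not recommendations. */
#if 0
  mallopt(M_TRIM_THRESHOLD, 256*1024);  /* release top when this much is free */
  mallopt(M_TOP_PAD,         64*1024);  /* extra padding on each sbrk request */
  mallopt(M_MMAP_THRESHOLD, 128*1024);  /* mmap() requests at least this large */
  mallopt(M_MMAP_MAX,             64);  /* limit on simultaneous mmap regions */
  mallopt(M_CHECK_ACTION,          1);  /* print a diagnostic on heap errors */
#endif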
3743
3744#if __STD_C
3745int mALLOPt(int param_number, int value)
3746#else
3747int mALLOPt(param_number, value) int param_number; int value;
3748#endif
3749{
3750 switch(param_number)
3751 {
3752 case M_TRIM_THRESHOLD:
3753 trim_threshold = value; return 1;
3754 case M_TOP_PAD:
3755 top_pad = value; return 1;
3756 case M_MMAP_THRESHOLD:
3757#ifndef NO_THREADS
3758 /* Forbid setting the threshold too high. */
3759 if((unsigned long)value > HEAP_MAX_SIZE/2) return 0;
3760#endif
3761 mmap_threshold = value; return 1;
3762 case M_MMAP_MAX:
3763#if HAVE_MMAP
3764 n_mmaps_max = value; return 1;
3765#else
3766 if (value != 0) return 0; else n_mmaps_max = value; return 1;
3767#endif
3768 case M_CHECK_ACTION:
3769 check_action = value; return 1;
3770
3771 default:
3772 return 0;
3773 }
3774}
10dc2a90 3775
3776\f
3777
3778#if defined(_LIBC) || defined(MALLOC_HOOKS)
3779
3780/* A simple, standard set of debugging hooks.  The overhead is `only' one
3781   byte per chunk; even so, this catches most cases of double free or
3782   buffer overrun. */
3783
c0e45674 3784#define MAGICBYTE(p) ( ( ((size_t)p >> 3) ^ ((size_t)p >> 11)) & 0xFF )
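/* Sketch of the scheme (informal summary of the code below): every
   chunk handed out by the checking wrappers gets one trailing tag byte
   derived from the chunk address.  `tag_byte' is a hypothetical helper
   standing in for the offset computations done in malloc_check() and
   mem2chunk_check():

       *tag_byte(p) = MAGICBYTE(p);        on allocation
       if (*tag_byte(p) != MAGICBYTE(p))   on free/realloc: reject the
         return NULL;                      pointer as bogus or overrun
       *tag_byte(p) ^= 0xFF;               invalidate the tag, so a
                                           second free() is caught      */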
f65fd747 3785
3786/* Convert a pointer to be free()d or realloc()ed to a valid chunk
3787 pointer. If the provided pointer is not valid, return NULL. The
3788 goal here is to avoid crashes, unlike in the MALLOC_DEBUG code. */
3789
3790static mchunkptr
3791#if __STD_C
3792mem2chunk_check(Void_t* mem)
3793#else
3794mem2chunk_check(mem) Void_t* mem;
f65fd747 3795#endif
3796{
3797 mchunkptr p;
3798 INTERNAL_SIZE_T sz;
3799
3800 p = mem2chunk(mem);
3801 if(!aligned_OK(p)) return NULL;
3802 if( (char*)p>=sbrk_base && (char*)p<(sbrk_base+sbrked_mem) ) {
7e3be507 3803 /* Must be a chunk in conventional heap memory. */
3804 if(chunk_is_mmapped(p) ||
3805 ( (sz = chunksize(p)), ((char*)p + sz)>=(sbrk_base+sbrked_mem) ) ||
3806 sz<MINSIZE || sz&MALLOC_ALIGN_MASK || !inuse(p) ||
3807 ( !prev_inuse(p) && (p->prev_size&MALLOC_ALIGN_MASK ||
3808 (long)prev_chunk(p)<(long)sbrk_base ||
3809 next_chunk(prev_chunk(p))!=p) ))
3810 return NULL;
3811 if(*((unsigned char*)p + sz + (SIZE_SZ-1)) != MAGICBYTE(p))
3812 return NULL;
3813 *((unsigned char*)p + sz + (SIZE_SZ-1)) ^= 0xFF;
3814 } else {
3815 unsigned long offset, page_mask = malloc_getpagesize-1;
3816
7e3be507 3817 /* mmap()ed chunks have MALLOC_ALIGNMENT or higher power-of-two
3818 alignment relative to the beginning of a page. Check this
3819 first. */
3820 offset = (unsigned long)mem & page_mask;
3821 if((offset!=MALLOC_ALIGNMENT && offset!=0 && offset!=0x10 &&
3822 offset!=0x20 && offset!=0x40 && offset!=0x80 && offset!=0x100 &&
3823 offset!=0x200 && offset!=0x400 && offset!=0x800 && offset!=0x1000 &&
3824 offset<0x2000) ||
3825 !chunk_is_mmapped(p) || (p->size & PREV_INUSE) ||
3826 ( (((unsigned long)p - p->prev_size) & page_mask) != 0 ) ||
3827 ( (sz = chunksize(p)), ((p->prev_size + sz) & page_mask) != 0 ) )
3828 return NULL;
3829 if(*((unsigned char*)p + sz - 1) != MAGICBYTE(p))
3830 return NULL;
3831 *((unsigned char*)p + sz - 1) ^= 0xFF;
3832 }
3833 return p;
3834}
3835
3836static Void_t*
3837#if __STD_C
3838malloc_check(size_t sz)
3839#else
3840malloc_check(sz) size_t sz;
3841#endif
3842{
3843 mchunkptr victim;
3844 INTERNAL_SIZE_T nb = request2size(sz + 1);
3845
3846 (void)mutex_lock(&main_arena.mutex);
3847 victim = chunk_alloc(&main_arena, nb);
3848 (void)mutex_unlock(&main_arena.mutex);
3849 if(!victim) return NULL;
3850 nb = chunksize(victim);
3851 if(chunk_is_mmapped(victim))
3852 --nb;
3853 else
3854 nb += SIZE_SZ - 1;
7e3be507 3855 *((unsigned char*)victim + nb) = MAGICBYTE(victim);
3856 return chunk2mem(victim);
3857}
3858
3859static void
3860#if __STD_C
3861free_check(Void_t* mem)
3862#else
3863free_check(mem) Void_t* mem;
3864#endif
3865{
3866 mchunkptr p;
3867
3868 if(!mem) return;
7e3be507 3869 (void)mutex_lock(&main_arena.mutex);
3870 p = mem2chunk_check(mem);
3871 if(!p) {
7e3be507 3872 (void)mutex_unlock(&main_arena.mutex);
3873 switch(check_action) {
3874 case 1:
3875 fprintf(stderr, "free(): invalid pointer %lx!\n", (long)(mem));
3876 break;
3877 case 2:
3878 abort();
3879 }
3880 return;
3881 }
3882#if HAVE_MMAP
3883 if (chunk_is_mmapped(p)) {
7e3be507 3884 (void)mutex_unlock(&main_arena.mutex);
3885 munmap_chunk(p);
3886 return;
3887 }
3888#endif
3889#if 0 /* Erase freed memory. */
3890 memset(mem, 0, chunksize(p) - (SIZE_SZ+1));
3891#endif
3892 chunk_free(&main_arena, p);
3893 (void)mutex_unlock(&main_arena.mutex);
3894}
3895
3896static Void_t*
3897#if __STD_C
3898realloc_check(Void_t* oldmem, size_t bytes)
3899#else
3900realloc_check(oldmem, bytes) Void_t* oldmem; size_t bytes;
3901#endif
3902{
3903 mchunkptr oldp, newp;
3904 INTERNAL_SIZE_T nb, oldsize;
3905
3906 if (oldmem == 0) return malloc_check(bytes);
7e3be507 3907 (void)mutex_lock(&main_arena.mutex);
3908 oldp = mem2chunk_check(oldmem);
3909 if(!oldp) {
7e3be507 3910 (void)mutex_unlock(&main_arena.mutex);
3911 switch(check_action) {
3912 case 1:
3913 fprintf(stderr, "realloc(): invalid pointer %lx!\n", (long)(oldmem));
3914 break;
3915 case 2:
3916 abort();
3917 }
3918 return malloc_check(bytes);
3919 }
3920 oldsize = chunksize(oldp);
3921
3922 nb = request2size(bytes+1);
3923
3924#if HAVE_MMAP
3925 if (chunk_is_mmapped(oldp)) {
3926#if HAVE_MREMAP
3927 newp = mremap_chunk(oldp, nb);
3928 if(!newp) {
3929#endif
3930 /* Note the extra SIZE_SZ overhead. */
3931 if(oldsize - SIZE_SZ >= nb) newp = oldp; /* do nothing */
3932 else {
3933 /* Must alloc, copy, free. */
3934 newp = chunk_alloc(&main_arena, nb);
3935 if (newp) {
3936 MALLOC_COPY(chunk2mem(newp), oldmem, oldsize - 2*SIZE_SZ);
3937 munmap_chunk(oldp);
3938 }
3939 }
3940#if HAVE_MREMAP
3941 }
3942#endif
7e3be507 3943 } else {
3944#endif /* HAVE_MMAP */
3945 newp = chunk_realloc(&main_arena, oldp, oldsize, nb);
3946#if 0 /* Erase freed memory. */
3947 nb = chunksize(newp);
3948 if(oldp<newp || oldp>=chunk_at_offset(newp, nb)) {
3949 memset((char*)oldmem + 2*sizeof(mbinptr), 0,
3950 oldsize - (2*sizeof(mbinptr)+2*SIZE_SZ+1));
3951 } else if(nb > oldsize+SIZE_SZ) {
3952 memset((char*)chunk2mem(newp) + oldsize, 0, nb - (oldsize+SIZE_SZ));
3953 }
3954#endif
3955#if HAVE_MMAP
3956 }
3957#endif
3958 (void)mutex_unlock(&main_arena.mutex);
3959
3960 if(!newp) return NULL;
3961 nb = chunksize(newp);
3962 if(chunk_is_mmapped(newp))
3963 --nb;
3964 else
3965 nb += SIZE_SZ - 1;
7e3be507 3966 *((unsigned char*)newp + nb) = MAGICBYTE(newp);
3967 return chunk2mem(newp);
3968}
3969
3970static Void_t*
3971#if __STD_C
3972memalign_check(size_t alignment, size_t bytes)
3973#else
3974memalign_check(alignment, bytes) size_t alignment; size_t bytes;
3975#endif
3976{
3977 INTERNAL_SIZE_T nb;
3978 mchunkptr p;
3979
3980 if (alignment <= MALLOC_ALIGNMENT) return malloc_check(bytes);
3981 if (alignment < MINSIZE) alignment = MINSIZE;
3982
3983 nb = request2size(bytes+1);
3984 (void)mutex_lock(&main_arena.mutex);
3985 p = chunk_align(&main_arena, nb, alignment);
3986 (void)mutex_unlock(&main_arena.mutex);
3987 if(!p) return NULL;
3988 nb = chunksize(p);
3989 if(chunk_is_mmapped(p))
3990 --nb;
3991 else
3992 nb += SIZE_SZ - 1;
7e3be507 3993 *((unsigned char*)p + nb) = MAGICBYTE(p);
3994 return chunk2mem(p);
3995}
3996
3997/* The following hooks are used when the global initialization in
3998 ptmalloc_init() hasn't completed yet. */
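/* Expected use (an assumption based on the surrounding design; the
   actual installation happens in ptmalloc_init(), earlier in this
   file): the hook pointers are temporarily redirected so that any
   allocation made during initialization is served directly from
   main_arena, roughly:

       save_hook     = __malloc_hook;      save the user's hook
       __malloc_hook = malloc_starter;
       __free_hook   = free_starter;
       ... initialize thread keys and arenas ...
       __malloc_hook = save_hook;          restore / install final hooks

   `save_hook' is a hypothetical local, not a name used in this file. */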
3999
4000static Void_t*
4001#if __STD_C
4002malloc_starter(size_t sz)
4003#else
4004malloc_starter(sz) size_t sz;
4005#endif
4006{
4007 mchunkptr victim = chunk_alloc(&main_arena, request2size(sz));
4008
4009 return victim ? chunk2mem(victim) : 0;
4010}
4011
4012static void
4013#if __STD_C
4014free_starter(Void_t* mem)
4015#else
4016free_starter(mem) Void_t* mem;
4017#endif
4018{
4019 mchunkptr p;
4020
4021 if(!mem) return;
4022 p = mem2chunk(mem);
4023#if HAVE_MMAP
4024 if (chunk_is_mmapped(p)) {
4025 munmap_chunk(p);
4026 return;
4027 }
4028#endif
4029 chunk_free(&main_arena, p);
4030}
4031
10dc2a90 4032#endif /* defined(_LIBC) || defined(MALLOC_HOOKS) */
f65fd747 4033
4034\f
4035
4036#ifdef _LIBC
4037weak_alias (__libc_calloc, __calloc) weak_alias (__libc_calloc, calloc)
4038weak_alias (__libc_free, __cfree) weak_alias (__libc_free, cfree)
4039weak_alias (__libc_free, __free) weak_alias (__libc_free, free)
4040weak_alias (__libc_malloc, __malloc) weak_alias (__libc_malloc, malloc)
4041weak_alias (__libc_memalign, __memalign) weak_alias (__libc_memalign, memalign)
4042weak_alias (__libc_realloc, __realloc) weak_alias (__libc_realloc, realloc)
4043weak_alias (__libc_valloc, __valloc) weak_alias (__libc_valloc, valloc)
4044weak_alias (__libc_pvalloc, __pvalloc) weak_alias (__libc_pvalloc, pvalloc)
4045weak_alias (__libc_mallinfo, __mallinfo) weak_alias (__libc_mallinfo, mallinfo)
4046weak_alias (__libc_mallopt, __mallopt) weak_alias (__libc_mallopt, mallopt)
4047
4048weak_alias (__malloc_stats, malloc_stats)
4049weak_alias (__malloc_usable_size, malloc_usable_size)
4050weak_alias (__malloc_trim, malloc_trim)
4051#endif
4052
4053/*
4054
4055History:
4056
4057 V2.6.4-pt2 Sat Dec 14 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4058 * Added debugging hooks
4059 * Fixed possible deadlock in realloc() when out of memory
4060 * Don't pollute namespace in glibc: use __getpagesize, __mmap, etc.
4061
4062 V2.6.4-pt Wed Dec 4 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4063 * Very minor updates from the released 2.6.4 version.
4064 * Trimmed include file down to exported data structures.
4065 * Changes from H.J. Lu for glibc-2.0.
4066
4067 V2.6.3i-pt Sep 16 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4068 * Many changes for multiple threads
4069 * Introduced arenas and heaps
4070
4071 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
 4072 * Added pvalloc, as recommended by H.J. Lu
4073 * Added 64bit pointer support mainly from Wolfram Gloger
4074 * Added anonymously donated WIN32 sbrk emulation
4075 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
4076 * malloc_extend_top: fix mask error that caused wastage after
4077 foreign sbrks
 4078 * Add linux mremap support code from H.J. Lu
4079
4080 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
4081 * Integrated most documentation with the code.
4082 * Add support for mmap, with help from
4083 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
4084 * Use last_remainder in more cases.
4085 * Pack bins using idea from colin@nyx10.cs.du.edu
6d52618b 4086 * Use ordered bins instead of best-fit threshold
4087 * Eliminate block-local decls to simplify tracing and debugging.
4088 * Support another case of realloc via move into top
6d52618b 4089 * Fix error occurring when initial sbrk_base not word-aligned.
4090 * Rely on page size for units instead of SBRK_UNIT to
4091 avoid surprises about sbrk alignment conventions.
4092 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
4093 (raymond@es.ele.tue.nl) for the suggestion.
4094 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
4095 * More precautions for cases where other routines call sbrk,
4096 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
4097 * Added macros etc., allowing use in linux libc from
4098 H.J. Lu (hjl@gnu.ai.mit.edu)
4099 * Inverted this history list
4100
4101 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
4102 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
4103 * Removed all preallocation code since under current scheme
4104 the work required to undo bad preallocations exceeds
4105 the work saved in good cases for most test programs.
4106 * No longer use return list or unconsolidated bins since
4107 no scheme using them consistently outperforms those that don't
4108 given above changes.
4109 * Use best fit for very large chunks to prevent some worst-cases.
4110 * Added some support for debugging
4111
4112 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
4113 * Removed footers when chunks are in use. Thanks to
4114 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
4115
4116 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
4117 * Added malloc_trim, with help from Wolfram Gloger
4118 (wmglo@Dent.MED.Uni-Muenchen.DE).
4119
4120 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
4121
4122 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
4123 * realloc: try to expand in both directions
4124 * malloc: swap order of clean-bin strategy;
4125 * realloc: only conditionally expand backwards
4126 * Try not to scavenge used bins
4127 * Use bin counts as a guide to preallocation
4128 * Occasionally bin return list chunks in first scan
4129 * Add a few optimizations from colin@nyx10.cs.du.edu
4130
4131 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
4132 * faster bin computation & slightly different binning
4133 * merged all consolidations to one part of malloc proper
4134 (eliminating old malloc_find_space & malloc_clean_bin)
4135 * Scan 2 returns chunks (not just 1)
4136 * Propagate failure in realloc if malloc returns 0
4137 * Add stuff to allow compilation on non-ANSI compilers
4138 from kpv@research.att.com
4139
4140 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
4141 * removed potential for odd address access in prev_chunk
4142 * removed dependency on getpagesize.h
4143 * misc cosmetics and a bit more internal documentation
4144 * anticosmetics: mangled names in macros to evade debugger strangeness
4145 * tested on sparc, hp-700, dec-mips, rs6000
4146 with gcc & native cc (hp, dec only) allowing
4147 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
4148
4149 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
4150 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
4151 structure of old version, but most details differ.)
4152
4153*/