f65fd747 1/* Malloc implementation for multiple threads without lock contention.
7c2b945e 2 Copyright (C) 1996, 1997, 1998, 1999 Free Software Foundation, Inc.
f65fd747 3 This file is part of the GNU C Library.
6259ec0d
UD
4 Contributed by Wolfram Gloger <wmglo@dent.med.uni-muenchen.de>
5 and Doug Lea <dl@cs.oswego.edu>, 1996.
f65fd747
UD
6
7 The GNU C Library is free software; you can redistribute it and/or
8 modify it under the terms of the GNU Library General Public License as
9 published by the Free Software Foundation; either version 2 of the
10 License, or (at your option) any later version.
11
12 The GNU C Library is distributed in the hope that it will be useful,
13 but WITHOUT ANY WARRANTY; without even the implied warranty of
14 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
15 Library General Public License for more details.
16
17 You should have received a copy of the GNU Library General Public
18 License along with the GNU C Library; see the file COPYING.LIB. If not,
19 write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330,
20 Boston, MA 02111-1307, USA. */
21
2f6d1f1b 22/* V2.6.4-pt3 Thu Feb 20 1997
f65fd747
UD
23
24 This work is mainly derived from malloc-2.6.4 by Doug Lea
25 <dl@cs.oswego.edu>, which is available from:
26
27 ftp://g.oswego.edu/pub/misc/malloc.c
28
29 Most of the original comments are reproduced in the code below.
30
31* Why use this malloc?
32
33 This is not the fastest, most space-conserving, most portable, or
34 most tunable malloc ever written. However it is among the fastest
35 while also being among the most space-conserving, portable and tunable.
36 Consistent balance across these factors results in a good general-purpose
37 allocator. For a high-level description, see
38 http://g.oswego.edu/dl/html/malloc.html
39
40 On many systems, the standard malloc implementation is by itself not
41 thread-safe, and therefore wrapped with a single global lock around
42 all malloc-related functions. In some applications, especially with
43 multiple available processors, this can lead to contention problems
44 and bad performance. This malloc version was designed with the goal
45 to avoid waiting for locks as much as possible. Statistics indicate
46 that this goal is achieved in many cases.
47
48* Synopsis of public routines
49
 50 (Much fuller descriptions are contained in the program documentation below; an informal usage sketch follows this list.)
51
52 ptmalloc_init();
53 Initialize global configuration. When compiled for multiple threads,
54 this function must be called once before any other function in the
10dc2a90
UD
55 package. It is not required otherwise. It is called automatically
 56 in the Linux/GNU C library or when compiling with MALLOC_HOOKS.
f65fd747
UD
57 malloc(size_t n);
58 Return a pointer to a newly allocated chunk of at least n bytes, or null
59 if no space is available.
60 free(Void_t* p);
61 Release the chunk of memory pointed to by p, or no effect if p is null.
62 realloc(Void_t* p, size_t n);
63 Return a pointer to a chunk of size n that contains the same data
64 as does chunk p up to the minimum of (n, p's size) bytes, or null
65 if no space is available. The returned pointer may or may not be
66 the same as p. If p is null, equivalent to malloc. Unless the
67 #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
68 size argument of zero (re)allocates a minimum-sized chunk.
69 memalign(size_t alignment, size_t n);
70 Return a pointer to a newly allocated chunk of n bytes, aligned
71 in accord with the alignment argument, which must be a power of
72 two.
73 valloc(size_t n);
74 Equivalent to memalign(pagesize, n), where pagesize is the page
75 size of the system (or as near to this as can be figured out from
76 all the includes/defines below.)
77 pvalloc(size_t n);
78 Equivalent to valloc(minimum-page-that-holds(n)), that is,
79 round up n to nearest pagesize.
80 calloc(size_t unit, size_t quantity);
81 Returns a pointer to quantity * unit bytes, with all locations
82 set to zero.
83 cfree(Void_t* p);
84 Equivalent to free(p).
85 malloc_trim(size_t pad);
86 Release all but pad bytes of freed top-most memory back
87 to the system. Return 1 if successful, else 0.
88 malloc_usable_size(Void_t* p);
 89 Report the number of usable allocated bytes associated with allocated
90 chunk p. This may or may not report more bytes than were requested,
91 due to alignment and minimum size constraints.
92 malloc_stats();
93 Prints brief summary statistics on stderr.
94 mallinfo()
95 Returns (by copy) a struct containing various summary statistics.
96 mallopt(int parameter_number, int parameter_value)
97 Changes one of the tunable parameters described below. Returns
98 1 if successful in changing the parameter, else 0.
99
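
  As an informal illustration (not part of this file's code; the sizes
  below are arbitrary and error handling is minimal), a client might
  call these routines as follows:

      #include <stdlib.h>
      #include <malloc.h>

      void example(void)
      {
        char *p = malloc(100);
        char *q = p ? realloc(p, 200) : NULL;
        double *v = memalign(64, 8 * sizeof(double));

        if (q != NULL)
          p = q;
        malloc_stats();
        free(v);
        free(p);
      }
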
100* Vital statistics:
101
102 Alignment: 8-byte
103 8 byte alignment is currently hardwired into the design. This
104 seems to suffice for all current machines and C compilers.
105
106 Assumed pointer representation: 4 or 8 bytes
107 Code for 8-byte pointers is untested by me but has worked
108 reliably by Wolfram Gloger, who contributed most of the
109 changes supporting this.
110
111 Assumed size_t representation: 4 or 8 bytes
112 Note that size_t is allowed to be 4 bytes even if pointers are 8.
113
114 Minimum overhead per allocated chunk: 4 or 8 bytes
115 Each malloced chunk has a hidden overhead of 4 bytes holding size
116 and status information.
117
118 Minimum allocated size: 4-byte ptrs: 16 bytes (including 4 overhead)
 119 8-byte ptrs: 24/32 bytes (including 4/8 overhead)
120
121 When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
122 ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
123 needed; 4 (8) for a trailing size field
124 and 8 (16) bytes for free list pointers. Thus, the minimum
125 allocatable size is 16/24/32 bytes.
126
127 Even a request for zero bytes (i.e., malloc(0)) returns a
128 pointer to something of the minimum allocatable size.
129
130 Maximum allocated size: 4-byte size_t: 2^31 - 8 bytes
131 8-byte size_t: 2^63 - 16 bytes
132
133 It is assumed that (possibly signed) size_t bit values suffice to
134 represent chunk sizes. `Possibly signed' is due to the fact
135 that `size_t' may be defined on a system as either a signed or
136 an unsigned type. To be conservative, values that would appear
137 as negative numbers are avoided.
 138 Requests for sizes with a negative sign bit (i.e. that overflow when
 139 padded) will fail: malloc returns null and sets errno to ENOMEM.
140
141 Maximum overhead wastage per allocated chunk: normally 15 bytes
142
6d52618b 143 Alignment demands, plus the minimum allocatable size restriction
f65fd747
UD
144 make the normal worst-case wastage 15 bytes (i.e., up to 15
145 more bytes will be allocated than were requested in malloc), with
146 two exceptions:
147 1. Because requests for zero bytes allocate non-zero space,
148 the worst case wastage for a request of zero bytes is 24 bytes.
149 2. For requests >= mmap_threshold that are serviced via
150 mmap(), the worst case wastage is 8 bytes plus the remainder
151 from a system page (the minimal mmap unit); typically 4096 bytes.
152
153* Limitations
154
155 Here are some features that are NOT currently supported
156
f65fd747
UD
157 * No automated mechanism for fully checking that all accesses
158 to malloced memory stay within their bounds.
159 * No support for compaction.
160
161* Synopsis of compile-time options:
162
163 People have reported using previous versions of this malloc on all
164 versions of Unix, sometimes by tweaking some of the defines
165 below. It has been tested most extensively on Solaris and
166 Linux. People have also reported adapting this malloc for use in
167 stand-alone embedded systems.
168
169 The implementation is in straight, hand-tuned ANSI C. Among other
170 consequences, it uses a lot of macros. Because of this, to be at
171 all usable, this code should be compiled using an optimizing compiler
172 (for example gcc -O2) that can simplify expressions and control
173 paths.
174
175 __STD_C (default: derived from C compiler defines)
176 Nonzero if using ANSI-standard C compiler, a C++ compiler, or
177 a C compiler sufficiently close to ANSI to get away with it.
178 MALLOC_DEBUG (default: NOT defined)
179 Define to enable debugging. Adds fairly extensive assertion-based
180 checking to help track down memory errors, but noticeably slows down
181 execution.
7e3be507 182 MALLOC_HOOKS (default: NOT defined)
10dc2a90
UD
 183 Define to enable support for run-time replacement of the allocation
184 functions through user-defined `hooks'.
431c33c0 185 REALLOC_ZERO_BYTES_FREES (default: defined)
f65fd747 186 Define this if you think that realloc(p, 0) should be equivalent
431c33c0
UD
187 to free(p). (The C standard requires this behaviour, therefore
 188 it is the default.) Otherwise, since malloc returns a unique
 189 pointer for malloc(0), realloc(p, 0) does too.
f65fd747
UD
190 HAVE_MEMCPY (default: defined)
191 Define if you are not otherwise using ANSI STD C, but still
192 have memcpy and memset in your C library and want to use them.
193 Otherwise, simple internal versions are supplied.
194 USE_MEMCPY (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
195 Define as 1 if you want the C library versions of memset and
196 memcpy called in realloc and calloc (otherwise macro versions are used).
197 At least on some platforms, the simple macro versions usually
198 outperform libc versions.
199 HAVE_MMAP (default: defined as 1)
200 Define to non-zero to optionally make malloc() use mmap() to
201 allocate very large blocks.
202 HAVE_MREMAP (default: defined as 0 unless Linux libc set)
203 Define to non-zero to optionally make realloc() use mremap() to
204 reallocate very large blocks.
205 malloc_getpagesize (default: derived from system #includes)
206 Either a constant or routine call returning the system page size.
207 HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
208 Optionally define if you are on a system with a /usr/include/malloc.h
209 that declares struct mallinfo. It is not at all necessary to
 210 define this even if you do, but doing so ensures consistency.
211 INTERNAL_SIZE_T (default: size_t)
212 Define to a 32-bit type (probably `unsigned int') if you are on a
213 64-bit machine, yet do not want or need to allow malloc requests of
214 greater than 2^31 to be handled. This saves space, especially for
215 very small chunks.
216 _LIBC (default: NOT defined)
217 Defined only when compiled as part of the Linux libc/glibc.
218 Also note that there is some odd internal name-mangling via defines
219 (for example, internally, `malloc' is named `mALLOc') needed
220 when compiling in this case. These look funny but don't otherwise
221 affect anything.
222 LACKS_UNISTD_H (default: undefined)
223 Define this if your system does not have a <unistd.h>.
224 MORECORE (default: sbrk)
225 The name of the routine to call to obtain more memory from the system.
226 MORECORE_FAILURE (default: -1)
227 The value returned upon failure of MORECORE.
228 MORECORE_CLEARS (default 1)
229 True (1) if the routine mapped to MORECORE zeroes out memory (which
230 holds for sbrk).
231 DEFAULT_TRIM_THRESHOLD
232 DEFAULT_TOP_PAD
233 DEFAULT_MMAP_THRESHOLD
234 DEFAULT_MMAP_MAX
235 Default values of tunable parameters (described in detail below)
236 controlling interaction with host system routines (sbrk, mmap, etc).
237 These values may also be changed dynamically via mallopt(). The
238 preset defaults are those that give best performance for typical
239 programs/systems.
10dc2a90
UD
240 DEFAULT_CHECK_ACTION
241 When the standard debugging hooks are in place, and a pointer is
242 detected as corrupt, do nothing (0), print an error message (1),
243 or call abort() (2).
f65fd747
UD
244
245
246*/
247
248/*
249
250* Compile-time options for multiple threads:
251
252 USE_PTHREADS, USE_THR, USE_SPROC
253 Define one of these as 1 to select the thread interface:
254 POSIX threads, Solaris threads or SGI sproc's, respectively.
255 If none of these is defined as non-zero, you get a `normal'
256 malloc implementation which is not thread-safe. Support for
257 multiple threads requires HAVE_MMAP=1. As an exception, when
258 compiling for GNU libc, i.e. when _LIBC is defined, then none of
259 the USE_... symbols have to be defined.
260
261 HEAP_MIN_SIZE
262 HEAP_MAX_SIZE
263 When thread support is enabled, additional `heap's are created
264 with mmap calls. These are limited in size; HEAP_MIN_SIZE should
265 be a multiple of the page size, while HEAP_MAX_SIZE must be a power
266 of two for alignment reasons. HEAP_MAX_SIZE should be at least
267 twice as large as the mmap threshold.
268 THREAD_STATS
269 When this is defined as non-zero, some statistics on mutex locking
270 are computed.
271
272*/
273
274\f
275
276
f65fd747
UD
277/* Preliminaries */
278
279#ifndef __STD_C
280#if defined (__STDC__)
281#define __STD_C 1
282#else
283#if __cplusplus
284#define __STD_C 1
285#else
286#define __STD_C 0
287#endif /*__cplusplus*/
288#endif /*__STDC__*/
289#endif /*__STD_C*/
290
291#ifndef Void_t
292#if __STD_C
293#define Void_t void
294#else
295#define Void_t char
296#endif
297#endif /*Void_t*/
298
299#if __STD_C
10dc2a90 300# include <stddef.h> /* for size_t */
dfd2257a 301# if defined _LIBC || defined MALLOC_HOOKS
7e3be507 302# include <stdlib.h> /* for getenv(), abort() */
10dc2a90 303# endif
f65fd747 304#else
10dc2a90 305# include <sys/types.h>
f65fd747
UD
306#endif
307
d01d6319
UD
308#ifndef _LIBC
309# define __secure_getenv(Str) getenv (Str)
310#endif
311
8a4b65b4
UD
312/* Macros for handling mutexes and thread-specific data. This is
313 included early, because some thread-related header files (such as
314 pthread.h) should be included before any others. */
315#include "thread-m.h"
316
f65fd747
UD
317#ifdef __cplusplus
318extern "C" {
319#endif
320
2556bfe6 321#include <errno.h>
f65fd747
UD
322#include <stdio.h> /* needed for malloc_stats */
323
324
325/*
326 Compile-time options
327*/
328
329
330/*
331 Debugging:
332
333 Because freed chunks may be overwritten with link fields, this
334 malloc will often die when freed memory is overwritten by user
335 programs. This can be very effective (albeit in an annoying way)
336 in helping track down dangling pointers.
337
338 If you compile with -DMALLOC_DEBUG, a number of assertion checks are
339 enabled that will catch more memory errors. You probably won't be
340 able to make much sense of the actual assertion errors, but they
341 should help you locate incorrectly overwritten memory. The
342 checking is fairly extensive, and will slow down execution
343 noticeably. Calling malloc_stats or mallinfo with MALLOC_DEBUG set will
344 attempt to check every non-mmapped allocated and free chunk in the
6d52618b 345 course of computing the summaries. (By nature, mmapped regions
f65fd747
UD
346 cannot be checked very much automatically.)
347
348 Setting MALLOC_DEBUG may also be helpful if you are trying to modify
349 this code. The assertions in the check routines spell out in more
350 detail the assumptions and invariants underlying the algorithms.
351
352*/
353
354#if MALLOC_DEBUG
355#include <assert.h>
356#else
357#define assert(x) ((void)0)
358#endif
359
360
361/*
362 INTERNAL_SIZE_T is the word-size used for internal bookkeeping
363 of chunk sizes. On a 64-bit machine, you can reduce malloc
364 overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
365 at the expense of not being able to handle requests greater than
366 2^31. This limitation is hardly ever a concern; you are encouraged
367 to set this. However, the default version is the same as size_t.
368*/
369
370#ifndef INTERNAL_SIZE_T
371#define INTERNAL_SIZE_T size_t
372#endif
373
374/*
431c33c0
UD
375 REALLOC_ZERO_BYTES_FREES should be set if a call to realloc with
376 zero bytes should be the same as a call to free. The C standard
377 requires this. Otherwise, since this malloc returns a unique pointer
 378 for malloc(0), realloc(p, 0) does too.
f65fd747
UD
379*/
380
381
7c2b945e 382#define REALLOC_ZERO_BYTES_FREES
f65fd747
UD
383
384
385/*
386 HAVE_MEMCPY should be defined if you are not otherwise using
387 ANSI STD C, but still have memcpy and memset in your C library
388 and want to use them in calloc and realloc. Otherwise simple
389 macro versions are defined here.
390
391 USE_MEMCPY should be defined as 1 if you actually want to
392 have memset and memcpy called. People report that the macro
393 versions are often enough faster than libc versions on many
394 systems that it is better to use them.
395
396*/
397
10dc2a90 398#define HAVE_MEMCPY 1
f65fd747
UD
399
400#ifndef USE_MEMCPY
401#ifdef HAVE_MEMCPY
402#define USE_MEMCPY 1
403#else
404#define USE_MEMCPY 0
405#endif
406#endif
407
408#if (__STD_C || defined(HAVE_MEMCPY))
409
410#if __STD_C
411void* memset(void*, int, size_t);
412void* memcpy(void*, const void*, size_t);
413#else
414Void_t* memset();
415Void_t* memcpy();
416#endif
417#endif
418
419#if USE_MEMCPY
420
421/* The following macros are only invoked with (2n+1)-multiples of
422 INTERNAL_SIZE_T units, with a positive integer n. This is exploited
423 for fast inline execution when n is small. */
424
425#define MALLOC_ZERO(charp, nbytes) \
426do { \
427 INTERNAL_SIZE_T mzsz = (nbytes); \
428 if(mzsz <= 9*sizeof(mzsz)) { \
429 INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp); \
430 if(mzsz >= 5*sizeof(mzsz)) { *mz++ = 0; \
431 *mz++ = 0; \
432 if(mzsz >= 7*sizeof(mzsz)) { *mz++ = 0; \
433 *mz++ = 0; \
434 if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0; \
435 *mz++ = 0; }}} \
436 *mz++ = 0; \
437 *mz++ = 0; \
438 *mz = 0; \
439 } else memset((charp), 0, mzsz); \
440} while(0)
441
442#define MALLOC_COPY(dest,src,nbytes) \
443do { \
444 INTERNAL_SIZE_T mcsz = (nbytes); \
445 if(mcsz <= 9*sizeof(mcsz)) { \
446 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src); \
447 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest); \
448 if(mcsz >= 5*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
449 *mcdst++ = *mcsrc++; \
450 if(mcsz >= 7*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
451 *mcdst++ = *mcsrc++; \
452 if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
453 *mcdst++ = *mcsrc++; }}} \
454 *mcdst++ = *mcsrc++; \
455 *mcdst++ = *mcsrc++; \
456 *mcdst = *mcsrc ; \
457 } else memcpy(dest, src, mcsz); \
458} while(0)
459
460#else /* !USE_MEMCPY */
461
462/* Use Duff's device for good zeroing/copying performance. */
463
464#define MALLOC_ZERO(charp, nbytes) \
465do { \
466 INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp); \
467 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
468 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
469 switch (mctmp) { \
470 case 0: for(;;) { *mzp++ = 0; \
471 case 7: *mzp++ = 0; \
472 case 6: *mzp++ = 0; \
473 case 5: *mzp++ = 0; \
474 case 4: *mzp++ = 0; \
475 case 3: *mzp++ = 0; \
476 case 2: *mzp++ = 0; \
477 case 1: *mzp++ = 0; if(mcn <= 0) break; mcn--; } \
478 } \
479} while(0)
480
481#define MALLOC_COPY(dest,src,nbytes) \
482do { \
483 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src; \
484 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest; \
485 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
486 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
487 switch (mctmp) { \
488 case 0: for(;;) { *mcdst++ = *mcsrc++; \
489 case 7: *mcdst++ = *mcsrc++; \
490 case 6: *mcdst++ = *mcsrc++; \
491 case 5: *mcdst++ = *mcsrc++; \
492 case 4: *mcdst++ = *mcsrc++; \
493 case 3: *mcdst++ = *mcsrc++; \
494 case 2: *mcdst++ = *mcsrc++; \
495 case 1: *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; } \
496 } \
497} while(0)
498
499#endif
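
/* Illustrative only: MALLOC_ZERO and MALLOC_COPY are internal helpers
   used by calloc and realloc.  As noted above, the USE_MEMCPY macro
   versions assume the length is an odd multiple (2n+1, n a positive
   integer) of sizeof(INTERNAL_SIZE_T) units.  A minimal sketch, never
   compiled or called here: */
#if 0
static void example_zero_copy(void)
{
  INTERNAL_SIZE_T src[7], dst[7];   /* 7 units: an allowed (2n+1) size */

  MALLOC_ZERO((char*)src, 7 * sizeof(INTERNAL_SIZE_T));
  MALLOC_COPY((char*)dst, (char*)src, 7 * sizeof(INTERNAL_SIZE_T));
}
#endif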
500
501
7cabd57c
UD
502#ifndef LACKS_UNISTD_H
503# include <unistd.h>
504#endif
505
f65fd747
UD
506/*
507 Define HAVE_MMAP to optionally make malloc() use mmap() to
508 allocate very large blocks. These will be returned to the
509 operating system immediately after a free().
510*/
511
512#ifndef HAVE_MMAP
7cabd57c
UD
513# ifdef _POSIX_MAPPED_FILES
514# define HAVE_MMAP 1
515# endif
f65fd747
UD
516#endif
517
518/*
519 Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
520 large blocks. This is currently only possible on Linux with
521 kernel versions newer than 1.3.77.
522*/
523
524#ifndef HAVE_MREMAP
d71b808a 525#define HAVE_MREMAP defined(__linux__) && !defined(__arm__)
f65fd747
UD
526#endif
527
528#if HAVE_MMAP
529
530#include <unistd.h>
531#include <fcntl.h>
532#include <sys/mman.h>
533
534#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
535#define MAP_ANONYMOUS MAP_ANON
536#endif
431c33c0
UD
537#if !defined(MAP_FAILED)
538#define MAP_FAILED ((char*)-1)
539#endif
f65fd747 540
b22fc5f5
UD
541#ifndef MAP_NORESERVE
542# ifdef MAP_AUTORESRV
543# define MAP_NORESERVE MAP_AUTORESRV
544# else
545# define MAP_NORESERVE 0
546# endif
547#endif
548
f65fd747
UD
549#endif /* HAVE_MMAP */
550
551/*
552 Access to system page size. To the extent possible, this malloc
553 manages memory from the system in page-size units.
554
555 The following mechanics for getpagesize were adapted from
556 bsd/gnu getpagesize.h
557*/
558
f65fd747
UD
559#ifndef malloc_getpagesize
560# ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
561# ifndef _SC_PAGE_SIZE
562# define _SC_PAGE_SIZE _SC_PAGESIZE
563# endif
564# endif
565# ifdef _SC_PAGE_SIZE
566# define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
567# else
568# if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
569 extern size_t getpagesize();
570# define malloc_getpagesize getpagesize()
571# else
572# include <sys/param.h>
573# ifdef EXEC_PAGESIZE
574# define malloc_getpagesize EXEC_PAGESIZE
575# else
576# ifdef NBPG
577# ifndef CLSIZE
578# define malloc_getpagesize NBPG
579# else
580# define malloc_getpagesize (NBPG * CLSIZE)
581# endif
582# else
583# ifdef NBPC
584# define malloc_getpagesize NBPC
585# else
586# ifdef PAGESIZE
587# define malloc_getpagesize PAGESIZE
588# else
589# define malloc_getpagesize (4096) /* just guess */
590# endif
591# endif
592# endif
593# endif
594# endif
595# endif
596#endif
597
598
599
600/*
601
602 This version of malloc supports the standard SVID/XPG mallinfo
603 routine that returns a struct containing the same kind of
604 information you can get from malloc_stats. It should work on
605 any SVID/XPG compliant system that has a /usr/include/malloc.h
606 defining struct mallinfo. (If you'd like to install such a thing
607 yourself, cut out the preliminary declarations as described above
608 and below and save them in a malloc.h file. But there's no
609 compelling reason to bother to do this.)
610
611 The main declaration needed is the mallinfo struct that is returned
 612 (by-copy) by mallinfo(). The SVID/XPG mallinfo struct contains a
 613 bunch of fields, most of which are not even meaningful in this
 614 version of malloc. Some of these fields are instead filled by
615 mallinfo() with other numbers that might possibly be of interest.
616
617 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
618 /usr/include/malloc.h file that includes a declaration of struct
619 mallinfo. If so, it is included; else an SVID2/XPG2 compliant
620 version is declared below. These must be precisely the same for
621 mallinfo() to work.
622
623*/
624
625/* #define HAVE_USR_INCLUDE_MALLOC_H */
626
627#if HAVE_USR_INCLUDE_MALLOC_H
8a4b65b4 628# include "/usr/include/malloc.h"
f65fd747 629#else
8a4b65b4
UD
630# ifdef _LIBC
631# include "malloc.h"
632# else
633# include "ptmalloc.h"
634# endif
f65fd747
UD
635#endif
636
637
638
639#ifndef DEFAULT_TRIM_THRESHOLD
640#define DEFAULT_TRIM_THRESHOLD (128 * 1024)
641#endif
642
643/*
644 M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
645 to keep before releasing via malloc_trim in free().
646
647 Automatic trimming is mainly useful in long-lived programs.
648 Because trimming via sbrk can be slow on some systems, and can
649 sometimes be wasteful (in cases where programs immediately
650 afterward allocate more large chunks) the value should be high
651 enough so that your overall system performance would improve by
652 releasing.
653
654 The trim threshold and the mmap control parameters (see below)
655 can be traded off with one another. Trimming and mmapping are
656 two different ways of releasing unused memory back to the
657 system. Between these two, it is often possible to keep
658 system-level demands of a long-lived program down to a bare
659 minimum. For example, in one test suite of sessions measuring
660 the XF86 X server on Linux, using a trim threshold of 128K and a
661 mmap threshold of 192K led to near-minimal long term resource
662 consumption.
663
664 If you are using this malloc in a long-lived program, it should
665 pay to experiment with these values. As a rough guide, you
 666 might set it to a value close to the average size of a process
667 (program) running on your system. Releasing this much memory
668 would allow such a process to run in memory. Generally, it's
831372e7 669 worth it to tune for trimming rather than memory mapping when a
f65fd747
UD
670 program undergoes phases where several large chunks are
671 allocated and released in ways that can reuse each other's
672 storage, perhaps mixed with phases where there are no such
673 chunks at all. And in well-behaved long-lived programs,
674 controlling release of large blocks via trimming versus mapping
675 is usually faster.
676
677 However, in most programs, these parameters serve mainly as
678 protection against the system-level effects of carrying around
679 massive amounts of unneeded memory. Since frequent calls to
680 sbrk, mmap, and munmap otherwise degrade performance, the default
681 parameters are set to relatively high values that serve only as
682 safeguards.
683
684 The default trim value is high enough to cause trimming only in
685 fairly extreme (by current memory consumption standards) cases.
686 It must be greater than page size to have any useful effect. To
 687 disable trimming completely, you can set it to (unsigned long)(-1).
688
689
690*/
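
/* Illustrative only: a long-lived program can adjust this at run time
   through mallopt().  M_TRIM_THRESHOLD (and M_TOP_PAD, described next)
   are declared in malloc.h/ptmalloc.h; the values below are arbitrary
   examples and this code is never compiled here: */
#if 0
static void example_tune_trimming(void)
{
  /* Trim back to the system once more than 256K of top memory is free. */
  mallopt(M_TRIM_THRESHOLD, 256 * 1024);
  /* Keep 64K of extra padding on each sbrk to reduce call frequency. */
  mallopt(M_TOP_PAD, 64 * 1024);
}
#endif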
691
692
693#ifndef DEFAULT_TOP_PAD
694#define DEFAULT_TOP_PAD (0)
695#endif
696
697/*
698 M_TOP_PAD is the amount of extra `padding' space to allocate or
699 retain whenever sbrk is called. It is used in two ways internally:
700
701 * When sbrk is called to extend the top of the arena to satisfy
702 a new malloc request, this much padding is added to the sbrk
703 request.
704
705 * When malloc_trim is called automatically from free(),
706 it is used as the `pad' argument.
707
708 In both cases, the actual amount of padding is rounded
709 so that the end of the arena is always a system page boundary.
710
711 The main reason for using padding is to avoid calling sbrk so
712 often. Having even a small pad greatly reduces the likelihood
713 that nearly every malloc request during program start-up (or
714 after trimming) will invoke sbrk, which needlessly wastes
715 time.
716
717 Automatic rounding-up to page-size units is normally sufficient
718 to avoid measurable overhead, so the default is 0. However, in
719 systems where sbrk is relatively slow, it can pay to increase
720 this value, at the expense of carrying around more memory than
721 the program needs.
722
723*/
724
725
726#ifndef DEFAULT_MMAP_THRESHOLD
727#define DEFAULT_MMAP_THRESHOLD (128 * 1024)
728#endif
729
730/*
731
732 M_MMAP_THRESHOLD is the request size threshold for using mmap()
733 to service a request. Requests of at least this size that cannot
734 be allocated using already-existing space will be serviced via mmap.
735 (If enough normal freed space already exists it is used instead.)
736
737 Using mmap segregates relatively large chunks of memory so that
738 they can be individually obtained and released from the host
739 system. A request serviced through mmap is never reused by any
740 other request (at least not directly; the system may just so
741 happen to remap successive requests to the same locations).
742
743 Segregating space in this way has the benefit that mmapped space
744 can ALWAYS be individually released back to the system, which
745 helps keep the system level memory demands of a long-lived
746 program low. Mapped memory can never become `locked' between
747 other chunks, as can happen with normally allocated chunks, which
 748 means that even trimming via malloc_trim would not release them.
749
750 However, it has the disadvantages that:
751
752 1. The space cannot be reclaimed, consolidated, and then
753 used to service later requests, as happens with normal chunks.
754 2. It can lead to more wastage because of mmap page alignment
755 requirements
756 3. It causes malloc performance to be more dependent on host
757 system memory management support routines which may vary in
758 implementation quality and may impose arbitrary
759 limitations. Generally, servicing a request via normal
760 malloc steps is faster than going through a system's mmap.
761
762 All together, these considerations should lead you to use mmap
763 only for relatively large requests.
764
765
766*/
767
768
769
770#ifndef DEFAULT_MMAP_MAX
771#if HAVE_MMAP
772#define DEFAULT_MMAP_MAX (1024)
773#else
774#define DEFAULT_MMAP_MAX (0)
775#endif
776#endif
777
778/*
779 M_MMAP_MAX is the maximum number of requests to simultaneously
780 service using mmap. This parameter exists because:
781
782 1. Some systems have a limited number of internal tables for
783 use by mmap.
784 2. In most systems, overreliance on mmap can degrade overall
785 performance.
786 3. If a program allocates many large regions, it is probably
787 better off using normal sbrk-based allocation routines that
788 can reclaim and reallocate normal heap memory. Using a
789 small value allows transition into this mode after the
790 first few allocations.
791
792 Setting to 0 disables all use of mmap. If HAVE_MMAP is not set,
793 the default value is 0, and attempts to set it to non-zero values
794 in mallopt will fail.
795*/
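
/* Illustrative only: the mmap parameters can be tuned the same way via
   mallopt().  M_MMAP_THRESHOLD and M_MMAP_MAX are declared in
   malloc.h/ptmalloc.h; the values below are arbitrary examples: */
#if 0
static void example_tune_mmap(void)
{
  /* Service requests of 256K or more through mmap()... */
  mallopt(M_MMAP_THRESHOLD, 256 * 1024);
  /* ...or disable mmap-based allocation entirely. */
  /* mallopt(M_MMAP_MAX, 0); */
}
#endif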
796
797
798
10dc2a90
UD
799#ifndef DEFAULT_CHECK_ACTION
800#define DEFAULT_CHECK_ACTION 1
801#endif
802
803/* What to do if the standard debugging hooks are in place and a
804 corrupt pointer is detected: do nothing (0), print an error message
805 (1), or call abort() (2). */
806
807
808
f65fd747
UD
809#define HEAP_MIN_SIZE (32*1024)
810#define HEAP_MAX_SIZE (1024*1024) /* must be a power of two */
811
812/* HEAP_MIN_SIZE and HEAP_MAX_SIZE limit the size of mmap()ed heaps
813 that are dynamically created for multi-threaded programs. The
814 maximum size must be a power of two, for fast determination of
815 which heap belongs to a chunk. It should be much larger than
816 the mmap threshold, so that requests with a size just below that
817 threshold can be fulfilled without creating too many heaps.
818*/
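
/* Because each heap starts at an address aligned to HEAP_MAX_SIZE and
   HEAP_MAX_SIZE is a power of two, the heap_info (and from it the
   arena) owning a chunk in such a heap can be found by simply masking
   the chunk's address.  A sketch of the idea, illustrative only
   (heap_info is defined further below, and the allocator's thread
   support uses an equivalent lookup internally): */
#if 0
#define example_heap_for_ptr(ptr) \
  ((heap_info *)((unsigned long)(ptr) & ~(HEAP_MAX_SIZE - 1)))
#endif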
819
820
821
822#ifndef THREAD_STATS
823#define THREAD_STATS 0
824#endif
825
826/* If THREAD_STATS is non-zero, some statistics on mutex locking are
827 computed. */
828
829
9ae6fc54
UD
830/* Macro to set errno. */
831#ifndef __set_errno
832# define __set_errno(val) errno = (val)
833#endif
431c33c0
UD
834
835/* On some platforms we can compile internal, not exported functions better.
836 Let the environment provide a macro and define it to be empty if it
837 is not available. */
838#ifndef internal_function
839# define internal_function
840#endif
841
842
f65fd747
UD
843/*
844
845 Special defines for the Linux/GNU C library.
846
847*/
848
849
850#ifdef _LIBC
851
852#if __STD_C
853
854Void_t * __default_morecore (ptrdiff_t);
1228ed5c 855Void_t *(*__morecore)(ptrdiff_t) = __default_morecore;
f65fd747
UD
856
857#else
858
859Void_t * __default_morecore ();
1228ed5c 860Void_t *(*__morecore)() = __default_morecore;
f65fd747
UD
861
862#endif
863
b3864d70
UD
864static size_t __libc_pagesize;
865
f65fd747
UD
866#define MORECORE (*__morecore)
867#define MORECORE_FAILURE 0
868#define MORECORE_CLEARS 1
10dc2a90
UD
869#define mmap __mmap
870#define munmap __munmap
871#define mremap __mremap
4cca6b86 872#define mprotect __mprotect
10dc2a90 873#undef malloc_getpagesize
b3864d70 874#define malloc_getpagesize __libc_pagesize
f65fd747
UD
875
876#else /* _LIBC */
877
878#if __STD_C
879extern Void_t* sbrk(ptrdiff_t);
880#else
881extern Void_t* sbrk();
882#endif
883
884#ifndef MORECORE
885#define MORECORE sbrk
886#endif
887
888#ifndef MORECORE_FAILURE
889#define MORECORE_FAILURE -1
890#endif
891
892#ifndef MORECORE_CLEARS
893#define MORECORE_CLEARS 1
894#endif
895
896#endif /* _LIBC */
897
10dc2a90 898#ifdef _LIBC
f65fd747
UD
899
900#define cALLOc __libc_calloc
901#define fREe __libc_free
902#define mALLOc __libc_malloc
903#define mEMALIGn __libc_memalign
904#define rEALLOc __libc_realloc
905#define vALLOc __libc_valloc
906#define pvALLOc __libc_pvalloc
907#define mALLINFo __libc_mallinfo
908#define mALLOPt __libc_mallopt
7e3be507
UD
909#define mALLOC_STATs __malloc_stats
910#define mALLOC_USABLE_SIZe __malloc_usable_size
911#define mALLOC_TRIm __malloc_trim
2f6d1f1b
UD
912#define mALLOC_GET_STATe __malloc_get_state
913#define mALLOC_SET_STATe __malloc_set_state
f65fd747 914
f65fd747
UD
915#else
916
917#define cALLOc calloc
918#define fREe free
919#define mALLOc malloc
920#define mEMALIGn memalign
921#define rEALLOc realloc
922#define vALLOc valloc
923#define pvALLOc pvalloc
924#define mALLINFo mallinfo
925#define mALLOPt mallopt
7e3be507
UD
926#define mALLOC_STATs malloc_stats
927#define mALLOC_USABLE_SIZe malloc_usable_size
928#define mALLOC_TRIm malloc_trim
2f6d1f1b
UD
929#define mALLOC_GET_STATe malloc_get_state
930#define mALLOC_SET_STATe malloc_set_state
f65fd747
UD
931
932#endif
933
934/* Public routines */
935
936#if __STD_C
937
938#ifndef _LIBC
939void ptmalloc_init(void);
940#endif
941Void_t* mALLOc(size_t);
942void fREe(Void_t*);
943Void_t* rEALLOc(Void_t*, size_t);
944Void_t* mEMALIGn(size_t, size_t);
945Void_t* vALLOc(size_t);
946Void_t* pvALLOc(size_t);
947Void_t* cALLOc(size_t, size_t);
948void cfree(Void_t*);
7e3be507
UD
949int mALLOC_TRIm(size_t);
950size_t mALLOC_USABLE_SIZe(Void_t*);
951void mALLOC_STATs(void);
f65fd747
UD
952int mALLOPt(int, int);
953struct mallinfo mALLINFo(void);
2f6d1f1b
UD
954Void_t* mALLOC_GET_STATe(void);
955int mALLOC_SET_STATe(Void_t*);
956
957#else /* !__STD_C */
958
f65fd747
UD
959#ifndef _LIBC
960void ptmalloc_init();
961#endif
962Void_t* mALLOc();
963void fREe();
964Void_t* rEALLOc();
965Void_t* mEMALIGn();
966Void_t* vALLOc();
967Void_t* pvALLOc();
968Void_t* cALLOc();
969void cfree();
7e3be507
UD
970int mALLOC_TRIm();
971size_t mALLOC_USABLE_SIZe();
972void mALLOC_STATs();
f65fd747
UD
973int mALLOPt();
974struct mallinfo mALLINFo();
2f6d1f1b
UD
975Void_t* mALLOC_GET_STATe();
976int mALLOC_SET_STATe();
977
978#endif /* __STD_C */
f65fd747
UD
979
980
981#ifdef __cplusplus
982}; /* end of extern "C" */
983#endif
984
985#if !defined(NO_THREADS) && !HAVE_MMAP
986"Can't have threads support without mmap"
987#endif
988
989
990/*
991 Type declarations
992*/
993
994
995struct malloc_chunk
996{
997 INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
998 INTERNAL_SIZE_T size; /* Size in bytes, including overhead. */
999 struct malloc_chunk* fd; /* double links -- used only if free. */
1000 struct malloc_chunk* bk;
1001};
1002
1003typedef struct malloc_chunk* mchunkptr;
1004
1005/*
1006
1007 malloc_chunk details:
1008
1009 (The following includes lightly edited explanations by Colin Plumb.)
1010
1011 Chunks of memory are maintained using a `boundary tag' method as
1012 described in e.g., Knuth or Standish. (See the paper by Paul
1013 Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
1014 survey of such techniques.) Sizes of free chunks are stored both
1015 in the front of each chunk and at the end. This makes
1016 consolidating fragmented chunks into bigger chunks very fast. The
1017 size fields also hold bits representing whether chunks are free or
1018 in use.
1019
1020 An allocated chunk looks like this:
1021
1022
1023 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1024 | Size of previous chunk, if allocated | |
1025 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1026 | Size of chunk, in bytes |P|
1027 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1028 | User data starts here... .
1029 . .
 1030 . (malloc_usable_size() bytes) .
1031 . |
1032nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1033 | Size of chunk |
1034 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1035
1036
1037 Where "chunk" is the front of the chunk for the purpose of most of
1038 the malloc code, but "mem" is the pointer that is returned to the
1039 user. "Nextchunk" is the beginning of the next contiguous chunk.
1040
6d52618b 1041 Chunks always begin on even word boundaries, so the mem portion
f65fd747
UD
1042 (which is returned to the user) is also on an even word boundary, and
1043 thus double-word aligned.
1044
1045 Free chunks are stored in circular doubly-linked lists, and look like this:
1046
1047 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1048 | Size of previous chunk |
1049 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1050 `head:' | Size of chunk, in bytes |P|
1051 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1052 | Forward pointer to next chunk in list |
1053 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1054 | Back pointer to previous chunk in list |
1055 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1056 | Unused space (may be 0 bytes long) .
1057 . .
1058 . |
1059nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1060 `foot:' | Size of chunk, in bytes |
1061 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1062
1063 The P (PREV_INUSE) bit, stored in the unused low-order bit of the
1064 chunk size (which is always a multiple of two words), is an in-use
1065 bit for the *previous* chunk. If that bit is *clear*, then the
1066 word before the current chunk size contains the previous chunk
1067 size, and can be used to find the front of the previous chunk.
1068 (The very first chunk allocated always has this bit set,
1069 preventing access to non-existent (or non-owned) memory.)
1070
1071 Note that the `foot' of the current chunk is actually represented
1072 as the prev_size of the NEXT chunk. (This makes it easier to
1073 deal with alignments etc).
1074
1075 The two exceptions to all this are
1076
1077 1. The special chunk `top', which doesn't bother using the
1078 trailing size field since there is no
1079 next contiguous chunk that would have to index off it. (After
1080 initialization, `top' is forced to always exist. If it would
1081 become less than MINSIZE bytes long, it is replenished via
1082 malloc_extend_top.)
1083
1084 2. Chunks allocated via mmap, which have the second-lowest-order
1085 bit (IS_MMAPPED) set in their size fields. Because they are
1086 never merged or traversed from any other chunk, they have no
1087 foot size or inuse information.
1088
1089 Available chunks are kept in any of several places (all declared below):
1090
1091 * `av': An array of chunks serving as bin headers for consolidated
1092 chunks. Each bin is doubly linked. The bins are approximately
1093 proportionally (log) spaced. There are a lot of these bins
1094 (128). This may look excessive, but works very well in
1095 practice. All procedures maintain the invariant that no
1096 consolidated chunk physically borders another one. Chunks in
1097 bins are kept in size order, with ties going to the
1098 approximately least recently used chunk.
1099
1100 The chunks in each bin are maintained in decreasing sorted order by
1101 size. This is irrelevant for the small bins, which all contain
1102 the same-sized chunks, but facilitates best-fit allocation for
1103 larger chunks. (These lists are just sequential. Keeping them in
1104 order almost never requires enough traversal to warrant using
1105 fancier ordered data structures.) Chunks of the same size are
1106 linked with the most recently freed at the front, and allocations
1107 are taken from the back. This results in LRU or FIFO allocation
1108 order, which tends to give each chunk an equal opportunity to be
1109 consolidated with adjacent freed chunks, resulting in larger free
1110 chunks and less fragmentation.
1111
1112 * `top': The top-most available chunk (i.e., the one bordering the
1113 end of available memory) is treated specially. It is never
1114 included in any bin, is used only if no other chunk is
1115 available, and is released back to the system if it is very
1116 large (see M_TRIM_THRESHOLD).
1117
1118 * `last_remainder': A bin holding only the remainder of the
1119 most recently split (non-top) chunk. This bin is checked
1120 before other non-fitting chunks, so as to provide better
1121 locality for runs of sequentially allocated chunks.
1122
1123 * Implicitly, through the host system's memory mapping tables.
1124 If supported, requests greater than a threshold are usually
1125 serviced via calls to mmap, and then later released via munmap.
1126
1127*/
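
/* Illustrative only: in terms of the macros defined below (chunk2mem,
   mem2chunk, chunksize, chunk_at_offset, next_chunk, inuse), navigating
   the layout described above looks like this sketch, which is never
   compiled here: */
#if 0
static void example_walk(Void_t *mem)
{
  mchunkptr p = mem2chunk(mem);           /* header is 2*SIZE_SZ before mem */
  INTERNAL_SIZE_T sz = chunksize(p);      /* size with status bits masked   */
  mchunkptr nxt = chunk_at_offset(p, sz); /* next physical chunk            */

  /* p's own in-use status lives in the PREV_INUSE bit of the NEXT chunk: */
  if (inuse(p))
    assert(chunk2mem(p) == mem && nxt == next_chunk(p));
}
#endif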
1128
1129/*
1130 Bins
1131
1132 The bins are an array of pairs of pointers serving as the
1133 heads of (initially empty) doubly-linked lists of chunks, laid out
1134 in a way so that each pair can be treated as if it were in a
1135 malloc_chunk. (This way, the fd/bk offsets for linking bin heads
1136 and chunks are the same).
1137
1138 Bins for sizes < 512 bytes contain chunks of all the same size, spaced
1139 8 bytes apart. Larger bins are approximately logarithmically
1140 spaced. (See the table below.)
1141
1142 Bin layout:
1143
1144 64 bins of size 8
1145 32 bins of size 64
1146 16 bins of size 512
1147 8 bins of size 4096
1148 4 bins of size 32768
1149 2 bins of size 262144
1150 1 bin of size what's left
1151
1152 There is actually a little bit of slop in the numbers in bin_index
1153 for the sake of speed. This makes no difference elsewhere.
1154
1155 The special chunks `top' and `last_remainder' get their own bins,
1156 (this is implemented via yet more trickery with the av array),
1157 although `top' is never properly linked to its bin since it is
1158 always handled specially.
1159
1160*/
1161
1162#define NAV 128 /* number of bins */
1163
1164typedef struct malloc_chunk* mbinptr;
1165
1166/* An arena is a configuration of malloc_chunks together with an array
1167 of bins. With multiple threads, it must be locked via a mutex
1168 before changing its data structures. One or more `heaps' are
1169 associated with each arena, except for the main_arena, which is
1170 associated only with the `main heap', i.e. the conventional free
1171 store obtained with calls to MORECORE() (usually sbrk). The `av'
1172 array is never mentioned directly in the code, but instead used via
1173 bin access macros. */
1174
1175typedef struct _arena {
1176 mbinptr av[2*NAV + 2];
1177 struct _arena *next;
8a4b65b4
UD
1178 size_t size;
1179#if THREAD_STATS
1180 long stat_lock_direct, stat_lock_loop, stat_lock_wait;
1181#endif
f65fd747
UD
1182 mutex_t mutex;
1183} arena;
1184
1185
6d52618b 1186/* A heap is a single contiguous memory region holding (coalesceable)
f65fd747
UD
1187 malloc_chunks. It is allocated with mmap() and always starts at an
1188 address aligned to HEAP_MAX_SIZE. Not used unless compiling for
1189 multiple threads. */
1190
1191typedef struct _heap_info {
8a4b65b4
UD
1192 arena *ar_ptr; /* Arena for this heap. */
1193 struct _heap_info *prev; /* Previous heap. */
1194 size_t size; /* Current size in bytes. */
1195 size_t pad; /* Make sure the following data is properly aligned. */
f65fd747
UD
1196} heap_info;
1197
1198
1199/*
1200 Static functions (forward declarations)
1201*/
1202
1203#if __STD_C
10dc2a90 1204
dfd2257a
UD
1205static void chunk_free(arena *ar_ptr, mchunkptr p) internal_function;
1206static mchunkptr chunk_alloc(arena *ar_ptr, INTERNAL_SIZE_T size)
1207 internal_function;
10dc2a90 1208static mchunkptr chunk_realloc(arena *ar_ptr, mchunkptr oldp,
dfd2257a
UD
1209 INTERNAL_SIZE_T oldsize, INTERNAL_SIZE_T nb)
1210 internal_function;
10dc2a90 1211static mchunkptr chunk_align(arena *ar_ptr, INTERNAL_SIZE_T nb,
dfd2257a
UD
1212 size_t alignment) internal_function;
1213static int main_trim(size_t pad) internal_function;
8a4b65b4 1214#ifndef NO_THREADS
dfd2257a 1215static int heap_trim(heap_info *heap, size_t pad) internal_function;
8a4b65b4 1216#endif
dfd2257a 1217#if defined _LIBC || defined MALLOC_HOOKS
a2b08ee5
UD
1218static Void_t* malloc_check(size_t sz, const Void_t *caller);
1219static void free_check(Void_t* mem, const Void_t *caller);
1220static Void_t* realloc_check(Void_t* oldmem, size_t bytes,
1221 const Void_t *caller);
1222static Void_t* memalign_check(size_t alignment, size_t bytes,
1223 const Void_t *caller);
ee74a442 1224#ifndef NO_THREADS
a2b08ee5
UD
1225static Void_t* malloc_starter(size_t sz, const Void_t *caller);
1226static void free_starter(Void_t* mem, const Void_t *caller);
1227static Void_t* malloc_atfork(size_t sz, const Void_t *caller);
1228static void free_atfork(Void_t* mem, const Void_t *caller);
a2b08ee5 1229#endif
ee74a442 1230#endif
10dc2a90 1231
f65fd747 1232#else
10dc2a90 1233
f65fd747
UD
1234static void chunk_free();
1235static mchunkptr chunk_alloc();
10dc2a90
UD
1236static mchunkptr chunk_realloc();
1237static mchunkptr chunk_align();
8a4b65b4
UD
1238static int main_trim();
1239#ifndef NO_THREADS
1240static int heap_trim();
1241#endif
dfd2257a 1242#if defined _LIBC || defined MALLOC_HOOKS
10dc2a90
UD
1243static Void_t* malloc_check();
1244static void free_check();
1245static Void_t* realloc_check();
1246static Void_t* memalign_check();
ee74a442 1247#ifndef NO_THREADS
7e3be507
UD
1248static Void_t* malloc_starter();
1249static void free_starter();
ca34d7a7
UD
1250static Void_t* malloc_atfork();
1251static void free_atfork();
10dc2a90 1252#endif
ee74a442 1253#endif
10dc2a90 1254
f65fd747
UD
1255#endif
1256
1257\f
1258
1259/* sizes, alignments */
1260
1261#define SIZE_SZ (sizeof(INTERNAL_SIZE_T))
1262#define MALLOC_ALIGNMENT (SIZE_SZ + SIZE_SZ)
1263#define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
1264#define MINSIZE (sizeof(struct malloc_chunk))
1265
1266/* conversion from malloc headers to user pointers, and back */
1267
1268#define chunk2mem(p) ((Void_t*)((char*)(p) + 2*SIZE_SZ))
1269#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
1270
2e65ca2b 1271/* pad request bytes into a usable size, return non-zero on overflow */
f65fd747 1272
2e65ca2b
UD
1273#define request2size(req, nb) \
1274 ((nb = (req) + (SIZE_SZ + MALLOC_ALIGN_MASK)),\
597d10a0 1275 ((long)nb <= 0 || nb < (INTERNAL_SIZE_T) (req) \
9ae6fc54
UD
1276 ? (__set_errno (ENOMEM), 1) \
1277 : ((nb < (MINSIZE + MALLOC_ALIGN_MASK) \
597d10a0 1278 ? (nb = MINSIZE) : (nb &= ~MALLOC_ALIGN_MASK)), 0)))
f65fd747
UD
1279
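
/* Illustrative only, assuming a 4-byte INTERNAL_SIZE_T (so SIZE_SZ == 4,
   MALLOC_ALIGN_MASK == 7 and MINSIZE == 16); never compiled here: */
#if 0
static void example_request2size(void)
{
  INTERNAL_SIZE_T nb;

  request2size(0, nb);   assert(nb == 16);   /* 0+4+7 = 11 < 23  -> MINSIZE */
  request2size(10, nb);  assert(nb == 16);   /* 10+4+7 = 21 < 23 -> MINSIZE */
  request2size(20, nb);  assert(nb == 24);   /* 20+4+7 = 31, rounded to 24  */
  assert(request2size((size_t)-1, nb) != 0); /* overflow: fails with ENOMEM */
}
#endif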
1280/* Check if m has acceptable alignment */
1281
1282#define aligned_OK(m) (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
1283
1284
1285\f
1286
1287/*
1288 Physical chunk operations
1289*/
1290
1291
1292/* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
1293
1294#define PREV_INUSE 0x1
1295
1296/* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
1297
1298#define IS_MMAPPED 0x2
1299
1300/* Bits to mask off when extracting size */
1301
1302#define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
1303
1304
1305/* Ptr to next physical malloc_chunk. */
1306
1307#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
1308
1309/* Ptr to previous physical malloc_chunk */
1310
1311#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
1312
1313
1314/* Treat space at ptr + offset as a chunk */
1315
1316#define chunk_at_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
1317
1318
1319\f
1320
1321/*
1322 Dealing with use bits
1323*/
1324
1325/* extract p's inuse bit */
1326
1327#define inuse(p) \
1328 ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
1329
1330/* extract inuse bit of previous chunk */
1331
1332#define prev_inuse(p) ((p)->size & PREV_INUSE)
1333
1334/* check for mmap()'ed chunk */
1335
1336#define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
1337
1338/* set/clear chunk as in use without otherwise disturbing */
1339
1340#define set_inuse(p) \
1341 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
1342
1343#define clear_inuse(p) \
1344 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
1345
1346/* check/set/clear inuse bits in known places */
1347
1348#define inuse_bit_at_offset(p, s)\
1349 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
1350
1351#define set_inuse_bit_at_offset(p, s)\
1352 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
1353
1354#define clear_inuse_bit_at_offset(p, s)\
1355 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
1356
1357
1358\f
1359
1360/*
1361 Dealing with size fields
1362*/
1363
1364/* Get size, ignoring use bits */
1365
1366#define chunksize(p) ((p)->size & ~(SIZE_BITS))
1367
1368/* Set size at head, without disturbing its use bit */
1369
1370#define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s)))
1371
1372/* Set size/use ignoring previous bits in header */
1373
1374#define set_head(p, s) ((p)->size = (s))
1375
1376/* Set size at footer (only when chunk is not in use) */
1377
1378#define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
1379
1380
1381\f
1382
1383
1384/* access macros */
1385
1386#define bin_at(a, i) ((mbinptr)((char*)&(((a)->av)[2*(i) + 2]) - 2*SIZE_SZ))
1387#define init_bin(a, i) ((a)->av[2*i+2] = (a)->av[2*i+3] = bin_at((a), i))
1388#define next_bin(b) ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
1389#define prev_bin(b) ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))
1390
1391/*
1392 The first 2 bins are never indexed. The corresponding av cells are instead
1393 used for bookkeeping. This is not to save space, but to simplify
1394 indexing, maintain locality, and avoid some initialization tests.
1395*/
1396
1397#define binblocks(a) (bin_at(a,0)->size)/* bitvector of nonempty blocks */
1398#define top(a) (bin_at(a,0)->fd) /* The topmost chunk */
1399#define last_remainder(a) (bin_at(a,1)) /* remainder from last split */
1400
1401/*
1402 Because top initially points to its own bin with initial
1403 zero size, thus forcing extension on the first malloc request,
1404 we avoid having any special code in malloc to check whether
 1405 it even exists yet. But we still need to check in malloc_extend_top.
1406*/
1407
1408#define initial_top(a) ((mchunkptr)bin_at(a, 0))
1409
1410\f
1411
1412/* field-extraction macros */
1413
1414#define first(b) ((b)->fd)
1415#define last(b) ((b)->bk)
1416
1417/*
1418 Indexing into bins
1419*/
1420
dfd2257a
UD
1421#define bin_index(sz) \
1422(((((unsigned long)(sz)) >> 9) == 0) ? (((unsigned long)(sz)) >> 3):\
1423 ((((unsigned long)(sz)) >> 9) <= 4) ? 56 + (((unsigned long)(sz)) >> 6):\
1424 ((((unsigned long)(sz)) >> 9) <= 20) ? 91 + (((unsigned long)(sz)) >> 9):\
1425 ((((unsigned long)(sz)) >> 9) <= 84) ? 110 + (((unsigned long)(sz)) >> 12):\
1426 ((((unsigned long)(sz)) >> 9) <= 340) ? 119 + (((unsigned long)(sz)) >> 15):\
1427 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18):\
f65fd747
UD
1428 126)
1429/*
1430 bins for chunks < 512 are all spaced 8 bytes apart, and hold
1431 identically sized chunks. This is exploited in malloc.
1432*/
1433
1434#define MAX_SMALLBIN 63
1435#define MAX_SMALLBIN_SIZE 512
1436#define SMALLBIN_WIDTH 8
1437
1438#define smallbin_index(sz) (((unsigned long)(sz)) >> 3)
1439
1440/*
1441 Requests are `small' if both the corresponding and the next bin are small
1442*/
1443
1444#define is_small_request(nb) ((nb) < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
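
/* Illustrative only: a few sample size-to-bin mappings.  Sizes below 512
   map to exact-size small bins; larger sizes fall into the log-spaced
   ranges handled by bin_index above: */
#if 0
static void example_bin_index(void)
{
  assert(smallbin_index(16)  == 2);     /* 16 >> 3                   */
  assert(smallbin_index(504) == 63);    /* the largest small bin     */
  assert(bin_index(1024)     == 72);    /* 56  + (1024  >> 6)        */
  assert(bin_index(8192)     == 107);   /* 91  + (8192  >> 9)        */
  assert(bin_index(65536)    == 121);   /* 119 + (65536 >> 15)       */
  assert(bin_index(1 << 20)  == 126);   /* everything huge: last bin */
}
#endif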
1445
1446\f
1447
1448/*
1449 To help compensate for the large number of bins, a one-level index
1450 structure is used for bin-by-bin searching. `binblocks' is a
1451 one-word bitvector recording whether groups of BINBLOCKWIDTH bins
1452 have any (possibly) non-empty bins, so they can be skipped over
 1453 all at once during traversals. The bits are NOT always
1454 cleared as soon as all bins in a block are empty, but instead only
1455 when all are noticed to be empty during traversal in malloc.
1456*/
1457
1458#define BINBLOCKWIDTH 4 /* bins per block */
1459
1460/* bin<->block macros */
1461
1462#define idx2binblock(ix) ((unsigned)1 << ((ix) / BINBLOCKWIDTH))
1463#define mark_binblock(a, ii) (binblocks(a) |= idx2binblock(ii))
1464#define clear_binblock(a, ii) (binblocks(a) &= ~(idx2binblock(ii)))
1465
1466
1467\f
1468
1469/* Static bookkeeping data */
1470
1471/* Helper macro to initialize bins */
1472#define IAV(i) bin_at(&main_arena, i), bin_at(&main_arena, i)
1473
1474static arena main_arena = {
1475 {
1476 0, 0,
1477 IAV(0), IAV(1), IAV(2), IAV(3), IAV(4), IAV(5), IAV(6), IAV(7),
1478 IAV(8), IAV(9), IAV(10), IAV(11), IAV(12), IAV(13), IAV(14), IAV(15),
1479 IAV(16), IAV(17), IAV(18), IAV(19), IAV(20), IAV(21), IAV(22), IAV(23),
1480 IAV(24), IAV(25), IAV(26), IAV(27), IAV(28), IAV(29), IAV(30), IAV(31),
1481 IAV(32), IAV(33), IAV(34), IAV(35), IAV(36), IAV(37), IAV(38), IAV(39),
1482 IAV(40), IAV(41), IAV(42), IAV(43), IAV(44), IAV(45), IAV(46), IAV(47),
1483 IAV(48), IAV(49), IAV(50), IAV(51), IAV(52), IAV(53), IAV(54), IAV(55),
1484 IAV(56), IAV(57), IAV(58), IAV(59), IAV(60), IAV(61), IAV(62), IAV(63),
1485 IAV(64), IAV(65), IAV(66), IAV(67), IAV(68), IAV(69), IAV(70), IAV(71),
1486 IAV(72), IAV(73), IAV(74), IAV(75), IAV(76), IAV(77), IAV(78), IAV(79),
1487 IAV(80), IAV(81), IAV(82), IAV(83), IAV(84), IAV(85), IAV(86), IAV(87),
1488 IAV(88), IAV(89), IAV(90), IAV(91), IAV(92), IAV(93), IAV(94), IAV(95),
1489 IAV(96), IAV(97), IAV(98), IAV(99), IAV(100), IAV(101), IAV(102), IAV(103),
1490 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
1491 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
1492 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
1493 },
7e3be507 1494 &main_arena, /* next */
8a4b65b4
UD
1495 0, /* size */
1496#if THREAD_STATS
1497 0, 0, 0, /* stat_lock_direct, stat_lock_loop, stat_lock_wait */
1498#endif
f65fd747
UD
1499 MUTEX_INITIALIZER /* mutex */
1500};
1501
1502#undef IAV
1503
1504/* Thread specific data */
1505
8a4b65b4 1506#ifndef NO_THREADS
f65fd747
UD
1507static tsd_key_t arena_key;
1508static mutex_t list_lock = MUTEX_INITIALIZER;
8a4b65b4 1509#endif
f65fd747
UD
1510
1511#if THREAD_STATS
f65fd747 1512static int stat_n_heaps = 0;
f65fd747
UD
1513#define THREAD_STAT(x) x
1514#else
1515#define THREAD_STAT(x) do ; while(0)
1516#endif
1517
1518/* variables holding tunable values */
1519
1520static unsigned long trim_threshold = DEFAULT_TRIM_THRESHOLD;
1521static unsigned long top_pad = DEFAULT_TOP_PAD;
1522static unsigned int n_mmaps_max = DEFAULT_MMAP_MAX;
1523static unsigned long mmap_threshold = DEFAULT_MMAP_THRESHOLD;
10dc2a90 1524static int check_action = DEFAULT_CHECK_ACTION;
f65fd747
UD
1525
1526/* The first value returned from sbrk */
1527static char* sbrk_base = (char*)(-1);
1528
1529/* The maximum memory obtained from system via sbrk */
1530static unsigned long max_sbrked_mem = 0;
1531
8a4b65b4
UD
1532/* The maximum via either sbrk or mmap (too difficult to track with threads) */
1533#ifdef NO_THREADS
f65fd747 1534static unsigned long max_total_mem = 0;
8a4b65b4 1535#endif
f65fd747
UD
1536
1537/* The total memory obtained from system via sbrk */
8a4b65b4 1538#define sbrked_mem (main_arena.size)
f65fd747
UD
1539
1540/* Tracking mmaps */
1541
1542static unsigned int n_mmaps = 0;
1543static unsigned int max_n_mmaps = 0;
1544static unsigned long mmapped_mem = 0;
1545static unsigned long max_mmapped_mem = 0;
1546
1547
1548\f
831372e7
UD
1549#ifndef _LIBC
1550#define weak_variable
1551#else
1552/* In GNU libc we want the hook variables to be weak definitions to
1553 avoid a problem with Emacs. */
1554#define weak_variable weak_function
1555#endif
7e3be507
UD
1556
1557/* Already initialized? */
9756dfe1 1558int __malloc_initialized = -1;
f65fd747
UD
1559
1560
ee74a442
UD
1561#ifndef NO_THREADS
1562
ca34d7a7
UD
1563/* The following two functions are registered via thread_atfork() to
1564 make sure that the mutexes remain in a consistent state in the
1565 fork()ed version of a thread. Also adapt the malloc and free hooks
1566 temporarily, because the `atfork' handler mechanism may use
1567 malloc/free internally (e.g. in LinuxThreads). */
1568
dfd2257a 1569#if defined _LIBC || defined MALLOC_HOOKS
a2b08ee5
UD
1570static __malloc_ptr_t (*save_malloc_hook) __MALLOC_P ((size_t __size,
1571 const __malloc_ptr_t));
1572static void (*save_free_hook) __MALLOC_P ((__malloc_ptr_t __ptr,
1573 const __malloc_ptr_t));
1574static Void_t* save_arena;
a2b08ee5 1575#endif
ca34d7a7
UD
1576
1577static void
1578ptmalloc_lock_all __MALLOC_P((void))
1579{
1580 arena *ar_ptr;
1581
1582 (void)mutex_lock(&list_lock);
1583 for(ar_ptr = &main_arena;;) {
1584 (void)mutex_lock(&ar_ptr->mutex);
1585 ar_ptr = ar_ptr->next;
1586 if(ar_ptr == &main_arena) break;
1587 }
dfd2257a 1588#if defined _LIBC || defined MALLOC_HOOKS
ca34d7a7
UD
1589 save_malloc_hook = __malloc_hook;
1590 save_free_hook = __free_hook;
1591 __malloc_hook = malloc_atfork;
1592 __free_hook = free_atfork;
1593 /* Only the current thread may perform malloc/free calls now. */
1594 tsd_getspecific(arena_key, save_arena);
1595 tsd_setspecific(arena_key, (Void_t*)0);
1596#endif
1597}
1598
1599static void
1600ptmalloc_unlock_all __MALLOC_P((void))
1601{
1602 arena *ar_ptr;
1603
dfd2257a 1604#if defined _LIBC || defined MALLOC_HOOKS
ca34d7a7
UD
1605 tsd_setspecific(arena_key, save_arena);
1606 __malloc_hook = save_malloc_hook;
1607 __free_hook = save_free_hook;
1608#endif
1609 for(ar_ptr = &main_arena;;) {
1610 (void)mutex_unlock(&ar_ptr->mutex);
1611 ar_ptr = ar_ptr->next;
1612 if(ar_ptr == &main_arena) break;
1613 }
1614 (void)mutex_unlock(&list_lock);
1615}
1616
eb406346
UD
1617static void
1618ptmalloc_init_all __MALLOC_P((void))
1619{
1620 arena *ar_ptr;
1621
1622#if defined _LIBC || defined MALLOC_HOOKS
1623 tsd_setspecific(arena_key, save_arena);
1624 __malloc_hook = save_malloc_hook;
1625 __free_hook = save_free_hook;
1626#endif
1627 for(ar_ptr = &main_arena;;) {
1628 (void)mutex_init(&ar_ptr->mutex);
1629 ar_ptr = ar_ptr->next;
1630 if(ar_ptr == &main_arena) break;
1631 }
1632 (void)mutex_init(&list_lock);
1633}
1634
431c33c0 1635#endif /* !defined NO_THREADS */
ee74a442 1636
f65fd747
UD
1637/* Initialization routine. */
1638#if defined(_LIBC)
10dc2a90 1639#if 0
f65fd747 1640static void ptmalloc_init __MALLOC_P ((void)) __attribute__ ((constructor));
10dc2a90 1641#endif
f65fd747
UD
1642
1643static void
1644ptmalloc_init __MALLOC_P((void))
1645#else
1646void
1647ptmalloc_init __MALLOC_P((void))
1648#endif
1649{
dfd2257a 1650#if defined _LIBC || defined MALLOC_HOOKS
10dc2a90
UD
1651 const char* s;
1652#endif
f65fd747 1653
9756dfe1
UD
1654 if(__malloc_initialized >= 0) return;
1655 __malloc_initialized = 0;
f341c297
UD
1656#ifdef _LIBC
1657 __libc_pagesize = __getpagesize();
1658#endif
ee74a442 1659#ifndef NO_THREADS
dfd2257a 1660#if defined _LIBC || defined MALLOC_HOOKS
7e3be507
UD
1661 /* With some thread implementations, creating thread-specific data
1662 or initializing a mutex may call malloc() itself. Provide a
1663 simple starter version (realloc() won't work). */
1664 save_malloc_hook = __malloc_hook;
1665 save_free_hook = __free_hook;
1666 __malloc_hook = malloc_starter;
1667 __free_hook = free_starter;
1668#endif
ee74a442 1669#ifdef _LIBC
8a4b65b4 1670 /* Initialize the pthreads interface. */
f65fd747 1671 if (__pthread_initialize != NULL)
8a4b65b4 1672 __pthread_initialize();
f65fd747 1673#endif
10dc2a90
UD
1674 mutex_init(&main_arena.mutex);
1675 mutex_init(&list_lock);
1676 tsd_key_create(&arena_key, NULL);
1677 tsd_setspecific(arena_key, (Void_t *)&main_arena);
eb406346 1678 thread_atfork(ptmalloc_lock_all, ptmalloc_unlock_all, ptmalloc_init_all);
94ffedf6 1679#endif /* !defined NO_THREADS */
dfd2257a 1680#if defined _LIBC || defined MALLOC_HOOKS
d01d6319 1681 if((s = __secure_getenv("MALLOC_TRIM_THRESHOLD_")))
831372e7 1682 mALLOPt(M_TRIM_THRESHOLD, atoi(s));
d01d6319 1683 if((s = __secure_getenv("MALLOC_TOP_PAD_")))
831372e7 1684 mALLOPt(M_TOP_PAD, atoi(s));
d01d6319 1685 if((s = __secure_getenv("MALLOC_MMAP_THRESHOLD_")))
831372e7 1686 mALLOPt(M_MMAP_THRESHOLD, atoi(s));
d01d6319 1687 if((s = __secure_getenv("MALLOC_MMAP_MAX_")))
831372e7 1688 mALLOPt(M_MMAP_MAX, atoi(s));
10dc2a90 1689 s = getenv("MALLOC_CHECK_");
ee74a442 1690#ifndef NO_THREADS
7e3be507
UD
1691 __malloc_hook = save_malloc_hook;
1692 __free_hook = save_free_hook;
ee74a442 1693#endif
10dc2a90 1694 if(s) {
831372e7
UD
1695 if(s[0]) mALLOPt(M_CHECK_ACTION, (int)(s[0] - '0'));
1696 __malloc_check_init();
f65fd747 1697 }
10dc2a90
UD
1698 if(__malloc_initialize_hook != NULL)
1699 (*__malloc_initialize_hook)();
1700#endif
9756dfe1 1701 __malloc_initialized = 1;
f65fd747
UD
1702}
1703
ca34d7a7
UD
1704/* There are platforms (e.g. Hurd) with a link-time hook mechanism. */
1705#ifdef thread_atfork_static
1706thread_atfork_static(ptmalloc_lock_all, ptmalloc_unlock_all, \
eb406346 1707 ptmalloc_init_all)
ca34d7a7
UD
1708#endif
1709
dfd2257a 1710#if defined _LIBC || defined MALLOC_HOOKS
10dc2a90
UD
1711
1712/* Hooks for debugging versions. The initial hooks just call the
1713 initialization routine, then do the normal work. */
1714
1715static Void_t*
1716#if __STD_C
9a51759b 1717malloc_hook_ini(size_t sz, const __malloc_ptr_t caller)
10dc2a90 1718#else
9a51759b
UD
1719malloc_hook_ini(sz, caller)
1720 size_t sz; const __malloc_ptr_t caller;
a2b08ee5 1721#endif
10dc2a90
UD
1722{
1723 __malloc_hook = NULL;
10dc2a90
UD
1724 ptmalloc_init();
1725 return mALLOc(sz);
1726}
1727
1728static Void_t*
1729#if __STD_C
dfd2257a 1730realloc_hook_ini(Void_t* ptr, size_t sz, const __malloc_ptr_t caller)
10dc2a90 1731#else
dfd2257a
UD
1732realloc_hook_ini(ptr, sz, caller)
1733 Void_t* ptr; size_t sz; const __malloc_ptr_t caller;
a2b08ee5 1734#endif
10dc2a90
UD
1735{
1736 __malloc_hook = NULL;
1737 __realloc_hook = NULL;
10dc2a90
UD
1738 ptmalloc_init();
1739 return rEALLOc(ptr, sz);
1740}
1741
1742static Void_t*
1743#if __STD_C
dfd2257a 1744memalign_hook_ini(size_t sz, size_t alignment, const __malloc_ptr_t caller)
10dc2a90 1745#else
dfd2257a
UD
1746memalign_hook_ini(sz, alignment, caller)
1747 size_t sz; size_t alignment; const __malloc_ptr_t caller;
a2b08ee5 1748#endif
10dc2a90 1749{
10dc2a90
UD
1750 __memalign_hook = NULL;
1751 ptmalloc_init();
1752 return mEMALIGn(sz, alignment);
1753}
1754
831372e7 1755void weak_variable (*__malloc_initialize_hook) __MALLOC_P ((void)) = NULL;
a2b08ee5
UD
1756void weak_variable (*__free_hook) __MALLOC_P ((__malloc_ptr_t __ptr,
1757 const __malloc_ptr_t)) = NULL;
1758__malloc_ptr_t weak_variable (*__malloc_hook)
1759 __MALLOC_P ((size_t __size, const __malloc_ptr_t)) = malloc_hook_ini;
1760__malloc_ptr_t weak_variable (*__realloc_hook)
1761 __MALLOC_P ((__malloc_ptr_t __ptr, size_t __size, const __malloc_ptr_t))
1762 = realloc_hook_ini;
1763__malloc_ptr_t weak_variable (*__memalign_hook)
1764 __MALLOC_P ((size_t __size, size_t __alignment, const __malloc_ptr_t))
1765 = memalign_hook_ini;
1228ed5c 1766void weak_variable (*__after_morecore_hook) __MALLOC_P ((void)) = NULL;
10dc2a90 1767
9a51759b
UD
1768/* Whether we are using malloc checking. */
1769static int using_malloc_checking;
1770
1771/* A flag that is set by malloc_set_state, to signal that malloc checking
1772 must not be enabled at the request of the user (via the MALLOC_CHECK_
1773 environment variable). It is reset by __malloc_check_init to tell
1774 malloc_set_state that the user has requested malloc checking.
1775
1776 The purpose of this flag is to make sure that malloc checking is not
1777 enabled when the heap to be restored was constructed without malloc
1778 checking, and thus does not contain the required magic bytes.
1779 Otherwise the heap would be corrupted by calls to free and realloc. If
1780 it turns out that the heap was created with malloc checking and the
1781 user has requested it, malloc_set_state just calls __malloc_check_init
1782 again to enable it. On the other hand, reusing such a heap without
1783 further malloc checking is safe. */
1784static int disallow_malloc_check;
1785
10dc2a90
UD
1786/* Activate a standard set of debugging hooks. */
1787void
831372e7 1788__malloc_check_init()
10dc2a90 1789{
9a51759b
UD
1790 if (disallow_malloc_check) {
1791 disallow_malloc_check = 0;
1792 return;
1793 }
1794 using_malloc_checking = 1;
10dc2a90
UD
1795 __malloc_hook = malloc_check;
1796 __free_hook = free_check;
1797 __realloc_hook = realloc_check;
1798 __memalign_hook = memalign_check;
b3864d70 1799 if(check_action & 1)
7e3be507 1800 fprintf(stderr, "malloc: using debugging hooks\n");
10dc2a90
UD
1801}
1802
1803#endif
1804
f65fd747
UD
1805
1806\f
1807
1808
1809/* Routines dealing with mmap(). */
1810
1811#if HAVE_MMAP
1812
1813#ifndef MAP_ANONYMOUS
1814
1815static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1816
3ddfec55 1817#define MMAP(addr, size, prot, flags) ((dev_zero_fd < 0) ? \
f65fd747 1818 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1819 mmap((addr), (size), (prot), (flags), dev_zero_fd, 0)) : \
1820 mmap((addr), (size), (prot), (flags), dev_zero_fd, 0))
f65fd747
UD
1821
1822#else
1823
3ddfec55
UD
1824#define MMAP(addr, size, prot, flags) \
1825 (mmap((addr), (size), (prot), (flags)|MAP_ANONYMOUS, -1, 0))
f65fd747
UD
1826
1827#endif
1828
dfd2257a
UD
1829#if defined __GNUC__ && __GNUC__ >= 2
1830/* This function is only called from one place, inline it. */
1831inline
1832#endif
af6f3906 1833static mchunkptr
dfd2257a 1834internal_function
f65fd747 1835#if __STD_C
dfd2257a 1836mmap_chunk(size_t size)
f65fd747 1837#else
dfd2257a 1838mmap_chunk(size) size_t size;
f65fd747
UD
1839#endif
1840{
1841 size_t page_mask = malloc_getpagesize - 1;
1842 mchunkptr p;
1843
1844 if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */
1845
1846 /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
1847 * there is no following chunk whose prev_size field could be used.
1848 */
1849 size = (size + SIZE_SZ + page_mask) & ~page_mask;
1850
3ddfec55 1851 p = (mchunkptr)MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE);
0413b54c 1852 if(p == (mchunkptr) MAP_FAILED) return 0;
f65fd747
UD
1853
1854 n_mmaps++;
1855 if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
1856
1857 /* We demand that the address eight bytes into a page be 8-byte aligned. */
1858 assert(aligned_OK(chunk2mem(p)));
1859
1860 /* The offset to the start of the mmapped region is stored
1861 * in the prev_size field of the chunk; normally it is zero,
1862 * but that can be changed in memalign().
1863 */
1864 p->prev_size = 0;
1865 set_head(p, size|IS_MMAPPED);
1866
1867 mmapped_mem += size;
1868 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1869 max_mmapped_mem = mmapped_mem;
8a4b65b4 1870#ifdef NO_THREADS
f65fd747
UD
1871 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1872 max_total_mem = mmapped_mem + sbrked_mem;
8a4b65b4 1873#endif
f65fd747
UD
1874 return p;
1875}
1876
431c33c0
UD
1877static void
1878internal_function
f65fd747 1879#if __STD_C
431c33c0 1880munmap_chunk(mchunkptr p)
f65fd747 1881#else
431c33c0 1882munmap_chunk(p) mchunkptr p;
f65fd747
UD
1883#endif
1884{
1885 INTERNAL_SIZE_T size = chunksize(p);
1886 int ret;
1887
1888 assert (chunk_is_mmapped(p));
1889 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1890 assert((n_mmaps > 0));
1891 assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
1892
1893 n_mmaps--;
1894 mmapped_mem -= (size + p->prev_size);
1895
1896 ret = munmap((char *)p - p->prev_size, size + p->prev_size);
1897
1898 /* munmap returns non-zero on failure */
1899 assert(ret == 0);
1900}
1901
1902#if HAVE_MREMAP
1903
431c33c0
UD
1904static mchunkptr
1905internal_function
f65fd747 1906#if __STD_C
431c33c0 1907mremap_chunk(mchunkptr p, size_t new_size)
f65fd747 1908#else
431c33c0 1909mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
f65fd747
UD
1910#endif
1911{
1912 size_t page_mask = malloc_getpagesize - 1;
1913 INTERNAL_SIZE_T offset = p->prev_size;
1914 INTERNAL_SIZE_T size = chunksize(p);
1915 char *cp;
1916
1917 assert (chunk_is_mmapped(p));
1918 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1919 assert((n_mmaps > 0));
1920 assert(((size + offset) & (malloc_getpagesize-1)) == 0);
1921
1922 /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
1923 new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;
1924
1925 cp = (char *)mremap((char *)p - offset, size + offset, new_size,
1926 MREMAP_MAYMOVE);
1927
431c33c0 1928 if (cp == MAP_FAILED) return 0;
f65fd747
UD
1929
1930 p = (mchunkptr)(cp + offset);
1931
1932 assert(aligned_OK(chunk2mem(p)));
1933
1934 assert((p->prev_size == offset));
1935 set_head(p, (new_size - offset)|IS_MMAPPED);
1936
1937 mmapped_mem -= size + offset;
1938 mmapped_mem += new_size;
1939 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1940 max_mmapped_mem = mmapped_mem;
8a4b65b4 1941#ifdef NO_THREADS
f65fd747
UD
1942 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1943 max_total_mem = mmapped_mem + sbrked_mem;
8a4b65b4 1944#endif
f65fd747
UD
1945 return p;
1946}
1947
1948#endif /* HAVE_MREMAP */
1949
1950#endif /* HAVE_MMAP */
1951
1952\f
1953
1954/* Managing heaps and arenas (for concurrent threads) */
1955
1956#ifndef NO_THREADS
1957
1958/* Create a new heap. size is automatically rounded up to a multiple
1959 of the page size. */
1960
1961static heap_info *
dfd2257a 1962internal_function
f65fd747
UD
1963#if __STD_C
1964new_heap(size_t size)
1965#else
1966new_heap(size) size_t size;
1967#endif
1968{
1969 size_t page_mask = malloc_getpagesize - 1;
1970 char *p1, *p2;
1971 unsigned long ul;
1972 heap_info *h;
1973
7799b7b3 1974 if(size+top_pad < HEAP_MIN_SIZE)
f65fd747 1975 size = HEAP_MIN_SIZE;
7799b7b3
UD
1976 else if(size+top_pad <= HEAP_MAX_SIZE)
1977 size += top_pad;
1978 else if(size > HEAP_MAX_SIZE)
f65fd747 1979 return 0;
7799b7b3
UD
1980 else
1981 size = HEAP_MAX_SIZE;
1982 size = (size + page_mask) & ~page_mask;
1983
b22fc5f5
UD
1984 /* A memory region aligned to a multiple of HEAP_MAX_SIZE is needed.
1985 No swap space needs to be reserved for the following large
1986 mapping (on Linux, this is the case for all non-writable mappings
1987 anyway). */
3ddfec55 1988 p1 = (char *)MMAP(0, HEAP_MAX_SIZE<<1, PROT_NONE, MAP_PRIVATE|MAP_NORESERVE);
0413b54c 1989 if(p1 == MAP_FAILED)
1990 return 0;
1991 p2 = (char *)(((unsigned long)p1 + HEAP_MAX_SIZE) & ~(HEAP_MAX_SIZE-1));
1992 ul = p2 - p1;
1993 munmap(p1, ul);
1994 munmap(p2 + HEAP_MAX_SIZE, HEAP_MAX_SIZE - ul);
1995 if(mprotect(p2, size, PROT_READ|PROT_WRITE) != 0) {
1996 munmap(p2, HEAP_MAX_SIZE);
1997 return 0;
1998 }
1999 h = (heap_info *)p2;
2000 h->size = size;
2001 THREAD_STAT(stat_n_heaps++);
2002 return h;
2003}
2004
2005/* Grow or shrink a heap. size is automatically rounded up to a
8a4b65b4 2006 multiple of the page size if it is positive. */
f65fd747
UD
2007
2008static int
2009#if __STD_C
2010grow_heap(heap_info *h, long diff)
2011#else
2012grow_heap(h, diff) heap_info *h; long diff;
2013#endif
2014{
2015 size_t page_mask = malloc_getpagesize - 1;
2016 long new_size;
2017
2018 if(diff >= 0) {
2019 diff = (diff + page_mask) & ~page_mask;
2020 new_size = (long)h->size + diff;
2021 if(new_size > HEAP_MAX_SIZE)
2022 return -1;
2023 if(mprotect((char *)h + h->size, diff, PROT_READ|PROT_WRITE) != 0)
2024 return -2;
2025 } else {
2026 new_size = (long)h->size + diff;
8a4b65b4 2027 if(new_size < (long)sizeof(*h))
f65fd747 2028 return -1;
7c2b945e
UD
2029 /* Try to re-map the extra heap space freshly to save memory, and
2030 make it inaccessible. */
2031 if((char *)MMAP((char *)h + new_size, -diff, PROT_NONE,
3ddfec55 2032 MAP_PRIVATE|MAP_FIXED) == (char *) MAP_FAILED)
2033 return -2;
2034 }
2035 h->size = new_size;
2036 return 0;
2037}
2038
8a4b65b4
UD
2039/* Delete a heap. */
2040
2041#define delete_heap(heap) munmap((char*)(heap), HEAP_MAX_SIZE)
2042
f65fd747
UD
2043/* arena_get() acquires an arena and locks the corresponding mutex.
2044 First, try the one last locked successfully by this thread. (This
2045 is the common case and handled with a macro for speed.) Then, loop
7e3be507
UD
2046 once over the circularly linked list of arenas. If no arena is
2047 readily available, create a new one. */
f65fd747
UD
2048
2049#define arena_get(ptr, size) do { \
2050 Void_t *vptr = NULL; \
2051 ptr = (arena *)tsd_getspecific(arena_key, vptr); \
2052 if(ptr && !mutex_trylock(&ptr->mutex)) { \
8a4b65b4 2053 THREAD_STAT(++(ptr->stat_lock_direct)); \
7e3be507 2054 } else \
f65fd747 2055 ptr = arena_get2(ptr, (size)); \
f65fd747
UD
2056} while(0)
2057
2058static arena *
dfd2257a 2059internal_function
f65fd747
UD
2060#if __STD_C
2061arena_get2(arena *a_tsd, size_t size)
2062#else
2063arena_get2(a_tsd, size) arena *a_tsd; size_t size;
2064#endif
2065{
2066 arena *a;
2067 heap_info *h;
2068 char *ptr;
2069 int i;
2070 unsigned long misalign;
2071
7e3be507
UD
2072 if(!a_tsd)
2073 a = a_tsd = &main_arena;
2074 else {
2075 a = a_tsd->next;
2076 if(!a) {
2077 /* This can only happen while initializing the new arena. */
2078 (void)mutex_lock(&main_arena.mutex);
2079 THREAD_STAT(++(main_arena.stat_lock_wait));
2080 return &main_arena;
f65fd747 2081 }
8a4b65b4 2082 }
7e3be507
UD
2083
2084 /* Check the global, circularly linked list for available arenas. */
b22fc5f5 2085 repeat:
7e3be507
UD
2086 do {
2087 if(!mutex_trylock(&a->mutex)) {
2088 THREAD_STAT(++(a->stat_lock_loop));
2089 tsd_setspecific(arena_key, (Void_t *)a);
2090 return a;
2091 }
2092 a = a->next;
2093 } while(a != a_tsd);
f65fd747 2094
b22fc5f5
UD
2095 /* If not even the list_lock can be obtained, try again. This can
2096 happen during `atfork', or for example on systems where thread
2097 creation makes it temporarily impossible to obtain _any_
2098 locks. */
2099 if(mutex_trylock(&list_lock)) {
2100 a = a_tsd;
2101 goto repeat;
2102 }
2103 (void)mutex_unlock(&list_lock);
2104
f65fd747
UD
2105 /* Nothing immediately available, so generate a new arena. */
2106 h = new_heap(size + (sizeof(*h) + sizeof(*a) + MALLOC_ALIGNMENT));
2107 if(!h)
2108 return 0;
2109 a = h->ar_ptr = (arena *)(h+1);
2110 for(i=0; i<NAV; i++)
2111 init_bin(a, i);
7e3be507 2112 a->next = NULL;
8a4b65b4 2113 a->size = h->size;
7e3be507 2114 tsd_setspecific(arena_key, (Void_t *)a);
f65fd747
UD
2115 mutex_init(&a->mutex);
2116 i = mutex_lock(&a->mutex); /* remember result */
2117
2118 /* Set up the top chunk, with proper alignment. */
2119 ptr = (char *)(a + 1);
2120 misalign = (unsigned long)chunk2mem(ptr) & MALLOC_ALIGN_MASK;
2121 if (misalign > 0)
2122 ptr += MALLOC_ALIGNMENT - misalign;
2123 top(a) = (mchunkptr)ptr;
8a4b65b4 2124 set_head(top(a), (((char*)h + h->size) - ptr) | PREV_INUSE);
f65fd747
UD
2125
2126 /* Add the new arena to the list. */
2127 (void)mutex_lock(&list_lock);
2128 a->next = main_arena.next;
2129 main_arena.next = a;
f65fd747
UD
2130 (void)mutex_unlock(&list_lock);
2131
2132 if(i) /* locking failed; keep arena for further attempts later */
2133 return 0;
2134
8a4b65b4 2135 THREAD_STAT(++(a->stat_lock_loop));
f65fd747
UD
2136 return a;
2137}
2138
2139/* find the heap and corresponding arena for a given ptr */
2140
2141#define heap_for_ptr(ptr) \
2142 ((heap_info *)((unsigned long)(ptr) & ~(HEAP_MAX_SIZE-1)))
2143#define arena_for_ptr(ptr) \
2144 (((mchunkptr)(ptr) < top(&main_arena) && (char *)(ptr) >= sbrk_base) ? \
2145 &main_arena : heap_for_ptr(ptr)->ar_ptr)
2146
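/* A heap created by new_heap() above always starts at an address that is
   a multiple of HEAP_MAX_SIZE, which is what makes the bit mask in
   heap_for_ptr() work.  The following sketch (kept out of the build)
   illustrates that arithmetic; the heap base, offset and 1MB size used
   here are example values only. */
#if 0
#include <assert.h>
#include <stdio.h>

#define EXAMPLE_HEAP_MAX 0x100000UL   /* example heap size: 1MB */

int
main (void)
{
  /* Pretend a heap was mapped at the aligned address 0x40300000 and a
     chunk lives 0x1234 bytes into it.  Masking the low bits of the chunk
     address recovers the heap_info at the start of the heap, just as
     heap_for_ptr() does with HEAP_MAX_SIZE. */
  unsigned long heap_base = 0x40300000UL;
  unsigned long chunk_addr = heap_base + 0x1234UL;
  unsigned long recovered = chunk_addr & ~(EXAMPLE_HEAP_MAX - 1);

  assert (recovered == heap_base);
  printf ("heap base recovered: %#lx\n", recovered);
  return 0;
}
#endif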
2147#else /* defined(NO_THREADS) */
2148
2149/* Without concurrent threads, there is only one arena. */
2150
2151#define arena_get(ptr, sz) (ptr = &main_arena)
2152#define arena_for_ptr(ptr) (&main_arena)
2153
2154#endif /* !defined(NO_THREADS) */
2155
2156\f
2157
2158/*
2159 Debugging support
2160*/
2161
2162#if MALLOC_DEBUG
2163
2164
2165/*
2166 These routines make a number of assertions about the states
2167 of data structures that should be true at all times. If any
2168 are not true, it's very likely that a user program has somehow
2169 trashed memory. (It's also possible that there is a coding error
2170 in malloc. In which case, please report it!)
2171*/
2172
2173#if __STD_C
2174static void do_check_chunk(arena *ar_ptr, mchunkptr p)
2175#else
2176static void do_check_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
2177#endif
2178{
2179 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2180
2181 /* No checkable chunk is mmapped */
2182 assert(!chunk_is_mmapped(p));
2183
2184#ifndef NO_THREADS
2185 if(ar_ptr != &main_arena) {
2186 heap_info *heap = heap_for_ptr(p);
2187 assert(heap->ar_ptr == ar_ptr);
7c2b945e
UD
2188 if(p != top(ar_ptr))
2189 assert((char *)p + sz <= (char *)heap + heap->size);
2190 else
3ddfec55 2191 assert((char *)p + sz == (char *)heap + heap->size);
f65fd747
UD
2192 return;
2193 }
2194#endif
2195
2196 /* Check for legal address ... */
2197 assert((char*)p >= sbrk_base);
2198 if (p != top(ar_ptr))
2199 assert((char*)p + sz <= (char*)top(ar_ptr));
2200 else
2201 assert((char*)p + sz <= sbrk_base + sbrked_mem);
2202
2203}
2204
2205
2206#if __STD_C
2207static void do_check_free_chunk(arena *ar_ptr, mchunkptr p)
2208#else
2209static void do_check_free_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
2210#endif
2211{
2212 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2213 mchunkptr next = chunk_at_offset(p, sz);
2214
2215 do_check_chunk(ar_ptr, p);
2216
2217 /* Check whether it claims to be free ... */
2218 assert(!inuse(p));
2219
8a4b65b4
UD
2220 /* Must have OK size and fields */
2221 assert((long)sz >= (long)MINSIZE);
2222 assert((sz & MALLOC_ALIGN_MASK) == 0);
2223 assert(aligned_OK(chunk2mem(p)));
2224 /* ... matching footer field */
2225 assert(next->prev_size == sz);
2226 /* ... and is fully consolidated */
2227 assert(prev_inuse(p));
2228 assert (next == top(ar_ptr) || inuse(next));
2229
2230 /* ... and has minimally sane links */
2231 assert(p->fd->bk == p);
2232 assert(p->bk->fd == p);
f65fd747
UD
2233}
2234
2235#if __STD_C
2236static void do_check_inuse_chunk(arena *ar_ptr, mchunkptr p)
2237#else
2238static void do_check_inuse_chunk(ar_ptr, p) arena *ar_ptr; mchunkptr p;
2239#endif
2240{
2241 mchunkptr next = next_chunk(p);
2242 do_check_chunk(ar_ptr, p);
2243
2244 /* Check whether it claims to be in use ... */
2245 assert(inuse(p));
2246
8a4b65b4
UD
2247 /* ... whether its size is OK (it might be a fencepost) ... */
2248 assert(chunksize(p) >= MINSIZE || next->size == (0|PREV_INUSE));
2249
f65fd747
UD
2250 /* ... and is surrounded by OK chunks.
2251 Since more things can be checked with free chunks than inuse ones,
2252 if an inuse chunk borders them and debug is on, it's worth doing them.
2253 */
2254 if (!prev_inuse(p))
2255 {
2256 mchunkptr prv = prev_chunk(p);
2257 assert(next_chunk(prv) == p);
2258 do_check_free_chunk(ar_ptr, prv);
2259 }
2260 if (next == top(ar_ptr))
2261 {
2262 assert(prev_inuse(next));
2263 assert(chunksize(next) >= MINSIZE);
2264 }
2265 else if (!inuse(next))
2266 do_check_free_chunk(ar_ptr, next);
2267
2268}
2269
2270#if __STD_C
2271static void do_check_malloced_chunk(arena *ar_ptr,
2272 mchunkptr p, INTERNAL_SIZE_T s)
2273#else
2274static void do_check_malloced_chunk(ar_ptr, p, s)
2275arena *ar_ptr; mchunkptr p; INTERNAL_SIZE_T s;
2276#endif
2277{
2278 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2279 long room = sz - s;
2280
2281 do_check_inuse_chunk(ar_ptr, p);
2282
2283 /* Legal size ... */
2284 assert((long)sz >= (long)MINSIZE);
2285 assert((sz & MALLOC_ALIGN_MASK) == 0);
2286 assert(room >= 0);
2287 assert(room < (long)MINSIZE);
2288
2289 /* ... and alignment */
2290 assert(aligned_OK(chunk2mem(p)));
2291
2292
2293 /* ... and was allocated at front of an available chunk */
2294 assert(prev_inuse(p));
2295
2296}
2297
2298
2299#define check_free_chunk(A,P) do_check_free_chunk(A,P)
2300#define check_inuse_chunk(A,P) do_check_inuse_chunk(A,P)
2301#define check_chunk(A,P) do_check_chunk(A,P)
2302#define check_malloced_chunk(A,P,N) do_check_malloced_chunk(A,P,N)
2303#else
2304#define check_free_chunk(A,P)
2305#define check_inuse_chunk(A,P)
2306#define check_chunk(A,P)
2307#define check_malloced_chunk(A,P,N)
2308#endif
2309
2310\f
2311
2312/*
2313 Macro-based internal utilities
2314*/
2315
2316
2317/*
2318 Linking chunks in bin lists.
2319 Call these only with variables, not arbitrary expressions, as arguments.
2320*/
2321
2322/*
2323 Place chunk p of size s in its bin, in size order,
2324 putting it ahead of others of same size.
2325*/
2326
2327
2328#define frontlink(A, P, S, IDX, BK, FD) \
2329{ \
2330 if (S < MAX_SMALLBIN_SIZE) \
2331 { \
2332 IDX = smallbin_index(S); \
2333 mark_binblock(A, IDX); \
2334 BK = bin_at(A, IDX); \
2335 FD = BK->fd; \
2336 P->bk = BK; \
2337 P->fd = FD; \
2338 FD->bk = BK->fd = P; \
2339 } \
2340 else \
2341 { \
2342 IDX = bin_index(S); \
2343 BK = bin_at(A, IDX); \
2344 FD = BK->fd; \
2345 if (FD == BK) mark_binblock(A, IDX); \
2346 else \
2347 { \
2348 while (FD != BK && S < chunksize(FD)) FD = FD->fd; \
2349 BK = FD->bk; \
2350 } \
2351 P->bk = BK; \
2352 P->fd = FD; \
2353 FD->bk = BK->fd = P; \
2354 } \
2355}
2356
2357
2358/* take a chunk off a list */
2359
2360#define unlink(P, BK, FD) \
2361{ \
2362 BK = P->bk; \
2363 FD = P->fd; \
2364 FD->bk = BK; \
2365 BK->fd = FD; \
2366} \
2367
2368/* Place p as the last remainder */
2369
2370#define link_last_remainder(A, P) \
2371{ \
2372 last_remainder(A)->fd = last_remainder(A)->bk = P; \
2373 P->fd = P->bk = last_remainder(A); \
2374}
2375
2376/* Clear the last_remainder bin */
2377
2378#define clear_last_remainder(A) \
2379 (last_remainder(A)->fd = last_remainder(A)->bk = last_remainder(A))
2380
2381
2382
2383\f
2384
2385/*
2386 Extend the top-most chunk by obtaining memory from system.
2387 Main interface to sbrk (but see also malloc_trim).
2388*/
2389
dfd2257a
UD
2390#if defined __GNUC__ && __GNUC__ >= 2
2391/* This function is called only from one place, inline it. */
2392inline
2393#endif
af6f3906 2394static void
dfd2257a 2395internal_function
f65fd747 2396#if __STD_C
dfd2257a 2397malloc_extend_top(arena *ar_ptr, INTERNAL_SIZE_T nb)
f65fd747 2398#else
dfd2257a 2399malloc_extend_top(ar_ptr, nb) arena *ar_ptr; INTERNAL_SIZE_T nb;
f65fd747
UD
2400#endif
2401{
2402 unsigned long pagesz = malloc_getpagesize;
2403 mchunkptr old_top = top(ar_ptr); /* Record state of old top */
2404 INTERNAL_SIZE_T old_top_size = chunksize(old_top);
2405 INTERNAL_SIZE_T top_size; /* new size of top chunk */
2406
2407#ifndef NO_THREADS
2408 if(ar_ptr == &main_arena) {
2409#endif
2410
2411 char* brk; /* return value from sbrk */
2412 INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
2413 INTERNAL_SIZE_T correction; /* bytes for 2nd sbrk call */
2414 char* new_brk; /* return of 2nd sbrk call */
2415 char* old_end = (char*)(chunk_at_offset(old_top, old_top_size));
2416
2417 /* Pad request with top_pad plus minimal overhead */
2418 INTERNAL_SIZE_T sbrk_size = nb + top_pad + MINSIZE;
2419
2420 /* If not the first time through, round to preserve page boundary */
2421 /* Otherwise, we need to correct to a page size below anyway. */
2422 /* (We also correct below if there was an intervening foreign sbrk call.) */
2423
2424 if (sbrk_base != (char*)(-1))
2425 sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
2426
2427 brk = (char*)(MORECORE (sbrk_size));
2428
2429 /* Fail if sbrk failed or if a foreign sbrk call killed our space */
2430 if (brk == (char*)(MORECORE_FAILURE) ||
2431 (brk < old_end && old_top != initial_top(&main_arena)))
2432 return;
2433
dfd2257a 2434#if defined _LIBC || defined MALLOC_HOOKS
1228ed5c
UD
2435 /* Call the `morecore' hook if necessary. */
2436 if (__after_morecore_hook)
2437 (*__after_morecore_hook) ();
7799b7b3 2438#endif
1228ed5c 2439
f65fd747
UD
2440 sbrked_mem += sbrk_size;
2441
2442 if (brk == old_end) { /* can just add bytes to current top */
2443 top_size = sbrk_size + old_top_size;
2444 set_head(old_top, top_size | PREV_INUSE);
2445 old_top = 0; /* don't free below */
2446 } else {
2447 if (sbrk_base == (char*)(-1)) /* First time through. Record base */
2448 sbrk_base = brk;
2449 else
2450 /* Someone else called sbrk(). Count those bytes as sbrked_mem. */
2451 sbrked_mem += brk - (char*)old_end;
2452
2453 /* Guarantee alignment of first new chunk made from this space */
2454 front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
2455 if (front_misalign > 0) {
2456 correction = (MALLOC_ALIGNMENT) - front_misalign;
2457 brk += correction;
2458 } else
2459 correction = 0;
2460
2461 /* Guarantee the next brk will be at a page boundary */
2462 correction += pagesz - ((unsigned long)(brk + sbrk_size) & (pagesz - 1));
2463
2464 /* Allocate correction */
2465 new_brk = (char*)(MORECORE (correction));
2466 if (new_brk == (char*)(MORECORE_FAILURE)) return;
2467
dfd2257a 2468#if defined _LIBC || defined MALLOC_HOOKS
1228ed5c
UD
2469 /* Call the `morecore' hook if necessary. */
2470 if (__after_morecore_hook)
2471 (*__after_morecore_hook) ();
2472#endif
1228ed5c 2473
f65fd747
UD
2474 sbrked_mem += correction;
2475
2476 top(&main_arena) = (mchunkptr)brk;
2477 top_size = new_brk - brk + correction;
2478 set_head(top(&main_arena), top_size | PREV_INUSE);
2479
2480 if (old_top == initial_top(&main_arena))
2481 old_top = 0; /* don't free below */
2482 }
2483
2484 if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
2485 max_sbrked_mem = sbrked_mem;
8a4b65b4 2486#ifdef NO_THREADS
f65fd747
UD
2487 if ((unsigned long)(mmapped_mem + sbrked_mem) >
2488 (unsigned long)max_total_mem)
2489 max_total_mem = mmapped_mem + sbrked_mem;
8a4b65b4 2490#endif
f65fd747
UD
2491
2492#ifndef NO_THREADS
2493 } else { /* ar_ptr != &main_arena */
8a4b65b4
UD
2494 heap_info *old_heap, *heap;
2495 size_t old_heap_size;
f65fd747
UD
2496
2497 if(old_top_size < MINSIZE) /* this should never happen */
2498 return;
2499
2500 /* First try to extend the current heap. */
2501 if(MINSIZE + nb <= old_top_size)
2502 return;
8a4b65b4
UD
2503 old_heap = heap_for_ptr(old_top);
2504 old_heap_size = old_heap->size;
2505 if(grow_heap(old_heap, MINSIZE + nb - old_top_size) == 0) {
2506 ar_ptr->size += old_heap->size - old_heap_size;
2507 top_size = ((char *)old_heap + old_heap->size) - (char *)old_top;
f65fd747
UD
2508 set_head(old_top, top_size | PREV_INUSE);
2509 return;
2510 }
2511
2512 /* A new heap must be created. */
7799b7b3 2513 heap = new_heap(nb + (MINSIZE + sizeof(*heap)));
f65fd747
UD
2514 if(!heap)
2515 return;
2516 heap->ar_ptr = ar_ptr;
8a4b65b4
UD
2517 heap->prev = old_heap;
2518 ar_ptr->size += heap->size;
f65fd747
UD
2519
2520 /* Set up the new top, so we can safely use chunk_free() below. */
2521 top(ar_ptr) = chunk_at_offset(heap, sizeof(*heap));
2522 top_size = heap->size - sizeof(*heap);
2523 set_head(top(ar_ptr), top_size | PREV_INUSE);
2524 }
2525#endif /* !defined(NO_THREADS) */
2526
2527 /* We always land on a page boundary */
2528 assert(((unsigned long)((char*)top(ar_ptr) + top_size) & (pagesz-1)) == 0);
2529
2530 /* Set up the fencepost and free the old top chunk. */
2531 if(old_top) {
8a4b65b4
UD
2532 /* The fencepost takes at least MINSIZE bytes, because it might
2533 become the top chunk again later. Note that a footer is set
2534 up, too, although the chunk is marked in use. */
2535 old_top_size -= MINSIZE;
2536 set_head(chunk_at_offset(old_top, old_top_size + 2*SIZE_SZ), 0|PREV_INUSE);
2537 if(old_top_size >= MINSIZE) {
2538 set_head(chunk_at_offset(old_top, old_top_size), (2*SIZE_SZ)|PREV_INUSE);
2539 set_foot(chunk_at_offset(old_top, old_top_size), (2*SIZE_SZ));
f65fd747
UD
2540 set_head_size(old_top, old_top_size);
2541 chunk_free(ar_ptr, old_top);
2542 } else {
8a4b65b4
UD
2543 set_head(old_top, (old_top_size + 2*SIZE_SZ)|PREV_INUSE);
2544 set_foot(old_top, (old_top_size + 2*SIZE_SZ));
f65fd747
UD
2545 }
2546 }
2547}
2548
2549
2550\f
2551
2552/* Main public routines */
2553
2554
2555/*
8a4b65b4 2556 Malloc Algorithm:
f65fd747
UD
2557
2558 The requested size is first converted into a usable form, `nb'.
2559 This currently means to add 4 bytes overhead plus possibly more to
2560 obtain 8-byte alignment and/or to obtain a size of at least
8a4b65b4
UD
2561 MINSIZE (currently 16, 24, or 32 bytes), the smallest allocatable
2562 size. (All fits are considered `exact' if they are within MINSIZE
2563 bytes.)
f65fd747
UD
2564
2565 From there, the first of the following steps that succeeds is taken:
2566
2567 1. The bin corresponding to the request size is scanned, and if
2568 a chunk of exactly the right size is found, it is taken.
2569
2570 2. The most recently remaindered chunk is used if it is big
2571 enough. This is a form of (roving) first fit, used only in
2572 the absence of exact fits. Runs of consecutive requests use
2573 the remainder of the chunk used for the previous such request
2574 whenever possible. This limited use of a first-fit style
2575 allocation strategy tends to give contiguous chunks
2576 coextensive lifetimes, which improves locality and can reduce
2577 fragmentation in the long run.
2578
2579 3. Other bins are scanned in increasing size order, using a
2580 chunk big enough to fulfill the request, and splitting off
2581 any remainder. This search is strictly by best-fit; i.e.,
2582 the smallest (with ties going to approximately the least
2583 recently used) chunk that fits is selected.
2584
2585 4. If large enough, the chunk bordering the end of memory
2586 (`top') is split off. (This use of `top' is in accord with
2587 the best-fit search rule. In effect, `top' is treated as
2588 larger (and thus less well fitting) than any other available
2589 chunk since it can be extended to be as large as necessary
2590 (up to system limitations).)
2591
2592 5. If the request size meets the mmap threshold and the
2593 system supports mmap, and there are few enough currently
2594 allocated mmapped regions, and a call to mmap succeeds,
2595 the request is allocated via direct memory mapping.
2596
2597 6. Otherwise, the top of memory is extended by
2598 obtaining more space from the system (normally using sbrk,
2599 but definable to anything else via the MORECORE macro).
2600 Memory is gathered from the system (in system page-sized
2601 units) in a way that allows chunks obtained across different
2602 sbrk calls to be consolidated, but does not require
2603 contiguous memory. Thus, it should be safe to intersperse
2604 mallocs with other sbrk calls.
2605
2606
3081378b 2607 All allocations are made from the `lowest' part of any found
f65fd747
UD
2608 chunk. (The implementation invariant is that prev_inuse is
2609 always true of any allocated chunk; i.e., that each allocated
2610 chunk borders either a previously allocated and still in-use chunk,
2611 or the base of its memory arena.)
2612
2613*/
2614
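/* The size normalization described in the first paragraph above can be
   pictured with the small sketch below (not part of the build).  It uses
   typical 32-bit values for SIZE_SZ, MALLOC_ALIGNMENT and MINSIZE; the
   real request2size() macro additionally checks for overflow. */
#if 0
#include <assert.h>
#include <stddef.h>

#define EX_SIZE_SZ    4UL     /* assumed sizeof(INTERNAL_SIZE_T) */
#define EX_ALIGN_MASK 7UL     /* assumed MALLOC_ALIGNMENT - 1    */
#define EX_MINSIZE    16UL    /* assumed smallest chunk size     */

static size_t
example_pad_request (size_t bytes)
{
  /* Add one size word of overhead, round up to the alignment, and never
     go below the minimum chunk size. */
  size_t nb = (bytes + EX_SIZE_SZ + EX_ALIGN_MASK) & ~EX_ALIGN_MASK;
  return nb < EX_MINSIZE ? EX_MINSIZE : nb;
}

int
main (void)
{
  assert (example_pad_request (1)  == 16);  /* clamped to MINSIZE       */
  assert (example_pad_request (13) == 24);  /* 13 + 4 = 17, rounded up  */
  assert (example_pad_request (24) == 32);
  return 0;
}
#endif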
2615#if __STD_C
2616Void_t* mALLOc(size_t bytes)
2617#else
2618Void_t* mALLOc(bytes) size_t bytes;
2619#endif
2620{
2621 arena *ar_ptr;
10dc2a90 2622 INTERNAL_SIZE_T nb; /* padded request size */
f65fd747
UD
2623 mchunkptr victim;
2624
dfd2257a 2625#if defined _LIBC || defined MALLOC_HOOKS
10dc2a90
UD
2626 if (__malloc_hook != NULL) {
2627 Void_t* result;
2628
dfd2257a 2629#if defined __GNUC__ && __GNUC__ >= 2
a2b08ee5
UD
2630 result = (*__malloc_hook)(bytes, __builtin_return_address (0));
2631#else
dfd2257a 2632 result = (*__malloc_hook)(bytes, NULL);
a2b08ee5 2633#endif
10dc2a90
UD
2634 return result;
2635 }
2636#endif
2637
2e65ca2b
UD
2638 if(request2size(bytes, nb))
2639 return 0;
7799b7b3 2640 arena_get(ar_ptr, nb);
f65fd747
UD
2641 if(!ar_ptr)
2642 return 0;
2643 victim = chunk_alloc(ar_ptr, nb);
2644 (void)mutex_unlock(&ar_ptr->mutex);
7799b7b3
UD
2645 if(!victim) {
2646 /* Maybe the failure is due to running out of mmapped areas. */
2647 if(ar_ptr != &main_arena) {
2648 (void)mutex_lock(&main_arena.mutex);
2649 victim = chunk_alloc(&main_arena, nb);
2650 (void)mutex_unlock(&main_arena.mutex);
2651 }
2652 if(!victim) return 0;
2653 }
2654 return chunk2mem(victim);
f65fd747
UD
2655}
2656
2657static mchunkptr
dfd2257a 2658internal_function
f65fd747
UD
2659#if __STD_C
2660chunk_alloc(arena *ar_ptr, INTERNAL_SIZE_T nb)
2661#else
2662chunk_alloc(ar_ptr, nb) arena *ar_ptr; INTERNAL_SIZE_T nb;
2663#endif
2664{
2665 mchunkptr victim; /* inspected/selected chunk */
2666 INTERNAL_SIZE_T victim_size; /* its size */
2667 int idx; /* index for bin traversal */
2668 mbinptr bin; /* associated bin */
2669 mchunkptr remainder; /* remainder from a split */
2670 long remainder_size; /* its size */
2671 int remainder_index; /* its bin index */
2672 unsigned long block; /* block traverser bit */
2673 int startidx; /* first bin of a traversed block */
2674 mchunkptr fwd; /* misc temp for linking */
2675 mchunkptr bck; /* misc temp for linking */
2676 mbinptr q; /* misc temp */
2677
2678
2679 /* Check for exact match in a bin */
2680
2681 if (is_small_request(nb)) /* Faster version for small requests */
2682 {
2683 idx = smallbin_index(nb);
2684
2685 /* No traversal or size check necessary for small bins. */
2686
2687 q = bin_at(ar_ptr, idx);
2688 victim = last(q);
2689
2690 /* Also scan the next one, since it would have a remainder < MINSIZE */
2691 if (victim == q)
2692 {
2693 q = next_bin(q);
2694 victim = last(q);
2695 }
2696 if (victim != q)
2697 {
2698 victim_size = chunksize(victim);
2699 unlink(victim, bck, fwd);
2700 set_inuse_bit_at_offset(victim, victim_size);
2701 check_malloced_chunk(ar_ptr, victim, nb);
2702 return victim;
2703 }
2704
2705 idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
2706
2707 }
2708 else
2709 {
2710 idx = bin_index(nb);
2711 bin = bin_at(ar_ptr, idx);
2712
2713 for (victim = last(bin); victim != bin; victim = victim->bk)
2714 {
2715 victim_size = chunksize(victim);
2716 remainder_size = victim_size - nb;
2717
2718 if (remainder_size >= (long)MINSIZE) /* too big */
2719 {
2720 --idx; /* adjust to rescan below after checking last remainder */
2721 break;
2722 }
2723
2724 else if (remainder_size >= 0) /* exact fit */
2725 {
2726 unlink(victim, bck, fwd);
2727 set_inuse_bit_at_offset(victim, victim_size);
2728 check_malloced_chunk(ar_ptr, victim, nb);
2729 return victim;
2730 }
2731 }
2732
2733 ++idx;
2734
2735 }
2736
2737 /* Try to use the last split-off remainder */
2738
2739 if ( (victim = last_remainder(ar_ptr)->fd) != last_remainder(ar_ptr))
2740 {
2741 victim_size = chunksize(victim);
2742 remainder_size = victim_size - nb;
2743
2744 if (remainder_size >= (long)MINSIZE) /* re-split */
2745 {
2746 remainder = chunk_at_offset(victim, nb);
2747 set_head(victim, nb | PREV_INUSE);
2748 link_last_remainder(ar_ptr, remainder);
2749 set_head(remainder, remainder_size | PREV_INUSE);
2750 set_foot(remainder, remainder_size);
2751 check_malloced_chunk(ar_ptr, victim, nb);
2752 return victim;
2753 }
2754
2755 clear_last_remainder(ar_ptr);
2756
2757 if (remainder_size >= 0) /* exhaust */
2758 {
2759 set_inuse_bit_at_offset(victim, victim_size);
2760 check_malloced_chunk(ar_ptr, victim, nb);
2761 return victim;
2762 }
2763
2764 /* Else place in bin */
2765
2766 frontlink(ar_ptr, victim, victim_size, remainder_index, bck, fwd);
2767 }
2768
2769 /*
2770 If there are any possibly nonempty big-enough blocks,
2771 search for best fitting chunk by scanning bins in blockwidth units.
2772 */
2773
2774 if ( (block = idx2binblock(idx)) <= binblocks(ar_ptr))
2775 {
2776
2777 /* Get to the first marked block */
2778
2779 if ( (block & binblocks(ar_ptr)) == 0)
2780 {
2781 /* force to an even block boundary */
2782 idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
2783 block <<= 1;
2784 while ((block & binblocks(ar_ptr)) == 0)
2785 {
2786 idx += BINBLOCKWIDTH;
2787 block <<= 1;
2788 }
2789 }
2790
2791 /* For each possibly nonempty block ... */
2792 for (;;)
2793 {
2794 startidx = idx; /* (track incomplete blocks) */
2795 q = bin = bin_at(ar_ptr, idx);
2796
2797 /* For each bin in this block ... */
2798 do
2799 {
2800 /* Find and use first big enough chunk ... */
2801
2802 for (victim = last(bin); victim != bin; victim = victim->bk)
2803 {
2804 victim_size = chunksize(victim);
2805 remainder_size = victim_size - nb;
2806
2807 if (remainder_size >= (long)MINSIZE) /* split */
2808 {
2809 remainder = chunk_at_offset(victim, nb);
2810 set_head(victim, nb | PREV_INUSE);
2811 unlink(victim, bck, fwd);
2812 link_last_remainder(ar_ptr, remainder);
2813 set_head(remainder, remainder_size | PREV_INUSE);
2814 set_foot(remainder, remainder_size);
2815 check_malloced_chunk(ar_ptr, victim, nb);
2816 return victim;
2817 }
2818
2819 else if (remainder_size >= 0) /* take */
2820 {
2821 set_inuse_bit_at_offset(victim, victim_size);
2822 unlink(victim, bck, fwd);
2823 check_malloced_chunk(ar_ptr, victim, nb);
2824 return victim;
2825 }
2826
2827 }
2828
2829 bin = next_bin(bin);
2830
2831 } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
2832
2833 /* Clear out the block bit. */
2834
2835 do /* Possibly backtrack to try to clear a partial block */
2836 {
2837 if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
2838 {
2839 binblocks(ar_ptr) &= ~block;
2840 break;
2841 }
2842 --startidx;
2843 q = prev_bin(q);
2844 } while (first(q) == q);
2845
2846 /* Get to the next possibly nonempty block */
2847
2848 if ( (block <<= 1) <= binblocks(ar_ptr) && (block != 0) )
2849 {
2850 while ((block & binblocks(ar_ptr)) == 0)
2851 {
2852 idx += BINBLOCKWIDTH;
2853 block <<= 1;
2854 }
2855 }
2856 else
2857 break;
2858 }
2859 }
2860
2861
2862 /* Try to use top chunk */
2863
2864 /* Require that there be a remainder, ensuring top always exists */
2865 if ( (remainder_size = chunksize(top(ar_ptr)) - nb) < (long)MINSIZE)
2866 {
2867
2868#if HAVE_MMAP
2869 /* If big and would otherwise need to extend, try to use mmap instead */
2870 if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
2871 (victim = mmap_chunk(nb)) != 0)
2872 return victim;
2873#endif
2874
2875 /* Try to extend */
2876 malloc_extend_top(ar_ptr, nb);
2877 if ((remainder_size = chunksize(top(ar_ptr)) - nb) < (long)MINSIZE)
2878 return 0; /* propagate failure */
2879 }
2880
2881 victim = top(ar_ptr);
2882 set_head(victim, nb | PREV_INUSE);
2883 top(ar_ptr) = chunk_at_offset(victim, nb);
2884 set_head(top(ar_ptr), remainder_size | PREV_INUSE);
2885 check_malloced_chunk(ar_ptr, victim, nb);
2886 return victim;
2887
2888}
2889
2890
2891\f
2892
2893/*
2894
2895 free() algorithm :
2896
2897 cases:
2898
2899 1. free(0) has no effect.
2900
2901 2. If the chunk was allocated via mmap, it is released via munmap().
2902
2903 3. If a returned chunk borders the current high end of memory,
2904 it is consolidated into the top, and if the total unused
2905 topmost memory exceeds the trim threshold, malloc_trim is
2906 called.
2907
2908 4. Other chunks are consolidated as they arrive, and
2909 placed in corresponding bins. (This includes the case of
2910 consolidating with the current `last_remainder').
2911
2912*/
2913
2914
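/* Cases 3 and 4 above rely on the boundary tags: the size field of a
   chunk leads to its successor, and the prev_size footer of a free
   predecessor leads back to it.  The sketch below (not compiled) shows
   that navigation with made-up chunk sizes; in the real allocator
   prev_size is only valid while the previous chunk is free. */
#if 0
#include <assert.h>

struct ex_tag { unsigned long prev_size; unsigned long size; };

int
main (void)
{
  /* Three pretend chunks of 32, 48 and 64 bytes laid out back to back. */
  static unsigned long backing[(32 + 48 + 64) / sizeof (unsigned long)];
  char *base = (char *) backing;
  struct ex_tag *a = (struct ex_tag *) base;
  struct ex_tag *b = (struct ex_tag *) (base + 32);
  struct ex_tag *c = (struct ex_tag *) (base + 32 + 48);

  a->size = 32;
  b->prev_size = 32;  b->size = 48;
  c->prev_size = 48;  c->size = 64;

  /* Forward neighbour: add our own size.  Backward neighbour: subtract
     the size recorded for us in prev_size. */
  assert ((struct ex_tag *) ((char *) b + b->size) == c);
  assert ((struct ex_tag *) ((char *) b - b->prev_size) == a);
  return 0;
}
#endif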
2915#if __STD_C
2916void fREe(Void_t* mem)
2917#else
2918void fREe(mem) Void_t* mem;
2919#endif
2920{
2921 arena *ar_ptr;
2922 mchunkptr p; /* chunk corresponding to mem */
2923
dfd2257a 2924#if defined _LIBC || defined MALLOC_HOOKS
10dc2a90 2925 if (__free_hook != NULL) {
dfd2257a 2926#if defined __GNUC__ && __GNUC__ >= 2
a2b08ee5
UD
2927 (*__free_hook)(mem, __builtin_return_address (0));
2928#else
dfd2257a 2929 (*__free_hook)(mem, NULL);
a2b08ee5 2930#endif
10dc2a90
UD
2931 return;
2932 }
2933#endif
2934
f65fd747
UD
2935 if (mem == 0) /* free(0) has no effect */
2936 return;
2937
2938 p = mem2chunk(mem);
2939
2940#if HAVE_MMAP
2941 if (chunk_is_mmapped(p)) /* release mmapped memory. */
2942 {
2943 munmap_chunk(p);
2944 return;
2945 }
2946#endif
2947
2948 ar_ptr = arena_for_ptr(p);
8a4b65b4
UD
2949#if THREAD_STATS
2950 if(!mutex_trylock(&ar_ptr->mutex))
2951 ++(ar_ptr->stat_lock_direct);
2952 else {
2953 (void)mutex_lock(&ar_ptr->mutex);
2954 ++(ar_ptr->stat_lock_wait);
2955 }
2956#else
f65fd747 2957 (void)mutex_lock(&ar_ptr->mutex);
8a4b65b4 2958#endif
f65fd747
UD
2959 chunk_free(ar_ptr, p);
2960 (void)mutex_unlock(&ar_ptr->mutex);
2961}
2962
2963static void
dfd2257a 2964internal_function
f65fd747
UD
2965#if __STD_C
2966chunk_free(arena *ar_ptr, mchunkptr p)
2967#else
2968chunk_free(ar_ptr, p) arena *ar_ptr; mchunkptr p;
2969#endif
2970{
2971 INTERNAL_SIZE_T hd = p->size; /* its head field */
2972 INTERNAL_SIZE_T sz; /* its size */
2973 int idx; /* its bin index */
2974 mchunkptr next; /* next contiguous chunk */
2975 INTERNAL_SIZE_T nextsz; /* its size */
2976 INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
2977 mchunkptr bck; /* misc temp for linking */
2978 mchunkptr fwd; /* misc temp for linking */
2979 int islr; /* track whether merging with last_remainder */
2980
2981 check_inuse_chunk(ar_ptr, p);
2982
2983 sz = hd & ~PREV_INUSE;
2984 next = chunk_at_offset(p, sz);
2985 nextsz = chunksize(next);
2986
2987 if (next == top(ar_ptr)) /* merge with top */
2988 {
2989 sz += nextsz;
2990
2991 if (!(hd & PREV_INUSE)) /* consolidate backward */
2992 {
2993 prevsz = p->prev_size;
2994 p = chunk_at_offset(p, -prevsz);
2995 sz += prevsz;
2996 unlink(p, bck, fwd);
2997 }
2998
2999 set_head(p, sz | PREV_INUSE);
3000 top(ar_ptr) = p;
8a4b65b4
UD
3001
3002#ifndef NO_THREADS
3003 if(ar_ptr == &main_arena) {
3004#endif
3005 if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
3006 main_trim(top_pad);
3007#ifndef NO_THREADS
3008 } else {
3009 heap_info *heap = heap_for_ptr(p);
3010
3011 assert(heap->ar_ptr == ar_ptr);
3012
3013 /* Try to get rid of completely empty heaps, if possible. */
3014 if((unsigned long)(sz) >= (unsigned long)trim_threshold ||
3015 p == chunk_at_offset(heap, sizeof(*heap)))
3016 heap_trim(heap, top_pad);
3017 }
3018#endif
f65fd747
UD
3019 return;
3020 }
3021
f65fd747
UD
3022 islr = 0;
3023
3024 if (!(hd & PREV_INUSE)) /* consolidate backward */
3025 {
3026 prevsz = p->prev_size;
3027 p = chunk_at_offset(p, -prevsz);
3028 sz += prevsz;
3029
3030 if (p->fd == last_remainder(ar_ptr)) /* keep as last_remainder */
3031 islr = 1;
3032 else
3033 unlink(p, bck, fwd);
3034 }
3035
3036 if (!(inuse_bit_at_offset(next, nextsz))) /* consolidate forward */
3037 {
3038 sz += nextsz;
3039
3040 if (!islr && next->fd == last_remainder(ar_ptr))
3041 /* re-insert last_remainder */
3042 {
3043 islr = 1;
3044 link_last_remainder(ar_ptr, p);
3045 }
3046 else
3047 unlink(next, bck, fwd);
7799b7b3
UD
3048
3049 next = chunk_at_offset(p, sz);
f65fd747 3050 }
7799b7b3
UD
3051 else
3052 set_head(next, nextsz); /* clear inuse bit */
f65fd747
UD
3053
3054 set_head(p, sz | PREV_INUSE);
7799b7b3 3055 next->prev_size = sz;
f65fd747
UD
3056 if (!islr)
3057 frontlink(ar_ptr, p, sz, idx, bck, fwd);
7799b7b3
UD
3058
3059#ifndef NO_THREADS
3060 /* Check whether the heap containing top can go away now. */
3061 if(next->size < MINSIZE &&
3062 (unsigned long)sz > trim_threshold &&
3063 ar_ptr != &main_arena) { /* fencepost */
431c33c0 3064 heap_info *heap = heap_for_ptr(top(ar_ptr));
7799b7b3
UD
3065
3066 if(top(ar_ptr) == chunk_at_offset(heap, sizeof(*heap)) &&
3067 heap->prev == heap_for_ptr(p))
3068 heap_trim(heap, top_pad);
3069 }
3070#endif
f65fd747
UD
3071}
3072
3073
3074\f
3075
3076
3077/*
3078
3079 Realloc algorithm:
3080
3081 Chunks that were obtained via mmap cannot be extended or shrunk
3082 unless HAVE_MREMAP is defined, in which case mremap is used.
3083 Otherwise, if their reallocation is for additional space, they are
3084 copied. If for less, they are just left alone.
3085
3086 Otherwise, if the reallocation is for additional space, and the
3087 chunk can be extended, it is, else a malloc-copy-free sequence is
3088 taken. There are several different ways that a chunk could be
3089 extended. All are tried:
3090
3091 * Extending forward into following adjacent free chunk.
3092 * Shifting backwards, joining preceding adjacent space
3093 * Both shifting backwards and extending forward.
3094 * Extending into newly sbrked space
3095
3096 Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
3097 size argument of zero (re)allocates a minimum-sized chunk.
3098
3099 If the reallocation is for less space, and the new request is for
3100 a `small' (<512 bytes) size, then the newly unused space is lopped
3101 off and freed.
3102
3103 The old unix realloc convention of allowing the last-free'd chunk
3104 to be used as an argument to realloc is no longer supported.
3105 I don't know of any programs still relying on this feature,
3106 and allowing it would also allow too many other incorrect
3107 usages of realloc to be sensible.
3108
3109
3110*/
3111
3112
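/* When a chunk is reused in place, the `split:' step in chunk_realloc()
   below only gives back the tail if the leftover is large enough to be a
   chunk of its own.  A sketch of that decision (not compiled; MINSIZE
   and the sizes are example values): */
#if 0
#include <assert.h>
#include <stddef.h>

#define EX_MINSIZE 16UL   /* assumed minimum chunk size */

/* Returns the size kept by the reused chunk; *remainder receives the
   amount that would be split off and freed (0 if the slack is kept). */
static size_t
example_split (size_t newsize, size_t nb, size_t *remainder)
{
  if (newsize - nb >= EX_MINSIZE)
    {
      *remainder = newsize - nb;
      return nb;
    }
  *remainder = 0;
  return newsize;
}

int
main (void)
{
  size_t rem;
  assert (example_split (96, 40, &rem) == 40 && rem == 56); /* split    */
  assert (example_split (48, 40, &rem) == 48 && rem == 0);  /* keep all */
  return 0;
}
#endif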
3113#if __STD_C
3114Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
3115#else
3116Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
3117#endif
3118{
3119 arena *ar_ptr;
3120 INTERNAL_SIZE_T nb; /* padded request size */
3121
3122 mchunkptr oldp; /* chunk corresponding to oldmem */
3123 INTERNAL_SIZE_T oldsize; /* its size */
3124
3125 mchunkptr newp; /* chunk to return */
f65fd747 3126
dfd2257a 3127#if defined _LIBC || defined MALLOC_HOOKS
10dc2a90
UD
3128 if (__realloc_hook != NULL) {
3129 Void_t* result;
f65fd747 3130
dfd2257a 3131#if defined __GNUC__ && __GNUC__ >= 2
a2b08ee5
UD
3132 result = (*__realloc_hook)(oldmem, bytes, __builtin_return_address (0));
3133#else
dfd2257a 3134 result = (*__realloc_hook)(oldmem, bytes, NULL);
a2b08ee5 3135#endif
10dc2a90
UD
3136 return result;
3137 }
3138#endif
f65fd747
UD
3139
3140#ifdef REALLOC_ZERO_BYTES_FREES
7c2b945e 3141 if (bytes == 0 && oldmem != NULL) { fREe(oldmem); return 0; }
f65fd747
UD
3142#endif
3143
f65fd747
UD
3144 /* realloc of null is supposed to be same as malloc */
3145 if (oldmem == 0) return mALLOc(bytes);
3146
10dc2a90
UD
3147 oldp = mem2chunk(oldmem);
3148 oldsize = chunksize(oldp);
f65fd747 3149
2e65ca2b
UD
3150 if(request2size(bytes, nb))
3151 return 0;
f65fd747
UD
3152
3153#if HAVE_MMAP
3154 if (chunk_is_mmapped(oldp))
3155 {
10dc2a90
UD
3156 Void_t* newmem;
3157
f65fd747
UD
3158#if HAVE_MREMAP
3159 newp = mremap_chunk(oldp, nb);
3160 if(newp) return chunk2mem(newp);
3161#endif
3162 /* Note the extra SIZE_SZ overhead. */
3163 if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */
3164 /* Must alloc, copy, free. */
3165 newmem = mALLOc(bytes);
3166 if (newmem == 0) return 0; /* propagate failure */
3167 MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
3168 munmap_chunk(oldp);
3169 return newmem;
3170 }
3171#endif
3172
3173 ar_ptr = arena_for_ptr(oldp);
8a4b65b4
UD
3174#if THREAD_STATS
3175 if(!mutex_trylock(&ar_ptr->mutex))
3176 ++(ar_ptr->stat_lock_direct);
3177 else {
3178 (void)mutex_lock(&ar_ptr->mutex);
3179 ++(ar_ptr->stat_lock_wait);
3180 }
3181#else
f65fd747 3182 (void)mutex_lock(&ar_ptr->mutex);
8a4b65b4
UD
3183#endif
3184
1228ed5c 3185#ifndef NO_THREADS
f65fd747
UD
3186 /* As in malloc(), remember this arena for the next allocation. */
3187 tsd_setspecific(arena_key, (Void_t *)ar_ptr);
1228ed5c 3188#endif
f65fd747 3189
10dc2a90
UD
3190 newp = chunk_realloc(ar_ptr, oldp, oldsize, nb);
3191
3192 (void)mutex_unlock(&ar_ptr->mutex);
3193 return newp ? chunk2mem(newp) : NULL;
3194}
3195
3196static mchunkptr
dfd2257a 3197internal_function
10dc2a90
UD
3198#if __STD_C
3199chunk_realloc(arena* ar_ptr, mchunkptr oldp, INTERNAL_SIZE_T oldsize,
7e3be507 3200 INTERNAL_SIZE_T nb)
10dc2a90
UD
3201#else
3202chunk_realloc(ar_ptr, oldp, oldsize, nb)
3203arena* ar_ptr; mchunkptr oldp; INTERNAL_SIZE_T oldsize, nb;
3204#endif
3205{
3206 mchunkptr newp = oldp; /* chunk to return */
3207 INTERNAL_SIZE_T newsize = oldsize; /* its size */
3208
3209 mchunkptr next; /* next contiguous chunk after oldp */
3210 INTERNAL_SIZE_T nextsize; /* its size */
3211
3212 mchunkptr prev; /* previous contiguous chunk before oldp */
3213 INTERNAL_SIZE_T prevsize; /* its size */
3214
3215 mchunkptr remainder; /* holds split off extra space from newp */
3216 INTERNAL_SIZE_T remainder_size; /* its size */
3217
3218 mchunkptr bck; /* misc temp for linking */
3219 mchunkptr fwd; /* misc temp for linking */
3220
f65fd747
UD
3221 check_inuse_chunk(ar_ptr, oldp);
3222
3223 if ((long)(oldsize) < (long)(nb))
3224 {
3225
3226 /* Try expanding forward */
3227
3228 next = chunk_at_offset(oldp, oldsize);
3229 if (next == top(ar_ptr) || !inuse(next))
3230 {
3231 nextsize = chunksize(next);
3232
3233 /* Forward into top only if a remainder */
3234 if (next == top(ar_ptr))
3235 {
3236 if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
3237 {
3238 newsize += nextsize;
3239 top(ar_ptr) = chunk_at_offset(oldp, nb);
3240 set_head(top(ar_ptr), (newsize - nb) | PREV_INUSE);
3241 set_head_size(oldp, nb);
10dc2a90 3242 return oldp;
f65fd747
UD
3243 }
3244 }
3245
3246 /* Forward into next chunk */
3247 else if (((long)(nextsize + newsize) >= (long)(nb)))
3248 {
3249 unlink(next, bck, fwd);
3250 newsize += nextsize;
3251 goto split;
3252 }
3253 }
3254 else
3255 {
3256 next = 0;
3257 nextsize = 0;
3258 }
3259
3260 /* Try shifting backwards. */
3261
3262 if (!prev_inuse(oldp))
3263 {
3264 prev = prev_chunk(oldp);
3265 prevsize = chunksize(prev);
3266
3267 /* try forward + backward first to save a later consolidation */
3268
3269 if (next != 0)
3270 {
3271 /* into top */
3272 if (next == top(ar_ptr))
3273 {
3274 if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
3275 {
3276 unlink(prev, bck, fwd);
3277 newp = prev;
3278 newsize += prevsize + nextsize;
10dc2a90 3279 MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
f65fd747
UD
3280 top(ar_ptr) = chunk_at_offset(newp, nb);
3281 set_head(top(ar_ptr), (newsize - nb) | PREV_INUSE);
3282 set_head_size(newp, nb);
10dc2a90 3283 return newp;
f65fd747
UD
3284 }
3285 }
3286
3287 /* into next chunk */
3288 else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
3289 {
3290 unlink(next, bck, fwd);
3291 unlink(prev, bck, fwd);
3292 newp = prev;
3293 newsize += nextsize + prevsize;
10dc2a90 3294 MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
f65fd747
UD
3295 goto split;
3296 }
3297 }
3298
3299 /* backward only */
3300 if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)
3301 {
3302 unlink(prev, bck, fwd);
3303 newp = prev;
3304 newsize += prevsize;
10dc2a90 3305 MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
f65fd747
UD
3306 goto split;
3307 }
3308 }
3309
3310 /* Must allocate */
3311
3312 newp = chunk_alloc (ar_ptr, nb);
3313
7799b7b3
UD
3314 if (newp == 0) {
3315 /* Maybe the failure is due to running out of mmapped areas. */
3316 if (ar_ptr != &main_arena) {
3317 (void)mutex_lock(&main_arena.mutex);
3318 newp = chunk_alloc(&main_arena, nb);
3319 (void)mutex_unlock(&main_arena.mutex);
3320 }
3321 if (newp == 0) /* propagate failure */
3322 return 0;
3323 }
f65fd747
UD
3324
3325 /* Avoid copy if newp is next chunk after oldp. */
3326 /* (This can only happen when new chunk is sbrk'ed.) */
3327
3328 if ( newp == next_chunk(oldp))
3329 {
3330 newsize += chunksize(newp);
3331 newp = oldp;
3332 goto split;
3333 }
3334
3335 /* Otherwise copy, free, and exit */
10dc2a90 3336 MALLOC_COPY(chunk2mem(newp), chunk2mem(oldp), oldsize - SIZE_SZ);
f65fd747 3337 chunk_free(ar_ptr, oldp);
10dc2a90 3338 return newp;
f65fd747
UD
3339 }
3340
3341
3342 split: /* split off extra room in old or expanded chunk */
3343
3344 if (newsize - nb >= MINSIZE) /* split off remainder */
3345 {
3346 remainder = chunk_at_offset(newp, nb);
3347 remainder_size = newsize - nb;
3348 set_head_size(newp, nb);
3349 set_head(remainder, remainder_size | PREV_INUSE);
3350 set_inuse_bit_at_offset(remainder, remainder_size);
3351 chunk_free(ar_ptr, remainder);
3352 }
3353 else
3354 {
3355 set_head_size(newp, newsize);
3356 set_inuse_bit_at_offset(newp, newsize);
3357 }
3358
3359 check_inuse_chunk(ar_ptr, newp);
10dc2a90 3360 return newp;
f65fd747
UD
3361}
3362
3363
3364\f
3365
3366/*
3367
3368 memalign algorithm:
3369
3370 memalign requests more than enough space from malloc, finds a spot
3371 within that chunk that meets the alignment request, and then
3372 possibly frees the leading and trailing space.
3373
3374 The alignment argument must be a power of two. This property is not
3375 checked by memalign, so misuse may result in random runtime errors.
3376
3377 8-byte alignment is guaranteed by normal malloc calls, so don't
3378 bother calling memalign with an argument of 8 or less.
3379
3380 Overreliance on memalign is a sure way to fragment space.
3381
3382*/
3383
3384
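/* The "find a spot within that chunk" step can be sketched as plain
   address arithmetic (not compiled; the addresses, the 64-byte alignment
   and MINSIZE here are example values).  chunk_align() below does the
   same thing on real chunk pointers. */
#if 0
#include <assert.h>

#define EX_MINSIZE 16UL   /* assumed minimum chunk size */

static unsigned long
example_aligned_spot (unsigned long chunk_addr, unsigned long alignment)
{
  /* Round up to the next multiple of `alignment' ... */
  unsigned long spot = (chunk_addr + alignment - 1) & -alignment;
  /* ... and if that would leave a leader too small to give back as a
     separate chunk, move one whole alignment unit further. */
  if (spot != chunk_addr && spot - chunk_addr < EX_MINSIZE)
    spot += alignment;
  return spot;
}

int
main (void)
{
  assert (example_aligned_spot (0x1000, 64) == 0x1000); /* already aligned */
  assert (example_aligned_spot (0x1038, 64) == 0x1080); /* leader 8 < 16   */
  assert (example_aligned_spot (0x1010, 64) == 0x1040); /* leader 48 >= 16 */
  return 0;
}
#endif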
3385#if __STD_C
3386Void_t* mEMALIGn(size_t alignment, size_t bytes)
3387#else
3388Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
3389#endif
3390{
3391 arena *ar_ptr;
3392 INTERNAL_SIZE_T nb; /* padded request size */
10dc2a90
UD
3393 mchunkptr p;
3394
dfd2257a 3395#if defined _LIBC || defined MALLOC_HOOKS
10dc2a90
UD
3396 if (__memalign_hook != NULL) {
3397 Void_t* result;
3398
dfd2257a 3399#if defined __GNUC__ && __GNUC__ >= 2
a2b08ee5
UD
3400 result = (*__memalign_hook)(alignment, bytes,
3401 __builtin_return_address (0));
3402#else
dfd2257a 3403 result = (*__memalign_hook)(alignment, bytes, NULL);
a2b08ee5 3404#endif
10dc2a90
UD
3405 return result;
3406 }
3407#endif
f65fd747
UD
3408
3409 /* If need less alignment than we give anyway, just relay to malloc */
3410
3411 if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
3412
3413 /* Otherwise, ensure that it is at least a minimum chunk size */
3414
3415 if (alignment < MINSIZE) alignment = MINSIZE;
3416
2e65ca2b
UD
3417 if(request2size(bytes, nb))
3418 return 0;
f65fd747
UD
3419 arena_get(ar_ptr, nb + alignment + MINSIZE);
3420 if(!ar_ptr)
3421 return 0;
10dc2a90
UD
3422 p = chunk_align(ar_ptr, nb, alignment);
3423 (void)mutex_unlock(&ar_ptr->mutex);
7799b7b3
UD
3424 if(!p) {
3425 /* Maybe the failure is due to running out of mmapped areas. */
3426 if(ar_ptr != &main_arena) {
3427 (void)mutex_lock(&main_arena.mutex);
3428 p = chunk_align(&main_arena, nb, alignment);
3429 (void)mutex_unlock(&main_arena.mutex);
3430 }
3431 if(!p) return 0;
3432 }
3433 return chunk2mem(p);
10dc2a90 3434}
f65fd747 3435
10dc2a90 3436static mchunkptr
dfd2257a 3437internal_function
10dc2a90
UD
3438#if __STD_C
3439chunk_align(arena* ar_ptr, INTERNAL_SIZE_T nb, size_t alignment)
3440#else
3441chunk_align(ar_ptr, nb, alignment)
3442arena* ar_ptr; INTERNAL_SIZE_T nb; size_t alignment;
3443#endif
3444{
3445 char* m; /* memory returned by malloc call */
3446 mchunkptr p; /* corresponding chunk */
3447 char* brk; /* alignment point within p */
3448 mchunkptr newp; /* chunk to return */
3449 INTERNAL_SIZE_T newsize; /* its size */
3450  INTERNAL_SIZE_T leadsize; /* leading space before alignment point */
3451 mchunkptr remainder; /* spare room at end to split off */
3452 long remainder_size; /* its size */
3453
3454 /* Call chunk_alloc with worst case padding to hit alignment. */
3455 p = chunk_alloc(ar_ptr, nb + alignment + MINSIZE);
3456 if (p == 0)
f65fd747 3457 return 0; /* propagate failure */
f65fd747
UD
3458
3459 m = chunk2mem(p);
3460
3461 if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
3462 {
3463#if HAVE_MMAP
3464 if(chunk_is_mmapped(p)) {
10dc2a90 3465 return p; /* nothing more to do */
f65fd747
UD
3466 }
3467#endif
3468 }
3469 else /* misaligned */
3470 {
3471 /*
3472 Find an aligned spot inside chunk.
3473 Since we need to give back leading space in a chunk of at
3474 least MINSIZE, if the first calculation places us at
3475 a spot with less than MINSIZE leader, we can move to the
3476 next aligned spot -- we've allocated enough total room so that
3477 this is always possible.
3478 */
3479
3480 brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -alignment);
10dc2a90 3481 if ((long)(brk - (char*)(p)) < (long)MINSIZE) brk += alignment;
f65fd747
UD
3482
3483 newp = (mchunkptr)brk;
3484 leadsize = brk - (char*)(p);
3485 newsize = chunksize(p) - leadsize;
3486
3487#if HAVE_MMAP
3488 if(chunk_is_mmapped(p))
3489 {
3490 newp->prev_size = p->prev_size + leadsize;
3491 set_head(newp, newsize|IS_MMAPPED);
10dc2a90 3492 return newp;
f65fd747
UD
3493 }
3494#endif
3495
3496 /* give back leader, use the rest */
3497
3498 set_head(newp, newsize | PREV_INUSE);
3499 set_inuse_bit_at_offset(newp, newsize);
3500 set_head_size(p, leadsize);
3501 chunk_free(ar_ptr, p);
3502 p = newp;
3503
3504 assert (newsize>=nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
3505 }
3506
3507 /* Also give back spare room at the end */
3508
3509 remainder_size = chunksize(p) - nb;
3510
3511 if (remainder_size >= (long)MINSIZE)
3512 {
3513 remainder = chunk_at_offset(p, nb);
3514 set_head(remainder, remainder_size | PREV_INUSE);
3515 set_head_size(p, nb);
3516 chunk_free(ar_ptr, remainder);
3517 }
3518
3519 check_inuse_chunk(ar_ptr, p);
10dc2a90 3520 return p;
f65fd747
UD
3521}
3522
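/*
  Worked example of the alignment step above (example numbers): with
  alignment = 256 and a user pointer m = 0x12345, ((m + 255) & -256)
  yields 0x12400, the next 256-byte boundary; m is left unchanged when it
  already lies on such a boundary.  If the resulting leading gap is
  smaller than MINSIZE, the code advances by one more alignment unit; the
  extra "alignment + MINSIZE" padding requested from chunk_alloc()
  guarantees that this still falls within the allocated chunk.
*/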
3523\f
3524
3525
3526/*
3527 valloc just invokes memalign with alignment argument equal
3528 to the page size of the system (or as near to this as can
3529 be figured out from all the includes/defines above.)
3530*/
3531
3532#if __STD_C
3533Void_t* vALLOc(size_t bytes)
3534#else
3535Void_t* vALLOc(bytes) size_t bytes;
3536#endif
3537{
3538 return mEMALIGn (malloc_getpagesize, bytes);
3539}
3540
3541/*
3542  pvalloc behaves like valloc, but rounds the request up to the
3543  nearest multiple of the page size
3544*/
3545
3546
3547#if __STD_C
3548Void_t* pvALLOc(size_t bytes)
3549#else
3550Void_t* pvALLOc(bytes) size_t bytes;
3551#endif
3552{
3553 size_t pagesize = malloc_getpagesize;
3554 return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
3555}
3556
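/*
  Illustrative sketch -- example values, never compiled as part of this
  file.  Assuming a 4096-byte page, valloc(100) returns a page-aligned
  block of at least 100 bytes, while pvalloc(100) first rounds the
  request up to a whole page.
*/
#if 0
#include <malloc.h>   /* valloc, pvalloc */
#include <stdlib.h>   /* free */

static void page_alloc_example(void)
{
  void *a = valloc(100);    /* page-aligned, >= 100 usable bytes */
  void *b = pvalloc(100);   /* page-aligned, request rounded up to 4096 */
  free(a);
  free(b);
}
#endif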
3557/*
3558
10dc2a90 3559 calloc calls chunk_alloc, then zeroes out the allocated chunk.
f65fd747
UD
3560
3561*/
3562
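/*
  Rough illustrative equivalent of the hook path below, never compiled as
  part of this file (it ignores the shortcuts taken for mmap()ed and
  freshly sbrk()ed memory, which is already zeroed):
*/
#if 0
#include <stdlib.h>   /* malloc */
#include <string.h>   /* memset */

static void *calloc_sketch(size_t n, size_t elem_size)
{
  size_t sz = n * elem_size;
  void *mem = malloc(sz);
  return mem != NULL ? memset(mem, 0, sz) : NULL;
}
#endif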
3563#if __STD_C
3564Void_t* cALLOc(size_t n, size_t elem_size)
3565#else
3566Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size;
3567#endif
3568{
3569 arena *ar_ptr;
3570 mchunkptr p, oldtop;
10dc2a90 3571 INTERNAL_SIZE_T sz, csz, oldtopsize;
f65fd747
UD
3572 Void_t* mem;
3573
dfd2257a 3574#if defined _LIBC || defined MALLOC_HOOKS
10dc2a90
UD
3575 if (__malloc_hook != NULL) {
3576 sz = n * elem_size;
dfd2257a 3577#if defined __GNUC__ && __GNUC__ >= 2
a2b08ee5
UD
3578 mem = (*__malloc_hook)(sz, __builtin_return_address (0));
3579#else
dfd2257a 3580 mem = (*__malloc_hook)(sz, NULL);
a2b08ee5 3581#endif
831372e7
UD
3582 if(mem == 0)
3583 return 0;
a2b08ee5 3584#ifdef HAVE_MEMSET
c131718c 3585 return memset(mem, 0, sz);
10dc2a90 3586#else
831372e7 3587 while(sz > 0) ((char*)mem)[--sz] = 0; /* rather inefficient */
10dc2a90 3588 return mem;
c131718c 3589#endif
10dc2a90
UD
3590 }
3591#endif
f65fd747 3592
2e65ca2b
UD
3593 if(request2size(n * elem_size, sz))
3594 return 0;
f65fd747
UD
3595 arena_get(ar_ptr, sz);
3596 if(!ar_ptr)
3597 return 0;
3598
3599  /* check whether expand_top was called, in which case we don't need to clear */
3600#if MORECORE_CLEARS
3601 oldtop = top(ar_ptr);
3602 oldtopsize = chunksize(top(ar_ptr));
3603#endif
3604 p = chunk_alloc (ar_ptr, sz);
3605
3606 /* Only clearing follows, so we can unlock early. */
3607 (void)mutex_unlock(&ar_ptr->mutex);
3608
7799b7b3
UD
3609 if (p == 0) {
3610 /* Maybe the failure is due to running out of mmapped areas. */
3611 if(ar_ptr != &main_arena) {
3612 (void)mutex_lock(&main_arena.mutex);
3613 p = chunk_alloc(&main_arena, sz);
3614 (void)mutex_unlock(&main_arena.mutex);
3615 }
3616 if (p == 0) return 0;
3617 }
3618 mem = chunk2mem(p);
f65fd747 3619
7799b7b3 3620  /* Two optional cases in which clearing is not necessary */
f65fd747
UD
3621
3622#if HAVE_MMAP
7799b7b3 3623 if (chunk_is_mmapped(p)) return mem;
f65fd747
UD
3624#endif
3625
7799b7b3 3626 csz = chunksize(p);
f65fd747
UD
3627
3628#if MORECORE_CLEARS
7799b7b3
UD
3629 if (p == oldtop && csz > oldtopsize) {
3630 /* clear only the bytes from non-freshly-sbrked memory */
3631 csz = oldtopsize;
3632 }
f65fd747
UD
3633#endif
3634
7799b7b3
UD
3635 MALLOC_ZERO(mem, csz - SIZE_SZ);
3636 return mem;
f65fd747
UD
3637}
3638
3639/*
3640
3641 cfree just calls free. It is needed/defined on some systems
3642 that pair it with calloc, presumably for odd historical reasons.
3643
3644*/
3645
3646#if !defined(_LIBC)
3647#if __STD_C
3648void cfree(Void_t *mem)
3649#else
3650void cfree(mem) Void_t *mem;
3651#endif
3652{
3653 free(mem);
3654}
3655#endif
3656
3657\f
3658
3659/*
3660
3661 Malloc_trim gives memory back to the system (via negative
3662 arguments to sbrk) if there is unused memory at the `high' end of
3663 the malloc pool. You can call this after freeing large blocks of
3664 memory to potentially reduce the system-level memory requirements
3665 of a program. However, it cannot guarantee to reduce memory. Under
3666 some allocation patterns, some large free blocks of memory will be
3667 locked between two used chunks, so they cannot be given back to
3668 the system.
3669
3670 The `pad' argument to malloc_trim represents the amount of free
3671 trailing space to leave untrimmed. If this argument is zero,
3672 only the minimum amount of memory to maintain internal data
3673 structures will be left (one page or less). Non-zero arguments
3674 can be supplied to maintain enough trailing space to service
3675 future expected allocations without having to re-obtain memory
3676 from the system.
3677
3678 Malloc_trim returns 1 if it actually released any memory, else 0.
3679
3680*/
3681
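/*
  Illustrative sketch -- example values, never compiled as part of this
  file: release unused trailing memory after freeing a large block while
  keeping 128K of slack for future requests.
*/
#if 0
#include <malloc.h>   /* malloc_trim */
#include <stdlib.h>   /* malloc, free */

static void trim_example(void)
{
  void *big = malloc(8 * 1024 * 1024);
  /* ... use big ... */
  free(big);
  if (malloc_trim(128 * 1024))
    ; /* returned 1: some memory was given back to the system */
}
#endif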
3682#if __STD_C
7e3be507 3683int mALLOC_TRIm(size_t pad)
f65fd747 3684#else
7e3be507 3685int mALLOC_TRIm(pad) size_t pad;
f65fd747
UD
3686#endif
3687{
3688 int res;
3689
3690 (void)mutex_lock(&main_arena.mutex);
8a4b65b4 3691 res = main_trim(pad);
f65fd747
UD
3692 (void)mutex_unlock(&main_arena.mutex);
3693 return res;
3694}
3695
8a4b65b4
UD
3696/* Trim the main arena. */
3697
f65fd747 3698static int
dfd2257a 3699internal_function
f65fd747 3700#if __STD_C
8a4b65b4 3701main_trim(size_t pad)
f65fd747 3702#else
8a4b65b4 3703main_trim(pad) size_t pad;
f65fd747
UD
3704#endif
3705{
3706 mchunkptr top_chunk; /* The current top chunk */
3707 long top_size; /* Amount of top-most memory */
3708 long extra; /* Amount to release */
3709 char* current_brk; /* address returned by pre-check sbrk call */
3710 char* new_brk; /* address returned by negative sbrk call */
3711
3712 unsigned long pagesz = malloc_getpagesize;
3713
8a4b65b4 3714 top_chunk = top(&main_arena);
f65fd747
UD
3715 top_size = chunksize(top_chunk);
3716 extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
3717
3718 if (extra < (long)pagesz) /* Not enough memory to release */
3719 return 0;
3720
8a4b65b4
UD
3721 /* Test to make sure no one else called sbrk */
3722 current_brk = (char*)(MORECORE (0));
3723 if (current_brk != (char*)(top_chunk) + top_size)
3724 return 0; /* Apparently we don't own memory; must fail */
f65fd747 3725
8a4b65b4 3726 new_brk = (char*)(MORECORE (-extra));
f65fd747 3727
dfd2257a 3728#if defined _LIBC || defined MALLOC_HOOKS
1228ed5c
UD
3729 /* Call the `morecore' hook if necessary. */
3730 if (__after_morecore_hook)
3731 (*__after_morecore_hook) ();
7799b7b3 3732#endif
1228ed5c 3733
8a4b65b4
UD
3734 if (new_brk == (char*)(MORECORE_FAILURE)) { /* sbrk failed? */
3735 /* Try to figure out what we have */
3736 current_brk = (char*)(MORECORE (0));
3737 top_size = current_brk - (char*)top_chunk;
3738 if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
3739 {
3740 sbrked_mem = current_brk - sbrk_base;
3741 set_head(top_chunk, top_size | PREV_INUSE);
f65fd747 3742 }
8a4b65b4
UD
3743 check_chunk(&main_arena, top_chunk);
3744 return 0;
3745 }
3746 sbrked_mem -= extra;
3747
3748 /* Success. Adjust top accordingly. */
3749 set_head(top_chunk, (top_size - extra) | PREV_INUSE);
3750 check_chunk(&main_arena, top_chunk);
3751 return 1;
3752}
f65fd747
UD
3753
3754#ifndef NO_THREADS
8a4b65b4
UD
3755
3756static int
dfd2257a 3757internal_function
8a4b65b4
UD
3758#if __STD_C
3759heap_trim(heap_info *heap, size_t pad)
3760#else
3761heap_trim(heap, pad) heap_info *heap; size_t pad;
f65fd747 3762#endif
8a4b65b4
UD
3763{
3764 unsigned long pagesz = malloc_getpagesize;
3765 arena *ar_ptr = heap->ar_ptr;
3766 mchunkptr top_chunk = top(ar_ptr), p, bck, fwd;
3767 heap_info *prev_heap;
3768 long new_size, top_size, extra;
3769
3770 /* Can this heap go away completely ? */
3771 while(top_chunk == chunk_at_offset(heap, sizeof(*heap))) {
3772 prev_heap = heap->prev;
3773 p = chunk_at_offset(prev_heap, prev_heap->size - (MINSIZE-2*SIZE_SZ));
3774 assert(p->size == (0|PREV_INUSE)); /* must be fencepost */
3775 p = prev_chunk(p);
3776 new_size = chunksize(p) + (MINSIZE-2*SIZE_SZ);
10dc2a90 3777 assert(new_size>0 && new_size<(long)(2*MINSIZE));
8a4b65b4
UD
3778 if(!prev_inuse(p))
3779 new_size += p->prev_size;
3780 assert(new_size>0 && new_size<HEAP_MAX_SIZE);
3781 if(new_size + (HEAP_MAX_SIZE - prev_heap->size) < pad + MINSIZE + pagesz)
3782 break;
3783 ar_ptr->size -= heap->size;
3784 delete_heap(heap);
3785 heap = prev_heap;
3786 if(!prev_inuse(p)) { /* consolidate backward */
3787 p = prev_chunk(p);
3788 unlink(p, bck, fwd);
3789 }
3790 assert(((unsigned long)((char*)p + new_size) & (pagesz-1)) == 0);
3791 assert( ((char*)p + new_size) == ((char*)heap + heap->size) );
3792 top(ar_ptr) = top_chunk = p;
3793 set_head(top_chunk, new_size | PREV_INUSE);
3794 check_chunk(ar_ptr, top_chunk);
3795 }
3796 top_size = chunksize(top_chunk);
3797 extra = ((top_size - pad - MINSIZE + (pagesz-1))/pagesz - 1) * pagesz;
3798 if(extra < (long)pagesz)
3799 return 0;
3800 /* Try to shrink. */
3801 if(grow_heap(heap, -extra) != 0)
3802 return 0;
3803 ar_ptr->size -= extra;
f65fd747
UD
3804
3805 /* Success. Adjust top accordingly. */
3806 set_head(top_chunk, (top_size - extra) | PREV_INUSE);
3807 check_chunk(ar_ptr, top_chunk);
3808 return 1;
3809}
3810
8a4b65b4
UD
3811#endif
3812
f65fd747
UD
3813\f
3814
3815/*
3816 malloc_usable_size:
3817
3818 This routine tells you how many bytes you can actually use in an
3819 allocated chunk, which may be more than you requested (although
3820 often not). You can use this many bytes without worrying about
3821 overwriting other allocated objects. Not a particularly great
3822 programming practice, but still sometimes useful.
3823
3824*/
3825
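/*
  Illustrative sketch, never compiled as part of this file: the reported
  size is at least the requested size, usually slightly more.
*/
#if 0
#include <malloc.h>   /* malloc_usable_size */
#include <stdio.h>
#include <stdlib.h>

static void usable_size_example(void)
{
  void *p = malloc(100);
  if (p != NULL) {
    printf("usable bytes: %lu\n", (unsigned long)malloc_usable_size(p));
    free(p);
  }
}
#endif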
3826#if __STD_C
7e3be507 3827size_t mALLOC_USABLE_SIZe(Void_t* mem)
f65fd747 3828#else
7e3be507 3829size_t mALLOC_USABLE_SIZe(mem) Void_t* mem;
f65fd747
UD
3830#endif
3831{
3832 mchunkptr p;
3833
3834 if (mem == 0)
3835 return 0;
3836 else
3837 {
3838 p = mem2chunk(mem);
3839 if(!chunk_is_mmapped(p))
3840 {
3841 if (!inuse(p)) return 0;
3842 check_inuse_chunk(arena_for_ptr(mem), p);
3843 return chunksize(p) - SIZE_SZ;
3844 }
3845 return chunksize(p) - 2*SIZE_SZ;
3846 }
3847}
3848
3849
3850\f
3851
8a4b65b4 3852/* Utility to update mallinfo for malloc_stats() and mallinfo() */
f65fd747 3853
8a4b65b4
UD
3854static void
3855#if __STD_C
3856malloc_update_mallinfo(arena *ar_ptr, struct mallinfo *mi)
3857#else
3858malloc_update_mallinfo(ar_ptr, mi) arena *ar_ptr; struct mallinfo *mi;
3859#endif
f65fd747 3860{
f65fd747
UD
3861 int i, navail;
3862 mbinptr b;
3863 mchunkptr p;
3864#if MALLOC_DEBUG
3865 mchunkptr q;
3866#endif
3867 INTERNAL_SIZE_T avail;
3868
3869 (void)mutex_lock(&ar_ptr->mutex);
3870 avail = chunksize(top(ar_ptr));
3871 navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;
3872
3873 for (i = 1; i < NAV; ++i)
3874 {
3875 b = bin_at(ar_ptr, i);
3876 for (p = last(b); p != b; p = p->bk)
3877 {
3878#if MALLOC_DEBUG
3879 check_free_chunk(ar_ptr, p);
3880 for (q = next_chunk(p);
8a4b65b4 3881 q != top(ar_ptr) && inuse(q) && (long)chunksize(q) > 0;
f65fd747
UD
3882 q = next_chunk(q))
3883 check_inuse_chunk(ar_ptr, q);
3884#endif
3885 avail += chunksize(p);
3886 navail++;
3887 }
3888 }
3889
8a4b65b4
UD
3890 mi->arena = ar_ptr->size;
3891 mi->ordblks = navail;
b22fc5f5 3892 mi->smblks = mi->usmblks = mi->fsmblks = 0; /* clear unused fields */
8a4b65b4
UD
3893 mi->uordblks = ar_ptr->size - avail;
3894 mi->fordblks = avail;
3895 mi->hblks = n_mmaps;
3896 mi->hblkhd = mmapped_mem;
3897 mi->keepcost = chunksize(top(ar_ptr));
f65fd747
UD
3898
3899 (void)mutex_unlock(&ar_ptr->mutex);
3900}
3901
8a4b65b4
UD
3902#if !defined(NO_THREADS) && MALLOC_DEBUG > 1
3903
3904/* Print the complete contents of a single heap to stderr. */
3905
3906static void
3907#if __STD_C
3908dump_heap(heap_info *heap)
3909#else
3910dump_heap(heap) heap_info *heap;
3911#endif
3912{
3913 char *ptr;
3914 mchunkptr p;
3915
3916 fprintf(stderr, "Heap %p, size %10lx:\n", heap, (long)heap->size);
3917 ptr = (heap->ar_ptr != (arena*)(heap+1)) ?
3918 (char*)(heap + 1) : (char*)(heap + 1) + sizeof(arena);
3919 p = (mchunkptr)(((unsigned long)ptr + MALLOC_ALIGN_MASK) &
3920 ~MALLOC_ALIGN_MASK);
3921 for(;;) {
3922 fprintf(stderr, "chunk %p size %10lx", p, (long)p->size);
3923 if(p == top(heap->ar_ptr)) {
3924 fprintf(stderr, " (top)\n");
3925 break;
3926 } else if(p->size == (0|PREV_INUSE)) {
3927 fprintf(stderr, " (fence)\n");
3928 break;
3929 }
3930 fprintf(stderr, "\n");
3931 p = next_chunk(p);
3932 }
3933}
3934
3935#endif
3936
f65fd747
UD
3937\f
3938
3939/*
3940
3941 malloc_stats:
3942
6d52618b 3943 For all arenas separately and in total, prints on stderr the
8a4b65b4 3944 amount of space obtained from the system, and the current number
f65fd747
UD
3945 of bytes allocated via malloc (or realloc, etc) but not yet
3946 freed. (Note that this is the number of bytes allocated, not the
3947 number requested. It will be larger than the number requested
8a4b65b4
UD
3948 because of alignment and bookkeeping overhead.) When not compiled
3949 for multiple threads, the maximum amount of allocated memory
3950 (which may be more than current if malloc_trim and/or munmap got
3951 called) is also reported. When using mmap(), prints the maximum
3952 number of simultaneous mmap regions used, too.
f65fd747
UD
3953
3954*/
3955
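/*
  Illustrative sketch, never compiled as part of this file: malloc_stats()
  takes no arguments and writes its per-arena report and totals to stderr.
*/
#if 0
#include <malloc.h>

static void stats_example(void)
{
  malloc_stats();
}
#endif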
7e3be507 3956void mALLOC_STATs()
f65fd747 3957{
8a4b65b4
UD
3958 int i;
3959 arena *ar_ptr;
3960 struct mallinfo mi;
3961 unsigned int in_use_b = mmapped_mem, system_b = in_use_b;
3962#if THREAD_STATS
3963 long stat_lock_direct = 0, stat_lock_loop = 0, stat_lock_wait = 0;
3964#endif
3965
7e3be507 3966 for(i=0, ar_ptr = &main_arena;; i++) {
8a4b65b4
UD
3967 malloc_update_mallinfo(ar_ptr, &mi);
3968 fprintf(stderr, "Arena %d:\n", i);
3969 fprintf(stderr, "system bytes = %10u\n", (unsigned int)mi.arena);
3970 fprintf(stderr, "in use bytes = %10u\n", (unsigned int)mi.uordblks);
3971 system_b += mi.arena;
3972 in_use_b += mi.uordblks;
3973#if THREAD_STATS
3974 stat_lock_direct += ar_ptr->stat_lock_direct;
3975 stat_lock_loop += ar_ptr->stat_lock_loop;
3976 stat_lock_wait += ar_ptr->stat_lock_wait;
3977#endif
3978#if !defined(NO_THREADS) && MALLOC_DEBUG > 1
3979 if(ar_ptr != &main_arena) {
c131718c 3980 heap_info *heap;
7e3be507 3981 (void)mutex_lock(&ar_ptr->mutex);
c131718c 3982 heap = heap_for_ptr(top(ar_ptr));
8a4b65b4 3983 while(heap) { dump_heap(heap); heap = heap->prev; }
7e3be507 3984 (void)mutex_unlock(&ar_ptr->mutex);
8a4b65b4
UD
3985 }
3986#endif
7e3be507
UD
3987 ar_ptr = ar_ptr->next;
3988 if(ar_ptr == &main_arena) break;
8a4b65b4 3989 }
7799b7b3 3990#if HAVE_MMAP
8a4b65b4 3991 fprintf(stderr, "Total (incl. mmap):\n");
7799b7b3
UD
3992#else
3993 fprintf(stderr, "Total:\n");
3994#endif
8a4b65b4
UD
3995 fprintf(stderr, "system bytes = %10u\n", system_b);
3996 fprintf(stderr, "in use bytes = %10u\n", in_use_b);
3997#ifdef NO_THREADS
3998 fprintf(stderr, "max system bytes = %10u\n", (unsigned int)max_total_mem);
3999#endif
f65fd747 4000#if HAVE_MMAP
8a4b65b4 4001 fprintf(stderr, "max mmap regions = %10u\n", (unsigned int)max_n_mmaps);
7799b7b3 4002 fprintf(stderr, "max mmap bytes = %10lu\n", max_mmapped_mem);
f65fd747
UD
4003#endif
4004#if THREAD_STATS
8a4b65b4 4005 fprintf(stderr, "heaps created = %10d\n", stat_n_heaps);
f65fd747
UD
4006 fprintf(stderr, "locked directly = %10ld\n", stat_lock_direct);
4007 fprintf(stderr, "locked in loop = %10ld\n", stat_lock_loop);
8a4b65b4
UD
4008 fprintf(stderr, "locked waiting = %10ld\n", stat_lock_wait);
4009 fprintf(stderr, "locked total = %10ld\n",
4010 stat_lock_direct + stat_lock_loop + stat_lock_wait);
f65fd747
UD
4011#endif
4012}
4013
4014/*
4015 mallinfo returns a copy of updated current mallinfo.
8a4b65b4 4016 The information reported is for the arena last used by the thread.
f65fd747
UD
4017*/
4018
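/*
  Illustrative sketch, never compiled as part of this file; the fields
  read here are the ones filled in by malloc_update_mallinfo() above.
*/
#if 0
#include <malloc.h>
#include <stdio.h>

static void mallinfo_example(void)
{
  struct mallinfo mi = mallinfo();
  printf("obtained from system: %ld\n", (long)mi.arena);
  printf("in use:               %ld\n", (long)mi.uordblks);
  printf("free:                 %ld\n", (long)mi.fordblks);
  printf("top chunk (keepcost): %ld\n", (long)mi.keepcost);
}
#endif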
4019struct mallinfo mALLINFo()
4020{
8a4b65b4
UD
4021 struct mallinfo mi;
4022 Void_t *vptr = NULL;
4023
1228ed5c 4024#ifndef NO_THREADS
8a4b65b4 4025 tsd_getspecific(arena_key, vptr);
1228ed5c 4026#endif
8a4b65b4
UD
4027 malloc_update_mallinfo((vptr ? (arena*)vptr : &main_arena), &mi);
4028 return mi;
f65fd747
UD
4029}
4030
4031
4032\f
4033
4034/*
4035 mallopt:
4036
4037 mallopt is the general SVID/XPG interface to tunable parameters.
4038 The format is to provide a (parameter-number, parameter-value) pair.
4039 mallopt then sets the corresponding parameter to the argument
4040 value if it can (i.e., so long as the value is meaningful),
4041 and returns 1 if successful else 0.
4042
4043 See descriptions of tunable parameters above.
4044
4045*/
4046
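/*
  Illustrative sketch -- example values, never compiled as part of this
  file; see the descriptions of the tunable parameters earlier in this
  file.  mallopt() returns 1 on success and 0 on failure.
*/
#if 0
#include <malloc.h>

static void mallopt_example(void)
{
  mallopt(M_TRIM_THRESHOLD, 256 * 1024);  /* trim when 256K is free at the top */
  mallopt(M_MMAP_THRESHOLD, 512 * 1024);  /* serve requests >= 512K via mmap() */
  mallopt(M_MMAP_MAX, 0);                 /* disable use of mmap() entirely */
}
#endif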
4047#if __STD_C
4048int mALLOPt(int param_number, int value)
4049#else
4050int mALLOPt(param_number, value) int param_number; int value;
4051#endif
4052{
4053 switch(param_number)
4054 {
4055 case M_TRIM_THRESHOLD:
4056 trim_threshold = value; return 1;
4057 case M_TOP_PAD:
4058 top_pad = value; return 1;
4059 case M_MMAP_THRESHOLD:
4060#ifndef NO_THREADS
4061 /* Forbid setting the threshold too high. */
4062 if((unsigned long)value > HEAP_MAX_SIZE/2) return 0;
4063#endif
4064 mmap_threshold = value; return 1;
4065 case M_MMAP_MAX:
4066#if HAVE_MMAP
4067 n_mmaps_max = value; return 1;
4068#else
4069 if (value != 0) return 0; else n_mmaps_max = value; return 1;
4070#endif
10dc2a90
UD
4071 case M_CHECK_ACTION:
4072 check_action = value; return 1;
f65fd747
UD
4073
4074 default:
4075 return 0;
4076 }
4077}
10dc2a90 4078
10dc2a90
UD
4079\f
4080
2f6d1f1b
UD
4081/* Get/set state: malloc_get_state() records the current state of all
4082 malloc variables (_except_ for the actual heap contents and `hook'
4083 function pointers) in a system dependent, opaque data structure.
4084 This data structure is dynamically allocated and can be free()d
4085 after use. malloc_set_state() restores the state of all malloc
4086 variables to the previously obtained state. This is especially
4087 useful when using this malloc as part of a shared library, and when
4088 the heap contents are saved/restored via some other method. The
4089 primary example for this is GNU Emacs with its `dumping' procedure.
4090 `Hook' function pointers are never saved or restored by these
b3864d70
UD
4091 functions, with two exceptions: If malloc checking was in use when
4092 malloc_get_state() was called, then malloc_set_state() calls
4093 __malloc_check_init() if possible; if malloc checking was not in
4094 use in the recorded state but the user requested malloc checking,
4095 then the hooks are reset to 0. */
2f6d1f1b
UD
4096
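/*
  Illustrative sketch of the save/restore cycle described above, never
  compiled as part of this file (the surrounding dump/undump machinery,
  as in Emacs, is assumed and not shown):
*/
#if 0
#include <malloc.h>   /* malloc_get_state, malloc_set_state */
#include <stdlib.h>   /* free */

static void *saved_state;

static void save_state_example(void)
{
  saved_state = malloc_get_state();          /* opaque, malloc()ed blob */
}

static void restore_state_example(void)
{
  if (saved_state != NULL && malloc_set_state(saved_state) == 0)
    free(saved_state);                       /* the blob can be freed after use */
}
#endif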
4097#define MALLOC_STATE_MAGIC 0x444c4541l
9a51759b 4098#define MALLOC_STATE_VERSION (0*0x100l + 1l) /* major*0x100 + minor */
2f6d1f1b
UD
4099
4100struct malloc_state {
4101 long magic;
4102 long version;
4103 mbinptr av[NAV * 2 + 2];
4104 char* sbrk_base;
4105 int sbrked_mem_bytes;
4106 unsigned long trim_threshold;
4107 unsigned long top_pad;
4108 unsigned int n_mmaps_max;
4109 unsigned long mmap_threshold;
4110 int check_action;
4111 unsigned long max_sbrked_mem;
4112 unsigned long max_total_mem;
4113 unsigned int n_mmaps;
4114 unsigned int max_n_mmaps;
4115 unsigned long mmapped_mem;
4116 unsigned long max_mmapped_mem;
9a51759b 4117 int using_malloc_checking;
2f6d1f1b
UD
4118};
4119
4120Void_t*
4121mALLOC_GET_STATe()
4122{
2f6d1f1b
UD
4123 struct malloc_state* ms;
4124 int i;
4125 mbinptr b;
4126
9a51759b
UD
4127 ms = (struct malloc_state*)mALLOc(sizeof(*ms));
4128 if (!ms)
2f6d1f1b 4129 return 0;
9a51759b 4130 (void)mutex_lock(&main_arena.mutex);
2f6d1f1b
UD
4131 ms->magic = MALLOC_STATE_MAGIC;
4132 ms->version = MALLOC_STATE_VERSION;
4133 ms->av[0] = main_arena.av[0];
4134 ms->av[1] = main_arena.av[1];
4135 for(i=0; i<NAV; i++) {
4136 b = bin_at(&main_arena, i);
4137 if(first(b) == b)
4138 ms->av[2*i+2] = ms->av[2*i+3] = 0; /* empty bin (or initial top) */
4139 else {
4140 ms->av[2*i+2] = first(b);
4141 ms->av[2*i+3] = last(b);
4142 }
4143 }
4144 ms->sbrk_base = sbrk_base;
4145 ms->sbrked_mem_bytes = sbrked_mem;
4146 ms->trim_threshold = trim_threshold;
4147 ms->top_pad = top_pad;
4148 ms->n_mmaps_max = n_mmaps_max;
4149 ms->mmap_threshold = mmap_threshold;
4150 ms->check_action = check_action;
4151 ms->max_sbrked_mem = max_sbrked_mem;
4152#ifdef NO_THREADS
4153 ms->max_total_mem = max_total_mem;
4154#else
4155 ms->max_total_mem = 0;
4156#endif
4157 ms->n_mmaps = n_mmaps;
4158 ms->max_n_mmaps = max_n_mmaps;
4159 ms->mmapped_mem = mmapped_mem;
4160 ms->max_mmapped_mem = max_mmapped_mem;
9a51759b
UD
4161#if defined _LIBC || defined MALLOC_HOOKS
4162 ms->using_malloc_checking = using_malloc_checking;
4163#else
4164 ms->using_malloc_checking = 0;
4165#endif
2f6d1f1b
UD
4166 (void)mutex_unlock(&main_arena.mutex);
4167 return (Void_t*)ms;
4168}
4169
4170int
4171#if __STD_C
4172mALLOC_SET_STATe(Void_t* msptr)
4173#else
4174mALLOC_SET_STATe(msptr) Void_t* msptr;
4175#endif
4176{
4177 struct malloc_state* ms = (struct malloc_state*)msptr;
4178 int i;
4179 mbinptr b;
4180
9a51759b
UD
4181#if defined _LIBC || defined MALLOC_HOOKS
4182 disallow_malloc_check = 1;
4183#endif
2f6d1f1b
UD
4184 ptmalloc_init();
4185 if(ms->magic != MALLOC_STATE_MAGIC) return -1;
4186 /* Must fail if the major version is too high. */
4187 if((ms->version & ~0xffl) > (MALLOC_STATE_VERSION & ~0xffl)) return -2;
4188 (void)mutex_lock(&main_arena.mutex);
4189 main_arena.av[0] = ms->av[0];
4190 main_arena.av[1] = ms->av[1];
4191 for(i=0; i<NAV; i++) {
4192 b = bin_at(&main_arena, i);
4193 if(ms->av[2*i+2] == 0)
4194 first(b) = last(b) = b;
4195 else {
4196 first(b) = ms->av[2*i+2];
4197 last(b) = ms->av[2*i+3];
4198 if(i > 0) {
4199 /* Make sure the links to the `av'-bins in the heap are correct. */
4200 first(b)->bk = b;
4201 last(b)->fd = b;
4202 }
4203 }
4204 }
4205 sbrk_base = ms->sbrk_base;
4206 sbrked_mem = ms->sbrked_mem_bytes;
4207 trim_threshold = ms->trim_threshold;
4208 top_pad = ms->top_pad;
4209 n_mmaps_max = ms->n_mmaps_max;
4210 mmap_threshold = ms->mmap_threshold;
4211 check_action = ms->check_action;
4212 max_sbrked_mem = ms->max_sbrked_mem;
4213#ifdef NO_THREADS
4214 max_total_mem = ms->max_total_mem;
4215#endif
4216 n_mmaps = ms->n_mmaps;
4217 max_n_mmaps = ms->max_n_mmaps;
4218 mmapped_mem = ms->mmapped_mem;
4219 max_mmapped_mem = ms->max_mmapped_mem;
4220 /* add version-dependent code here */
9a51759b
UD
4221 if (ms->version >= 1) {
4222#if defined _LIBC || defined MALLOC_HOOKS
b3864d70
UD
4223 /* Check whether it is safe to enable malloc checking, or whether
4224 it is necessary to disable it. */
9a51759b
UD
4225 if (ms->using_malloc_checking && !using_malloc_checking &&
4226 !disallow_malloc_check)
4227 __malloc_check_init ();
b3864d70
UD
4228 else if (!ms->using_malloc_checking && using_malloc_checking) {
4229 __malloc_hook = 0;
4230 __free_hook = 0;
4231 __realloc_hook = 0;
4232 __memalign_hook = 0;
4233 using_malloc_checking = 0;
4234 }
9a51759b
UD
4235#endif
4236 }
4237
2f6d1f1b
UD
4238 (void)mutex_unlock(&main_arena.mutex);
4239 return 0;
4240}
4241
4242\f
4243
dfd2257a 4244#if defined _LIBC || defined MALLOC_HOOKS
10dc2a90
UD
4245
4246/* A simple, standard set of debugging hooks. Overhead is `only' one
4247 byte per chunk; still this will catch most cases of double frees or
b22fc5f5
UD
4248 overruns. The goal here is to avoid obscure crashes due to invalid
4249 usage, unlike in the MALLOC_DEBUG code. */
10dc2a90 4250
c0e45674 4251#define MAGICBYTE(p) ( ( ((size_t)p >> 3) ^ ((size_t)p >> 11)) & 0xFF )
f65fd747 4252
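/*
  Worked example (assuming 4-byte SIZE_SZ): for a 10-byte request served
  from a 16-byte chunk, chunk2mem_check() below leaves bytes 0..9 of the
  user area for the application, stores MAGICBYTE(p) at byte 10 and the
  distance value 1 at byte 11 (the last usable byte).  mem2chunk_check()
  walks backwards from that last byte, following the stored distances,
  until it finds the magic byte; an application write past byte 9
  clobbers the magic byte, so the next free()/realloc() of the block is
  rejected as invalid.
*/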
b22fc5f5
UD
4253/* Instrument a chunk with overrun detector byte(s) and convert it
4254 into a user pointer with requested size sz. */
4255
4256static Void_t*
431c33c0 4257internal_function
b22fc5f5
UD
4258#if __STD_C
4259chunk2mem_check(mchunkptr p, size_t sz)
4260#else
4261chunk2mem_check(p, sz) mchunkptr p; size_t sz;
4262#endif
4263{
4264 unsigned char* m_ptr = (unsigned char*)chunk2mem(p);
4265 size_t i;
4266
4267 for(i = chunksize(p) - (chunk_is_mmapped(p) ? 2*SIZE_SZ+1 : SIZE_SZ+1);
4268 i > sz;
4269 i -= 0xFF) {
4270 if(i-sz < 0x100) {
4271 m_ptr[i] = (unsigned char)(i-sz);
4272 break;
4273 }
4274 m_ptr[i] = 0xFF;
4275 }
4276 m_ptr[sz] = MAGICBYTE(p);
4277 return (Void_t*)m_ptr;
4278}
4279
10dc2a90 4280/* Convert a pointer to be free()d or realloc()ed to a valid chunk
b22fc5f5 4281 pointer. If the provided pointer is not valid, return NULL. */
10dc2a90
UD
4282
4283static mchunkptr
dfd2257a 4284internal_function
10dc2a90
UD
4285#if __STD_C
4286mem2chunk_check(Void_t* mem)
4287#else
4288mem2chunk_check(mem) Void_t* mem;
f65fd747 4289#endif
10dc2a90
UD
4290{
4291 mchunkptr p;
b22fc5f5
UD
4292 INTERNAL_SIZE_T sz, c;
4293 unsigned char magic;
10dc2a90
UD
4294
4295 p = mem2chunk(mem);
4296 if(!aligned_OK(p)) return NULL;
4297 if( (char*)p>=sbrk_base && (char*)p<(sbrk_base+sbrked_mem) ) {
7e3be507 4298 /* Must be a chunk in conventional heap memory. */
10dc2a90
UD
4299 if(chunk_is_mmapped(p) ||
4300 ( (sz = chunksize(p)), ((char*)p + sz)>=(sbrk_base+sbrked_mem) ) ||
7e3be507
UD
4301 sz<MINSIZE || sz&MALLOC_ALIGN_MASK || !inuse(p) ||
4302 ( !prev_inuse(p) && (p->prev_size&MALLOC_ALIGN_MASK ||
4303 (long)prev_chunk(p)<(long)sbrk_base ||
4304 next_chunk(prev_chunk(p))!=p) ))
4305 return NULL;
b22fc5f5
UD
4306 magic = MAGICBYTE(p);
4307 for(sz += SIZE_SZ-1; (c = ((unsigned char*)p)[sz]) != magic; sz -= c) {
4308 if(c<=0 || sz<(c+2*SIZE_SZ)) return NULL;
4309 }
4310 ((unsigned char*)p)[sz] ^= 0xFF;
10dc2a90
UD
4311 } else {
4312 unsigned long offset, page_mask = malloc_getpagesize-1;
4313
7e3be507 4314 /* mmap()ed chunks have MALLOC_ALIGNMENT or higher power-of-two
10dc2a90
UD
4315 alignment relative to the beginning of a page. Check this
4316 first. */
4317 offset = (unsigned long)mem & page_mask;
4318 if((offset!=MALLOC_ALIGNMENT && offset!=0 && offset!=0x10 &&
7e3be507
UD
4319 offset!=0x20 && offset!=0x40 && offset!=0x80 && offset!=0x100 &&
4320 offset!=0x200 && offset!=0x400 && offset!=0x800 && offset!=0x1000 &&
4321 offset<0x2000) ||
10dc2a90
UD
4322 !chunk_is_mmapped(p) || (p->size & PREV_INUSE) ||
4323 ( (((unsigned long)p - p->prev_size) & page_mask) != 0 ) ||
4324 ( (sz = chunksize(p)), ((p->prev_size + sz) & page_mask) != 0 ) )
4325 return NULL;
b22fc5f5
UD
4326 magic = MAGICBYTE(p);
4327 for(sz -= 1; (c = ((unsigned char*)p)[sz]) != magic; sz -= c) {
4328 if(c<=0 || sz<(c+2*SIZE_SZ)) return NULL;
4329 }
4330 ((unsigned char*)p)[sz] ^= 0xFF;
10dc2a90
UD
4331 }
4332 return p;
4333}
4334
b22fc5f5
UD
4335/* Check for corruption of the top chunk, and try to recover if
4336 necessary. */
4337
ae9b308c 4338static int
431c33c0 4339internal_function
ae9b308c
UD
4340#if __STD_C
4341top_check(void)
4342#else
b22fc5f5 4343top_check()
ae9b308c 4344#endif
b22fc5f5
UD
4345{
4346 mchunkptr t = top(&main_arena);
4347 char* brk, * new_brk;
4348 INTERNAL_SIZE_T front_misalign, sbrk_size;
4349 unsigned long pagesz = malloc_getpagesize;
4350
4351 if((char*)t + chunksize(t) == sbrk_base + sbrked_mem ||
4352 t == initial_top(&main_arena)) return 0;
4353
b3864d70 4354 if(check_action & 1)
b22fc5f5 4355 fprintf(stderr, "malloc: top chunk is corrupt\n");
b3864d70 4356 if(check_action & 2)
b22fc5f5 4357 abort();
b3864d70 4358
b22fc5f5
UD
4359 /* Try to set up a new top chunk. */
4360 brk = MORECORE(0);
4361 front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
4362 if (front_misalign > 0)
4363 front_misalign = MALLOC_ALIGNMENT - front_misalign;
4364 sbrk_size = front_misalign + top_pad + MINSIZE;
4365 sbrk_size += pagesz - ((unsigned long)(brk + sbrk_size) & (pagesz - 1));
4366 new_brk = (char*)(MORECORE (sbrk_size));
4367 if (new_brk == (char*)(MORECORE_FAILURE)) return -1;
4368 sbrked_mem = (new_brk - sbrk_base) + sbrk_size;
4369
4370 top(&main_arena) = (mchunkptr)(brk + front_misalign);
4371 set_head(top(&main_arena), (sbrk_size - front_misalign) | PREV_INUSE);
4372
4373 return 0;
4374}
4375
10dc2a90
UD
4376static Void_t*
4377#if __STD_C
dfd2257a 4378malloc_check(size_t sz, const Void_t *caller)
10dc2a90 4379#else
dfd2257a 4380malloc_check(sz, caller) size_t sz; const Void_t *caller;
a2b08ee5 4381#endif
10dc2a90
UD
4382{
4383 mchunkptr victim;
2e65ca2b 4384 INTERNAL_SIZE_T nb;
10dc2a90 4385
2e65ca2b
UD
4386 if(request2size(sz+1, nb))
4387 return 0;
10dc2a90 4388 (void)mutex_lock(&main_arena.mutex);
b22fc5f5 4389 victim = (top_check() >= 0) ? chunk_alloc(&main_arena, nb) : NULL;
10dc2a90
UD
4390 (void)mutex_unlock(&main_arena.mutex);
4391 if(!victim) return NULL;
b22fc5f5 4392 return chunk2mem_check(victim, sz);
10dc2a90
UD
4393}
4394
4395static void
4396#if __STD_C
dfd2257a 4397free_check(Void_t* mem, const Void_t *caller)
10dc2a90 4398#else
dfd2257a 4399free_check(mem, caller) Void_t* mem; const Void_t *caller;
a2b08ee5 4400#endif
10dc2a90
UD
4401{
4402 mchunkptr p;
4403
4404 if(!mem) return;
7e3be507 4405 (void)mutex_lock(&main_arena.mutex);
10dc2a90
UD
4406 p = mem2chunk_check(mem);
4407 if(!p) {
7e3be507 4408 (void)mutex_unlock(&main_arena.mutex);
b3864d70 4409 if(check_action & 1)
d6765f1d 4410 fprintf(stderr, "free(): invalid pointer %p!\n", mem);
b3864d70 4411 if(check_action & 2)
10dc2a90 4412 abort();
10dc2a90
UD
4413 return;
4414 }
4415#if HAVE_MMAP
4416 if (chunk_is_mmapped(p)) {
7e3be507 4417 (void)mutex_unlock(&main_arena.mutex);
10dc2a90
UD
4418 munmap_chunk(p);
4419 return;
4420 }
4421#endif
7e3be507
UD
4422#if 0 /* Erase freed memory. */
4423 memset(mem, 0, chunksize(p) - (SIZE_SZ+1));
4424#endif
10dc2a90
UD
4425 chunk_free(&main_arena, p);
4426 (void)mutex_unlock(&main_arena.mutex);
4427}
4428
4429static Void_t*
4430#if __STD_C
dfd2257a 4431realloc_check(Void_t* oldmem, size_t bytes, const Void_t *caller)
10dc2a90 4432#else
dfd2257a
UD
4433realloc_check(oldmem, bytes, caller)
4434 Void_t* oldmem; size_t bytes; const Void_t *caller;
a2b08ee5 4435#endif
10dc2a90
UD
4436{
4437 mchunkptr oldp, newp;
4438 INTERNAL_SIZE_T nb, oldsize;
4439
a2b08ee5 4440 if (oldmem == 0) return malloc_check(bytes, NULL);
7e3be507 4441 (void)mutex_lock(&main_arena.mutex);
10dc2a90
UD
4442 oldp = mem2chunk_check(oldmem);
4443 if(!oldp) {
7e3be507 4444 (void)mutex_unlock(&main_arena.mutex);
d957279c 4445 if (check_action & 1)
d6765f1d 4446 fprintf(stderr, "realloc(): invalid pointer %p!\n", oldmem);
d957279c 4447 if (check_action & 2)
10dc2a90 4448 abort();
a2b08ee5 4449 return malloc_check(bytes, NULL);
10dc2a90
UD
4450 }
4451 oldsize = chunksize(oldp);
4452
2e65ca2b
UD
4453 if(request2size(bytes+1, nb)) {
4454 (void)mutex_unlock(&main_arena.mutex);
4455 return 0;
4456 }
10dc2a90 4457
10dc2a90
UD
4458#if HAVE_MMAP
4459 if (chunk_is_mmapped(oldp)) {
4460#if HAVE_MREMAP
4461 newp = mremap_chunk(oldp, nb);
4462 if(!newp) {
4463#endif
4464 /* Note the extra SIZE_SZ overhead. */
4465 if(oldsize - SIZE_SZ >= nb) newp = oldp; /* do nothing */
4466 else {
7e3be507 4467 /* Must alloc, copy, free. */
b22fc5f5 4468 newp = (top_check() >= 0) ? chunk_alloc(&main_arena, nb) : NULL;
7e3be507
UD
4469 if (newp) {
4470 MALLOC_COPY(chunk2mem(newp), oldmem, oldsize - 2*SIZE_SZ);
4471 munmap_chunk(oldp);
4472 }
10dc2a90
UD
4473 }
4474#if HAVE_MREMAP
4475 }
4476#endif
7e3be507 4477 } else {
10dc2a90 4478#endif /* HAVE_MMAP */
b22fc5f5
UD
4479 newp = (top_check() >= 0) ?
4480 chunk_realloc(&main_arena, oldp, oldsize, nb) : NULL;
7e3be507
UD
4481#if 0 /* Erase freed memory. */
4482 nb = chunksize(newp);
4483 if(oldp<newp || oldp>=chunk_at_offset(newp, nb)) {
4484 memset((char*)oldmem + 2*sizeof(mbinptr), 0,
4485 oldsize - (2*sizeof(mbinptr)+2*SIZE_SZ+1));
4486 } else if(nb > oldsize+SIZE_SZ) {
4487 memset((char*)chunk2mem(newp) + oldsize, 0, nb - (oldsize+SIZE_SZ));
4488 }
4489#endif
4490#if HAVE_MMAP
4491 }
4492#endif
10dc2a90
UD
4493 (void)mutex_unlock(&main_arena.mutex);
4494
4495 if(!newp) return NULL;
b22fc5f5 4496 return chunk2mem_check(newp, bytes);
10dc2a90
UD
4497}
4498
4499static Void_t*
4500#if __STD_C
dfd2257a 4501memalign_check(size_t alignment, size_t bytes, const Void_t *caller)
10dc2a90 4502#else
dfd2257a
UD
4503memalign_check(alignment, bytes, caller)
4504 size_t alignment; size_t bytes; const Void_t *caller;
a2b08ee5 4505#endif
10dc2a90
UD
4506{
4507 INTERNAL_SIZE_T nb;
4508 mchunkptr p;
4509
a2b08ee5 4510 if (alignment <= MALLOC_ALIGNMENT) return malloc_check(bytes, NULL);
10dc2a90
UD
4511 if (alignment < MINSIZE) alignment = MINSIZE;
4512
2e65ca2b
UD
4513 if(request2size(bytes+1, nb))
4514 return 0;
10dc2a90 4515 (void)mutex_lock(&main_arena.mutex);
b22fc5f5 4516 p = (top_check() >= 0) ? chunk_align(&main_arena, nb, alignment) : NULL;
10dc2a90
UD
4517 (void)mutex_unlock(&main_arena.mutex);
4518 if(!p) return NULL;
b22fc5f5 4519 return chunk2mem_check(p, bytes);
10dc2a90
UD
4520}
4521
ee74a442
UD
4522#ifndef NO_THREADS
4523
7e3be507
UD
4524/* The following hooks are used when the global initialization in
4525 ptmalloc_init() hasn't completed yet. */
4526
4527static Void_t*
4528#if __STD_C
dfd2257a 4529malloc_starter(size_t sz, const Void_t *caller)
7e3be507 4530#else
dfd2257a 4531malloc_starter(sz, caller) size_t sz; const Void_t *caller;
a2b08ee5 4532#endif
7e3be507 4533{
2e65ca2b
UD
4534 INTERNAL_SIZE_T nb;
4535 mchunkptr victim;
4536
4537 if(request2size(sz, nb))
4538 return 0;
4539 victim = chunk_alloc(&main_arena, nb);
7e3be507
UD
4540
4541 return victim ? chunk2mem(victim) : 0;
4542}
4543
4544static void
4545#if __STD_C
dfd2257a 4546free_starter(Void_t* mem, const Void_t *caller)
7e3be507 4547#else
dfd2257a 4548free_starter(mem, caller) Void_t* mem; const Void_t *caller;
a2b08ee5 4549#endif
7e3be507
UD
4550{
4551 mchunkptr p;
4552
4553 if(!mem) return;
4554 p = mem2chunk(mem);
4555#if HAVE_MMAP
4556 if (chunk_is_mmapped(p)) {
4557 munmap_chunk(p);
4558 return;
4559 }
4560#endif
4561 chunk_free(&main_arena, p);
4562}
4563
ca34d7a7
UD
4564/* The following hooks are used while the `atfork' handling mechanism
4565 is active. */
4566
4567static Void_t*
4568#if __STD_C
dfd2257a 4569malloc_atfork (size_t sz, const Void_t *caller)
ca34d7a7 4570#else
dfd2257a 4571malloc_atfork(sz, caller) size_t sz; const Void_t *caller;
a2b08ee5 4572#endif
ca34d7a7
UD
4573{
4574 Void_t *vptr = NULL;
2e65ca2b 4575 INTERNAL_SIZE_T nb;
431c33c0 4576 mchunkptr victim;
ca34d7a7
UD
4577
4578 tsd_getspecific(arena_key, vptr);
4579 if(!vptr) {
431c33c0 4580 if(save_malloc_hook != malloc_check) {
2e65ca2b
UD
4581 if(request2size(sz, nb))
4582 return 0;
4583 victim = chunk_alloc(&main_arena, nb);
431c33c0
UD
4584 return victim ? chunk2mem(victim) : 0;
4585 } else {
2e65ca2b
UD
4586 if(top_check()<0 || request2size(sz+1, nb))
4587 return 0;
4588 victim = chunk_alloc(&main_arena, nb);
431c33c0
UD
4589 return victim ? chunk2mem_check(victim, sz) : 0;
4590 }
ca34d7a7
UD
4591 } else {
4592 /* Suspend the thread until the `atfork' handlers have completed.
4593 By that time, the hooks will have been reset as well, so that
4594 mALLOc() can be used again. */
4595 (void)mutex_lock(&list_lock);
4596 (void)mutex_unlock(&list_lock);
4597 return mALLOc(sz);
4598 }
4599}
4600
4601static void
4602#if __STD_C
dfd2257a 4603free_atfork(Void_t* mem, const Void_t *caller)
ca34d7a7 4604#else
dfd2257a 4605free_atfork(mem, caller) Void_t* mem; const Void_t *caller;
a2b08ee5 4606#endif
ca34d7a7
UD
4607{
4608 Void_t *vptr = NULL;
4609 arena *ar_ptr;
4610 mchunkptr p; /* chunk corresponding to mem */
4611
4612 if (mem == 0) /* free(0) has no effect */
4613 return;
4614
431c33c0 4615 p = mem2chunk(mem); /* do not bother to replicate free_check here */
ca34d7a7
UD
4616
4617#if HAVE_MMAP
4618 if (chunk_is_mmapped(p)) /* release mmapped memory. */
4619 {
4620 munmap_chunk(p);
4621 return;
4622 }
4623#endif
4624
4625 ar_ptr = arena_for_ptr(p);
4626 tsd_getspecific(arena_key, vptr);
4627 if(vptr)
4628 (void)mutex_lock(&ar_ptr->mutex);
4629 chunk_free(ar_ptr, p);
4630 if(vptr)
4631 (void)mutex_unlock(&ar_ptr->mutex);
4632}
4633
431c33c0 4634#endif /* !defined NO_THREADS */
ee74a442 4635
dfd2257a 4636#endif /* defined _LIBC || defined MALLOC_HOOKS */
f65fd747 4637
7e3be507
UD
4638\f
4639
4640#ifdef _LIBC
4641weak_alias (__libc_calloc, __calloc) weak_alias (__libc_calloc, calloc)
4642weak_alias (__libc_free, __cfree) weak_alias (__libc_free, cfree)
4643weak_alias (__libc_free, __free) weak_alias (__libc_free, free)
4644weak_alias (__libc_malloc, __malloc) weak_alias (__libc_malloc, malloc)
4645weak_alias (__libc_memalign, __memalign) weak_alias (__libc_memalign, memalign)
4646weak_alias (__libc_realloc, __realloc) weak_alias (__libc_realloc, realloc)
4647weak_alias (__libc_valloc, __valloc) weak_alias (__libc_valloc, valloc)
4648weak_alias (__libc_pvalloc, __pvalloc) weak_alias (__libc_pvalloc, pvalloc)
4649weak_alias (__libc_mallinfo, __mallinfo) weak_alias (__libc_mallinfo, mallinfo)
4650weak_alias (__libc_mallopt, __mallopt) weak_alias (__libc_mallopt, mallopt)
4651
4652weak_alias (__malloc_stats, malloc_stats)
4653weak_alias (__malloc_usable_size, malloc_usable_size)
4654weak_alias (__malloc_trim, malloc_trim)
2f6d1f1b
UD
4655weak_alias (__malloc_get_state, malloc_get_state)
4656weak_alias (__malloc_set_state, malloc_set_state)
7e3be507
UD
4657#endif
4658
f65fd747
UD
4659/*
4660
4661History:
4662
2f6d1f1b
UD
4663 V2.6.4-pt3 Thu Feb 20 1997 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4664 * Added malloc_get/set_state() (mainly for use in GNU emacs),
4665 using interface from Marcus Daniels
4666 * All parameters are now adjustable via environment variables
4667
10dc2a90
UD
4668 V2.6.4-pt2 Sat Dec 14 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4669 * Added debugging hooks
4670 * Fixed possible deadlock in realloc() when out of memory
4671 * Don't pollute namespace in glibc: use __getpagesize, __mmap, etc.
4672
f65fd747
UD
4673 V2.6.4-pt Wed Dec 4 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4674 * Very minor updates from the released 2.6.4 version.
4675 * Trimmed include file down to exported data structures.
4676 * Changes from H.J. Lu for glibc-2.0.
4677
4678 V2.6.3i-pt Sep 16 1996 Wolfram Gloger (wmglo@dent.med.uni-muenchen.de)
4679 * Many changes for multiple threads
4680 * Introduced arenas and heaps
4681
4682 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
4683 * Added pvalloc, as recommended by H.J. Liu
4684 * Added 64bit pointer support mainly from Wolfram Gloger
4685 * Added anonymously donated WIN32 sbrk emulation
4686 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
4687 * malloc_extend_top: fix mask error that caused wastage after
4688 foreign sbrks
4689 * Add linux mremap support code from HJ Liu
4690
4691 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
4692 * Integrated most documentation with the code.
4693 * Add support for mmap, with help from
4694 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
4695 * Use last_remainder in more cases.
4696 * Pack bins using idea from colin@nyx10.cs.du.edu
6d52618b 4697 * Use ordered bins instead of best-fit threshold
f65fd747
UD
4698 * Eliminate block-local decls to simplify tracing and debugging.
4699 * Support another case of realloc via move into top
6d52618b 4700 * Fix error occurring when initial sbrk_base not word-aligned.
f65fd747
UD
4701 * Rely on page size for units instead of SBRK_UNIT to
4702 avoid surprises about sbrk alignment conventions.
4703 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
4704 (raymond@es.ele.tue.nl) for the suggestion.
4705 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
4706 * More precautions for cases where other routines call sbrk,
4707 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
4708 * Added macros etc., allowing use in linux libc from
4709 H.J. Lu (hjl@gnu.ai.mit.edu)
4710 * Inverted this history list
4711
4712 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
4713 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
4714 * Removed all preallocation code since under current scheme
4715 the work required to undo bad preallocations exceeds
4716 the work saved in good cases for most test programs.
4717 * No longer use return list or unconsolidated bins since
4718 no scheme using them consistently outperforms those that don't
4719 given above changes.
4720 * Use best fit for very large chunks to prevent some worst-cases.
4721 * Added some support for debugging
4722
4723 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
4724 * Removed footers when chunks are in use. Thanks to
4725 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
4726
4727 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
4728 * Added malloc_trim, with help from Wolfram Gloger
4729 (wmglo@Dent.MED.Uni-Muenchen.DE).
4730
4731 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
4732
4733 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
4734 * realloc: try to expand in both directions
4735 * malloc: swap order of clean-bin strategy;
4736 * realloc: only conditionally expand backwards
4737 * Try not to scavenge used bins
4738 * Use bin counts as a guide to preallocation
4739 * Occasionally bin return list chunks in first scan
4740 * Add a few optimizations from colin@nyx10.cs.du.edu
4741
4742 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
4743 * faster bin computation & slightly different binning
4744 * merged all consolidations to one part of malloc proper
4745 (eliminating old malloc_find_space & malloc_clean_bin)
4746 * Scan 2 returns chunks (not just 1)
4747 * Propagate failure in realloc if malloc returns 0
4748 * Add stuff to allow compilation on non-ANSI compilers
4749 from kpv@research.att.com
4750
4751 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
4752 * removed potential for odd address access in prev_chunk
4753 * removed dependency on getpagesize.h
4754 * misc cosmetics and a bit more internal documentation
4755 * anticosmetics: mangled names in macros to evade debugger strangeness
4756 * tested on sparc, hp-700, dec-mips, rs6000
4757 with gcc & native cc (hp, dec only) allowing
4758 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
4759
4760 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
4761 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
4762 structure of old version, but most details differ.)
4763
4764*/