malloc.c

00001 /*
00002   This is a version (aka dlmalloc) of malloc/free/realloc written by
00003   Doug Lea and released to the public domain, as explained at
00004   http://creativecommons.org/licenses/publicdomain.  Send questions,
00005   comments, complaints, performance data, etc to dl@cs.oswego.edu
00006 
00007 * Version 2.8.3 Thu Sep 22 11:16:15 2005  Doug Lea  (dl at gee)
00008 
00009    Note: There may be an updated version of this malloc obtainable at
00010            ftp://gee.cs.oswego.edu/pub/misc/malloc.c
00011          Check before installing!
00012 
00013 * Quickstart
00014 
00015   This library is all in one file to simplify the most common usage:
00016   ftp it, compile it (-O3), and link it into another program. All of
00017   the compile-time options default to reasonable values for use on
00018   most platforms.  You might later want to step through various
00019   compile-time and dynamic tuning options.
00020 
00021   For convenience, an include file for code using this malloc is at:
00022      ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
00023   You don't really need this .h file unless you call functions not
00024   defined in your system include files.  The .h file contains only the
00025   excerpts from this file needed for using this malloc on ANSI C/C++
00026   systems, so long as you haven't changed compile-time options about
00027   naming and tuning parameters.  If you do, then you can create your
00028   own malloc.h that does include all settings by cutting at the point
00029   indicated below. Note that, by default, you may already be using a C
00030   library containing a malloc that is based on some version of this
00031   malloc (for example in linux). You might still want to use the one
00032   in this file to customize settings or to avoid overheads associated
00033   with library versions.
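
  As a concrete illustration (the compiler invocation and option values here
  are only examples; any of the documented compile-time options below can be
  substituted), a typical build might look like:

    cc -O3 -DUSE_LOCKS=1 -DFOOTERS=1 -c malloc.c
    cc -o myprog myprog.o malloc.o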
00034 
00035 * Vital statistics:
00036 
00037   Supported pointer/size_t representation:       4 or 8 bytes
00038        size_t MUST be an unsigned type of the same width as
00039        pointers. (If you are using an ancient system that declares
00040        size_t as a signed type, or need it to be a different width
00041        than pointers, you can use a previous release of this malloc
00042        (e.g. 2.7.2) supporting these.)
00043 
00044   Alignment:                                     8 bytes (default)
00045        This suffices for nearly all current machines and C compilers.
00046        However, you can define MALLOC_ALIGNMENT to be wider than this
00047        if necessary (up to 128 bytes), at the expense of using more space.
00048 
00049   Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
00050                                           8 or 16 bytes (if 8byte sizes)
00051        Each malloced chunk has a hidden word of overhead holding size
00052        and status information, and additional cross-check word
00053        if FOOTERS is defined.
00054 
00055   Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
00056                           8-byte ptrs:  32 bytes    (including overhead)
00057 
00058        Even a request for zero bytes (i.e., malloc(0)) returns a
00059        pointer to something of the minimum allocatable size.
00060        The maximum overhead wastage (i.e., the number of extra bytes
00061        allocated beyond what was requested in malloc) is less than or equal
00062        to the minimum size, except for requests >= mmap_threshold that
00063        are serviced via mmap(), where the worst case wastage is about
00064        32 bytes plus the remainder from a system page (the minimal
00065        mmap unit); typically 4096 or 8192 bytes.
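
       For example (illustrative, assuming 4-byte pointers and the default
       configuration): both malloc(1) and malloc(12) consume one 16-byte
       chunk, since 12 bytes of payload plus the 4-byte overhead word fill
       the minimum chunk exactly; the 15 bytes of wastage in the malloc(1)
       case stay within the stated bound of the minimum size.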
00066 
00067   Security: static-safe; optionally more or less
00068        The "security" of malloc refers to the ability of malicious
00069        code to accentuate the effects of errors (for example, freeing
00070        space that is not currently malloc'ed or overwriting past the
00071        ends of chunks) in code that calls malloc.  This malloc
00072        guarantees not to modify any memory locations below the base of
00073        heap, i.e., static variables, even in the presence of usage
00074        errors.  The routines additionally detect most improper frees
00075        and reallocs.  All this holds as long as the static bookkeeping
00076        for malloc itself is not corrupted by some other means.  This
00077        is only one aspect of security -- these checks do not, and
00078        cannot, detect all possible programming errors.
00079 
00080        If FOOTERS is defined nonzero, then each allocated chunk
00081        carries an additional check word to verify that it was malloced
00082        from its space.  These check words are the same within each
00083        execution of a program using malloc, but differ across
00084        executions, so externally crafted fake chunks cannot be
00085        freed. This improves security by rejecting frees/reallocs that
00086        could corrupt heap memory, in addition to the checks preventing
00087        writes to statics that are always on.  This may further improve
00088        security at the expense of time and space overhead.  (Note that
00089        FOOTERS may also be worth using with MSPACES.)
00090 
00091        By default detected errors cause the program to abort (calling
00092        "abort()"). You can override this to instead proceed past
00093        errors by defining PROCEED_ON_ERROR.  In this case, a bad free
00094        has no effect, and a malloc that encounters a bad address
00095        caused by user overwrites will ignore the bad address by
00096        dropping pointers and indices to all known memory. This may
00097        be appropriate for programs that should continue if at all
00098        possible in the face of programming errors, although they may
00099        run out of memory because dropped memory is never reclaimed.
00100 
00101        If you don't like either of these options, you can define
00102        CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
00103   else. And if you are sure that your program using malloc has
00104        no errors or vulnerabilities, you can define INSECURE to 1,
00105        which might (or might not) provide a small performance improvement.
00106 
00107   Thread-safety: NOT thread-safe unless USE_LOCKS defined
00108        When USE_LOCKS is defined, each public call to malloc, free,
00109        etc is surrounded with either a pthread mutex or a win32
00110        spinlock (depending on WIN32). This is not especially fast, and
00111        can be a major bottleneck.  It is designed only to provide
00112        minimal protection in concurrent environments, and to provide a
00113        basis for extensions.  If you are using malloc in a concurrent
00114        program, consider instead using ptmalloc, which is derived from
00115        a version of this malloc. (See http://www.malloc.de).
00116 
00117   System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
00118        This malloc can use unix sbrk or any emulation (invoked using
00119        the CALL_MORECORE macro) and/or mmap/munmap or any emulation
00120        (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
00121        memory.  On most unix systems, it tends to work best if both
00122        MORECORE and MMAP are enabled.  On Win32, it uses emulations
00123        based on VirtualAlloc. It also uses common C library functions
00124        like memset.
00125 
00126   Compliance: I believe it is compliant with the Single Unix Specification
00127        (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
00128        others as well.
00129 
00130 * Overview of algorithms
00131 
00132   This is not the fastest, most space-conserving, most portable, or
00133   most tunable malloc ever written. However it is among the fastest
00134   while also being among the most space-conserving, portable and
00135   tunable.  Consistent balance across these factors results in a good
00136   general-purpose allocator for malloc-intensive programs.
00137 
00138   In most ways, this malloc is a best-fit allocator. Generally, it
00139   chooses the best-fitting existing chunk for a request, with ties
00140   broken in approximately least-recently-used order. (This strategy
00141   normally maintains low fragmentation.) However, for requests less
00142   than 256 bytes, it deviates from best-fit when there is not an
00143   exactly fitting available chunk by preferring to use space adjacent
00144   to that used for the previous small request, as well as by breaking
00145   ties in approximately most-recently-used order. (These enhance
00146   locality of series of small allocations.)  And for very large requests
00147   (>= 256Kb by default), it relies on system memory mapping
00148   facilities, if supported.  (This helps avoid carrying around and
00149   possibly fragmenting memory used only for large chunks.)
00150 
00151   All operations (except malloc_stats and mallinfo) have execution
00152   times that are bounded by a constant factor of the number of bits in
00153   a size_t, not counting any clearing in calloc or copying in realloc,
00154   or actions surrounding MORECORE and MMAP that have times
00155   proportional to the number of non-contiguous regions returned by
00156   system allocation routines, which is often just 1.
00157 
00158   The implementation is not very modular and seriously overuses
00159   macros. Perhaps someday all C compilers will do as good a job
00160   inlining modular code as can now be done by brute-force expansion,
00161   but for now, enough of them seem not to.
00162 
00163   Some compilers issue a lot of warnings about code that is
00164   dead/unreachable only on some platforms, and also about intentional
00165   uses of negation on unsigned types. All known cases of each can be
00166   ignored.
00167 
00168   For a longer but out of date high-level description, see
00169      http://gee.cs.oswego.edu/dl/html/malloc.html
00170 
00171 * MSPACES
00172   If MSPACES is defined, then in addition to malloc, free, etc.,
00173   this file also defines mspace_malloc, mspace_free, etc. These
00174   are versions of malloc routines that take an "mspace" argument
00175   obtained using create_mspace, to control all internal bookkeeping.
00176   If ONLY_MSPACES is defined, only these versions are compiled.
00177   So if you would like to use this allocator for only some allocations,
00178   and your system malloc for others, you can compile with
00179   ONLY_MSPACES and then do something like...
00180     static mspace mymspace = create_mspace(0,0); // for example
00181     #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)
00182 
00183   (Note: If you only need one instance of an mspace, you can instead
00184   use "USE_DL_PREFIX" to relabel the global malloc.)
00185 
00186   You can similarly create thread-local allocators by storing
00187   mspaces as thread-locals. For example:
00188     static __thread mspace tlms = 0;
00189     void*  tlmalloc(size_t bytes) {
00190       if (tlms == 0) tlms = create_mspace(0, 0);
00191       return mspace_malloc(tlms, bytes);
00192     }
00193     void  tlfree(void* mem) { mspace_free(tlms, mem); }
00194 
00195   Unless FOOTERS is defined, each mspace is completely independent.
00196   You cannot allocate from one and free to another (although
00197   conformance is only weakly checked, so usage errors are not always
00198   caught). If FOOTERS is defined, then each chunk carries around a tag
00199   indicating its originating mspace, and frees are directed to their
00200   originating spaces.
00201 
00202  -------------------------  Compile-time options ---------------------------
00203 
00204 Be careful in setting #define values for numerical constants of type
00205 size_t. On some systems, literal values are not automatically extended
00206 to size_t precision unless they are explicitly cast.
00207 
00208 WIN32                    default: defined if _WIN32 defined
00209   Defining WIN32 sets up defaults for MS environment and compilers.
00210   Otherwise defaults are for unix.
00211 
00212 MALLOC_ALIGNMENT         default: (size_t)8
00213   Controls the minimum alignment for malloc'ed chunks.  It must be a
00214   power of two and at least 8, even on machines for which smaller
00215   alignments would suffice. It may be defined as larger than this
00216   though. Note however that code and data structures are optimized for
00217   the case of 8-byte alignment.
00218 
00219 MSPACES                  default: 0 (false)
00220   If true, compile in support for independent allocation spaces.
00221   This is only supported if HAVE_MMAP is true.
00222 
00223 ONLY_MSPACES             default: 0 (false)
00224   If true, only compile in mspace versions, not regular versions.
00225 
00226 USE_LOCKS                default: 0 (false)
00227   Causes each call to each public routine to be surrounded with
00228   pthread or WIN32 mutex lock/unlock. (If set true, this can be
00229   overridden on a per-mspace basis for mspace versions.)
00230 
00231 FOOTERS                  default: 0
00232   If true, provide extra checking and dispatching by placing
00233   information in the footers of allocated chunks. This adds
00234   space and time overhead.
00235 
00236 INSECURE                 default: 0
00237   If true, omit checks for usage errors and heap space overwrites.
00238 
00239 USE_DL_PREFIX            default: NOT defined
00240   Causes compiler to prefix all public routines with the string 'dl'.
00241   This can be useful when you only want to use this malloc in one part
00242   of a program, using your regular system malloc elsewhere.
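
  For instance (illustrative only), with USE_DL_PREFIX defined the entry
  points become dlmalloc, dlfree, dlrealloc, and so on, so both allocators
  can coexist in one program:

    void* p = dlmalloc(100);   // served by this file
    void* q = malloc(100);     // served by the system malloc
    dlfree(p);
    free(q);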
00243 
00244 ABORT                    default: defined as abort()
00245   Defines how to abort on failed checks.  On most systems, a failed
00246   check cannot die with an "assert" or even print an informative
00247   message, because the underlying print routines in turn call malloc,
00248   which will fail again.  Generally, the best policy is to simply call
00249   abort(). It's not very useful to do more than this because many
00250   errors due to overwriting will show up as address faults (null, odd
00251   addresses etc) rather than malloc-triggered checks, so will also
00252   abort.  Also, most compilers know that abort() does not return, so
00253   can better optimize code conditionally calling it.
00254 
00255 PROCEED_ON_ERROR           default: defined as 0 (false)
00256   Controls whether detected bad addresses cause them to be bypassed
00257   rather than aborting. If set, detected bad arguments to free and
00258   realloc are ignored. And all bookkeeping information is zeroed out
00259   upon a detected overwrite of freed heap space, thus losing the
00260   ability to ever return it from malloc again, but enabling the
00261   application to proceed. If PROCEED_ON_ERROR is defined, the
00262   static variable malloc_corruption_error_count is compiled in
00263   and can be examined to see if errors have occurred. This option
00264   generates slower code than the default abort policy.
00265 
00266 DEBUG                    default: NOT defined
00267   The DEBUG setting is mainly intended for people trying to modify
00268   this code or diagnose problems when porting to new platforms.
00269   However, it may also be able to better isolate user errors than just
00270   using runtime checks.  The assertions in the check routines spell
00271   out in more detail the assumptions and invariants underlying the
00272   algorithms.  The checking is fairly extensive, and will slow down
00273   execution noticeably. Calling malloc_stats or mallinfo with DEBUG
00274   set will attempt to check every non-mmapped allocated and free chunk
00275   in the course of computing the summaries.
00276 
00277 ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
00278   Debugging assertion failures can be nearly impossible if your
00279   version of the assert macro causes malloc to be called, which will
00280   lead to a cascade of further failures, blowing the runtime stack.
00281   ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
00282   which will usually make debugging easier.
00283 
00284 MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
00285   The action to take before "return 0" when malloc fails to
00286   return memory because there is none available.
00287 
00288 HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
00289   True if this system supports sbrk or an emulation of it.
00290 
00291 MORECORE                  default: sbrk
00292   The name of the sbrk-style system routine to call to obtain more
00293   memory.  See below for guidance on writing custom MORECORE
00294   functions. The type of the argument to sbrk/MORECORE varies across
00295   systems.  It cannot be size_t, because it supports negative
00296   arguments, so it is normally the signed type of the same width as
00297   size_t (sometimes declared as "intptr_t").  It doesn't much matter
00298   though. Internally, we only call it with arguments less than half
00299   the max value of a size_t, which should work across all reasonable
00300   possibilities, although sometimes generating compiler warnings.  See
00301   near the end of this file for guidelines for creating a custom
00302   version of MORECORE.
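
  As a rough sketch (not part of this file; my_morecore and its arena are
  hypothetical names), a minimal MORECORE emulation backed by a static
  region might look like:

    static char   arena[1 << 20];   // backing store for the sketch
    static size_t arena_used = 0;   // bytes handed out so far

    void* my_morecore(intptr_t increment) {
      char* brk = arena + arena_used;          // current "break"
      if (increment <= 0)
        return brk;                            // no trimming supported
      if (arena_used + (size_t)increment > sizeof(arena))
        return (void*)(-1);                    // MFAIL-style failure
      arena_used += (size_t)increment;
      return brk;                              // contiguous, increasing
    }

  together with "#define MORECORE my_morecore", MORECORE_CANNOT_TRIM, and
  HAVE_MMAP 0.  See the guidelines near the end of this file before relying
  on such a replacement.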
00303 
00304 MORECORE_CONTIGUOUS       default: 1 (true)
00305   If true, take advantage of fact that consecutive calls to MORECORE
00306   with positive arguments always return contiguous increasing
00307   addresses.  This is true of unix sbrk. It does not hurt too much to
00308   set it true anyway, since malloc copes with non-contiguities.
00309   Setting it false when MORECORE is definitely non-contiguous, however,
00310   saves the time and possibly wasted space it would take to discover this.
00311 
00312 MORECORE_CANNOT_TRIM      default: NOT defined
00313   True if MORECORE cannot release space back to the system when given
00314   negative arguments. This is generally necessary only if you are
00315   using a hand-crafted MORECORE function that cannot handle negative
00316   arguments.
00317 
00318 HAVE_MMAP                 default: 1 (true)
00319   True if this system supports mmap or an emulation of it.  If so, and
00320   HAVE_MORECORE is not true, MMAP is used for all system
00321   allocation. If set and HAVE_MORECORE is true as well, MMAP is
00322   primarily used to directly allocate very large blocks. It is also
00323   used as a backup strategy in cases where MORECORE fails to provide
00324   space from system. Note: A single call to MUNMAP is assumed to be
00325   able to unmap memory that may have been allocated using multiple calls
00326   to MMAP, so long as they are adjacent.
00327 
00328 HAVE_MREMAP               default: 1 on linux, else 0
00329   If true realloc() uses mremap() to re-allocate large blocks and
00330   extend or shrink allocation spaces.
00331 
00332 MMAP_CLEARS               default: 1 on unix
00333   True if mmap clears memory so calloc doesn't need to. This is true
00334   for standard unix mmap using /dev/zero.
00335 
00336 USE_BUILTIN_FFS            default: 0 (i.e., not used)
00337   Causes malloc to use the builtin ffs() function to compute indices.
00338   Some compilers may recognize and intrinsify ffs to be faster than the
00339   supplied C version. Also, the case of x86 using gcc is special-cased
00340   to an asm instruction, so is already as fast as it can be, and so
00341   this setting has no effect. (On most x86s, the asm version is only
00342   slightly faster than the C version.)
00343 
00344 malloc_getpagesize         default: derive from system includes, or 4096.
00345   The system page size. To the extent possible, this malloc manages
00346   memory from the system in page-size units.  This may be (and
00347   usually is) a function rather than a constant. This is ignored
00348   if WIN32, where page size is determined using GetSystemInfo during
00349   initialization.
00350 
00351 USE_DEV_RANDOM             default: 0 (i.e., not used)
00352   Causes malloc to use /dev/random to initialize secure magic seed for
00353   stamping footers. Otherwise, the current time is used.
00354 
00355 NO_MALLINFO                default: 0
00356   If defined, don't compile "mallinfo". This can be a simple way
00357   of dealing with mismatches between system declarations and
00358   those in this file.
00359 
00360 MALLINFO_FIELD_TYPE        default: size_t
00361   The type of the fields in the mallinfo struct. This was originally
00362   defined as "int" in SVID etc, but is more usefully defined as
00363   size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.
00364 
00365 REALLOC_ZERO_BYTES_FREES    default: not defined
00366   This should be set if a call to realloc with zero bytes should
00367   be the same as a call to free. Some people think it should. Otherwise,
00368   because this malloc returns a unique pointer for malloc(0), realloc(p, 0)
00369   does too.
00370 
00371 LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
00372 LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H,  LACKS_ERRNO_H
00373 LACKS_STDLIB_H                default: NOT defined unless on WIN32
00374   Define these if your system does not have these header files.
00375   You might need to manually insert some of the declarations they provide.
00376 
00377 DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
00378                                 system_info.dwAllocationGranularity in WIN32,
00379                                 otherwise 64K.
00380       Also settable using mallopt(M_GRANULARITY, x)
00381   The unit for allocating and deallocating memory from the system.  On
00382   most systems with contiguous MORECORE, there is no reason to
00383   make this more than a page. However, systems with MMAP tend to
00384   either require or encourage larger granularities.  You can increase
00385   this value to prevent system allocation functions from being called so
00386   often, especially if they are slow.  The value must be at least one
00387   page and must be a power of two.  Setting to 0 causes initialization
00388   to either page size or win32 region size.  (Note: In previous
00389   versions of malloc, the equivalent of this option was called
00390   "TOP_PAD")
00391 
00392 DEFAULT_TRIM_THRESHOLD    default: 2MB
00393       Also settable using mallopt(M_TRIM_THRESHOLD, x)
00394   The maximum amount of unused top-most memory to keep before
00395   releasing via malloc_trim in free().  Automatic trimming is mainly
00396   useful in long-lived programs using contiguous MORECORE.  Because
00397   trimming via sbrk can be slow on some systems, and can sometimes be
00398   wasteful (in cases where programs immediately afterward allocate
00399   more large chunks) the value should be high enough so that your
00400   overall system performance would improve by releasing this much
00401   memory.  As a rough guide, you might set it to a value close to the
00402   average size of a process (program) running on your system.
00403   Releasing this much memory would allow such a process to run in
00404   memory.  Generally, it is worth tuning trim thresholds when a
00405   program undergoes phases where several large chunks are allocated
00406   and released in ways that can reuse each other's storage, perhaps
00407   mixed with phases where there are no such chunks at all. The trim
00408   value must be greater than page size to have any useful effect.  To
00409   disable trimming completely, you can set it to MAX_SIZE_T. Note that the trick
00410   some people use of mallocing a huge space and then freeing it at
00411   program startup, in an attempt to reserve system memory, doesn't
00412   have the intended effect under automatic trimming, since that memory
00413   will immediately be returned to the system.
00414 
00415 DEFAULT_MMAP_THRESHOLD       default: 256K
00416       Also settable using mallopt(M_MMAP_THRESHOLD, x)
00417   The request size threshold for using MMAP to directly service a
00418   request. Requests of at least this size that cannot be allocated
00419   using already-existing space will be serviced via mmap.  (If enough
00420   normal freed space already exists it is used instead.)  Using mmap
00421   segregates relatively large chunks of memory so that they can be
00422   individually obtained and released from the host system. A request
00423   serviced through mmap is never reused by any other request (at least
00424   not directly; the system may just so happen to remap successive
00425   requests to the same locations).  Segregating space in this way has
00426   the benefits that: Mmapped space can always be individually released
00427   back to the system, which helps keep the system level memory demands
00428   of a long-lived program low.  Also, mapped memory doesn't become
00429   `locked' between other chunks, as can happen with normally allocated
00430   chunks, which means that even trimming via malloc_trim would not
00431   release them.  However, it has the disadvantage that the space
00432   cannot be reclaimed, consolidated, and then used to service later
00433   requests, as happens with normal chunks.  The advantages of mmap
00434   nearly always outweigh disadvantages for "large" chunks, but the
00435   value of "large" may vary across systems.  The default is an
00436   empirically derived value that works well in most systems. You can
00437   disable mmap by setting it to MAX_SIZE_T.
00438 
00439 */
00440 
00450 #include <sys/types.h>  /* For size_t */
00451 
00453 #define LACKS_FCNTL_H
00454 #define LACKS_SYS_MMAN_H
00455 #define LACKS_SYS_PARAM_H
00456 #undef HAVE_MMAP
00457 #define HAVE_MMAP 0
00458 #define LACKS_ERRNO_H
00459 /* Set errno? */
00460 #undef MALLOC_FAILURE_ACTION
00461 #define MALLOC_FAILURE_ACTION
00462 
00463 /* The maximum possible size_t value has all bits set */
00464 #define MAX_SIZE_T           (~(size_t)0)
00465 
00466 #define ONLY_MSPACES 0
00467 #define MSPACES 0
00468 
00469 #ifdef MALLOC_ALIGNMENT_16
00470 #define MALLOC_ALIGNMENT ((size_t)16U)
00471 #else
00472 #define MALLOC_ALIGNMENT ((size_t)8U)
00473 #endif
00474 
00475 #define FOOTERS 0
00476 #define ABORT  abort()
00477 #define ABORT_ON_ASSERT_FAILURE 1
00478 #define PROCEED_ON_ERROR 0
00479 #define USE_LOCKS 1
00480 #define INSECURE 0
00481 #define HAVE_MMAP 0
00482 
00483 #define MMAP_CLEARS 1
00484 
00485 #define HAVE_MORECORE 1
00486 #define MORECORE_CONTIGUOUS 1
00487 #define MORECORE sbrk
00488 #define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
00489 
00490 #ifndef DEFAULT_TRIM_THRESHOLD
00491 #ifndef MORECORE_CANNOT_TRIM
00492 #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
00493 #else   /* MORECORE_CANNOT_TRIM */
00494 #define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
00495 #endif  /* MORECORE_CANNOT_TRIM */
00496 #endif  /* DEFAULT_TRIM_THRESHOLD */
00497 #ifndef DEFAULT_MMAP_THRESHOLD
00498 #if HAVE_MMAP
00499 #define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
00500 #else   /* HAVE_MMAP */
00501 #define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
00502 #endif  /* HAVE_MMAP */
00503 #endif  /* DEFAULT_MMAP_THRESHOLD */
00504 #ifndef USE_BUILTIN_FFS
00505 #define USE_BUILTIN_FFS 0
00506 #endif  /* USE_BUILTIN_FFS */
00507 #ifndef USE_DEV_RANDOM
00508 #define USE_DEV_RANDOM 0
00509 #endif  /* USE_DEV_RANDOM */
00510 #ifndef NO_MALLINFO
00511 #define NO_MALLINFO 0
00512 #endif  /* NO_MALLINFO */
00513 #ifndef MALLINFO_FIELD_TYPE
00514 #define MALLINFO_FIELD_TYPE size_t
00515 #endif  /* MALLINFO_FIELD_TYPE */
00516 
00517 /*
00518   mallopt tuning options.  SVID/XPG defines four standard parameter
00519   numbers for mallopt, normally defined in malloc.h.  None of these
00520   are used in this malloc, so setting them has no effect. But this
00521   malloc does support the following options.
00522 */
00523 
00524 #define M_TRIM_THRESHOLD     (-1)
00525 #define M_GRANULARITY        (-2)
00526 #define M_MMAP_THRESHOLD     (-3)
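
/*
  Illustrative usage (the values are arbitrary examples): these parameters
  can be adjusted at runtime through mallopt, as described above, e.g.

     mallopt(M_TRIM_THRESHOLD, 1024 * 1024);   keep at most 1MB of topmost
                                               free memory before trimming
     mallopt(M_GRANULARITY,    64 * 1024);     use a 64K system allocation unit
*/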
00527 
00528 /*
00529   ========================================================================
00530   To make a fully customizable malloc.h header file, cut everything
00531   above this line, put into file malloc.h, edit to suit, and #include it
00532   on the next line, as well as in programs that use this malloc.
00533   ========================================================================
00534 */
00535 
00536 #include "malloc.h"
00537 
00538 /*------------------------------ internal #includes ---------------------- */
00539 
00540 #include <stdio.h>       /* for printing in malloc_stats */
00541 #include <string.h>
00542 
00543 #ifndef LACKS_ERRNO_H
00544 #include <errno.h>       /* for MALLOC_FAILURE_ACTION */
00545 #endif /* LACKS_ERRNO_H */
00546 #if FOOTERS
00547 #include <time.h>        /* for magic initialization */
00548 #endif /* FOOTERS */
00549 #ifndef LACKS_STDLIB_H
00550 #include <stdlib.h>      /* for abort() */
00551 #endif /* LACKS_STDLIB_H */
00552 #ifdef DEBUG
00553 #if ABORT_ON_ASSERT_FAILURE
00554 #define assert(x) {if(!(x)) {printf(#x);ABORT;}}
00555 #else /* ABORT_ON_ASSERT_FAILURE */
00556 #include <assert.h>
00557 #endif /* ABORT_ON_ASSERT_FAILURE */
00558 #else  /* DEBUG */
00559 #define assert(x)
00560 #endif /* DEBUG */
00561 #if USE_BUILTIN_FFS
00562 #ifndef LACKS_STRINGS_H
00563 #include <strings.h>     /* for ffs */
00564 #endif /* LACKS_STRINGS_H */
00565 #endif /* USE_BUILTIN_FFS */
00566 #if HAVE_MMAP
00567 #ifndef LACKS_SYS_MMAN_H
00568 #include <sys/mman.h>    /* for mmap */
00569 #endif /* LACKS_SYS_MMAN_H */
00570 #ifndef LACKS_FCNTL_H
00571 #include <fcntl.h>
00572 #endif /* LACKS_FCNTL_H */
00573 #endif /* HAVE_MMAP */
00574 #if HAVE_MORECORE
00575 #ifndef LACKS_UNISTD_H
00576 #include <unistd.h>     /* for sbrk */
00577 #else /* LACKS_UNISTD_H */
00578 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
00579 extern void*     sbrk(ptrdiff_t);
00580 #endif /* FreeBSD etc */
00581 #endif /* LACKS_UNISTD_H */
00582 #endif /* HAVE_MORECORE */
00583 
00584 #ifndef WIN32
00585 #ifndef malloc_getpagesize
00586 #  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
00587 #    ifndef _SC_PAGE_SIZE
00588 #      define _SC_PAGE_SIZE _SC_PAGESIZE
00589 #    endif
00590 #  endif
00591 #  ifdef _SC_PAGE_SIZE
00592 #    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
00593 #  else
00594 #    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
00595        extern size_t getpagesize();
00596 #      define malloc_getpagesize getpagesize()
00597 #    else
00598 #      ifdef WIN32 /* use supplied emulation of getpagesize */
00599 #        define malloc_getpagesize getpagesize()
00600 #      else
00601 #        ifndef LACKS_SYS_PARAM_H
00602 #          include <sys/param.h>
00603 #        endif
00604 #        ifdef EXEC_PAGESIZE
00605 #          define malloc_getpagesize EXEC_PAGESIZE
00606 #        else
00607 #          ifdef NBPG
00608 #            ifndef CLSIZE
00609 #              define malloc_getpagesize NBPG
00610 #            else
00611 #              define malloc_getpagesize (NBPG * CLSIZE)
00612 #            endif
00613 #          else
00614 #            ifdef NBPC
00615 #              define malloc_getpagesize NBPC
00616 #            else
00617 #              ifdef PAGESIZE
00618 #                define malloc_getpagesize PAGESIZE
00619 #              else /* just guess */
00620 #                define malloc_getpagesize ((size_t)4096U)
00621 #              endif
00622 #            endif
00623 #          endif
00624 #        endif
00625 #      endif
00626 #    endif
00627 #  endif
00628 #endif
00629 #endif
00630 
00631 /* ------------------- size_t and alignment properties -------------------- */
00632 
00633 /* The byte and bit size of a size_t */
00634 #define SIZE_T_SIZE         (sizeof(size_t))
00635 #define SIZE_T_BITSIZE      (sizeof(size_t) << 3)
00636 
00637 /* Some constants coerced to size_t */
00638 /* Annoying but necessary to avoid errors on some platforms */
00639 #define SIZE_T_ZERO         ((size_t)0)
00640 #define SIZE_T_ONE          ((size_t)1)
00641 #define SIZE_T_TWO          ((size_t)2)
00642 #define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
00643 #define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
00644 #define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
00645 #define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)
00646 
00647 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
00648 #define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)
00649 
00650 /* True if address a has acceptable alignment */
00651 #define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
00652 
00653 /* the number of bytes to offset an address to align it */
00654 #define align_offset(A)\
00655  ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
00656   ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
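
/*
  Worked example (illustrative, with the default MALLOC_ALIGNMENT of 8, so
  CHUNK_ALIGN_MASK == 7): an address ending in 0x...5 is 5 bytes past an
  8-byte boundary, so align_offset yields (8 - 5) & 7 == 3, the number of
  bytes to add to reach the next aligned address; for an already aligned
  address the result is 0.
*/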
00657 
00658 /* -------------------------- MMAP preliminaries ------------------------- */
00659 
00660 /*
00661    If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
00662    checks to fail so compiler optimizer can delete code rather than
00663    using so many "#if"s.
00664 */
00665 
00666 
00667 /* MORECORE and MMAP must return MFAIL on failure */
00668 #define MFAIL                ((void*)(MAX_SIZE_T))
00669 #define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */
00670 
00671 #if !HAVE_MMAP
00672 #define IS_MMAPPED_BIT       (SIZE_T_ZERO)
00673 #define USE_MMAP_BIT         (SIZE_T_ZERO)
00674 #define CALL_MMAP(s)         MFAIL
00675 #define CALL_MUNMAP(a, s)    (-1)
00676 #define DIRECT_MMAP(s)       MFAIL
00677 
00678 #else /* HAVE_MMAP */
00679 #define IS_MMAPPED_BIT       (SIZE_T_ONE)
00680 #define USE_MMAP_BIT         (SIZE_T_ONE)
00681 
00682 #ifndef WIN32
00683 #define CALL_MUNMAP(a, s)    munmap((a), (s))
00684 #define MMAP_PROT            (PROT_READ|PROT_WRITE)
00685 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
00686 #define MAP_ANONYMOUS        MAP_ANON
00687 #endif /* MAP_ANON */
00688 #ifdef MAP_ANONYMOUS
00689 #define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
00690 #define CALL_MMAP(s)         mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
00691 #else /* MAP_ANONYMOUS */
00692 /*
00693    Nearly all versions of mmap support MAP_ANONYMOUS, so the following
00694    is unlikely to be needed, but is supplied just in case.
00695 */
00696 #define MMAP_FLAGS           (MAP_PRIVATE)
00697 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
00698 #define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
00699            (dev_zero_fd = open("/dev/zero", O_RDWR), \
00700             mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
00701             mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
00702 #endif /* MAP_ANONYMOUS */
00703 
00704 #define DIRECT_MMAP(s)       CALL_MMAP(s)
00705 #else /* WIN32 */
00706 
00707 /* Win32 MMAP via VirtualAlloc */
00708 static void* win32mmap(size_t size) {
00709   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
00710   return (ptr != 0)? ptr: MFAIL;
00711 }
00712 
00713 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
00714 static void* win32direct_mmap(size_t size) {
00715   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
00716                            PAGE_READWRITE);
00717   return (ptr != 0)? ptr: MFAIL;
00718 }
00719 
00720 /* This function supports releasing coalesced segments */
00721 static int win32munmap(void* ptr, size_t size) {
00722   MEMORY_BASIC_INFORMATION minfo;
00723   char* cptr = ptr;
00724   while (size) {
00725     if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
00726       return -1;
00727     if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
00728         minfo.State != MEM_COMMIT || minfo.RegionSize > size)
00729       return -1;
00730     if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
00731       return -1;
00732     cptr += minfo.RegionSize;
00733     size -= minfo.RegionSize;
00734   }
00735   return 0;
00736 }
00737 
00738 #define CALL_MMAP(s)         win32mmap(s)
00739 #define CALL_MUNMAP(a, s)    win32munmap((a), (s))
00740 #define DIRECT_MMAP(s)       win32direct_mmap(s)
00741 #endif /* WIN32 */
00742 #endif /* HAVE_MMAP */
00743 
00744 #if HAVE_MMAP && HAVE_MREMAP
00745 #define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
00746 #else  /* HAVE_MMAP && HAVE_MREMAP */
00747 #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
00748 #endif /* HAVE_MMAP && HAVE_MREMAP */
00749 
00750 #if HAVE_MORECORE
00751 #define CALL_MORECORE(S)     MORECORE(S)
00752 #else  /* HAVE_MORECORE */
00753 #define CALL_MORECORE(S)     MFAIL
00754 #endif /* HAVE_MORECORE */
00755 
00756 /* mstate bit set if contiguous morecore disabled or failed */
00757 #define USE_NONCONTIGUOUS_BIT (4U)
00758 
00759 /* segment bit set in create_mspace_with_base */
00760 #define EXTERN_BIT            (8U)
00761 
00762 
00763 /* --------------------------- Lock preliminaries ------------------------ */
00764 
00765 #if USE_LOCKS
00766 
00767 /*
00768   When locks are defined, there are up to two global locks:
00769 
00770   * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
00771     MORECORE.  In many cases sys_alloc requires two calls that should
00772     not be interleaved with calls by other threads.  This does not
00773     protect against direct calls to MORECORE by other threads not
00774     using this lock, so there is still code to cope as best we can with
00775     interference.
00776 
00777   * magic_init_mutex ensures that mparams.magic and other
00778     unique mparams values are initialized only once.
00779 */
00780 
00781 /* Use futex-based locks */
00782 #include <futex.h>
00783 #define MLOCK_T atomic_t
00784 #define INITIAL_LOCK(l)      futex_initialize(l, 1)
00785 /* futex_down cannot fail, but can return different
00786  * return values to indicate success.
00787  */
00788 #define ACQUIRE_LOCK(l)      ({futex_down(l);0;})
00789 #define RELEASE_LOCK(l)      futex_up(l)
00790 
00791 #if HAVE_MORECORE
00792 static MLOCK_T morecore_mutex = FUTEX_INITIALIZER;
00793 #endif /* HAVE_MORECORE */
00794 
00795 static MLOCK_T magic_init_mutex = FUTEX_INITIALIZER;
00796 
00797 
00798 #define USE_LOCK_BIT               (2U)
00799 #else  /* USE_LOCKS */
00800 #define USE_LOCK_BIT               (0U)
00801 #define INITIAL_LOCK(l)
00802 #endif /* USE_LOCKS */
00803 
00804 #if USE_LOCKS && HAVE_MORECORE
00805 #define ACQUIRE_MORECORE_LOCK()    ACQUIRE_LOCK(&morecore_mutex);
00806 #define RELEASE_MORECORE_LOCK()    RELEASE_LOCK(&morecore_mutex);
00807 #else /* USE_LOCKS && HAVE_MORECORE */
00808 #define ACQUIRE_MORECORE_LOCK()
00809 #define RELEASE_MORECORE_LOCK()
00810 #endif /* USE_LOCKS && HAVE_MORECORE */
00811 
00812 #if USE_LOCKS
00813 #define ACQUIRE_MAGIC_INIT_LOCK()  ACQUIRE_LOCK(&magic_init_mutex);
00814 #define RELEASE_MAGIC_INIT_LOCK()  RELEASE_LOCK(&magic_init_mutex);
00815 #else  /* USE_LOCKS */
00816 #define ACQUIRE_MAGIC_INIT_LOCK()
00817 #define RELEASE_MAGIC_INIT_LOCK()
00818 #endif /* USE_LOCKS */
00819 
00820 
00821 /* -----------------------  Chunk representations ------------------------ */
00822 
00823 /*
00824   (The following includes lightly edited explanations by Colin Plumb.)
00825 
00826   The malloc_chunk declaration below is misleading (but accurate and
00827   necessary).  It declares a "view" into memory allowing access to
00828   necessary fields at known offsets from a given base.
00829 
00830   Chunks of memory are maintained using a `boundary tag' method as
00831   originally described by Knuth.  (See the paper by Paul Wilson
00832   ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
00833   techniques.)  Sizes of free chunks are stored both in the front of
00834   each chunk and at the end.  This makes consolidating fragmented
00835   chunks into bigger chunks fast.  The head fields also hold bits
00836   representing whether chunks are free or in use.
00837 
00838   Here are some pictures to make it clearer.  They are "exploded" to
00839   show that the state of a chunk can be thought of as extending from
00840   the high 31 bits of the head field of its header through the
00841   prev_foot and PINUSE_BIT bit of the following chunk header.
00842 
00843   A chunk that's in use looks like:
00844 
00845    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
00846            | Size of previous chunk (if P = 1)                             |
00847            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
00848          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
00849          | Size of this chunk                                         1| +-+
00850    mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
00851          |                                                               |
00852          +-                                                             -+
00853          |                                                               |
00854          +-                                                             -+
00855          |                                                               :
00856          +-      size - sizeof(size_t) available payload bytes          -+
00857          :                                                               |
00858  chunk-> +-                                                             -+
00859          |                                                               |
00860          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
00861        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
00862        | Size of next chunk (may or may not be in use)               | +-+
00863  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
00864 
00865     And if it's free, it looks like this:
00866 
00867    chunk-> +-                                                             -+
00868            | User payload (must be in use, or we would have merged!)       |
00869            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
00870          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
00871          | Size of this chunk                                         0| +-+
00872    mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
00873          | Next pointer                                                  |
00874          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
00875          | Prev pointer                                                  |
00876          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
00877          |                                                               :
00878          +-      size - sizeof(struct chunk) unused bytes               -+
00879          :                                                               |
00880  chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
00881          | Size of this chunk                                            |
00882          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
00883        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
00884        | Size of next chunk (must be in use, or we would have merged)| +-+
00885  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
00886        |                                                               :
00887        +- User payload                                                -+
00888        :                                                               |
00889        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
00890                                                                      |0|
00891                                                                      +-+
00892   Note that since we always merge adjacent free chunks, the chunks
00893   adjacent to a free chunk must be in use.
00894 
00895   Given a pointer to a chunk (which can be derived trivially from the
00896   payload pointer) we can, in O(1) time, find out whether the adjacent
00897   chunks are free, and if so, unlink them from the lists that they
00898   are on and merge them with the current chunk.
00899 
00900   Chunks always begin on even word boundaries, so the mem portion
00901   (which is returned to the user) is also on an even word boundary, and
00902   thus at least double-word aligned.
00903 
00904   The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
00905   chunk size (which is always a multiple of two words), is an in-use
00906   bit for the *previous* chunk.  If that bit is *clear*, then the
00907   word before the current chunk size contains the previous chunk
00908   size, and can be used to find the front of the previous chunk.
00909   The very first chunk allocated always has this bit set, preventing
00910   access to non-existent (or non-owned) memory. If pinuse is set for
00911   any given chunk, then you CANNOT determine the size of the
00912   previous chunk, and might even get a memory addressing fault when
00913   trying to do so.
00914 
00915   The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
00916   the chunk size redundantly records whether the current chunk is
00917   inuse. This redundancy enables usage checks within free and realloc,
00918   and reduces indirection when freeing and consolidating chunks.
00919 
00920   Each freshly allocated chunk must have both cinuse and pinuse set.
00921   That is, each allocated chunk borders either a previously allocated
00922   and still in-use chunk, or the base of its memory arena. This is
00923   ensured by making all allocations from the `lowest' part of any
00924   found chunk.  Further, no free chunk physically borders another one,
00925   so each free chunk is known to be preceded and followed by either
00926   inuse chunks or the ends of memory.
00927 
00928   Note that the `foot' of the current chunk is actually represented
00929   as the prev_foot of the NEXT chunk. This makes it easier to
00930   deal with alignments etc but can be very confusing when trying
00931   to extend or adapt this code.
00932 
00933   The exceptions to all this are
00934 
00935      1. The special chunk `top' is the top-most available chunk (i.e.,
00936         the one bordering the end of available memory). It is treated
00937         specially.  Top is never included in any bin, is used only if
00938         no other chunk is available, and is released back to the
00939         system if it is very large (see M_TRIM_THRESHOLD).  In effect,
00940         the top chunk is treated as larger (and thus less well
00941         fitting) than any other available chunk.  The top chunk
00942         doesn't update its trailing size field since there is no next
00943         contiguous chunk that would have to index off it. However,
00944         space is still allocated for it (TOP_FOOT_SIZE) to enable
00945         separation or merging when space is extended.
00946 
00947      2. Chunks allocated via mmap, which have the lowest-order bit
00948         (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
00949         PINUSE_BIT in their head fields.  Because they are allocated
00950         one-by-one, each must carry its own prev_foot field, which is
00951         also used to hold the offset this chunk has within its mmapped
00952         region, which is needed to preserve alignment. Each mmapped
00953         chunk is trailed by the first two fields of a fake next-chunk
00954         for sake of usage checks.
00955 
00956 */
00957 
00958 struct malloc_chunk {
00959   size_t               prev_foot;  /* Size of previous chunk (if free).  */
00960   size_t               head;       /* Size and inuse bits. */
00961   struct malloc_chunk* fd;         /* double links -- used only if free. */
00962   struct malloc_chunk* bk;
00963 };
00964 
00965 typedef struct malloc_chunk  mchunk;
00966 typedef struct malloc_chunk* mchunkptr;
00967 typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
00968 typedef unsigned int bindex_t;         /* Described below */
00969 typedef unsigned int binmap_t;         /* Described below */
00970 typedef unsigned int flag_t;           /* The type of various bit flag sets */
00971 
00972 /* ------------------- Chunks sizes and alignments ----------------------- */
00973 
00974 #define MCHUNK_SIZE         (sizeof(mchunk))
00975 
00976 #if FOOTERS
00977 #define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
00978 #else /* FOOTERS */
00979 #define CHUNK_OVERHEAD      (SIZE_T_SIZE)
00980 #endif /* FOOTERS */
00981 
00982 /* MMapped chunks need a second word of overhead ... */
00983 #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
00984 /* ... and additional padding for fake next-chunk at foot */
00985 #define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)
00986 
00987 /* The smallest size we can malloc is an aligned minimal chunk */
00988 #define MIN_CHUNK_SIZE\
00989   ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
00990 
00991 /* conversion from malloc headers to user pointers, and back */
00992 #define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
00993 #define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
00994 /* chunk associated with aligned address A */
00995 #define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))
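
/*
  Note (illustrative): the user pointer handed out by malloc is exactly two
  size_t words past the chunk base, so mem2chunk(chunk2mem(p)) == p for any
  chunk pointer p, and the payload begins just after the prev_foot and head
  fields pictured above.
*/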
00996 
00997 /* Bounds on request (not chunk) sizes. */
00998 #define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
00999 #define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
01000 
01001 /* pad request bytes into a usable size */
01002 #define pad_request(req) \
01003    (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
01004 
01005 /* pad request, checking for minimum (but not maximum) */
01006 #define request2size(req) \
01007   (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
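
/*
  Worked example (illustrative, assuming 4-byte size_t, 8-byte alignment,
  and FOOTERS disabled, so CHUNK_OVERHEAD == 4 and MIN_CHUNK_SIZE == 16):
  request2size(13) == (13 + 4 + 7) & ~7 == 24, while request2size(1) falls
  below MIN_REQUEST and is rounded up to the 16-byte minimum chunk.
*/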
01008 
01009 
01010 /* ------------------ Operations on head and foot fields ----------------- */
01011 
01012 /*
01013   The head field of a chunk is or'ed with PINUSE_BIT when previous
01014   adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
01015   use. If the chunk was obtained with mmap, the prev_foot field has
01016   IS_MMAPPED_BIT set, otherwise holding the offset of the base of the
01017   mmapped region to the base of the chunk.
01018 */
01019 
01020 #define PINUSE_BIT          (SIZE_T_ONE)
01021 #define CINUSE_BIT          (SIZE_T_TWO)
01022 #define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
01023 
01024 /* Head value for fenceposts */
01025 #define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)
01026 
01027 /* extraction of fields from head words */
01028 #define cinuse(p)           ((p)->head & CINUSE_BIT)
01029 #define pinuse(p)           ((p)->head & PINUSE_BIT)
01030 #define chunksize(p)        ((p)->head & ~(INUSE_BITS))
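
/*
  Example (illustrative): a head value of 0x93 encodes a chunk of size 0x90
  (144 bytes) with both CINUSE_BIT (0x2) and PINUSE_BIT (0x1) set, i.e. an
  in-use chunk whose preceding chunk is also in use; chunksize() masks the
  two low bits back off.
*/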
01031 
01032 #define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
01033 #define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)
01034 
01035 /* Treat space at ptr +/- offset as a chunk */
01036 #define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
01037 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
01038 
01039 /* Ptr to next or previous physical malloc_chunk. */
01040 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
01041 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
01042 
01043 /* extract next chunk's pinuse bit */
01044 #define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)
01045 
01046 /* Get/set size at footer */
01047 #define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
01048 #define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
01049 
01050 /* Set size, pinuse bit, and foot */
01051 #define set_size_and_pinuse_of_free_chunk(p, s)\
01052   ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
01053 
01054 /* Set size, pinuse bit, foot, and clear next pinuse */
01055 #define set_free_with_pinuse(p, s, n)\
01056   (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
01057 
01058 #define is_mmapped(p)\
01059   (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
01060 
01061 /* Get the internal overhead associated with chunk p */
01062 #define overhead_for(p)\
01063  (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
01064 
01065 /* Return true if malloced space is not necessarily cleared */
01066 #if MMAP_CLEARS
01067 #define calloc_must_clear(p) (!is_mmapped(p))
01068 #else /* MMAP_CLEARS */
01069 #define calloc_must_clear(p) (1)
01070 #endif /* MMAP_CLEARS */
01071 
01072 /* ---------------------- Overlaid data structures ----------------------- */
01073 
01074 /*
01075   When chunks are not in use, they are treated as nodes of either
01076   lists or trees.
01077 
01078   "Small"  chunks are stored in circular doubly-linked lists, and look
01079   like this:
01080 
01081     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01082             |             Size of previous chunk                            |
01083             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01084     `head:' |             Size of chunk, in bytes                         |P|
01085       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01086             |             Forward pointer to next chunk in list             |
01087             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01088             |             Back pointer to previous chunk in list            |
01089             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01090             |             Unused space (may be 0 bytes long)                .
01091             .                                                               .
01092             .                                                               |
01093 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01094     `foot:' |             Size of chunk, in bytes                           |
01095             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01096 
01097   Larger chunks are kept in a form of bitwise digital trees (aka
01098   tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
01099   free chunks greater than 256 bytes, their size doesn't impose any
01100   constraints on user chunk sizes.  Each node looks like:
01101 
01102     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01103             |             Size of previous chunk                            |
01104             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01105     `head:' |             Size of chunk, in bytes                         |P|
01106       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01107             |             Forward pointer to next chunk of same size        |
01108             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01109             |             Back pointer to previous chunk of same size       |
01110             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01111             |             Pointer to left child (child[0])                  |
01112             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01113             |             Pointer to right child (child[1])                 |
01114             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01115             |             Pointer to parent                                 |
01116             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01117             |             bin index of this chunk                           |
01118             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01119             |             Unused space                                      .
01120             .                                                               |
01121 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01122     `foot:' |             Size of chunk, in bytes                           |
01123             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01124 
01125   Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
01126   of the same size are arranged in a circularly-linked list, with only
01127   the oldest chunk (the next to be used, in our FIFO ordering)
01128   actually in the tree.  (Tree members are distinguished by a non-null
01129   parent pointer.)  If a chunk with the same size as an existing node
01130   is inserted, it is linked off the existing node using pointers that
01131   work in the same way as fd/bk pointers of small chunks.
01132 
01133   Each tree contains a power of 2 sized range of chunk sizes (the
01134   smallest is 0x100 <= x < 0x180), which is divided in half at each
01135   tree level, with the chunks in the smaller half of the range (0x100
01136   <= x < 0x140 for the top node) in the left subtree and the larger
01137   half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
01138   done by inspecting individual bits.
01139 
01140   Using these rules, each node's left subtree contains all smaller
01141   sizes than its right subtree.  However, the node at the root of each
01142   subtree has no particular ordering relationship to either.  (The
01143   dividing line between the subtree sizes is based on trie relation.)
01144   If we remove the last chunk of a given size from the interior of the
01145   tree, we need to replace it with a leaf node.  The tree ordering
01146   rules permit a node to be replaced by any leaf below it.
01147 
01148   The smallest chunk in a tree (a common operation in a best-fit
01149   allocator) can be found by walking a path to the leftmost leaf in
01150   the tree.  Unlike a usual binary tree, where we follow left child
01151   pointers until we reach a null, here we follow the right child
01152   pointer any time the left one is null, until we reach a leaf with
01153   both child pointers null. The smallest chunk in the tree will be
01154   somewhere along that path.
01155 
01156   The worst case number of steps to add, find, or remove a node is
01157   bounded by the number of bits differentiating chunks within
01158   bins. Under current bin calculations, this ranges from 6 up to 21
01159   (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
01160   is of course much better.
01161 */
01162 
01163 struct malloc_tree_chunk {
01164   /* The first four fields must be compatible with malloc_chunk */
01165   size_t                    prev_foot;
01166   size_t                    head;
01167   struct malloc_tree_chunk* fd;
01168   struct malloc_tree_chunk* bk;
01169 
01170   struct malloc_tree_chunk* child[2];
01171   struct malloc_tree_chunk* parent;
01172   bindex_t                  index;
01173 };
01174 
01175 typedef struct malloc_tree_chunk  tchunk;
01176 typedef struct malloc_tree_chunk* tchunkptr;
01177 typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
01178 
01179 /* A little helper macro for trees */
01180 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
01181 
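/*
  Editorial sketch (not part of the original source): the walk described
  above for finding the smallest chunk in a tree.  Following child[0]
  when present and child[1] otherwise traces a path that is guaranteed
  to pass through the smallest chunk, so a best-fit search only needs to
  track the minimum seen along the way.
*/
#if 0 /* illustration only */
static tchunkptr example_smallest_in_tree(tchunkptr t) {
  tchunkptr best = t;
  while (t != 0) {
    if (chunksize(t) < chunksize(best))
      best = t;              /* smallest may appear anywhere on the path */
    t = leftmost_child(t);   /* left child if non-null, else right child */
  }
  return best;
}
#endif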
01182 /* ----------------------------- Segments -------------------------------- */
01183 
01184 /*
01185   Each malloc space may include non-contiguous segments, held in a
01186   list headed by an embedded malloc_segment record representing the
01187   top-most space. Segments also include flags holding properties of
01188   the space. Large chunks that are directly allocated by mmap are not
01189   included in this list. They are instead independently created and
01190   destroyed without otherwise keeping track of them.
01191 
01192   Segment management mainly comes into play for spaces allocated by
01193   MMAP.  Any call to MMAP might or might not return memory that is
01194   adjacent to an existing segment.  MORECORE normally contiguously
01195   extends the current space, so this space is almost always adjacent,
01196   which is simpler and faster to deal with. (This is why MORECORE is
01197   used preferentially to MMAP when both are available -- see
01198   sys_alloc.)  When allocating using MMAP, we don't use any of the
01199   hinting mechanisms (inconsistently) supported in various
01200   implementations of unix mmap, or distinguish reserving from
01201   committing memory. Instead, we just ask for space, and exploit
01202   contiguity when we get it.  It is probably possible to do
01203   better than this on some systems, but no general scheme seems
01204   to be significantly better.
01205 
01206   Management entails a simpler variant of the consolidation scheme
01207   used for chunks to reduce fragmentation -- new adjacent memory is
01208   normally prepended or appended to an existing segment. However,
01209   there are limitations compared to chunk consolidation that mostly
01210   reflect the fact that segment processing is relatively infrequent
01211   (occurring only when getting memory from the system) and that we
01212   don't expect to have huge numbers of segments:
01213 
01214   * Segments are not indexed, so traversal requires linear scans.  (It
01215     would be possible to index these, but is not worth the extra
01216     overhead and complexity for most programs on most platforms.)
01217   * New segments are only appended to old ones when holding top-most
01218     memory; if they cannot be prepended to others, they are held in
01219     different segments.
01220 
01221   Except for the top-most segment of an mstate, each segment record
01222   is kept at the tail of its segment. Segments are added by pushing
01223   segment records onto the list headed by &mstate.seg for the
01224   containing mstate.
01225 
01226   Segment flags control allocation/merge/deallocation policies:
01227   * If EXTERN_BIT is set, then we did not allocate this segment,
01228     and so should not try to deallocate or merge with others.
01229     (This currently holds only for the initial segment passed
01230     into create_mspace_with_base.)
01231   * If IS_MMAPPED_BIT is set, the segment may be merged with
01232     other surrounding mmapped segments and trimmed/de-allocated
01233     using munmap.
01234   * If neither bit is set, then the segment was obtained using
01235     MORECORE so can be merged with surrounding MORECORE'd segments
01236     and deallocated/trimmed using MORECORE with negative arguments.
01237 */
01238 
01239 struct malloc_segment {
01240   char*        base;             /* base address */
01241   size_t       size;             /* allocated size */
01242   struct malloc_segment* next;   /* ptr to next segment */
01243   flag_t       sflags;           /* mmap and extern flag */
01244 };
01245 
01246 #define is_mmapped_segment(S)  ((S)->sflags & IS_MMAPPED_BIT)
01247 #define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)
01248 
01249 typedef struct malloc_segment  msegment;
01250 typedef struct malloc_segment* msegmentptr;
01251 
01252 /* ---------------------------- malloc_state ----------------------------- */
01253 
01254 /*
01255    A malloc_state holds all of the bookkeeping for a space.
01256    The main fields are:
01257 
01258   Top
01259     The topmost chunk of the currently active segment. Its size is
01260     cached in topsize.  The actual size of topmost space is
01261     topsize+TOP_FOOT_SIZE, which includes space reserved for adding
01262     fenceposts and segment records if necessary when getting more
01263     space from the system.  The size at which to autotrim top is
01264     cached from mparams in trim_check, except that it is disabled if
01265     an autotrim fails.
01266 
01267   Designated victim (dv)
01268     This is the preferred chunk for servicing small requests that
01269     don't have exact fits.  It is normally the chunk split off most
01270     recently to service another small request.  Its size is cached in
01271     dvsize. The link fields of this chunk are not maintained since it
01272     is not kept in a bin.
01273 
01274   SmallBins
01275     An array of bin headers for free chunks.  These bins hold chunks
01276     with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
01277     chunks of all the same size, spaced 8 bytes apart.  To simplify
01278     use in doubly-linked lists, each bin header acts as a malloc_chunk
01279     pointing to the real first node, if it exists (else pointing to
01280     itself).  This avoids special-casing for headers.  But to avoid
01281     waste, we allocate only the fd/bk pointers of bins, and then use
01282     repositioning tricks to treat these as the fields of a chunk.
01283 
01284   TreeBins
01285     Treebins are pointers to the roots of trees holding a range of
01286     sizes. There are 2 equally spaced treebins for each power of two
01287     from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
01288     larger.
01289 
01290   Bin maps
01291     There is one bit map for small bins ("smallmap") and one for
01292     treebins ("treemap").  Each bin sets its bit when non-empty, and
01293     clears the bit when empty.  Bit operations are then used to avoid
01294     bin-by-bin searching -- nearly all "search" is done without ever
01295     looking at bins that won't be selected.  The bit maps
01296     conservatively use 32 bits per map word, even on a 64-bit system.
01297     For a good description of some of the bit-based techniques used
01298     here, see Henry S. Warren Jr's book "Hacker's Delight" (and
01299     supplement at http://hackersdelight.org/). Many of these are
01300     intended to reduce the branchiness of paths through malloc etc, as
01301     well as to reduce the number of memory locations read or written.
01302 
01303   Segments
01304     A list of segments headed by an embedded malloc_segment record
01305     representing the initial space.
01306 
01307   Address check support
01308     The least_addr field is the least address ever obtained from
01309     MORECORE or MMAP. Attempted frees and reallocs of any address less
01310     than this are trapped (unless INSECURE is defined).
01311 
01312   Magic tag
01313     A cross-check field that should always hold the same value as mparams.magic.
01314 
01315   Flags
01316     Bits recording whether to use MMAP, locks, or contiguous MORECORE
01317 
01318   Statistics
01319     Each space keeps track of current and maximum system memory
01320     obtained via MORECORE or MMAP.
01321 
01322   Locking
01323     If USE_LOCKS is defined, the "mutex" lock is acquired and released
01324     around every public call using this mspace.
01325 */
01326 
01327 /* Bin types, widths and sizes */
01328 #define NSMALLBINS        (32U)
01329 #define NTREEBINS         (32U)
01330 #define SMALLBIN_SHIFT    (3U)
01331 #define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
01332 #define TREEBIN_SHIFT     (8U)
01333 #define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
01334 #define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
01335 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
01336 
01337 struct malloc_state {
01338   binmap_t   smallmap;
01339   binmap_t   treemap;
01340   size_t     dvsize;
01341   size_t     topsize;
01342   char*      least_addr;
01343   mchunkptr  dv;
01344   mchunkptr  top;
01345   size_t     trim_check;
01346   size_t     magic;
01347   mchunkptr  smallbins[(NSMALLBINS+1)*2];
01348   tbinptr    treebins[NTREEBINS];
01349   size_t     footprint;
01350   size_t     max_footprint;
01351   flag_t     mflags;
01352 #if USE_LOCKS
01353   MLOCK_T    mutex;     /* locate lock among fields that rarely change */
01354 #endif /* USE_LOCKS */
01355   msegment   seg;
01356 };
01357 
01358 typedef struct malloc_state*    mstate;
01359 
01360 /* ------------- Global malloc_state and malloc_params ------------------- */
01361 
01362 /*
01363   malloc_params holds global properties, including those that can be
01364   dynamically set using mallopt. There is a single instance, mparams,
01365   initialized in init_mparams.
01366 */
01367 
01368 struct malloc_params {
01369   size_t magic;
01370   size_t page_size;
01371   size_t granularity;
01372   size_t mmap_threshold;
01373   size_t trim_threshold;
01374   flag_t default_mflags;
01375 };
01376 
01377 static struct malloc_params mparams;
01378 
01379 /* The global malloc_state used for all non-"mspace" calls */
01380 static struct malloc_state _gm_;
01381 #define gm                 (&_gm_)
01382 #define is_global(M)       ((M) == &_gm_)
01383 #define is_initialized(M)  ((M)->top != 0)
01384 
01385 /* -------------------------- system alloc setup ------------------------- */
01386 
01387 /* Operations on mflags */
01388 
01389 #define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
01390 #define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
01391 #define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
01392 
01393 #define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
01394 #define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
01395 #define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
01396 
01397 #define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
01398 #define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)
01399 
01400 #define set_lock(M,L)\
01401  ((M)->mflags = (L)?\
01402   ((M)->mflags | USE_LOCK_BIT) :\
01403   ((M)->mflags & ~USE_LOCK_BIT))
01404 
01405 /* page-align a size */
01406 #define page_align(S)\
01407  (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))
01408 
01409 /* granularity-align a size */
01410 #define granularity_align(S)\
01411   (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))
01412 
01413 #define is_page_aligned(S)\
01414    (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
01415 #define is_granularity_aligned(S)\
01416    (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
01417 
01418 /*  True if segment S holds address A */
01419 #define segment_holds(S, A)\
01420   ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
01421 
01422 /* Return segment holding given address */
01423 static msegmentptr segment_holding(mstate m, char* addr) {
01424   msegmentptr sp = &m->seg;
01425   for (;;) {
01426     if (addr >= sp->base && addr < sp->base + sp->size)
01427       return sp;
01428     if ((sp = sp->next) == 0)
01429       return 0;
01430   }
01431 }
01432 
01433 /* Return true if segment contains a segment link */
01434 static int has_segment_link(mstate m, msegmentptr ss) {
01435   msegmentptr sp = &m->seg;
01436   for (;;) {
01437     if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
01438       return 1;
01439     if ((sp = sp->next) == 0)
01440       return 0;
01441   }
01442 }
01443 
01444 #ifndef MORECORE_CANNOT_TRIM
01445 #define should_trim(M,s)  ((s) > (M)->trim_check)
01446 #else  /* MORECORE_CANNOT_TRIM */
01447 #define should_trim(M,s)  (0)
01448 #endif /* MORECORE_CANNOT_TRIM */
01449 
01450 /*
01451   TOP_FOOT_SIZE is padding at the end of a segment, including space
01452   that may be needed to place segment records and fenceposts when new
01453   noncontiguous segments are added.
01454 */
01455 #define TOP_FOOT_SIZE\
01456   (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
01457 
01458 
01459 /* -------------------------------  Hooks -------------------------------- */
01460 
01461 /*
01462   PREACTION should be defined to return 0 on success, and nonzero on
01463   failure. If you are not using locking, you can redefine these to do
01464   anything you like.
01465 */
01466 
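/*
  Editorial sketch: a non-locking build can supply its own
  PREACTION/POSTACTION before this point (the #ifndef guards below then
  skip the defaults), as long as PREACTION still yields 0 on success.
  The counter name here is purely hypothetical.
*/
#if 0 /* illustration only */
static unsigned long my_public_call_count;          /* hypothetical */
#define PREACTION(M)  (++my_public_call_count, 0)   /* 0 == success */
#define POSTACTION(M)                               /* nothing to undo */
#endif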
01467 #if USE_LOCKS
01468 
01469 /* Ensure locks are initialized */
01470 #define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())
01471 
01472 #define PREACTION(M)  ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
01473 #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
01474 #else /* USE_LOCKS */
01475 
01476 #ifndef PREACTION
01477 #define PREACTION(M) (0)
01478 #endif  /* PREACTION */
01479 
01480 #ifndef POSTACTION
01481 #define POSTACTION(M)
01482 #endif  /* POSTACTION */
01483 
01484 #endif /* USE_LOCKS */
01485 
01486 /*
01487   CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
01488   USAGE_ERROR_ACTION is triggered on detected bad frees and
01489   reallocs. The argument p is an address that might have triggered the
01490   fault. It is ignored by the two predefined actions, but might be
01491   useful in custom actions that try to help diagnose errors.
01492 */
01493 
01494 #if PROCEED_ON_ERROR
01495 
01496 /* A count of the number of corruption errors causing resets */
01497 int malloc_corruption_error_count;
01498 
01499 /* default corruption action */
01500 static void reset_on_error(mstate m);
01501 
01502 #define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
01503 #define USAGE_ERROR_ACTION(m, p)
01504 
01505 #else /* PROCEED_ON_ERROR */
01506 
01507 #ifndef CORRUPTION_ERROR_ACTION
01508 #define CORRUPTION_ERROR_ACTION(m) ABORT
01509 #endif /* CORRUPTION_ERROR_ACTION */
01510 
01511 #ifndef USAGE_ERROR_ACTION
01512 #define USAGE_ERROR_ACTION(m,p) ABORT
01513 #endif /* USAGE_ERROR_ACTION */
01514 
01515 #endif /* PROCEED_ON_ERROR */
01516 
01517 /* -------------------------- Debugging setup ---------------------------- */
01518 
01519 #if ! DEBUG
01520 
01521 #define check_free_chunk(M,P)
01522 #define check_inuse_chunk(M,P)
01523 #define check_malloced_chunk(M,P,N)
01524 #define check_mmapped_chunk(M,P)
01525 #define check_malloc_state(M)
01526 #define check_top_chunk(M,P)
01527 
01528 #else /* DEBUG */
01529 #define check_free_chunk(M,P)       do_check_free_chunk(M,P)
01530 #define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
01531 #define check_top_chunk(M,P)        do_check_top_chunk(M,P)
01532 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
01533 #define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
01534 #define check_malloc_state(M)       do_check_malloc_state(M)
01535 
01536 static void   do_check_any_chunk(mstate m, mchunkptr p);
01537 static void   do_check_top_chunk(mstate m, mchunkptr p);
01538 static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
01539 static void   do_check_inuse_chunk(mstate m, mchunkptr p);
01540 static void   do_check_free_chunk(mstate m, mchunkptr p);
01541 static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
01542 static void   do_check_tree(mstate m, tchunkptr t);
01543 static void   do_check_treebin(mstate m, bindex_t i);
01544 static void   do_check_smallbin(mstate m, bindex_t i);
01545 static void   do_check_malloc_state(mstate m);
01546 static int    bin_find(mstate m, mchunkptr x);
01547 static size_t traverse_and_check(mstate m);
01548 #endif /* DEBUG */
01549 
01550 /* ---------------------------- Indexing Bins ---------------------------- */
01551 
01552 #define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
01553 #define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
01554 #define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
01555 #define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))
01556 
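/*
  Editorial worked example: with SMALLBIN_SHIFT == 3 the index macros
  map sizes to bins in 8-byte steps, and the mapping round-trips for any
  multiple of 8 below MIN_LARGE_SIZE.
*/
#if 0 /* illustration only */
static void example_small_index(void) {
  assert(small_index(16) == 2);                     /* 16-byte chunks -> bin 2 */
  assert(small_index2size(small_index(24)) == 24);  /* round-trips for multiples of 8 */
  assert(is_small(MAX_SMALL_SIZE) && !is_small(MIN_LARGE_SIZE));
}
#endif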
01557 /* addressing by index. See above about smallbin repositioning */
01558 #define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
01559 #define treebin_at(M,i)     (&((M)->treebins[i]))
01560 
01561 /* assign tree index for size S to variable I */
01562 #if defined(__GNUC__) && defined(i386)
01563 #define compute_tree_index(S, I)\
01564 {\
01565   size_t X = S >> TREEBIN_SHIFT;\
01566   if (X == 0)\
01567     I = 0;\
01568   else if (X > 0xFFFF)\
01569     I = NTREEBINS-1;\
01570   else {\
01571     unsigned int K;\
01572     __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm"  (X));\
01573     I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
01574   }\
01575 }
01576 #else /* GNUC */
01577 #define compute_tree_index(S, I)\
01578 {\
01579   size_t X = S >> TREEBIN_SHIFT;\
01580   if (X == 0)\
01581     I = 0;\
01582   else if (X > 0xFFFF)\
01583     I = NTREEBINS-1;\
01584   else {\
01585     unsigned int Y = (unsigned int)X;\
01586     unsigned int N = ((Y - 0x100) >> 16) & 8;\
01587     unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
01588     N += K;\
01589     N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
01590     K = 14 - N + ((Y <<= K) >> 15);\
01591     I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
01592   }\
01593 }
01594 #endif /* GNUC */
01595 
01596 /* Bit representing maximum resolved size in a treebin at i */
01597 #define bit_for_tree_index(i) \
01598    (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
01599 
01600 /* Shift placing maximum resolved bit in a treebin at i as sign bit */
01601 #define leftshift_for_tree_index(i) \
01602    ((i == NTREEBINS-1)? 0 : \
01603     ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
01604 
01605 /* The size of the smallest chunk held in bin with index i */
01606 #define minsize_for_tree_index(i) \
01607    ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
01608    (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
01609 
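/*
  Editorial worked example: with TREEBIN_SHIFT == 8, a chunk of size
  0x200 has X == 2 and K == 1, so compute_tree_index places it in
  treebin 2.  Together with minsize_for_tree_index this shows bin 2
  spanning [0x200,0x300) and bin 3 spanning [0x300,0x400) -- two bins
  per power of two, as described in the malloc_state overview above.
*/
#if 0 /* illustration only */
static void example_tree_geometry(void) {
  bindex_t i;
  compute_tree_index((size_t)0x200, i);
  assert(i == 2);
  assert(minsize_for_tree_index(2) == 0x200);
  assert(minsize_for_tree_index(3) == 0x300);
  assert(minsize_for_tree_index(4) == 0x400);
}
#endif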
01610 
01611 /* ------------------------ Operations on bin maps ----------------------- */
01612 
01613 /* bit corresponding to given index */
01614 #define idx2bit(i)              ((binmap_t)(1) << (i))
01615 
01616 /* Mark/Clear bits with given index */
01617 #define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
01618 #define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
01619 #define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))
01620 
01621 #define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
01622 #define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
01623 #define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))
01624 
01625 /* index corresponding to given bit */
01626 
01627 #if defined(__GNUC__) && defined(i386)
01628 #define compute_bit2idx(X, I)\
01629 {\
01630   unsigned int J;\
01631   __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
01632   I = (bindex_t)J;\
01633 }
01634 
01635 #else /* GNUC */
01636 #if  USE_BUILTIN_FFS
01637 #define compute_bit2idx(X, I) I = ffs(X)-1
01638 
01639 #else /* USE_BUILTIN_FFS */
01640 #define compute_bit2idx(X, I)\
01641 {\
01642   unsigned int Y = X - 1;\
01643   unsigned int K = Y >> (16-4) & 16;\
01644   unsigned int N = K;        Y >>= K;\
01645   N += K = Y >> (8-3) &  8;  Y >>= K;\
01646   N += K = Y >> (4-2) &  4;  Y >>= K;\
01647   N += K = Y >> (2-1) &  2;  Y >>= K;\
01648   N += K = Y >> (1-0) &  1;  Y >>= K;\
01649   I = (bindex_t)(N + Y);\
01650 }
01651 #endif /* USE_BUILTIN_FFS */
01652 #endif /* GNUC */
01653 
01654 /* isolate the least set bit of a bitmap */
01655 #define least_bit(x)         ((x) & -(x))
01656 
01657 /* mask with all bits to left of least bit of x on */
01658 #define left_bits(x)         ((x<<1) | -(x<<1))
01659 
01660 /* mask with all bits to left of or equal to least bit of x on */
01661 #define same_or_left_bits(x) ((x) | -(x))
01662 
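/*
  Editorial sketch: how these bitmap macros typically combine -- finding
  the first non-empty smallbin at or above index i without touching any
  bin headers.  (The allocation paths later in this file use the same
  least_bit/compute_bit2idx idiom.)
*/
#if 0 /* illustration only; caller must first check the mask is nonzero */
static bindex_t example_next_nonempty_smallbin(mstate m, bindex_t i) {
  binmap_t candidates = same_or_left_bits(idx2bit(i)) & m->smallmap;
  binmap_t b = least_bit(candidates);  /* bit of the lowest qualifying bin */
  bindex_t j;
  compute_bit2idx(b, j);               /* convert that bit back to an index */
  return j;
}
#endif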
01663 
01664 /* ----------------------- Runtime Check Support ------------------------- */
01665 
01666 /*
01667   For security, the main invariant is that malloc/free/etc never
01668   writes to a static address other than malloc_state, unless static
01669   malloc_state itself has been corrupted, which cannot occur via
01670   malloc (because of these checks). In essence this means that we
01671   believe all pointers, sizes, maps etc held in malloc_state, but
01672   check all of those linked or offsetted from other embedded data
01673   structures.  These checks are interspersed with main code in a way
01674   that tends to minimize their run-time cost.
01675 
01676   When FOOTERS is defined, in addition to range checking, we also
01677   verify footer fields of inuse chunks, which can be used to guarantee
01678   that the mstate controlling malloc/free is intact.  This is a
01679   streamlined version of the approach described by William Robertson
01680   et al in "Run-time Detection of Heap-based Overflows" LISA'03
01681   http://www.usenix.org/events/lisa03/tech/robertson.html. The footer
01682   of an inuse chunk holds the xor of its mstate and a random seed,
01683   which is checked upon calls to free() and realloc().  This is
01684   (probabilistically) unguessable from outside the program, but can be
01685   computed by any code successfully malloc'ing any chunk, so does not
01686   itself provide protection against code that has already broken
01687   security through some other means.  Unlike Robertson et al, we
01688   always dynamically check addresses of all offset chunks (previous,
01689   next, etc). This turns out to be cheaper than relying on hashes.
01690 */
01691 
01692 #if !INSECURE
01693 /* Check if address a is at least as high as any from MORECORE or MMAP */
01694 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
01695 /* Check if address of next chunk n is higher than base chunk p */
01696 #define ok_next(p, n)    ((char*)(p) < (char*)(n))
01697 /* Check if p has its cinuse bit on */
01698 #define ok_cinuse(p)     cinuse(p)
01699 /* Check if p has its pinuse bit on */
01700 #define ok_pinuse(p)     pinuse(p)
01701 
01702 #else /* !INSECURE */
01703 #define ok_address(M, a) (1)
01704 #define ok_next(b, n)    (1)
01705 #define ok_cinuse(p)     (1)
01706 #define ok_pinuse(p)     (1)
01707 #endif /* !INSECURE */
01708 
01709 #if (FOOTERS && !INSECURE)
01710 /* Check if (alleged) mstate m has expected magic field */
01711 #define ok_magic(M)      ((M)->magic == mparams.magic)
01712 #else  /* (FOOTERS && !INSECURE) */
01713 #define ok_magic(M)      (1)
01714 #endif /* (FOOTERS && !INSECURE) */
01715 
01716 
01717 /* In gcc, use __builtin_expect to minimize impact of checks */
01718 #if !INSECURE
01719 #if defined(__GNUC__) && __GNUC__ >= 3
01720 #define RTCHECK(e)  __builtin_expect(e, 1)
01721 #else /* GNUC */
01722 #define RTCHECK(e)  (e)
01723 #endif /* GNUC */
01724 #else /* !INSECURE */
01725 #define RTCHECK(e)  (1)
01726 #endif /* !INSECURE */
01727 
01728 /* macros to set up inuse chunks with or without footers */
01729 
01730 #if !FOOTERS
01731 
01732 #define mark_inuse_foot(M,p,s)
01733 
01734 /* Set cinuse bit and pinuse bit of next chunk */
01735 #define set_inuse(M,p,s)\
01736   ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
01737   ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
01738 
01739 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
01740 #define set_inuse_and_pinuse(M,p,s)\
01741   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
01742   ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
01743 
01744 /* Set size, cinuse and pinuse bit of this chunk */
01745 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
01746   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
01747 
01748 #else /* FOOTERS */
01749 
01750 /* Set foot of inuse chunk to be xor of mstate and seed */
01751 #define mark_inuse_foot(M,p,s)\
01752   (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
01753 
01754 #define get_mstate_for(p)\
01755   ((mstate)(((mchunkptr)((char*)(p) +\
01756     (chunksize(p))))->prev_foot ^ mparams.magic))
01757 
01758 #define set_inuse(M,p,s)\
01759   ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
01760   (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
01761   mark_inuse_foot(M,p,s))
01762 
01763 #define set_inuse_and_pinuse(M,p,s)\
01764   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
01765   (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
01766  mark_inuse_foot(M,p,s))
01767 
01768 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
01769   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
01770   mark_inuse_foot(M, p, s))
01771 
01772 #endif /* !FOOTERS */
01773 
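/*
  Editorial sketch (assumes FOOTERS is defined): how the footer
  cross-check described under "Runtime Check Support" is consumed.  A
  deallocation routine first recovers the owning mstate from the footer
  and refuses to proceed unless its magic tag matches -- essentially
  what free()/realloc() do later in this file.
*/
#if 0 /* illustration only */
static void example_validate_owner(void* mem) {
  mchunkptr p = mem2chunk(mem);
  mstate fm = get_mstate_for(p);  /* next chunk's prev_foot ^ mparams.magic */
  if (!ok_magic(fm))
    USAGE_ERROR_ACTION(fm, p);    /* bad or foreign chunk trapped here */
}
#endif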
01774 /* ---------------------------- setting mparams -------------------------- */
01775 
01776 /* Initialize mparams */
01777 static int init_mparams(void) {
01778   if (mparams.page_size == 0) {
01779     size_t s;
01780 
01781     mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
01782     mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
01783 #if MORECORE_CONTIGUOUS
01784     mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
01785 #else  /* MORECORE_CONTIGUOUS */
01786     mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
01787 #endif /* MORECORE_CONTIGUOUS */
01788 
01789 #if (FOOTERS && !INSECURE)
01790     {
01791 #if USE_DEV_RANDOM
01792       int fd;
01793       unsigned char buf[sizeof(size_t)];
01794       /* Try to use /dev/urandom, else fall back on using time */
01795       if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
01796           read(fd, buf, sizeof(buf)) == sizeof(buf)) {
01797         s = *((size_t *) buf);
01798         close(fd);
01799       }
01800       else
01801 #endif /* USE_DEV_RANDOM */
01802         s = (size_t)(time(0) ^ (size_t)0x55555555U);
01803 
01804       s |= (size_t)8U;    /* ensure nonzero */
01805       s &= ~(size_t)7U;   /* improve chances of fault for bad values */
01806 
01807     }
01808 #else /* (FOOTERS && !INSECURE) */
01809     s = (size_t)0x58585858U;
01810 #endif /* (FOOTERS && !INSECURE) */
01811     ACQUIRE_MAGIC_INIT_LOCK();
01812     if (mparams.magic == 0) {
01813       mparams.magic = s;
01814       /* Set up lock for main malloc area */
01815       INITIAL_LOCK(&gm->mutex);
01816       gm->mflags = mparams.default_mflags;
01817     }
01818     RELEASE_MAGIC_INIT_LOCK();
01819 
01820 #ifndef WIN32
01821     mparams.page_size = malloc_getpagesize;
01822     mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
01823                            DEFAULT_GRANULARITY : mparams.page_size);
01824 #else /* WIN32 */
01825     {
01826       SYSTEM_INFO system_info;
01827       GetSystemInfo(&system_info);
01828       mparams.page_size = system_info.dwPageSize;
01829       mparams.granularity = system_info.dwAllocationGranularity;
01830     }
01831 #endif /* WIN32 */
01832 
01833     /* Sanity-check configuration:
01834        size_t must be unsigned and as wide as pointer type.
01835        ints must be at least 4 bytes.
01836        alignment must be at least 8.
01837        Alignment, min chunk size, and page size must all be powers of 2.
01838     */
01839     if ((sizeof(size_t) != sizeof(char*)) ||
01840         (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
01841         (sizeof(int) < 4)  ||
01842         (MALLOC_ALIGNMENT < (size_t)8U) ||
01843         ((MALLOC_ALIGNMENT    & (MALLOC_ALIGNMENT-SIZE_T_ONE))    != 0) ||
01844         ((MCHUNK_SIZE         & (MCHUNK_SIZE-SIZE_T_ONE))         != 0) ||
01845         ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
01846         ((mparams.page_size   & (mparams.page_size-SIZE_T_ONE))   != 0))
01847       ABORT;
01848   }
01849   return 0;
01850 }
01851 
01852 /* support for mallopt */
01853 static int change_mparam(int param_number, int value) {
01854   size_t val = (size_t)value;
01855   init_mparams();
01856   switch(param_number) {
01857   case M_TRIM_THRESHOLD:
01858     mparams.trim_threshold = val;
01859     return 1;
01860   case M_GRANULARITY:
01861     if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
01862       mparams.granularity = val;
01863       return 1;
01864     }
01865     else
01866       return 0;
01867   case M_MMAP_THRESHOLD:
01868     mparams.mmap_threshold = val;
01869     return 1;
01870   default:
01871     return 0;
01872   }
01873 }
01874 
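/*
  Editorial usage sketch: change_mparam backs the public mallopt entry
  point (dlmallopt when the dl prefix is in use) defined later in this
  file.  A program tuning the allocator at startup might do something
  like the following.
*/
#if 0 /* illustration only */
  mallopt(M_MMAP_THRESHOLD, 1024 * 1024);      /* direct-mmap requests >= 1MB */
  mallopt(M_GRANULARITY,    64 * 1024);        /* power of 2, >= page size */
  mallopt(M_TRIM_THRESHOLD, 2 * 1024 * 1024);  /* trim top above 2MB */
#endif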
01875 #if DEBUG
01876 /* ------------------------- Debugging Support --------------------------- */
01877 
01878 /* Check properties of any chunk, whether free, inuse, mmapped etc  */
01879 static void do_check_any_chunk(mstate m, mchunkptr p) {
01880   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
01881   assert(ok_address(m, p));
01882 }
01883 
01884 /* Check properties of top chunk */
01885 static void do_check_top_chunk(mstate m, mchunkptr p) {
01886   msegmentptr sp = segment_holding(m, (char*)p);
01887   size_t  sz = chunksize(p);
01888   assert(sp != 0);
01889   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
01890   assert(ok_address(m, p));
01891   assert(sz == m->topsize);
01892   assert(sz > 0);
01893   assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
01894   assert(pinuse(p));
01895   assert(!next_pinuse(p));
01896 }
01897 
01898 /* Check properties of (inuse) mmapped chunks */
01899 static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
01900   size_t  sz = chunksize(p);
01901   size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
01902   assert(is_mmapped(p));
01903   assert(use_mmap(m));
01904   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
01905   assert(ok_address(m, p));
01906   assert(!is_small(sz));
01907   assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
01908   assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
01909   assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
01910 }
01911 
01912 /* Check properties of inuse chunks */
01913 static void do_check_inuse_chunk(mstate m, mchunkptr p) {
01914   do_check_any_chunk(m, p);
01915   assert(cinuse(p));
01916   assert(next_pinuse(p));
01917   /* If not pinuse and not mmapped, previous chunk has OK offset */
01918   assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
01919   if (is_mmapped(p))
01920     do_check_mmapped_chunk(m, p);
01921 }
01922 
01923 /* Check properties of free chunks */
01924 static void do_check_free_chunk(mstate m, mchunkptr p) {
01925   size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
01926   mchunkptr next = chunk_plus_offset(p, sz);
01927   do_check_any_chunk(m, p);
01928   assert(!cinuse(p));
01929   assert(!next_pinuse(p));
01930   assert (!is_mmapped(p));
01931   if (p != m->dv && p != m->top) {
01932     if (sz >= MIN_CHUNK_SIZE) {
01933       assert((sz & CHUNK_ALIGN_MASK) == 0);
01934       assert(is_aligned(chunk2mem(p)));
01935       assert(next->prev_foot == sz);
01936       assert(pinuse(p));
01937       assert (next == m->top || cinuse(next));
01938       assert(p->fd->bk == p);
01939       assert(p->bk->fd == p);
01940     }
01941     else  /* markers are always of size SIZE_T_SIZE */
01942       assert(sz == SIZE_T_SIZE);
01943   }
01944 }
01945 
01946 /* Check properties of malloced chunks at the point they are malloced */
01947 static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
01948   if (mem != 0) {
01949     mchunkptr p = mem2chunk(mem);
01950     size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
01951     do_check_inuse_chunk(m, p);
01952     assert((sz & CHUNK_ALIGN_MASK) == 0);
01953     assert(sz >= MIN_CHUNK_SIZE);
01954     assert(sz >= s);
01955     /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
01956     assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
01957   }
01958 }
01959 
01960 /* Check a tree and its subtrees.  */
01961 static void do_check_tree(mstate m, tchunkptr t) {
01962   tchunkptr head = 0;
01963   tchunkptr u = t;
01964   bindex_t tindex = t->index;
01965   size_t tsize = chunksize(t);
01966   bindex_t idx;
01967   compute_tree_index(tsize, idx);
01968   assert(tindex == idx);
01969   assert(tsize >= MIN_LARGE_SIZE);
01970   assert(tsize >= minsize_for_tree_index(idx));
01971   assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
01972 
01973   do { /* traverse through chain of same-sized nodes */
01974     do_check_any_chunk(m, ((mchunkptr)u));
01975     assert(u->index == tindex);
01976     assert(chunksize(u) == tsize);
01977     assert(!cinuse(u));
01978     assert(!next_pinuse(u));
01979     assert(u->fd->bk == u);
01980     assert(u->bk->fd == u);
01981     if (u->parent == 0) {
01982       assert(u->child[0] == 0);
01983       assert(u->child[1] == 0);
01984     }
01985     else {
01986       assert(head == 0); /* only one node on chain has parent */
01987       head = u;
01988       assert(u->parent != u);
01989       assert (u->parent->child[0] == u ||
01990               u->parent->child[1] == u ||
01991               *((tbinptr*)(u->parent)) == u);
01992       if (u->child[0] != 0) {
01993         assert(u->child[0]->parent == u);
01994         assert(u->child[0] != u);
01995         do_check_tree(m, u->child[0]);
01996       }
01997       if (u->child[1] != 0) {
01998         assert(u->child[1]->parent == u);
01999         assert(u->child[1] != u);
02000         do_check_tree(m, u->child[1]);
02001       }
02002       if (u->child[0] != 0 && u->child[1] != 0) {
02003         assert(chunksize(u->child[0]) < chunksize(u->child[1]));
02004       }
02005     }
02006     u = u->fd;
02007   } while (u != t);
02008   assert(head != 0);
02009 }
02010 
02011 /*  Check all the chunks in a treebin.  */
02012 static void do_check_treebin(mstate m, bindex_t i) {
02013   tbinptr* tb = treebin_at(m, i);
02014   tchunkptr t = *tb;
02015   int empty = (m->treemap & (1U << i)) == 0;
02016   if (t == 0)
02017     assert(empty);
02018   if (!empty)
02019     do_check_tree(m, t);
02020 }
02021 
02022 /*  Check all the chunks in a smallbin.  */
02023 static void do_check_smallbin(mstate m, bindex_t i) {
02024   sbinptr b = smallbin_at(m, i);
02025   mchunkptr p = b->bk;
02026   unsigned int empty = (m->smallmap & (1U << i)) == 0;
02027   if (p == b)
02028     assert(empty);
02029   if (!empty) {
02030     for (; p != b; p = p->bk) {
02031       size_t size = chunksize(p);
02032       mchunkptr q;
02033       /* each chunk claims to be free */
02034       do_check_free_chunk(m, p);
02035       /* chunk belongs in bin */
02036       assert(small_index(size) == i);
02037       assert(p->bk == b || chunksize(p->bk) == chunksize(p));
02038       /* chunk is followed by an inuse chunk */
02039       q = next_chunk(p);
02040       if (q->head != FENCEPOST_HEAD)
02041         do_check_inuse_chunk(m, q);
02042     }
02043   }
02044 }
02045 
02046 /* Find x in a bin. Used in other check functions. */
02047 static int bin_find(mstate m, mchunkptr x) {
02048   size_t size = chunksize(x);
02049   if (is_small(size)) {
02050     bindex_t sidx = small_index(size);
02051     sbinptr b = smallbin_at(m, sidx);
02052     if (smallmap_is_marked(m, sidx)) {
02053       mchunkptr p = b;
02054       do {
02055         if (p == x)
02056           return 1;
02057       } while ((p = p->fd) != b);
02058     }
02059   }
02060   else {
02061     bindex_t tidx;
02062     compute_tree_index(size, tidx);
02063     if (treemap_is_marked(m, tidx)) {
02064       tchunkptr t = *treebin_at(m, tidx);
02065       size_t sizebits = size << leftshift_for_tree_index(tidx);
02066       while (t != 0 && chunksize(t) != size) {
02067         t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
02068         sizebits <<= 1;
02069       }
02070       if (t != 0) {
02071         tchunkptr u = t;
02072         do {
02073           if (u == (tchunkptr)x)
02074             return 1;
02075         } while ((u = u->fd) != t);
02076       }
02077     }
02078   }
02079   return 0;
02080 }
02081 
02082 /* Traverse each chunk and check it; return total */
02083 static size_t traverse_and_check(mstate m) {
02084   size_t sum = 0;
02085   if (is_initialized(m)) {
02086     msegmentptr s = &m->seg;
02087     sum += m->topsize + TOP_FOOT_SIZE;
02088     while (s != 0) {
02089       mchunkptr q = align_as_chunk(s->base);
02090       mchunkptr lastq = 0;
02091       assert(pinuse(q));
02092       while (segment_holds(s, q) &&
02093              q != m->top && q->head != FENCEPOST_HEAD) {
02094         sum += chunksize(q);
02095         if (cinuse(q)) {
02096           assert(!bin_find(m, q));
02097           do_check_inuse_chunk(m, q);
02098         }
02099         else {
02100           assert(q == m->dv || bin_find(m, q));
02101           assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
02102           do_check_free_chunk(m, q);
02103         }
02104         lastq = q;
02105         q = next_chunk(q);
02106       }
02107       s = s->next;
02108     }
02109   }
02110   return sum;
02111 }
02112 
02113 /* Check all properties of malloc_state. */
02114 static void do_check_malloc_state(mstate m) {
02115   bindex_t i;
02116   size_t total;
02117   /* check bins */
02118   for (i = 0; i < NSMALLBINS; ++i)
02119     do_check_smallbin(m, i);
02120   for (i = 0; i < NTREEBINS; ++i)
02121     do_check_treebin(m, i);
02122 
02123   if (m->dvsize != 0) { /* check dv chunk */
02124     do_check_any_chunk(m, m->dv);
02125     assert(m->dvsize == chunksize(m->dv));
02126     assert(m->dvsize >= MIN_CHUNK_SIZE);
02127     assert(bin_find(m, m->dv) == 0);
02128   }
02129 
02130   if (m->top != 0) {   /* check top chunk */
02131     do_check_top_chunk(m, m->top);
02132     assert(m->topsize == chunksize(m->top));
02133     assert(m->topsize > 0);
02134     assert(bin_find(m, m->top) == 0);
02135   }
02136 
02137   total = traverse_and_check(m);
02138   assert(total <= m->footprint);
02139   assert(m->footprint <= m->max_footprint);
02140 }
02141 #endif /* DEBUG */
02142 
02143 /* ----------------------------- statistics ------------------------------ */
02144 
02145 #if !NO_MALLINFO
02146 static struct mallinfo internal_mallinfo(mstate m) {
02147   struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
02148   if (!PREACTION(m)) {
02149     check_malloc_state(m);
02150     if (is_initialized(m)) {
02151       size_t nfree = SIZE_T_ONE; /* top always free */
02152       size_t mfree = m->topsize + TOP_FOOT_SIZE;
02153       size_t sum = mfree;
02154       msegmentptr s = &m->seg;
02155       while (s != 0) {
02156         mchunkptr q = align_as_chunk(s->base);
02157         while (segment_holds(s, q) &&
02158                q != m->top && q->head != FENCEPOST_HEAD) {
02159           size_t sz = chunksize(q);
02160           sum += sz;
02161           if (!cinuse(q)) {
02162             mfree += sz;
02163             ++nfree;
02164           }
02165           q = next_chunk(q);
02166         }
02167         s = s->next;
02168       }
02169 
02170       nm.arena    = sum;
02171       nm.ordblks  = nfree;
02172       nm.hblkhd   = m->footprint - sum;
02173       nm.usmblks  = m->max_footprint;
02174       nm.uordblks = m->footprint - mfree;
02175       nm.fordblks = mfree;
02176       nm.keepcost = m->topsize;
02177     }
02178 
02179     POSTACTION(m);
02180   }
02181   return nm;
02182 }
02183 #endif /* !NO_MALLINFO */
02184 
02185 static void internal_malloc_stats(mstate m) {
02186   if (!PREACTION(m)) {
02187     size_t maxfp = 0;
02188     size_t fp = 0;
02189     size_t used = 0;
02190     check_malloc_state(m);
02191     if (is_initialized(m)) {
02192       msegmentptr s = &m->seg;
02193       maxfp = m->max_footprint;
02194       fp = m->footprint;
02195       used = fp - (m->topsize + TOP_FOOT_SIZE);
02196 
02197       while (s != 0) {
02198         mchunkptr q = align_as_chunk(s->base);
02199         while (segment_holds(s, q) &&
02200                q != m->top && q->head != FENCEPOST_HEAD) {
02201           if (!cinuse(q))
02202             used -= chunksize(q);
02203           q = next_chunk(q);
02204         }
02205         s = s->next;
02206       }
02207     }
02208 
02209     fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
02210     fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
02211     fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));
02212 
02213     POSTACTION(m);
02214   }
02215 }
02216 
02217 /* ----------------------- Operations on smallbins ----------------------- */
02218 
02219 /*
02220   Various forms of linking and unlinking are defined as macros, even
02221   the ones for trees, which are very long but have very short typical
02222   paths.  This is ugly but reduces reliance on compilers' inlining
02223   support.
02224 */
02225 
02226 /* Link a free chunk into a smallbin  */
02227 #define insert_small_chunk(M, P, S) {\
02228   bindex_t I  = small_index(S);\
02229   mchunkptr B = smallbin_at(M, I);\
02230   mchunkptr F = B;\
02231   assert(S >= MIN_CHUNK_SIZE);\
02232   if (!smallmap_is_marked(M, I))\
02233     mark_smallmap(M, I);\
02234   else if (RTCHECK(ok_address(M, B->fd)))\
02235     F = B->fd;\
02236   else {\
02237     CORRUPTION_ERROR_ACTION(M);\
02238   }\
02239   B->fd = P;\
02240   F->bk = P;\
02241   P->fd = F;\
02242   P->bk = B;\
02243 }
02244 
02245 /* Unlink a chunk from a smallbin  */
02246 #define unlink_small_chunk(M, P, S) {\
02247   mchunkptr F = P->fd;\
02248   mchunkptr B = P->bk;\
02249   bindex_t I = small_index(S);\
02250   assert(P != B);\
02251   assert(P != F);\
02252   assert(chunksize(P) == small_index2size(I));\
02253   if (F == B)\
02254     clear_smallmap(M, I);\
02255   else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
02256                    (B == smallbin_at(M,I) || ok_address(M, B)))) {\
02257     F->bk = B;\
02258     B->fd = F;\
02259   }\
02260   else {\
02261     CORRUPTION_ERROR_ACTION(M);\
02262   }\
02263 }
02264 
02265 /* Unlink the first chunk from a smallbin */
02266 #define unlink_first_small_chunk(M, B, P, I) {\
02267   mchunkptr F = P->fd;\
02268   assert(P != B);\
02269   assert(P != F);\
02270   assert(chunksize(P) == small_index2size(I));\
02271   if (B == F)\
02272     clear_smallmap(M, I);\
02273   else if (RTCHECK(ok_address(M, F))) {\
02274     B->fd = F;\
02275     F->bk = B;\
02276   }\
02277   else {\
02278     CORRUPTION_ERROR_ACTION(M);\
02279   }\
02280 }
02281 
02282 /* Replace dv node, binning the old one */
02283 /* Used only when dvsize known to be small */
02284 #define replace_dv(M, P, S) {\
02285   size_t DVS = M->dvsize;\
02286   if (DVS != 0) {\
02287     mchunkptr DV = M->dv;\
02288     assert(is_small(DVS));\
02289     insert_small_chunk(M, DV, DVS);\
02290   }\
02291   M->dvsize = S;\
02292   M->dv = P;\
02293 }
02294 
02295 /* ------------------------- Operations on trees ------------------------- */
02296 
02297 /* Insert chunk into tree */
02298 #define insert_large_chunk(M, X, S) {\
02299   tbinptr* H;\
02300   bindex_t I;\
02301   compute_tree_index(S, I);\
02302   H = treebin_at(M, I);\
02303   X->index = I;\
02304   X->child[0] = X->child[1] = 0;\
02305   if (!treemap_is_marked(M, I)) {\
02306     mark_treemap(M, I);\
02307     *H = X;\
02308     X->parent = (tchunkptr)H;\
02309     X->fd = X->bk = X;\
02310   }\
02311   else {\
02312     tchunkptr T = *H;\
02313     size_t K = S << leftshift_for_tree_index(I);\
02314     for (;;) {\
02315       if (chunksize(T) != S) {\
02316         tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
02317         K <<= 1;\
02318         if (*C != 0)\
02319           T = *C;\
02320         else if (RTCHECK(ok_address(M, C))) {\
02321           *C = X;\
02322           X->parent = T;\
02323           X->fd = X->bk = X;\
02324           break;\
02325         }\
02326         else {\
02327           CORRUPTION_ERROR_ACTION(M);\
02328           break;\
02329         }\
02330       }\
02331       else {\
02332         tchunkptr F = T->fd;\
02333         if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
02334           T->fd = F->bk = X;\
02335           X->fd = F;\
02336           X->bk = T;\
02337           X->parent = 0;\
02338           break;\
02339         }\
02340         else {\
02341           CORRUPTION_ERROR_ACTION(M);\
02342           break;\
02343         }\
02344       }\
02345     }\
02346   }\
02347 }
02348 
02349 /*
02350   Unlink steps:
02351 
02352   1. If x is a chained node, unlink it from its same-sized fd/bk links
02353      and choose its bk node as its replacement.
02354   2. If x was the last node of its size, but not a leaf node, it must
02355      be replaced with a leaf node (not merely one with an open left or
02356      right), to make sure that lefts and rights of descendants
02357      correspond properly to bit masks.  We use the rightmost descendent
02358      of x.  We could use any other leaf, but this is easy to locate and
02359      tends to counteract removal of leftmosts elsewhere, and so keeps
02360      paths shorter than minimally guaranteed.  This doesn't loop much
02361      because on average a node in a tree is near the bottom.
02362   3. If x is the base of a chain (i.e., has parent links) relink
02363      x's parent and children to x's replacement (or null if none).
02364 */
02365 
02366 #define unlink_large_chunk(M, X) {\
02367   tchunkptr XP = X->parent;\
02368   tchunkptr R;\
02369   if (X->bk != X) {\
02370     tchunkptr F = X->fd;\
02371     R = X->bk;\
02372     if (RTCHECK(ok_address(M, F))) {\
02373       F->bk = R;\
02374       R->fd = F;\
02375     }\
02376     else {\
02377       CORRUPTION_ERROR_ACTION(M);\
02378     }\
02379   }\
02380   else {\
02381     tchunkptr* RP;\
02382     if (((R = *(RP = &(X->child[1]))) != 0) ||\
02383         ((R = *(RP = &(X->child[0]))) != 0)) {\
02384       tchunkptr* CP;\
02385       while ((*(CP = &(R->child[1])) != 0) ||\
02386              (*(CP = &(R->child[0])) != 0)) {\
02387         R = *(RP = CP);\
02388       }\
02389       if (RTCHECK(ok_address(M, RP)))\
02390         *RP = 0;\
02391       else {\
02392         CORRUPTION_ERROR_ACTION(M);\
02393       }\
02394     }\
02395   }\
02396   if (XP != 0) {\
02397     tbinptr* H = treebin_at(M, X->index);\
02398     if (X == *H) {\
02399       if ((*H = R) == 0) \
02400         clear_treemap(M, X->index);\
02401     }\
02402     else if (RTCHECK(ok_address(M, XP))) {\
02403       if (XP->child[0] == X) \
02404         XP->child[0] = R;\
02405       else \
02406         XP->child[1] = R;\
02407     }\
02408     else\
02409       CORRUPTION_ERROR_ACTION(M);\
02410     if (R != 0) {\
02411       if (RTCHECK(ok_address(M, R))) {\
02412         tchunkptr C0, C1;\
02413         R->parent = XP;\
02414         if ((C0 = X->child[0]) != 0) {\
02415           if (RTCHECK(ok_address(M, C0))) {\
02416             R->child[0] = C0;\
02417             C0->parent = R;\
02418           }\
02419           else\
02420             CORRUPTION_ERROR_ACTION(M);\
02421         }\
02422         if ((C1 = X->child[1]) != 0) {\
02423           if (RTCHECK(ok_address(M, C1))) {\
02424             R->child[1] = C1;\
02425             C1->parent = R;\
02426           }\
02427           else\
02428             CORRUPTION_ERROR_ACTION(M);\
02429         }\
02430       }\
02431       else\
02432         CORRUPTION_ERROR_ACTION(M);\
02433     }\
02434   }\
02435 }
02436 
02437 /* Relays to large vs small bin operations */
02438 
02439 #define insert_chunk(M, P, S)\
02440   if (is_small(S)) insert_small_chunk(M, P, S)\
02441   else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
02442 
02443 #define unlink_chunk(M, P, S)\
02444   if (is_small(S)) unlink_small_chunk(M, P, S)\
02445   else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
02446 
02447 
02448 /* Relays to internal calls to malloc/free from realloc, memalign etc */
02449 
02450 #if ONLY_MSPACES
02451 #define internal_malloc(m, b) mspace_malloc(m, b)
02452 #define internal_free(m, mem) mspace_free(m,mem);
02453 #else /* ONLY_MSPACES */
02454 #if MSPACES
02455 #define internal_malloc(m, b)\
02456    (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
02457 #define internal_free(m, mem)\
02458    if (m == gm) dlfree(mem); else mspace_free(m,mem);
02459 #else /* MSPACES */
02460 #define internal_malloc(m, b) dlmalloc(b)
02461 #define internal_free(m, mem) dlfree(mem)
02462 #endif /* MSPACES */
02463 #endif /* ONLY_MSPACES */
02464 
02465 /* -----------------------  Direct-mmapping chunks ----------------------- */
02466 
02467 /*
02468   Directly mmapped chunks are set up with an offset to the start of
02469   the mmapped region stored in the prev_foot field of the chunk. This
02470   allows reconstruction of the required argument to MUNMAP when freed,
02471   and also allows adjustment of the returned chunk to meet alignment
02472   requirements (especially in memalign).  There is also enough space
02473   allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
02474   the PINUSE bit so frees can be checked.
02475 */
02476 
02477 /* Malloc using mmap */
02478 static void* mmap_alloc(mstate m, size_t nb) {
02479   size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
02480   if (mmsize > nb) {     /* Check for wrap around 0 */
02481     char* mm = (char*)(DIRECT_MMAP(mmsize));
02482     if (mm != CMFAIL) {
02483       size_t offset = align_offset(chunk2mem(mm));
02484       size_t psize = mmsize - offset - MMAP_FOOT_PAD;
02485       mchunkptr p = (mchunkptr)(mm + offset);
02486       p->prev_foot = offset | IS_MMAPPED_BIT;
02487       (p)->head = (psize|CINUSE_BIT);
02488       mark_inuse_foot(m, p, psize);
02489       chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
02490       chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
02491 
02492       if (mm < m->least_addr)
02493         m->least_addr = mm;
02494       if ((m->footprint += mmsize) > m->max_footprint)
02495         m->max_footprint = m->footprint;
02496       assert(is_aligned(chunk2mem(p)));
02497       check_mmapped_chunk(m, p);
02498       return chunk2mem(p);
02499     }
02500   }
02501   return 0;
02502 }
02503 
02504 /* Realloc using mmap */
02505 static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
02506   size_t oldsize = chunksize(oldp);
02507   if (is_small(nb)) /* Can't shrink mmap regions below small size */
02508     return 0;
02509   /* Keep old chunk if big enough but not too big */
02510   if (oldsize >= nb + SIZE_T_SIZE &&
02511       (oldsize - nb) <= (mparams.granularity << 1))
02512     return oldp;
02513   else {
02514     size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
02515     size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
02516     size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
02517                                          CHUNK_ALIGN_MASK);
02518     char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
02519                                   oldmmsize, newmmsize, 1);
02520     if (cp != CMFAIL) {
02521       mchunkptr newp = (mchunkptr)(cp + offset);
02522       size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
02523       newp->head = (psize|CINUSE_BIT);
02524       mark_inuse_foot(m, newp, psize);
02525       chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
02526       chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
02527 
02528       if (cp < m->least_addr)
02529         m->least_addr = cp;
02530       if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
02531         m->max_footprint = m->footprint;
02532       check_mmapped_chunk(m, newp);
02533       return newp;
02534     }
02535   }
02536   return 0;
02537 }
02538 
02539 /* -------------------------- mspace management -------------------------- */
02540 
02541 /* Initialize top chunk and its size */
02542 static void init_top(mstate m, mchunkptr p, size_t psize) {
02543   /* Ensure alignment */
02544   size_t offset = align_offset(chunk2mem(p));
02545   p = (mchunkptr)((char*)p + offset);
02546   psize -= offset;
02547 
02548   m->top = p;
02549   m->topsize = psize;
02550   p->head = psize | PINUSE_BIT;
02551   /* set size of fake trailing chunk holding overhead space only once */
02552   chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
02553   m->trim_check = mparams.trim_threshold; /* reset on each update */
02554 }
02555 
02556 /* Initialize bins for a new mstate that is otherwise zeroed out */
02557 static void init_bins(mstate m) {
02558   /* Establish circular links for smallbins */
02559   bindex_t i;
02560   for (i = 0; i < NSMALLBINS; ++i) {
02561     sbinptr bin = smallbin_at(m,i);
02562     bin->fd = bin->bk = bin;
02563   }
02564 }
02565 
02566 #if PROCEED_ON_ERROR
02567 
02568 /* default corruption action */
02569 static void reset_on_error(mstate m) {
02570   int i;
02571   ++malloc_corruption_error_count;
02572   /* Reinitialize fields to forget about all memory */
02573   m->smallbins = m->treebins = 0;
02574   m->dvsize = m->topsize = 0;
02575   m->seg.base = 0;
02576   m->seg.size = 0;
02577   m->seg.next = 0;
02578   m->top = m->dv = 0;
02579   for (i = 0; i < NTREEBINS; ++i)
02580     *treebin_at(m, i) = 0;
02581   init_bins(m);
02582 }
02583 #endif /* PROCEED_ON_ERROR */
02584 
02585 /* Allocate chunk and prepend remainder with chunk in successor base. */
02586 static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
02587                            size_t nb) {
02588   mchunkptr p = align_as_chunk(newbase);
02589   mchunkptr oldfirst = align_as_chunk(oldbase);
02590   size_t psize = (char*)oldfirst - (char*)p;
02591   mchunkptr q = chunk_plus_offset(p, nb);
02592   size_t qsize = psize - nb;
02593   set_size_and_pinuse_of_inuse_chunk(m, p, nb);
02594 
02595   assert((char*)oldfirst > (char*)q);
02596   assert(pinuse(oldfirst));
02597   assert(qsize >= MIN_CHUNK_SIZE);
02598 
02599   /* consolidate remainder with first chunk of old base */
02600   if (oldfirst == m->top) {
02601     size_t tsize = m->topsize += qsize;
02602     m->top = q;
02603     q->head = tsize | PINUSE_BIT;
02604     check_top_chunk(m, q);
02605   }
02606   else if (oldfirst == m->dv) {
02607     size_t dsize = m->dvsize += qsize;
02608     m->dv = q;
02609     set_size_and_pinuse_of_free_chunk(q, dsize);
02610   }
02611   else {
02612     if (!cinuse(oldfirst)) {
02613       size_t nsize = chunksize(oldfirst);
02614       unlink_chunk(m, oldfirst, nsize);
02615       oldfirst = chunk_plus_offset(oldfirst, nsize);
02616       qsize += nsize;
02617     }
02618     set_free_with_pinuse(q, qsize, oldfirst);
02619     insert_chunk(m, q, qsize);
02620     check_free_chunk(m, q);
02621   }
02622 
02623   check_malloced_chunk(m, chunk2mem(p), nb);
02624   return chunk2mem(p);
02625 }
02626 
02627 
02628 /* Add a segment to hold a new noncontiguous region */
02629 static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
02630   /* Determine locations and sizes of segment, fenceposts, old top */
02631   char* old_top = (char*)m->top;
02632   msegmentptr oldsp = segment_holding(m, old_top);
02633   char* old_end = oldsp->base + oldsp->size;
02634   size_t ssize = pad_request(sizeof(struct malloc_segment));
02635   char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
02636   size_t offset = align_offset(chunk2mem(rawsp));
02637   char* asp = rawsp + offset;
02638   char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
02639   mchunkptr sp = (mchunkptr)csp;
02640   msegmentptr ss = (msegmentptr)(chunk2mem(sp));
02641   mchunkptr tnext = chunk_plus_offset(sp, ssize);
02642   mchunkptr p = tnext;
02643   int nfences = 0;
02644 
02645   /* reset top to new space */
02646   init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
02647 
02648   /* Set up segment record */
02649   assert(is_aligned(ss));
02650   set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
02651   *ss = m->seg; /* Push current record */
02652   m->seg.base = tbase;
02653   m->seg.size = tsize;
02654   m->seg.sflags = mmapped;
02655   m->seg.next = ss;
02656 
02657   /* Insert trailing fenceposts */
02658   for (;;) {
02659     mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
02660     p->head = FENCEPOST_HEAD;
02661     ++nfences;
02662     if ((char*)(&(nextp->head)) < old_end)
02663       p = nextp;
02664     else
02665       break;
02666   }
02667   assert(nfences >= 2);
02668 
02669   /* Insert the rest of old top into a bin as an ordinary free chunk */
02670   if (csp != old_top) {
02671     mchunkptr q = (mchunkptr)old_top;
02672     size_t psize = csp - old_top;
02673     mchunkptr tn = chunk_plus_offset(q, psize);
02674     set_free_with_pinuse(q, psize, tn);
02675     insert_chunk(m, q, psize);
02676   }
02677 
02678   check_top_chunk(m, m->top);
02679 }
02680 
02681 /* -------------------------- System allocation -------------------------- */
02682 
02683 /* Get memory from system using MORECORE or MMAP */
02684 static void* sys_alloc(mstate m, size_t nb) {
02685   char* tbase = CMFAIL;
02686   size_t tsize = 0;
02687   flag_t mmap_flag = 0;
02688 
02689   init_mparams();
02690 
02691   /* Directly map large chunks */
02692   if (use_mmap(m) && nb >= mparams.mmap_threshold) {
02693     void* mem = mmap_alloc(m, nb);
02694     if (mem != 0)
02695       return mem;
02696   }
02697 
02698   /*
02699     Try getting memory in any of three ways (in most-preferred to
02700     least-preferred order):
02701     1. A call to MORECORE that can normally contiguously extend memory.
02702        (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
02703        main space is mmapped or a previous contiguous call failed)
02704     2. A call to MMAP new space (disabled if not HAVE_MMAP).
02705        Note that under the default settings, if MORECORE is unable to
02706        fulfill a request, and HAVE_MMAP is true, then mmap is
02707        used as a noncontiguous system allocator. This is a useful backup
02708        strategy for systems with holes in address spaces -- in this case
02709        sbrk cannot contiguously expand the heap, but mmap may be able to
02710        find space.
02711     3. A call to MORECORE that cannot usually contiguously extend memory.
02712        (disabled if not HAVE_MORECORE)
02713   */
02714 
02715   if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
02716     char* br = CMFAIL;
02717     msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
02718     size_t asize = 0;
02719     ACQUIRE_MORECORE_LOCK();
02720 
02721     if (ss == 0) {  /* First time through or recovery */
02722       char* base = (char*)CALL_MORECORE(0);
02723       if (base != CMFAIL) {
02724         asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
02725         /* Adjust to end on a page boundary */
02726         if (!is_page_aligned(base))
02727           asize += (page_align((size_t)base) - (size_t)base);
02728         /* Can't call MORECORE if size is negative when treated as signed */
02729         if (asize < HALF_MAX_SIZE_T &&
02730             (br = (char*)(CALL_MORECORE(asize))) == base) {
02731           tbase = base;
02732           tsize = asize;
02733         }
02734       }
02735     }
02736     else {
02737       /* Subtract out existing available top space from MORECORE request. */
02738       asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
02739       /* Use mem here only if it did contiguously extend old space */
02740       if (asize < HALF_MAX_SIZE_T &&
02741           (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
02742         tbase = br;
02743         tsize = asize;
02744       }
02745     }
02746 
02747     if (tbase == CMFAIL) {    /* Cope with partial failure */
02748       if (br != CMFAIL) {    /* Try to use/extend the space we did get */
02749         if (asize < HALF_MAX_SIZE_T &&
02750             asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
02751           size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
02752           if (esize < HALF_MAX_SIZE_T) {
02753             char* end = (char*)CALL_MORECORE(esize);
02754             if (end != CMFAIL)
02755               asize += esize;
02756             else {            /* Can't use; try to release */
02757               CALL_MORECORE(-asize);
02758               br = CMFAIL;
02759             }
02760           }
02761         }
02762       }
02763       if (br != CMFAIL) {    /* Use the space we did get */
02764         tbase = br;
02765         tsize = asize;
02766       }
02767       else
02768         disable_contiguous(m); /* Don't try contiguous path in the future */
02769     }
02770 
02771     RELEASE_MORECORE_LOCK();
02772   }
02773 
02774   if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
02775     size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
02776     size_t rsize = granularity_align(req);
02777     if (rsize > nb) { /* Fail if wraps around zero */
02778       char* mp = (char*)(CALL_MMAP(rsize));
02779       if (mp != CMFAIL) {
02780         tbase = mp;
02781         tsize = rsize;
02782         mmap_flag = IS_MMAPPED_BIT;
02783       }
02784     }
02785   }
02786 
02787   if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
02788     size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
02789     if (asize < HALF_MAX_SIZE_T) {
02790       char* br = CMFAIL;
02791       char* end = CMFAIL;
02792       ACQUIRE_MORECORE_LOCK();
02793       br = (char*)(CALL_MORECORE(asize));
02794       end = (char*)(CALL_MORECORE(0));
02795       RELEASE_MORECORE_LOCK();
02796       if (br != CMFAIL && end != CMFAIL && br < end) {
02797         size_t ssize = end - br;
02798         if (ssize > nb + TOP_FOOT_SIZE) {
02799           tbase = br;
02800           tsize = ssize;
02801         }
02802       }
02803     }
02804   }
02805 
02806   if (tbase != CMFAIL) {
02807 
02808     if ((m->footprint += tsize) > m->max_footprint)
02809       m->max_footprint = m->footprint;
02810 
02811     if (!is_initialized(m)) { /* first-time initialization */
02812       m->seg.base = m->least_addr = tbase;
02813       m->seg.size = tsize;
02814       m->seg.sflags = mmap_flag;
02815       m->magic = mparams.magic;
02816       init_bins(m);
02817       if (is_global(m)) 
02818         init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
02819       else {
02820         /* Offset top by embedded malloc_state */
02821         mchunkptr mn = next_chunk(mem2chunk(m));
02822         init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
02823       }
02824     }
02825 
02826     else {
02827       /* Try to merge with an existing segment */
02828       msegmentptr sp = &m->seg;
02829       while (sp != 0 && tbase != sp->base + sp->size)
02830         sp = sp->next;
02831       if (sp != 0 &&
02832           !is_extern_segment(sp) &&
02833           (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
02834           segment_holds(sp, m->top)) { /* append */
02835         sp->size += tsize;
02836         init_top(m, m->top, m->topsize + tsize);
02837       }
02838       else {
02839         if (tbase < m->least_addr)
02840           m->least_addr = tbase;
02841         sp = &m->seg;
02842         while (sp != 0 && sp->base != tbase + tsize)
02843           sp = sp->next;
02844         if (sp != 0 &&
02845             !is_extern_segment(sp) &&
02846             (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
02847           char* oldbase = sp->base;
02848           sp->base = tbase;
02849           sp->size += tsize;
02850           return prepend_alloc(m, tbase, oldbase, nb);
02851         }
02852         else
02853           add_segment(m, tbase, tsize, mmap_flag);
02854       }
02855     }
02856 
02857     if (nb < m->topsize) { /* Allocate from new or extended top space */
02858       size_t rsize = m->topsize -= nb;
02859       mchunkptr p = m->top;
02860       mchunkptr r = m->top = chunk_plus_offset(p, nb);
02861       r->head = rsize | PINUSE_BIT;
02862       set_size_and_pinuse_of_inuse_chunk(m, p, nb);
02863       check_top_chunk(m, m->top);
02864       check_malloced_chunk(m, chunk2mem(p), nb);
02865       return chunk2mem(p);
02866     }
02867   }
02868 
02869   MALLOC_FAILURE_ACTION;
02870   return 0;
02871 }
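
/*
  Configuration sketch (illustrative assumptions, not a prescription): the
  three acquisition paths above are chosen by the compile-time options used
  in sys_alloc.  For example, a build that should never touch sbrk might use

      #define HAVE_MORECORE 0        // disables paths 1 and 3
      #define HAVE_MMAP     1        // path 2 becomes the only system source

  while a target without mmap would instead set HAVE_MMAP to 0 and supply a
  suitable MORECORE (see the alternative MORECORE notes near the end of this
  file).
*/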
02872 
02873 /* -----------------------  system deallocation -------------------------- */
02874 
02875 /* Unmap and unlink any mmapped segments that don't contain used chunks */
02876 static size_t release_unused_segments(mstate m) {
02877   size_t released = 0;
02878   msegmentptr pred = &m->seg;
02879   msegmentptr sp = pred->next;
02880   while (sp != 0) {
02881     char* base = sp->base;
02882     size_t size = sp->size;
02883     msegmentptr next = sp->next;
02884     if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
02885       mchunkptr p = align_as_chunk(base);
02886       size_t psize = chunksize(p);
02887       /* Can unmap if first chunk holds entire segment and not pinned */
02888       if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
02889         tchunkptr tp = (tchunkptr)p;
02890         assert(segment_holds(sp, (char*)sp));
02891         if (p == m->dv) {
02892           m->dv = 0;
02893           m->dvsize = 0;
02894         }
02895         else {
02896           unlink_large_chunk(m, tp);
02897         }
02898         if (CALL_MUNMAP(base, size) == 0) {
02899           released += size;
02900           m->footprint -= size;
02901           /* unlink obsoleted record */
02902           sp = pred;
02903           sp->next = next;
02904         }
02905         else { /* back out if cannot unmap */
02906           insert_large_chunk(m, tp, psize);
02907         }
02908       }
02909     }
02910     pred = sp;
02911     sp = next;
02912   }
02913   return released;
02914 }
02915 
02916 static int sys_trim(mstate m, size_t pad) {
02917   size_t released = 0;
02918   if (pad < MAX_REQUEST && is_initialized(m)) {
02919     pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
02920 
02921     if (m->topsize > pad) {
02922       /* Shrink top space in granularity-size units, keeping at least one */
02923       size_t unit = mparams.granularity;
02924       size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
02925                       SIZE_T_ONE) * unit;
02926       msegmentptr sp = segment_holding(m, (char*)m->top);
02927 
02928       if (!is_extern_segment(sp)) {
02929         if (is_mmapped_segment(sp)) {
02930           if (HAVE_MMAP &&
02931               sp->size >= extra &&
02932               !has_segment_link(m, sp)) { /* can't shrink if pinned */
02933             /* Prefer mremap, fall back to munmap */
02934             if ((CALL_MREMAP(sp->base, sp->size, sp->size - extra, 0) != MFAIL) ||
02935                 (CALL_MUNMAP(sp->base + sp->size - extra, extra) == 0)) {
02936               released = extra;
02937             }
02938           }
02939         }
02940         else if (HAVE_MORECORE) {
02941           if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
02942             extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
02943           ACQUIRE_MORECORE_LOCK();
02944           {
02945             /* Make sure end of memory is where we last set it. */
02946             char* old_br = (char*)(CALL_MORECORE(0));
02947             if (old_br == sp->base + sp->size) {
02948               char* rel_br = (char*)(CALL_MORECORE(-extra));
02949               char* new_br = (char*)(CALL_MORECORE(0));
02950               if (rel_br != CMFAIL && new_br < old_br)
02951                 released = old_br - new_br;
02952             }
02953           }
02954           RELEASE_MORECORE_LOCK();
02955         }
02956       }
02957 
02958       if (released != 0) {
02959         sp->size -= released;
02960         m->footprint -= released;
02961         init_top(m, m->top, m->topsize - released);
02962         check_top_chunk(m, m->top);
02963       }
02964     }
02965 
02966     /* Unmap any unused mmapped segments */
02967     if (HAVE_MMAP) 
02968       released += release_unused_segments(m);
02969 
02970     /* On failure, disable autotrim to avoid repeated failed future calls */
02971     if (released == 0)
02972       m->trim_check = MAX_SIZE_T;
02973   }
02974 
02975   return (released != 0)? 1 : 0;
02976 }
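
/*
  Usage sketch (illustrative only): sys_trim is normally triggered by the
  trim_check heuristic, but it can also be invoked on demand through
  dlmalloc_trim (defined below).  For example, after dropping a large
  working set an application might call

      dlmalloc_trim(64 * 1024);   // return memory, but keep ~64KB of slack

  where the pad argument is the amount of top space to preserve for future
  requests.
*/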
02977 
02978 /* ---------------------------- malloc support --------------------------- */
02979 
02980 /* allocate a large request from the best fitting chunk in a treebin */
02981 static void* tmalloc_large(mstate m, size_t nb) {
02982   tchunkptr v = 0;
02983   size_t rsize = -nb; /* Unsigned negation */
02984   tchunkptr t;
02985   bindex_t idx;
02986   compute_tree_index(nb, idx);
02987 
02988   if ((t = *treebin_at(m, idx)) != 0) {
02989     /* Traverse tree for this bin looking for node with size == nb */
02990     size_t sizebits = nb << leftshift_for_tree_index(idx);
02991     tchunkptr rst = 0;  /* The deepest untaken right subtree */
02992     for (;;) {
02993       tchunkptr rt;
02994       size_t trem = chunksize(t) - nb;
02995       if (trem < rsize) {
02996         v = t;
02997         if ((rsize = trem) == 0)
02998           break;
02999       }
03000       rt = t->child[1];
03001       t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
03002       if (rt != 0 && rt != t)
03003         rst = rt;
03004       if (t == 0) {
03005         t = rst; /* set t to least subtree holding sizes > nb */
03006         break;
03007       }
03008       sizebits <<= 1;
03009     }
03010   }
03011 
03012   if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
03013     binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
03014     if (leftbits != 0) {
03015       bindex_t i;
03016       binmap_t leastbit = least_bit(leftbits);
03017       compute_bit2idx(leastbit, i);
03018       t = *treebin_at(m, i);
03019     }
03020   }
03021 
03022   while (t != 0) { /* find smallest of tree or subtree */
03023     size_t trem = chunksize(t) - nb;
03024     if (trem < rsize) {
03025       rsize = trem;
03026       v = t;
03027     }
03028     t = leftmost_child(t);
03029   }
03030 
03031   /*  If dv is a better fit, return 0 so malloc will use it */
03032   if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
03033     if (RTCHECK(ok_address(m, v))) { /* split */
03034       mchunkptr r = chunk_plus_offset(v, nb);
03035       assert(chunksize(v) == rsize + nb);
03036       if (RTCHECK(ok_next(v, r))) {
03037         unlink_large_chunk(m, v);
03038         if (rsize < MIN_CHUNK_SIZE)
03039           set_inuse_and_pinuse(m, v, (rsize + nb));
03040         else {
03041           set_size_and_pinuse_of_inuse_chunk(m, v, nb);
03042           set_size_and_pinuse_of_free_chunk(r, rsize);
03043           insert_chunk(m, r, rsize);
03044         }
03045         return chunk2mem(v);
03046       }
03047     }
03048     CORRUPTION_ERROR_ACTION(m);
03049   }
03050   return 0;
03051 }
03052 
03053 /* allocate a small request from the best fitting chunk in a treebin */
03054 static void* tmalloc_small(mstate m, size_t nb) {
03055   tchunkptr t, v;
03056   size_t rsize;
03057   bindex_t i;
03058   binmap_t leastbit = least_bit(m->treemap);
03059   compute_bit2idx(leastbit, i);
03060 
03061   v = t = *treebin_at(m, i);
03062   rsize = chunksize(t) - nb;
03063 
03064   while ((t = leftmost_child(t)) != 0) {
03065     size_t trem = chunksize(t) - nb;
03066     if (trem < rsize) {
03067       rsize = trem;
03068       v = t;
03069     }
03070   }
03071 
03072   if (RTCHECK(ok_address(m, v))) {
03073     mchunkptr r = chunk_plus_offset(v, nb);
03074     assert(chunksize(v) == rsize + nb);
03075     if (RTCHECK(ok_next(v, r))) {
03076       unlink_large_chunk(m, v);
03077       if (rsize < MIN_CHUNK_SIZE)
03078         set_inuse_and_pinuse(m, v, (rsize + nb));
03079       else {
03080         set_size_and_pinuse_of_inuse_chunk(m, v, nb);
03081         set_size_and_pinuse_of_free_chunk(r, rsize);
03082         replace_dv(m, r, rsize);
03083       }
03084       return chunk2mem(v);
03085     }
03086   }
03087 
03088   CORRUPTION_ERROR_ACTION(m);
03089   return 0;
03090 }
03091 
03092 /* --------------------------- realloc support --------------------------- */
03093 
03094 static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
03095   if (bytes >= MAX_REQUEST) {
03096     MALLOC_FAILURE_ACTION;
03097     return 0;
03098   }
03099   if (!PREACTION(m)) {
03100     mchunkptr oldp = mem2chunk(oldmem);
03101     size_t oldsize = chunksize(oldp);
03102     mchunkptr next = chunk_plus_offset(oldp, oldsize);
03103     mchunkptr newp = 0;
03104     void* extra = 0;
03105 
03106     /* Try to either shrink or extend into top. Else malloc-copy-free */
03107 
03108     if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
03109                 ok_next(oldp, next) && ok_pinuse(next))) {
03110       size_t nb = request2size(bytes);
03111       if (is_mmapped(oldp))
03112         newp = mmap_resize(m, oldp, nb);
03113       else if (oldsize >= nb) { /* already big enough */
03114         size_t rsize = oldsize - nb;
03115         newp = oldp;
03116         if (rsize >= MIN_CHUNK_SIZE) {
03117           mchunkptr remainder = chunk_plus_offset(newp, nb);
03118           set_inuse(m, newp, nb);
03119           set_inuse(m, remainder, rsize);
03120           extra = chunk2mem(remainder);
03121         }
03122       }
03123       else if (next == m->top && oldsize + m->topsize > nb) {
03124         /* Expand into top */
03125         size_t newsize = oldsize + m->topsize;
03126         size_t newtopsize = newsize - nb;
03127         mchunkptr newtop = chunk_plus_offset(oldp, nb);
03128         set_inuse(m, oldp, nb);
03129         newtop->head = newtopsize |PINUSE_BIT;
03130         m->top = newtop;
03131         m->topsize = newtopsize;
03132         newp = oldp;
03133       }
03134     }
03135     else {
03136       USAGE_ERROR_ACTION(m, oldmem);
03137       POSTACTION(m);
03138       return 0;
03139     }
03140 
03141     POSTACTION(m);
03142 
03143     if (newp != 0) {
03144       if (extra != 0) {
03145         internal_free(m, extra);
03146       }
03147       check_inuse_chunk(m, newp);
03148       return chunk2mem(newp);
03149     }
03150     else {
03151       void* newmem = internal_malloc(m, bytes);
03152       if (newmem != 0) {
03153         size_t oc = oldsize - overhead_for(oldp);
03154         memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
03155         internal_free(m, oldmem);
03156       }
03157       return newmem;
03158     }
03159   }
03160   return 0;
03161 }
03162 
03163 /* --------------------------- memalign support -------------------------- */
03164 
03165 static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
03166   if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
03167     return internal_malloc(m, bytes);
03168   if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
03169     alignment = MIN_CHUNK_SIZE;
03170   if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
03171     size_t a = MALLOC_ALIGNMENT << 1;
03172     while (a < alignment) a <<= 1;
03173     alignment = a;
03174   }
03175   
03176   if (bytes >= MAX_REQUEST - alignment) {
03177     if (m != 0)  { /* Test isn't needed but avoids compiler warning */
03178       MALLOC_FAILURE_ACTION;
03179     }
03180   }
03181   else {
03182     size_t nb = request2size(bytes);
03183     size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
03184     char* mem = (char*)internal_malloc(m, req);
03185     if (mem != 0) {
03186       void* leader = 0;
03187       void* trailer = 0;
03188       mchunkptr p = mem2chunk(mem);
03189 
03190       if (PREACTION(m)) return 0;
03191       if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
03192         /*
03193           Find an aligned spot inside chunk.  Since we need to give
03194           back leading space in a chunk of at least MIN_CHUNK_SIZE, if
03195           the first calculation places us at a spot with less than
03196           MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
03197           We've allocated enough total room so that this is always
03198           possible.
03199         */
03200         char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
03201                                                        alignment -
03202                                                        SIZE_T_ONE)) &
03203                                              -alignment));
03204         char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
03205           br : br+alignment;
03206         mchunkptr newp = (mchunkptr)pos;
03207         size_t leadsize = pos - (char*)(p);
03208         size_t newsize = chunksize(p) - leadsize;
03209 
03210         if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
03211           newp->prev_foot = p->prev_foot + leadsize;
03212           newp->head = (newsize|CINUSE_BIT);
03213         }
03214         else { /* Otherwise, give back leader, use the rest */
03215           set_inuse(m, newp, newsize);
03216           set_inuse(m, p, leadsize);
03217           leader = chunk2mem(p);
03218         }
03219         p = newp;
03220       }
03221 
03222       /* Give back spare room at the end */
03223       if (!is_mmapped(p)) {
03224         size_t size = chunksize(p);
03225         if (size > nb + MIN_CHUNK_SIZE) {
03226           size_t remainder_size = size - nb;
03227           mchunkptr remainder = chunk_plus_offset(p, nb);
03228           set_inuse(m, p, nb);
03229           set_inuse(m, remainder, remainder_size);
03230           trailer = chunk2mem(remainder);
03231         }
03232       }
03233 
03234       assert (chunksize(p) >= nb);
03235       assert((((size_t)(chunk2mem(p))) % alignment) == 0);
03236       check_inuse_chunk(m, p);
03237       POSTACTION(m);
03238       if (leader != 0) {
03239         internal_free(m, leader);
03240       }
03241       if (trailer != 0) {
03242         internal_free(m, trailer);
03243       }
03244       return chunk2mem(p);
03245     }
03246   }
03247   return 0;
03248 }
03249 
03250 /* ------------------------ comalloc/coalloc support --------------------- */
03251 
03252 static void** ialloc(mstate m,
03253                      size_t n_elements,
03254                      size_t* sizes,
03255                      int opts,
03256                      void* chunks[]) {
03257   /*
03258     This provides common support for independent_X routines, handling
03259     all of the combinations that can result.
03260 
03261     The opts arg has:
03262     bit 0 set if all elements are same size (using sizes[0])
03263     bit 1 set if elements should be zeroed
03264   */
03265 
03266   size_t    element_size;   /* chunksize of each element, if all same */
03267   size_t    contents_size;  /* total size of elements */
03268   size_t    array_size;     /* request size of pointer array */
03269   void*     mem;            /* malloced aggregate space */
03270   mchunkptr p;              /* corresponding chunk */
03271   size_t    remainder_size; /* remaining bytes while splitting */
03272   void**    marray;         /* either "chunks" or malloced ptr array */
03273   mchunkptr array_chunk;    /* chunk for malloced ptr array */
03274   flag_t    was_enabled;    /* to disable mmap */
03275   size_t    size;
03276   size_t    i;
03277 
03278   /* compute array length, if needed */
03279   if (chunks != 0) {
03280     if (n_elements == 0)
03281       return chunks; /* nothing to do */
03282     marray = chunks;
03283     array_size = 0;
03284   }
03285   else {
03286     /* if empty req, must still return chunk representing empty array */
03287     if (n_elements == 0)
03288       return (void**)internal_malloc(m, 0);
03289     marray = 0;
03290     array_size = request2size(n_elements * (sizeof(void*)));
03291   }
03292 
03293   /* compute total element size */
03294   if (opts & 0x1) { /* all-same-size */
03295     element_size = request2size(*sizes);
03296     contents_size = n_elements * element_size;
03297   }
03298   else { /* add up all the sizes */
03299     element_size = 0;
03300     contents_size = 0;
03301     for (i = 0; i != n_elements; ++i)
03302       contents_size += request2size(sizes[i]);
03303   }
03304 
03305   size = contents_size + array_size;
03306 
03307   /*
03308      Allocate the aggregate chunk.  First disable direct-mmapping so
03309      malloc won't use it, since we would not be able to later
03310      free/realloc space internal to a segregated mmap region.
03311   */
03312   was_enabled = use_mmap(m);
03313   disable_mmap(m);
03314   mem = internal_malloc(m, size - CHUNK_OVERHEAD);
03315   if (was_enabled)
03316     enable_mmap(m);
03317   if (mem == 0)
03318     return 0;
03319 
03320   if (PREACTION(m)) return 0;
03321   p = mem2chunk(mem);
03322   remainder_size = chunksize(p);
03323 
03324   assert(!is_mmapped(p));
03325 
03326   if (opts & 0x2) {       /* optionally clear the elements */
03327     memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
03328   }
03329 
03330   /* If not provided, allocate the pointer array as final part of chunk */
03331   if (marray == 0) {
03332     size_t  array_chunk_size;
03333     array_chunk = chunk_plus_offset(p, contents_size);
03334     array_chunk_size = remainder_size - contents_size;
03335     marray = (void**) (chunk2mem(array_chunk));
03336     set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
03337     remainder_size = contents_size;
03338   }
03339 
03340   /* split out elements */
03341   for (i = 0; ; ++i) {
03342     marray[i] = chunk2mem(p);
03343     if (i != n_elements-1) {
03344       if (element_size != 0)
03345         size = element_size;
03346       else
03347         size = request2size(sizes[i]);
03348       remainder_size -= size;
03349       set_size_and_pinuse_of_inuse_chunk(m, p, size);
03350       p = chunk_plus_offset(p, size);
03351     }
03352     else { /* the final element absorbs any overallocation slop */
03353       set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
03354       break;
03355     }
03356   }
03357 
03358 #if DEBUG
03359   if (marray != chunks) {
03360     /* final element must have exactly exhausted chunk */
03361     if (element_size != 0) {
03362       assert(remainder_size == element_size);
03363     }
03364     else {
03365       assert(remainder_size == request2size(sizes[i]));
03366     }
03367     check_inuse_chunk(m, mem2chunk(marray));
03368   }
03369   for (i = 0; i != n_elements; ++i)
03370     check_inuse_chunk(m, mem2chunk(marray[i]));
03371 
03372 #endif /* DEBUG */
03373 
03374   POSTACTION(m);
03375   return marray;
03376 }
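
/*
  Illustrative use of the independent_X entry points built on ialloc (a
  sketch under the usual dlmalloc semantics, not an excerpt from elsewhere
  in this file):

      void* parts[3];
      size_t sizes[3] = { 24, 64, 256 };
      if (dlindependent_comalloc(3, sizes, parts) != 0) {
        memset(parts[0], 0, sizes[0]);   // each element is an ordinary chunk
        dlfree(parts[1]);                // and may be freed individually,
        dlfree(parts[0]);                // in any order
        dlfree(parts[2]);
      }
*/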
03377 
03378 
03379 /* -------------------------- public routines ---------------------------- */
03380 
03381 #if !ONLY_MSPACES
03382 
03383 void* dlmalloc(size_t bytes) {
03384   /*
03385      Basic algorithm:
03386      If a small request (< 256 bytes minus per-chunk overhead):
03387        1. If one exists, use a remainderless chunk in associated smallbin.
03388           (Remainderless means that there are too few excess bytes to
03389           represent as a chunk.)
03390        2. If it is big enough, use the dv chunk, which is normally the
03391           chunk adjacent to the one used for the most recent small request.
03392        3. If one exists, split the smallest available chunk in a bin,
03393           saving remainder in dv.
03394        4. If it is big enough, use the top chunk.
03395        5. If available, get memory from system and use it
03396      Otherwise, for a large request:
03397        1. Find the smallest available binned chunk that fits, and use it
03398           if it is better fitting than dv chunk, splitting if necessary.
03399        2. If better fitting than any binned chunk, use the dv chunk.
03400        3. If it is big enough, use the top chunk.
03401        4. If request size >= mmap threshold, try to directly mmap this chunk.
03402        5. If available, get memory from system and use it
03403 
03404      The ugly gotos here ensure that postaction occurs along all paths.
03405   */
03406 
03407   if (!PREACTION(gm)) {
03408     void* mem;
03409     size_t nb;
03410     if (bytes <= MAX_SMALL_REQUEST) {
03411       bindex_t idx;
03412       binmap_t smallbits;
03413       nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
03414       idx = small_index(nb);
03415       smallbits = gm->smallmap >> idx;
03416 
03417       if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
03418         mchunkptr b, p;
03419         idx += ~smallbits & 1;       /* Uses next bin if idx empty */
03420         b = smallbin_at(gm, idx);
03421         p = b->fd;
03422         assert(chunksize(p) == small_index2size(idx));
03423         unlink_first_small_chunk(gm, b, p, idx);
03424         set_inuse_and_pinuse(gm, p, small_index2size(idx));
03425         mem = chunk2mem(p);
03426         check_malloced_chunk(gm, mem, nb);
03427         goto postaction;
03428       }
03429 
03430       else if (nb > gm->dvsize) {
03431         if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
03432           mchunkptr b, p, r;
03433           size_t rsize;
03434           bindex_t i;
03435           binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
03436           binmap_t leastbit = least_bit(leftbits);
03437           compute_bit2idx(leastbit, i);
03438           b = smallbin_at(gm, i);
03439           p = b->fd;
03440           assert(chunksize(p) == small_index2size(i));
03441           unlink_first_small_chunk(gm, b, p, i);
03442           rsize = small_index2size(i) - nb;
03443           /* Fit here cannot be remainderless if 4byte sizes */
03444           if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
03445             set_inuse_and_pinuse(gm, p, small_index2size(i));
03446           else {
03447             set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
03448             r = chunk_plus_offset(p, nb);
03449             set_size_and_pinuse_of_free_chunk(r, rsize);
03450             replace_dv(gm, r, rsize);
03451           }
03452           mem = chunk2mem(p);
03453           check_malloced_chunk(gm, mem, nb);
03454           goto postaction;
03455         }
03456 
03457         else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
03458           check_malloced_chunk(gm, mem, nb);
03459           goto postaction;
03460         }
03461       }
03462     }
03463     else if (bytes >= MAX_REQUEST)
03464       nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
03465     else {
03466       nb = pad_request(bytes);
03467       if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
03468         check_malloced_chunk(gm, mem, nb);
03469         goto postaction;
03470       }
03471     }
03472 
03473     if (nb <= gm->dvsize) {
03474       size_t rsize = gm->dvsize - nb;
03475       mchunkptr p = gm->dv;
03476       if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
03477         mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
03478         gm->dvsize = rsize;
03479         set_size_and_pinuse_of_free_chunk(r, rsize);
03480         set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
03481       }
03482       else { /* exhaust dv */
03483         size_t dvs = gm->dvsize;
03484         gm->dvsize = 0;
03485         gm->dv = 0;
03486         set_inuse_and_pinuse(gm, p, dvs);
03487       }
03488       mem = chunk2mem(p);
03489       check_malloced_chunk(gm, mem, nb);
03490       goto postaction;
03491     }
03492 
03493     else if (nb < gm->topsize) { /* Split top */
03494       size_t rsize = gm->topsize -= nb;
03495       mchunkptr p = gm->top;
03496       mchunkptr r = gm->top = chunk_plus_offset(p, nb);
03497       r->head = rsize | PINUSE_BIT;
03498       set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
03499       mem = chunk2mem(p);
03500       check_top_chunk(gm, gm->top);
03501       check_malloced_chunk(gm, mem, nb);
03502       goto postaction;
03503     }
03504 
03505     mem = sys_alloc(gm, nb);
03506 
03507   postaction:
03508     POSTACTION(gm);
03509     return mem;
03510   }
03511 
03512   return 0;
03513 }
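
/*
  Minimal usage sketch (illustrative): dlmalloc/dlfree follow the standard
  malloc/free contract; the dl-prefixed names map onto the plain names
  unless USE_DL_PREFIX is set earlier in this file.

      char* buf = (char*)dlmalloc(100);   // small request: smallbin/dv path
      if (buf != 0) {
        buf[0] = 0;
        dlfree(buf);
      }
*/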
03514 
03515 void dlfree(void* mem) {
03516   /*
03517      Consolidate freed chunks with preceding or succeeding bordering
03518      free chunks, if they exist, and then place in a bin.  Intermixed
03519      with special cases for top, dv, mmapped chunks, and usage errors.
03520   */
03521 
03522   if (mem != 0) {
03523     mchunkptr p  = mem2chunk(mem);
03524 #if FOOTERS
03525     mstate fm = get_mstate_for(p);
03526     if (!ok_magic(fm)) {
03527       USAGE_ERROR_ACTION(fm, p);
03528       return;
03529     }
03530 #else /* FOOTERS */
03531 #define fm gm
03532 #endif /* FOOTERS */
03533     if (!PREACTION(fm)) {
03534       check_inuse_chunk(fm, p);
03535       if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
03536         size_t psize = chunksize(p);
03537         mchunkptr next = chunk_plus_offset(p, psize);
03538         if (!pinuse(p)) {
03539           size_t prevsize = p->prev_foot;
03540           if ((prevsize & IS_MMAPPED_BIT) != 0) {
03541             prevsize &= ~IS_MMAPPED_BIT;
03542             psize += prevsize + MMAP_FOOT_PAD;
03543             if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
03544               fm->footprint -= psize;
03545             goto postaction;
03546           }
03547           else {
03548             mchunkptr prev = chunk_minus_offset(p, prevsize);
03549             psize += prevsize;
03550             p = prev;
03551             if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
03552               if (p != fm->dv) {
03553                 unlink_chunk(fm, p, prevsize);
03554               }
03555               else if ((next->head & INUSE_BITS) == INUSE_BITS) {
03556                 fm->dvsize = psize;
03557                 set_free_with_pinuse(p, psize, next);
03558                 goto postaction;
03559               }
03560             }
03561             else
03562               goto erroraction;
03563           }
03564         }
03565 
03566         if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
03567           if (!cinuse(next)) {  /* consolidate forward */
03568             if (next == fm->top) {
03569               size_t tsize = fm->topsize += psize;
03570               fm->top = p;
03571               p->head = tsize | PINUSE_BIT;
03572               if (p == fm->dv) {
03573                 fm->dv = 0;
03574                 fm->dvsize = 0;
03575               }
03576               if (should_trim(fm, tsize))
03577                 sys_trim(fm, 0);
03578               goto postaction;
03579             }
03580             else if (next == fm->dv) {
03581               size_t dsize = fm->dvsize += psize;
03582               fm->dv = p;
03583               set_size_and_pinuse_of_free_chunk(p, dsize);
03584               goto postaction;
03585             }
03586             else {
03587               size_t nsize = chunksize(next);
03588               psize += nsize;
03589               unlink_chunk(fm, next, nsize);
03590               set_size_and_pinuse_of_free_chunk(p, psize);
03591               if (p == fm->dv) {
03592                 fm->dvsize = psize;
03593                 goto postaction;
03594               }
03595             }
03596           }
03597           else
03598             set_free_with_pinuse(p, psize, next);
03599           insert_chunk(fm, p, psize);
03600           check_free_chunk(fm, p);
03601           goto postaction;
03602         }
03603       }
03604     erroraction:
03605       USAGE_ERROR_ACTION(fm, p);
03606     postaction:
03607       POSTACTION(fm);
03608     }
03609   }
03610 #if !FOOTERS
03611 #undef fm
03612 #endif /* FOOTERS */
03613 }
03614 
03615 void* dlcalloc(size_t n_elements, size_t elem_size) {
03616   void* mem;
03617   size_t req = 0;
03618   if (n_elements != 0) {
03619     req = n_elements * elem_size;
03620     if (((n_elements | elem_size) & ~(size_t)0xffff) &&
03621         (req / n_elements != elem_size))
03622       req = MAX_SIZE_T; /* force downstream failure on overflow */
03623   }
03624   mem = dlmalloc(req);
03625   if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
03626     memset(mem, 0, req);
03627   return mem;
03628 }
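
/*
  Worked example of the overflow guard above (illustrative): with a 32-bit
  size_t, n_elements = 0x10000 and elem_size = 0x10001 make
  n_elements * elem_size wrap to 0x10000, so req / n_elements (== 1) differs
  from elem_size and req is forced to MAX_SIZE_T, causing dlmalloc to fail
  rather than return an undersized block.
*/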
03629 
03630 void* dlrealloc(void* oldmem, size_t bytes) {
03631   if (oldmem == 0)
03632     return dlmalloc(bytes);
03633 #ifdef REALLOC_ZERO_BYTES_FREES
03634   if (bytes == 0) {
03635     dlfree(oldmem);
03636     return 0;
03637   }
03638 #endif /* REALLOC_ZERO_BYTES_FREES */
03639   else {
03640 #if ! FOOTERS
03641     mstate m = gm;
03642 #else /* FOOTERS */
03643     mstate m = get_mstate_for(mem2chunk(oldmem));
03644     if (!ok_magic(m)) {
03645       USAGE_ERROR_ACTION(m, oldmem);
03646       return 0;
03647     }
03648 #endif /* FOOTERS */
03649     return internal_realloc(m, oldmem, bytes);
03650   }
03651 }
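
/*
  Usage sketch (illustrative): dlrealloc preserves the old contents up to
  the smaller of the old and new sizes, growing in place when the chunk can
  expand into top or shed a trailing remainder, and otherwise falling back
  to malloc-copy-free (see internal_realloc above).

      int* a = (int*)dlmalloc(16 * sizeof(int));
      if (a != 0) {
        int* b = (int*)dlrealloc(a, 1024 * sizeof(int));
        if (b != 0)       // on failure the original block remains valid
          a = b;
        dlfree(a);
      }
*/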
03652 
03653 void* dlmemalign(size_t alignment, size_t bytes) {
03654   return internal_memalign(gm, alignment, bytes);
03655 }
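
/*
  Usage sketch (illustrative): internal_memalign rounds the alignment up to
  a power of two of at least MIN_CHUNK_SIZE, and the result can be released
  with ordinary dlfree.

      void* p = dlmemalign(64, 1000);   // 64-byte aligned, >= 1000 bytes
      if (p != 0) {
        assert(((size_t)p % 64) == 0);
        dlfree(p);
      }
*/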
03656 
03657 void** dlindependent_calloc(size_t n_elements, size_t elem_size,
03658                                  void* chunks[]) {
03659   size_t sz = elem_size; /* serves as 1-element array */
03660   return ialloc(gm, n_elements, &sz, 3, chunks);
03661 }
03662 
03663 void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
03664                                    void* chunks[]) {
03665   return ialloc(gm, n_elements, sizes, 0, chunks);
03666 }
03667 
03668 void* dlvalloc(size_t bytes) {
03669   size_t pagesz;
03670   init_mparams();
03671   pagesz = mparams.page_size;
03672   return dlmemalign(pagesz, bytes);
03673 }
03674 
03675 void* dlpvalloc(size_t bytes) {
03676   size_t pagesz;
03677   init_mparams();
03678   pagesz = mparams.page_size;
03679   return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
03680 }
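
/*
  Worked example (illustrative): with a 4096-byte page size, dlvalloc(5000)
  returns a page-aligned block of at least 5000 bytes, while dlpvalloc(5000)
  first rounds the request up to the page multiple 8192 via
  (bytes + pagesz - 1) & ~(pagesz - 1) and then page-aligns it.
*/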
03681 
03682 int dlmalloc_trim(size_t pad) {
03683   int result = 0;
03684   if (!PREACTION(gm)) {
03685     result = sys_trim(gm, pad);
03686     POSTACTION(gm);
03687   }
03688   return result;
03689 }
03690 
03691 size_t dlmalloc_footprint(void) {
03692   return gm->footprint;
03693 }
03694 
03695 size_t dlmalloc_max_footprint(void) {
03696   return gm->max_footprint;
03697 }
03698 
03699 #if !NO_MALLINFO
03700 struct mallinfo dlmallinfo(void) {
03701   return internal_mallinfo(gm);
03702 }
03703 #endif /* NO_MALLINFO */
03704 
03705 void dlmalloc_stats() {
03706   internal_malloc_stats(gm);
03707 }
03708 
03709 size_t dlmalloc_usable_size(void* mem) {
03710   if (mem != 0) {
03711     mchunkptr p = mem2chunk(mem);
03712     if (cinuse(p))
03713       return chunksize(p) - overhead_for(p);
03714   }
03715   return 0;
03716 }
03717 
03718 int dlmallopt(int param_number, int value) {
03719   return change_mparam(param_number, value);
03720 }
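
/*
  Usage sketch (assuming the mallopt parameter names defined earlier in this
  file, e.g. M_TRIM_THRESHOLD and M_MMAP_THRESHOLD):

      dlmallopt(M_TRIM_THRESHOLD, 256 * 1024);   // trim once top exceeds 256KB
      dlmallopt(M_MMAP_THRESHOLD, 1024 * 1024);  // direct-mmap requests >= 1MB

  change_mparam returns nonzero on success and 0 for an unknown parameter or
  unacceptable value.
*/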
03721 
03722 #endif /* !ONLY_MSPACES */
03723 
03724 /* ----------------------------- user mspaces ---------------------------- */
03725 
03726 #if MSPACES
03727 
03728 static mstate init_user_mstate(char* tbase, size_t tsize) {
03729   size_t msize = pad_request(sizeof(struct malloc_state));
03730   mchunkptr mn;
03731   mchunkptr msp = align_as_chunk(tbase);
03732   mstate m = (mstate)(chunk2mem(msp));
03733   memset(m, 0, msize);
03734   INITIAL_LOCK(&m->mutex);
03735   msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
03736   m->seg.base = m->least_addr = tbase;
03737   m->seg.size = m->footprint = m->max_footprint = tsize;
03738   m->magic = mparams.magic;
03739   m->mflags = mparams.default_mflags;
03740   disable_contiguous(m);
03741   init_bins(m);
03742   mn = next_chunk(mem2chunk(m));
03743   init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
03744   check_top_chunk(m, m->top);
03745   return m;
03746 }
03747 
03748 mspace create_mspace(size_t capacity, int locked) {
03749   mstate m = 0;
03750   size_t msize = pad_request(sizeof(struct malloc_state));
03751   init_mparams(); /* Ensure pagesize etc initialized */
03752 
03753   if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
03754     size_t rs = ((capacity == 0)? mparams.granularity :
03755                  (capacity + TOP_FOOT_SIZE + msize));
03756     size_t tsize = granularity_align(rs);
03757     char* tbase = (char*)(CALL_MMAP(tsize));
03758     if (tbase != CMFAIL) {
03759       m = init_user_mstate(tbase, tsize);
03760       m->seg.sflags = IS_MMAPPED_BIT;
03761       set_lock(m, locked);
03762     }
03763   }
03764   return (mspace)m;
03765 }
03766 
03767 mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
03768   mstate m = 0;
03769   size_t msize = pad_request(sizeof(struct malloc_state));
03770   init_mparams(); /* Ensure pagesize etc initialized */
03771 
03772   if (capacity > msize + TOP_FOOT_SIZE &&
03773       capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
03774     m = init_user_mstate((char*)base, capacity);
03775     m->seg.sflags = EXTERN_BIT;
03776     set_lock(m, locked);
03777   }
03778   return (mspace)m;
03779 }
03780 
03781 size_t destroy_mspace(mspace msp) {
03782   size_t freed = 0;
03783   mstate ms = (mstate)msp;
03784   if (ok_magic(ms)) {
03785     msegmentptr sp = &ms->seg;
03786     while (sp != 0) {
03787       char* base = sp->base;
03788       size_t size = sp->size;
03789       flag_t flag = sp->sflags;
03790       sp = sp->next;
03791       if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
03792           CALL_MUNMAP(base, size) == 0)
03793         freed += size;
03794     }
03795   }
03796   else {
03797     USAGE_ERROR_ACTION(ms,ms);
03798   }
03799   return freed;
03800 }
03801 
03802 /*
03803   mspace versions of routines are near-clones of the global
03804   versions. This is not so nice but better than the alternatives.
03805 */
03806 
03807 
03808 void* mspace_malloc(mspace msp, size_t bytes) {
03809   mstate ms = (mstate)msp;
03810   if (!ok_magic(ms)) {
03811     USAGE_ERROR_ACTION(ms,ms);
03812     return 0;
03813   }
03814   if (!PREACTION(ms)) {
03815     void* mem;
03816     size_t nb;
03817     if (bytes <= MAX_SMALL_REQUEST) {
03818       bindex_t idx;
03819       binmap_t smallbits;
03820       nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
03821       idx = small_index(nb);
03822       smallbits = ms->smallmap >> idx;
03823 
03824       if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
03825         mchunkptr b, p;
03826         idx += ~smallbits & 1;       /* Uses next bin if idx empty */
03827         b = smallbin_at(ms, idx);
03828         p = b->fd;
03829         assert(chunksize(p) == small_index2size(idx));
03830         unlink_first_small_chunk(ms, b, p, idx);
03831         set_inuse_and_pinuse(ms, p, small_index2size(idx));
03832         mem = chunk2mem(p);
03833         check_malloced_chunk(ms, mem, nb);
03834         goto postaction;
03835       }
03836 
03837       else if (nb > ms->dvsize) {
03838         if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
03839           mchunkptr b, p, r;
03840           size_t rsize;
03841           bindex_t i;
03842           binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
03843           binmap_t leastbit = least_bit(leftbits);
03844           compute_bit2idx(leastbit, i);
03845           b = smallbin_at(ms, i);
03846           p = b->fd;
03847           assert(chunksize(p) == small_index2size(i));
03848           unlink_first_small_chunk(ms, b, p, i);
03849           rsize = small_index2size(i) - nb;
03850           /* Fit here cannot be remainderless if 4byte sizes */
03851           if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
03852             set_inuse_and_pinuse(ms, p, small_index2size(i));
03853           else {
03854             set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
03855             r = chunk_plus_offset(p, nb);
03856             set_size_and_pinuse_of_free_chunk(r, rsize);
03857             replace_dv(ms, r, rsize);
03858           }
03859           mem = chunk2mem(p);
03860           check_malloced_chunk(ms, mem, nb);
03861           goto postaction;
03862         }
03863 
03864         else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
03865           check_malloced_chunk(ms, mem, nb);
03866           goto postaction;
03867         }
03868       }
03869     }
03870     else if (bytes >= MAX_REQUEST)
03871       nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
03872     else {
03873       nb = pad_request(bytes);
03874       if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
03875         check_malloced_chunk(ms, mem, nb);
03876         goto postaction;
03877       }
03878     }
03879 
03880     if (nb <= ms->dvsize) {
03881       size_t rsize = ms->dvsize - nb;
03882       mchunkptr p = ms->dv;
03883       if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
03884         mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
03885         ms->dvsize = rsize;
03886         set_size_and_pinuse_of_free_chunk(r, rsize);
03887         set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
03888       }
03889       else { /* exhaust dv */
03890         size_t dvs = ms->dvsize;
03891         ms->dvsize = 0;
03892         ms->dv = 0;
03893         set_inuse_and_pinuse(ms, p, dvs);
03894       }
03895       mem = chunk2mem(p);
03896       check_malloced_chunk(ms, mem, nb);
03897       goto postaction;
03898     }
03899 
03900     else if (nb < ms->topsize) { /* Split top */
03901       size_t rsize = ms->topsize -= nb;
03902       mchunkptr p = ms->top;
03903       mchunkptr r = ms->top = chunk_plus_offset(p, nb);
03904       r->head = rsize | PINUSE_BIT;
03905       set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
03906       mem = chunk2mem(p);
03907       check_top_chunk(ms, ms->top);
03908       check_malloced_chunk(ms, mem, nb);
03909       goto postaction;
03910     }
03911 
03912     mem = sys_alloc(ms, nb);
03913 
03914   postaction:
03915     POSTACTION(ms);
03916     return mem;
03917   }
03918 
03919   return 0;
03920 }
03921 
03922 void mspace_free(mspace msp, void* mem) {
03923   if (mem != 0) {
03924     mchunkptr p  = mem2chunk(mem);
03925 #if FOOTERS
03926     mstate fm = get_mstate_for(p);
03927 #else /* FOOTERS */
03928     mstate fm = (mstate)msp;
03929 #endif /* FOOTERS */
03930     if (!ok_magic(fm)) {
03931       USAGE_ERROR_ACTION(fm, p);
03932       return;
03933     }
03934     if (!PREACTION(fm)) {
03935       check_inuse_chunk(fm, p);
03936       if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
03937         size_t psize = chunksize(p);
03938         mchunkptr next = chunk_plus_offset(p, psize);
03939         if (!pinuse(p)) {
03940           size_t prevsize = p->prev_foot;
03941           if ((prevsize & IS_MMAPPED_BIT) != 0) {
03942             prevsize &= ~IS_MMAPPED_BIT;
03943             psize += prevsize + MMAP_FOOT_PAD;
03944             if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
03945               fm->footprint -= psize;
03946             goto postaction;
03947           }
03948           else {
03949             mchunkptr prev = chunk_minus_offset(p, prevsize);
03950             psize += prevsize;
03951             p = prev;
03952             if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
03953               if (p != fm->dv) {
03954                 unlink_chunk(fm, p, prevsize);
03955               }
03956               else if ((next->head & INUSE_BITS) == INUSE_BITS) {
03957                 fm->dvsize = psize;
03958                 set_free_with_pinuse(p, psize, next);
03959                 goto postaction;
03960               }
03961             }
03962             else
03963               goto erroraction;
03964           }
03965         }
03966 
03967         if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
03968           if (!cinuse(next)) {  /* consolidate forward */
03969             if (next == fm->top) {
03970               size_t tsize = fm->topsize += psize;
03971               fm->top = p;
03972               p->head = tsize | PINUSE_BIT;
03973               if (p == fm->dv) {
03974                 fm->dv = 0;
03975                 fm->dvsize = 0;
03976               }
03977               if (should_trim(fm, tsize))
03978                 sys_trim(fm, 0);
03979               goto postaction;
03980             }
03981             else if (next == fm->dv) {
03982               size_t dsize = fm->dvsize += psize;
03983               fm->dv = p;
03984               set_size_and_pinuse_of_free_chunk(p, dsize);
03985               goto postaction;
03986             }
03987             else {
03988               size_t nsize = chunksize(next);
03989               psize += nsize;
03990               unlink_chunk(fm, next, nsize);
03991               set_size_and_pinuse_of_free_chunk(p, psize);
03992               if (p == fm->dv) {
03993                 fm->dvsize = psize;
03994                 goto postaction;
03995               }
03996             }
03997           }
03998           else
03999             set_free_with_pinuse(p, psize, next);
04000           insert_chunk(fm, p, psize);
04001           check_free_chunk(fm, p);
04002           goto postaction;
04003         }
04004       }
04005     erroraction:
04006       USAGE_ERROR_ACTION(fm, p);
04007     postaction:
04008       POSTACTION(fm);
04009     }
04010   }
04011 }
04012 
04013 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
04014   void* mem;
04015   size_t req = 0;
04016   mstate ms = (mstate)msp;
04017   if (!ok_magic(ms)) {
04018     USAGE_ERROR_ACTION(ms,ms);
04019     return 0;
04020   }
04021   if (n_elements != 0) {
04022     req = n_elements * elem_size;
04023     if (((n_elements | elem_size) & ~(size_t)0xffff) &&
04024         (req / n_elements != elem_size))
04025       req = MAX_SIZE_T; /* force downstream failure on overflow */
04026   }
04027   mem = internal_malloc(ms, req);
04028   if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
04029     memset(mem, 0, req);
04030   return mem;
04031 }
04032 
04033 void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
04034   if (oldmem == 0)
04035     return mspace_malloc(msp, bytes);
04036 #ifdef REALLOC_ZERO_BYTES_FREES
04037   if (bytes == 0) {
04038     mspace_free(msp, oldmem);
04039     return 0;
04040   }
04041 #endif /* REALLOC_ZERO_BYTES_FREES */
04042   else {
04043 #if FOOTERS
04044     mchunkptr p  = mem2chunk(oldmem);
04045     mstate ms = get_mstate_for(p);
04046 #else /* FOOTERS */
04047     mstate ms = (mstate)msp;
04048 #endif /* FOOTERS */
04049     if (!ok_magic(ms)) {
04050       USAGE_ERROR_ACTION(ms,ms);
04051       return 0;
04052     }
04053     return internal_realloc(ms, oldmem, bytes);
04054   }
04055 }
04056 
04057 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
04058   mstate ms = (mstate)msp;
04059   if (!ok_magic(ms)) {
04060     USAGE_ERROR_ACTION(ms,ms);
04061     return 0;
04062   }
04063   return internal_memalign(ms, alignment, bytes);
04064 }
04065 
04066 void** mspace_independent_calloc(mspace msp, size_t n_elements,
04067                                  size_t elem_size, void* chunks[]) {
04068   size_t sz = elem_size; /* serves as 1-element array */
04069   mstate ms = (mstate)msp;
04070   if (!ok_magic(ms)) {
04071     USAGE_ERROR_ACTION(ms,ms);
04072     return 0;
04073   }
04074   return ialloc(ms, n_elements, &sz, 3, chunks);
04075 }
04076 
04077 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
04078                                    size_t sizes[], void* chunks[]) {
04079   mstate ms = (mstate)msp;
04080   if (!ok_magic(ms)) {
04081     USAGE_ERROR_ACTION(ms,ms);
04082     return 0;
04083   }
04084   return ialloc(ms, n_elements, sizes, 0, chunks);
04085 }
04086 
04087 int mspace_trim(mspace msp, size_t pad) {
04088   int result = 0;
04089   mstate ms = (mstate)msp;
04090   if (ok_magic(ms)) {
04091     if (!PREACTION(ms)) {
04092       result = sys_trim(ms, pad);
04093       POSTACTION(ms);
04094     }
04095   }
04096   else {
04097     USAGE_ERROR_ACTION(ms,ms);
04098   }
04099   return result;
04100 }
04101 
04102 void mspace_malloc_stats(mspace msp) {
04103   mstate ms = (mstate)msp;
04104   if (ok_magic(ms)) {
04105     internal_malloc_stats(ms);
04106   }
04107   else {
04108     USAGE_ERROR_ACTION(ms,ms);
04109   }
04110 }
04111 
04112 size_t mspace_footprint(mspace msp) {
04113   size_t result = 0;
04114   mstate ms = (mstate)msp;
04115   if (ok_magic(ms)) {
04116     result = ms->footprint;
04117   }
04118   else USAGE_ERROR_ACTION(ms,ms);
04119   return result;
04120 }
04121 
04122 
04123 size_t mspace_max_footprint(mspace msp) {
04124   size_t result = 0;
04125   mstate ms = (mstate)msp;
04126   if (ok_magic(ms)) {
04127     result = ms->max_footprint;
04128   }
04129   else USAGE_ERROR_ACTION(ms,ms);
04130   return result;
04131 }
04132 
04133 
04134 #if !NO_MALLINFO
04135 struct mallinfo mspace_mallinfo(mspace msp) {
04136   mstate ms = (mstate)msp;
04137   if (!ok_magic(ms)) {
04138     USAGE_ERROR_ACTION(ms,ms);
04139   }
04140   return internal_mallinfo(ms);
04141 }
04142 #endif /* NO_MALLINFO */
04143 
04144 int mspace_mallopt(int param_number, int value) {
04145   return change_mparam(param_number, value);
04146 }
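
/*
  Usage sketch for the mspace API above (illustrative only):

      mspace arena = create_mspace(0, 1);    // default capacity, with locking
      if (arena != 0) {
        void* p = mspace_malloc(arena, 128);
        if (p != 0)
          mspace_free(arena, p);
        destroy_mspace(arena);               // releases the whole arena
      }

  create_mspace_with_base works the same way when the backing memory (for
  example a statically allocated buffer) is supplied by the caller.
*/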
04147 
04148 #endif /* MSPACES */
04149 
04150 /* -------------------- Alternative MORECORE functions ------------------- */
04151 
04152 /*
04153   Guidelines for creating a custom version of MORECORE:
04154 
04155   * For best performance, MORECORE should allocate in multiples of pagesize.
04156   * MORECORE may allocate more memory than requested. (Or even less,
04157       but this will usually result in a malloc failure.)
04158   * MORECORE must not allocate memory when given argument zero, but
04159       instead return one past the end address of memory from previous
04160       nonzero call.
04161   * For best performance, consecutive calls to MORECORE with positive
04162       arguments should return increasing addresses, indicating that
04163       space has been contiguously extended.
04164   * Even though consecutive calls to MORECORE need not return contiguous
04165       addresses, it must be OK for malloc'ed chunks to span multiple
04166       regions in those cases where they do happen to be contiguous.
04167   * MORECORE need not handle negative arguments -- it may instead
04168       just return MFAIL when given negative arguments.
04169       Negative arguments are always multiples of pagesize. MORECORE
04170       must not misinterpret negative args as large positive unsigned
04171       args. You can suppress such calls entirely by defining
04172       MORECORE_CANNOT_TRIM. (See also the portable sketch after this comment.)
04173 
04174   As an example alternative MORECORE, here is a custom allocator
04175   kindly contributed for pre-OSX macOS.  It uses virtually but not
04176   necessarily physically contiguous non-paged memory (locked in,
04177   present and won't get swapped out).  You can use it by uncommenting
04178   this section, adding some #includes, and setting up the appropriate
04179   defines above:
04180 
04181       #define MORECORE osMoreCore
04182 
04183   There is also a shutdown routine that should somehow be called for
04184   cleanup upon program exit.
04185 
04186   #define MAX_POOL_ENTRIES 100
04187   #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
04188   static int next_os_pool;
04189   void *our_os_pools[MAX_POOL_ENTRIES];
04190 
04191   void *osMoreCore(int size)
04192   {
04193     void *ptr = 0;
04194     static void *sbrk_top = 0;
04195 
04196     if (size > 0)
04197     {
04198       if (size < MINIMUM_MORECORE_SIZE)
04199          size = MINIMUM_MORECORE_SIZE;
04200       if (CurrentExecutionLevel() == kTaskLevel)
04201          ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
04202       if (ptr == 0)
04203       {
04204         return (void *) MFAIL;
04205       }
04206       // save ptrs so they can be freed during cleanup
04207       our_os_pools[next_os_pool] = ptr;
04208       next_os_pool++;
04209       ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
04210       sbrk_top = (char *) ptr + size;
04211       return ptr;
04212     }
04213     else if (size < 0)
04214     {
04215       // we don't currently support shrink behavior
04216       return (void *) MFAIL;
04217     }
04218     else
04219     {
04220       return sbrk_top;
04221     }
04222   }
04223 
04224   // cleanup any allocated memory pools
04225   // called as last thing before shutting down driver
04226 
04227   void osCleanupMem(void)
04228   {
04229     void **ptr;
04230 
04231     for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
04232       if (*ptr)
04233       {
04234          PoolDeallocate(*ptr);
04235          *ptr = 0;
04236       }
04237   }
04238 
04239 */
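
/*
  A simpler, portable sketch of a custom MORECORE in the spirit of the
  guidelines above: it hands out slices of one static arena, returns the
  current break for a zero argument, and reports MFAIL both when the
  arena is exhausted and for shrink requests (so MORECORE_CANNOT_TRIM
  should be defined).  The names STATIC_ARENA_SIZE, static_arena and
  arena_morecore are illustrative only; adjust them and the settings
  below to your configuration:

      #define MORECORE arena_morecore
      #define MORECORE_CONTIGUOUS 1
      #define MORECORE_CANNOT_TRIM 1

  #define STATIC_ARENA_SIZE (1024 * 1024)
  static char   static_arena[STATIC_ARENA_SIZE];
  static size_t arena_used;

  void* arena_morecore(int size)
  {
    if (size == 0)                        // query: one past the end of memory
      return static_arena + arena_used;   //   handed out by the last nonzero call
    if (size < 0)                         // shrink requests are not supported
      return (void*)MFAIL;
    if ((size_t)size > STATIC_ARENA_SIZE - arena_used)
      return (void*)MFAIL;                // arena exhausted
    {
      void* result = static_arena + arena_used;
      arena_used += (size_t)size;         // extend contiguously
      return result;
    }
  }
*/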
04240 
04241 
04242 /* -----------------------------------------------------------------------
04243 History:
04244     V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
04245       * Add max_footprint functions
04246       * Ensure all appropriate literals are size_t
04247       * Fix conditional compilation problem for some #define settings
04248       * Avoid concatenating segments with the one provided
04249         in create_mspace_with_base
04250       * Rename some variables to avoid compiler shadowing warnings
04251       * Use explicit lock initialization.
04252       * Better handling of sbrk interference.
04253       * Simplify and fix segment insertion, trimming and mspace_destroy
04254       * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
04255       * Thanks especially to Dennis Flanagan for help on these.
04256 
04257     V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
04258       * Fix memalign brace error.
04259 
04260     V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
04261       * Fix improper #endif nesting in C++
04262       * Add explicit casts needed for C++
04263 
04264     V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
04265       * Use trees for large bins
04266       * Support mspaces
04267       * Use segments to unify sbrk-based and mmap-based system allocation,
04268         removing need for emulation on most platforms without sbrk.
04269       * Default safety checks
04270       * Optional footer checks. Thanks to William Robertson for the idea.
04271       * Internal code refactoring
04272       * Incorporate suggestions and platform-specific changes.
04273         Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
04274         Aaron Bachmann,  Emery Berger, and others.
04275       * Speed up non-fastbin processing enough to remove fastbins.
04276       * Remove useless cfree() to avoid conflicts with other apps.
04277       * Remove internal memcpy, memset. Compilers handle builtins better.
04278       * Remove some options that no one ever used and rename others.
04279 
04280     V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
04281       * Fix malloc_state bitmap array misdeclaration
04282 
04283     V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
04284       * Allow tuning of FIRST_SORTED_BIN_SIZE
04285       * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
04286       * Better detection and support for non-contiguousness of MORECORE.
04287         Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
04288       * Bypass most of malloc if no frees. Thanks to Emery Berger.
04289       * Fix freeing of old top non-contiguous chunk in sysmalloc.
04290       * Raised default trim and map thresholds to 256K.
04291       * Fix mmap-related #defines. Thanks to Lubos Lunak.
04292       * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
04293       * Branch-free bin calculation
04294       * Default trim and mmap thresholds now 256K.
04295 
04296     V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
04297       * Introduce independent_comalloc and independent_calloc.
04298         Thanks to Michael Pachos for motivation and help.
04299       * Make optional .h file available
04300       * Allow > 2GB requests on 32bit systems.
04301       * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
04302         Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
04303         and Anonymous.
04304       * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
04305         helping test this.)
04306       * memalign: check alignment arg
04307       * realloc: don't try to shift chunks backwards, since this
04308         leads to  more fragmentation in some programs and doesn't
04309         seem to help in any others.
04310       * Collect all cases in malloc requiring system memory into sysmalloc
04311       * Use mmap as backup to sbrk
04312       * Place all internal state in malloc_state
04313       * Introduce fastbins (although similar to 2.5.1)
04314       * Many minor tunings and cosmetic improvements
04315       * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
04316       * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
04317         Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
04318       * Include errno.h to support default failure action.
04319 
04320     V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
04321       * return null for negative arguments
04322       * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
04323          * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
04324           (e.g. WIN32 platforms)
04325          * Cleanup header file inclusion for WIN32 platforms
04326          * Cleanup code to avoid Microsoft Visual C++ compiler complaints
04327          * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
04328            memory allocation routines
04329          * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
04330          * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
04331            usage of 'assert' in non-WIN32 code
04332          * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
04333            avoid infinite loop
04334       * Always call 'fREe()' rather than 'free()'
04335 
04336     V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
04337       * Fixed ordering problem with boundary-stamping
04338 
04339     V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
04340       * Added pvalloc, as recommended by H.J. Liu
04341       * Added 64bit pointer support mainly from Wolfram Gloger
04342       * Added anonymously donated WIN32 sbrk emulation
04343       * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
04344       * malloc_extend_top: fix mask error that caused wastage after
04345         foreign sbrks
04346       * Add linux mremap support code from HJ Liu
04347 
04348     V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
04349       * Integrated most documentation with the code.
04350       * Add support for mmap, with help from
04351         Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
04352       * Use last_remainder in more cases.
04353       * Pack bins using idea from  colin@nyx10.cs.du.edu
04354       * Use ordered bins instead of best-fit threshold
04355       * Eliminate block-local decls to simplify tracing and debugging.
04356       * Support another case of realloc via move into top
04357       * Fix error occurring when initial sbrk_base not word-aligned.
04358       * Rely on page size for units instead of SBRK_UNIT to
04359         avoid surprises about sbrk alignment conventions.
04360       * Add mallinfo, mallopt. Thanks to Raymond Nijssen
04361         (raymond@es.ele.tue.nl) for the suggestion.
04362       * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
04363       * More precautions for cases where other routines call sbrk,
04364         courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
04365       * Added macros etc., allowing use in linux libc from
04366         H.J. Lu (hjl@gnu.ai.mit.edu)
04367       * Inverted this history list
04368 
04369     V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
04370       * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
04371       * Removed all preallocation code since under current scheme
04372         the work required to undo bad preallocations exceeds
04373         the work saved in good cases for most test programs.
04374       * No longer use return list or unconsolidated bins since
04375         no scheme using them consistently outperforms those that don't
04376         given above changes.
04377       * Use best fit for very large chunks to prevent some worst-cases.
04378       * Added some support for debugging
04379 
04380     V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
04381       * Removed footers when chunks are in use. Thanks to
04382         Paul Wilson (wilson@cs.texas.edu) for the suggestion.
04383 
04384     V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
04385       * Added malloc_trim, with help from Wolfram Gloger
04386         (wmglo@Dent.MED.Uni-Muenchen.DE).
04387 
04388     V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
04389 
04390     V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
04391       * realloc: try to expand in both directions
04392       * malloc: swap order of clean-bin strategy;
04393       * realloc: only conditionally expand backwards
04394       * Try not to scavenge used bins
04395       * Use bin counts as a guide to preallocation
04396       * Occasionally bin return list chunks in first scan
04397       * Add a few optimizations from colin@nyx10.cs.du.edu
04398 
04399     V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
04400       * faster bin computation & slightly different binning
04401       * merged all consolidations to one part of malloc proper
04402          (eliminating old malloc_find_space & malloc_clean_bin)
04403       * Scan 2 returns chunks (not just 1)
04404       * Propagate failure in realloc if malloc returns 0
04405       * Add stuff to allow compilation on non-ANSI compilers
04406           from kpv@research.att.com
04407 
04408     V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
04409       * removed potential for odd address access in prev_chunk
04410       * removed dependency on getpagesize.h
04411       * misc cosmetics and a bit more internal documentation
04412       * anticosmetics: mangled names in macros to evade debugger strangeness
04413       * tested on sparc, hp-700, dec-mips, rs6000
04414           with gcc & native cc (hp, dec only) allowing
04415           Detlefs & Zorn comparison study (in SIGPLAN Notices.)
04416 
04417     Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
04418       * Based loosely on libg++-1.2X malloc. (It retains some of the overall
04419          structure of old version,  but most details differ.)
04420  
04421 */
04422 
