		What is this?
		-------------
This package is a memory allocator for the Macintosh. It was initially
implemented for the MetroWerks CodeWarrior compiler on the PowerPC
Mac, but may also be useful, in a more limited way, with the MW 68K or
Think compilers.

This is distribution 1.1, dated May 28, 1997.

		How does it work?
		-----------------
Actually, 99% of the code comes straight from the old BSD-Unix
"fast malloc"; only the interface to the low-level memory allocator
and the handling of large blocks are different. The allocator follows
one of two strategies, based upon block size:
- For small blocks (anything up to 8K, as distributed), the size is
  rounded up to the next power of two, and that much is allocated;
  realloc() and friends are aware of this. Small blocks are packed
  into 8K segments.
- Larger blocks are allocated directly using NewPtr().
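
To make the two strategies concrete, here is a minimal sketch. It is
not the code in malloc.c: the names and the trivial segment
bookkeeping are invented for illustration, and NewPtr() is the Mac
Memory Manager allocation call.

    /* Sketch of the small/large split; illustrative only. */
    #include <stddef.h>
    #include <Memory.h>                      /* NewPtr() */

    #define SEGSIZE   (8 * 1024)             /* 8K segments, as distributed */
    #define MAXSMALL  SEGSIZE                /* larger requests go to NewPtr() */

    /* Round n up to the next power of two. */
    static size_t roundup_pow2(size_t n)
    {
        size_t sz = 1;
        while (sz < n)
            sz <<= 1;
        return sz;
    }

    /* Carve small blocks out of 8K segments.  The real allocator keeps
     * per-size free lists; this sketch only shows the packing idea and
     * never reuses freed blocks. */
    static void *small_alloc(size_t size)
    {
        static char  *seg  = NULL;           /* current 8K segment     */
        static size_t left = 0;              /* bytes still free in it */

        if (size > left) {
            seg = (char *)NewPtr(SEGSIZE);   /* start a fresh segment  */
            if (seg == NULL)
                return NULL;
            left = SEGSIZE;
        }
        left -= size;
        return seg + left;
    }

    void *sketch_malloc(size_t n)
    {
        if (n > MAXSMALL)
            return (void *)NewPtr(n);        /* large: straight from the Memory Manager */
        return small_alloc(roundup_pow2(n)); /* small: power-of-two size, packed in 8K  */
    }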

		Why should I want it?
		---------------------
The reason for writing this is that I've had serious problems with MW
malloc, especially in one piece of software, the Python
interpreter. Python is a very high-level interpreted language, and it
spends a large fraction of its time in malloc. Moreover, it calls
realloc() like there's no tomorrow, and it allocates and frees tiny
and huge blocks intermixed. After some time running, this caused two
things (with the original MW malloc): memory usage grew exponentially,
and so did runtime. MetroWerks have tried to be helpful, but I have
been unable to provide them with simple sample programs that show the
problem: it seems to be related to fragmentation and only happens
under very heavy use.

The 68K MW malloc has the same problems, and the Think C malloc
has a similar one: it shows the same growth problem but not the
increase in runtime.

Two additional reasons for using it:
- It is blindingly fast.
- It has pretty good range checking (enabled with a #define), so it
  will help you catch programming errors such as referencing outside
  the bounds of a malloc()ed block.

One reason for not using it:
- It is rather wasteful of memory. Small blocks, on average, occupy
  25% more memory than they need, and the allocation in 8K chunks
  wastes another 50K (on average). Also, memory is never returned from
  malloc()'s pool to the Memory Manager.
  
		How do I use it?
		----------------
You may want to look at the source: most debugging options are off by
default, and so is returning cache-aligned blocks. Near the top of
malloc.c you will see a couple of defines you can turn on.
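
For example, turning a feature on is just a matter of editing its
define and rebuilding. The lines below are illustrative only; check
malloc.c for the exact macro names and their defaults. USE_CACHE_ALIGNED
appears to be the switch for cache-aligned blocks, and the debugging
name is hypothetical.

    #define USE_CACHE_ALIGNED  1   /* hand out cache-aligned blocks */
    /* #define DEBUG 1 */          /* hypothetical: extra consistency checks */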

For MW PPC: simply add the sources to your project. Due to the way the
linker works, all malloc calls will use the new allocator, even the
calls that come from the libraries.

For MW 68K: ditto, except that the library malloc calls will
supposedly still use the original malloc. The two packages don't
interfere with each other, however, so there shouldn't be any
problems.

For Think: more work, but you can rebuild the ANSI library with this
malloc, since the Think distribution contains everything you need for
this.

		Is that all?
		------------

Yes. Let me finish off by asking that you send bug reports, fixes,
enhancements, etc. to me at the address below.

				Jack Jansen
				Centrum voor Wiskunde en Informatica
				Kruislaan 413
				1098 SJ  Amsterdam
				the Netherlands

				<Jack.Jansen@cwi.nl>