README

		What is this?
		-------------
This package is a memory allocator for the Macintosh. It was initially
implemented for use with the MetroWerks CodeWarrior compiler on the
PowerPC Mac, but may also be useful (in a more limited way) for use
with MW 68K or Think compilers.

This is distribution 1.0, dated April 19, 1995.

		How does it work?
		-----------------
Actually, 99% of the code comes straight from the old BSD Unix
"fast malloc"; only the interface to the low-level memory allocator
and the handling of large blocks are different. The allocator follows
one of two strategies, based upon block size:
- For small blocks (anything up to 8K, as distributed), the size is
  rounded up to the next power of two, and that much is allocated;
  realloc() and friends are aware of this rounding. Small blocks are
  packed into 8K segments.
- Larger blocks are allocated directly using NewPtr().
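
A minimal sketch of that dispatch, in plain ISO C so it compiles
anywhere: SMALL_LIMIT, round_up_pow2() and sketch_malloc() are names
invented for the illustration (they are not the identifiers used in
malloc.c), and ordinary malloc() stands in for both the 8K segment
pool and for NewPtr().

    #include <stddef.h>
    #include <stdlib.h>

    #define SMALL_LIMIT (8 * 1024)  /* blocks up to 8K use the power-of-two pool */

    /* Round n up to the next power of two (n must be > 0). */
    static size_t round_up_pow2(size_t n)
    {
        size_t p = 1;
        while (p < n)
            p <<= 1;
        return p;
    }

    /* Illustrative dispatch only: the real malloc.c packs small blocks
       into 8K segments and hands large ones to NewPtr(). */
    void *sketch_malloc(size_t n)
    {
        if (n == 0)
            n = 1;
        if (n <= SMALL_LIMIT)
            return malloc(round_up_pow2(n));  /* e.g. 5000 bytes becomes 8192 */
        return malloc(n);                     /* NewPtr(n) in the real code */
    }

    int main(void)
    {
        void *p = sketch_malloc(5000);   /* small: rounded up to 8192 bytes */
        void *q = sketch_malloc(20000);  /* large: bypasses the pool */
        free(p);
        free(q);
        return 0;
    }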

		Why should I want it?
		---------------------
The reason for writing this is that I've had serious problems with the
MW malloc, especially in one piece of software, the Python
interpreter. Python is a very high-level interpreted language and
spends very large amounts of time in malloc. Moreover, it calls
realloc() like there's no tomorrow, and it allocates and frees tiny
and huge blocks in an intermixed fashion. After running for some time
this caused two things (with the original MW malloc): memory usage
grew exponentially and so did the runtime. MetroWerks have tried to be
helpful, but I have been unable to provide them with simple sample
programs that show the problem: it seems to have something to do with
fragmentation and only happens under very heavy use.

The 68K MW malloc has the same problems, and the Think C malloc
has a similar one: it shows the same growth problem but not the
increase in runtime.

Two additional reasons for using it:
- It is blindingly fast.
- It has pretty good range checking (enabled with a #define), so it
  will help you catch programming errors such as referencing memory
  outside the bounds of a malloced block (see the example after this
  list).
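
The README does not name the #define; the allocator descends from the
BSD "fast malloc", where such checking is traditionally switched on
with a define like RCHECK, so check malloc.c for the exact name. The
class of error it is meant to catch looks like this (deliberately
broken code):

    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *p = malloc(16);
        if (p == NULL)
            return 1;
        memset(p, 'x', 24);  /* writes 8 bytes past the end of the block */
        free(p);             /* a range-checking allocator can flag the
                                overwrite when the block is freed */
        return 0;
    }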

One reason for not using it:
- It is rather wasteful of memory. Small blocks, on average, occupy
  25% more memory than they need (a back-of-the-envelope estimate
  follows this list), and the allocation in 8K chunks wastes another
  50K on average. Also, memory is never returned from malloc()'s pool
  to the Memory Manager.
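
Where a figure of roughly 25% comes from can be seen with a little
back-of-the-envelope program (under the made-up assumption that
request sizes are spread evenly over 1..8K, which real programs of
course are not); it reports that about a quarter of the allocated
small-block memory is padding:

    #include <stdio.h>

    /* Round n up to the next power of two (n must be > 0). */
    static unsigned long round_up_pow2(unsigned long n)
    {
        unsigned long p = 1;
        while (p < n)
            p <<= 1;
        return p;
    }

    int main(void)
    {
        unsigned long n, used = 0, allocated = 0;

        /* Pretend every size from 1 byte up to 8K is requested once. */
        for (n = 1; n <= 8192; n++) {
            used += n;
            allocated += round_up_pow2(n);
        }
        printf("padding: %.1f%% of allocated small-block memory\n",
               100.0 * (allocated - used) / allocated);
        return 0;
    }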
  
		How do I use it?
		----------------
For MW PPC: simply add the sources to your project. Due to the way the
linker works, all malloc calls will use the new malloc, even the calls
that come from the libraries (if I'm informed correctly).

For MW 68K: ditto, except that the malloc calls from the libraries
will presumably still use the original malloc. The two packages don't
bite each other, however, so there shouldn't be any problems.

For Think: more work, but you can rebuild the ANSI library with this
malloc, since the Think distribution contains everything you need for
this.
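
Nothing in the calling code changes once the allocator is linked in; a
throwaway test like the one below (ordinary ANSI C, written here purely
as an illustration) touches both the small-block path and the NewPtr()
path described above.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *small = malloc(100);        /* well under 8K: small-block path */
        char *large = malloc(64 * 1024);  /* over 8K: allocated via NewPtr() */
        char *tmp;

        if (small == NULL || large == NULL) {
            fprintf(stderr, "out of memory\n");
            return 1;
        }
        strcpy(small, "hello, malloc");
        tmp = realloc(small, 4000);       /* still a small block after rounding */
        if (tmp != NULL)
            small = tmp;
        printf("%s\n", small);

        free(small);
        free(large);
        return 0;
    }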

		Is that all?
		------------

Yes. Let me finish off by asking that you send bug reports, fixes,
enhancements, etc. to me at the address below.

				Jack Jansen
				Centrum voor Wiskunde en Informatica
				Kruislaan 413
				1098 SJ  Amsterdam
				the Netherlands

				<Jack.Jansen@cwi.nl>