unsigned char *buffer = static_cast<unsigned char *>(malloc(BUFFER_SIZE)); // C++ requires the cast
CSomeClass *object = new (buffer) CSomeClass(); // construct in place (requires <new>)
// Use the object for something...
object->~CSomeClass(); // destroy explicitly; a plain delete would be wrong here
free(buffer);
This is C++'s "placement new" syntax, and it is mainly useful if you are creating objects from pre-allocated memory. Note that you have to call the destructor yourself, since delete would try to free memory it does not own.
I also read about several memory allocation methods. The methods I found interesting were the linear method, the stack method, and the buddy system.
The linear method uses a single contiguous region of memory which is handed out in parts. When data is allocated with this method, it is placed at the start of the free space. Subsequent allocations are placed directly after one another in memory, which is why this method is called "linear". Unfortunately, you cannot deallocate individual allocations with this method; the only other operation available is a complete deallocation of everything in the region. Which brings us to the stack method.
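The linear method described above can be sketched in a few lines of C++. This is a minimal toy version under my own names (`LinearAllocator`, `allocate`, `reset` are not from the article), ignoring alignment for simplicity:

```cpp
#include <cstddef>
#include <cstdlib>

// Minimal linear allocator sketch: allocations are placed one after
// another, and the only way to free is to reset everything at once.
class LinearAllocator {
public:
    explicit LinearAllocator(size_t capacity)
        : m_buffer(static_cast<unsigned char *>(malloc(capacity))),
          m_capacity(capacity), m_offset(0) {}
    ~LinearAllocator() { free(m_buffer); }

    // Place each allocation directly after the previous one.
    void *allocate(size_t size) {
        if (m_offset + size > m_capacity) return nullptr; // out of space
        void *ptr = m_buffer + m_offset;
        m_offset += size;
        return ptr;
    }

    // The only "free" operation: throw everything away at once.
    void reset() { m_offset = 0; }

private:
    unsigned char *m_buffer;
    size_t m_capacity;
    size_t m_offset;
};
```

A real implementation would also round each offset up to a suitable alignment before handing it out.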
The stack method is exactly like the linear method, except it has one advantage: you can deallocate the most recent allocation. This resembles a LIFO (last-in, first-out) stack, where the last item pushed onto the stack is the first item popped off of it, and that is where the method gets its name.
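The stack method only needs one addition on top of the linear sketch: remembering where the last allocation began. Again the names are my own, and this toy version remembers only one previous offset; a real stack allocator would store a small header before each allocation so repeated pops work:

```cpp
#include <cstddef>
#include <cstdlib>

// Minimal stack allocator sketch: like the linear method, but the most
// recent allocation can be popped off (LIFO order).
class StackAllocator {
public:
    explicit StackAllocator(size_t capacity)
        : m_buffer(static_cast<unsigned char *>(malloc(capacity))),
          m_capacity(capacity), m_offset(0) {}
    ~StackAllocator() { free(m_buffer); }

    void *allocate(size_t size) {
        if (m_offset + size > m_capacity) return nullptr;
        m_previous = m_offset; // remember where this allocation began
        void *ptr = m_buffer + m_offset;
        m_offset += size;
        return ptr;
    }

    // Deallocate only the most recent allocation.
    void deallocateLast() { m_offset = m_previous; }

private:
    unsigned char *m_buffer;
    size_t m_capacity;
    size_t m_offset;
    size_t m_previous = 0;
};
```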
The buddy system is classed as a pool, which is why I am bringing pools up here. Pools allocate memory in chunks, and they have the advantage of allowing specific chunks to be deallocated. Unfortunately, this deallocation can leave small holes in the pool.

Holes in a pool are called "external fragmentation", and they can cause a large allocation to fail even though there appears to be enough total free space. This happens because the free space sits in little pockets all over the place, and the large allocation does not fit in any single one of them. External fragmentation can be reduced by the choice of allocation strategy and by how free space is treated, for example by merging adjacent free chunks.

There is also another type of fragmentation, called "internal fragmentation", which is caused by a chunk being bigger than the requested allocation size. The difference in size leaves wasted space at the end of the chunk. Again, internal fragmentation can be reduced by choosing an allocation method whose chunk sizes match your requests more closely.
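Before getting to the buddy system, here is what the simplest kind of pool looks like: every chunk is the same size, free chunks are threaded into a linked list, and any chunk can be returned individually, unlike the linear and stack methods. This is my own minimal sketch (the names are not from the article), and it exhibits internal fragmentation whenever a request is smaller than the chunk size:

```cpp
#include <cstddef>
#include <cstdlib>

// Minimal fixed-size pool sketch. A freed chunk's first bytes are reused
// to hold the "next free" pointer, so chunkSize must be >= sizeof(void*).
class PoolAllocator {
public:
    PoolAllocator(size_t chunkSize, size_t chunkCount)
        : m_buffer(static_cast<unsigned char *>(malloc(chunkSize * chunkCount))) {
        // Thread every chunk into the free list.
        for (size_t i = 0; i < chunkCount; ++i)
            push(m_buffer + i * chunkSize);
    }
    ~PoolAllocator() { free(m_buffer); }

    void *allocate() { // pop a chunk off the free list
        if (m_freeList == nullptr) return nullptr;
        FreeNode *node = m_freeList;
        m_freeList = node->next;
        return node;
    }

    void deallocate(void *ptr) { push(ptr); } // any chunk, in any order

private:
    struct FreeNode { FreeNode *next; };
    void push(void *ptr) {
        FreeNode *node = static_cast<FreeNode *>(ptr);
        node->next = m_freeList;
        m_freeList = node;
    }
    unsigned char *m_buffer;
    FreeNode *m_freeList = nullptr;
};
```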
The buddy system is a pool-type allocation method like those discussed previously. The allocator has a list of acceptable chunk sizes (usually powers of 2) and keeps track of the free chunks of each size. When data is allocated, the allocator looks for a free chunk which can snugly fit the data. If there isn't a free chunk of the required size, but there is one of a larger size, the larger chunk is split in half. If the resulting chunk is still too big, the process is repeated until the chunk fits or the smallest allowed size is reached. If there are two free chunks right next to each other and they are the two halves of a split, they can be merged back together to allow for larger allocations later. The two halves of a split are called "buddies", which is where this method gets its name.
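The split-and-merge dance is easier to see in code. The following is a toy buddy allocator of my own construction, not something from the article: to keep it short it manages byte offsets into an imaginary region rather than real memory, keeping a set of free offsets per power-of-2 size. A block of size 2s at offset o splits into buddies at o and o+s, and a freed block's buddy address is found by XOR-ing the offset with the block size:

```cpp
#include <cstddef>
#include <set>
#include <vector>

// Toy buddy-system sketch over byte offsets (no real memory is managed).
class BuddyAllocator {
public:
    BuddyAllocator(size_t totalSize, size_t minSize) : m_minSize(minSize) {
        // Start with one free block covering the whole region.
        size_t order = 0;
        while ((minSize << order) < totalSize) ++order;
        m_free.resize(order + 1);
        m_free[order].insert(0);
    }

    // Returns the offset of a block that snugly fits `size`, or -1.
    long allocate(size_t size) {
        size_t order = orderFor(size);
        if (order >= m_free.size()) return -1; // bigger than the region
        // Find the smallest free block that is big enough...
        size_t o = order;
        while (o < m_free.size() && m_free[o].empty()) ++o;
        if (o == m_free.size()) return -1;
        size_t offset = *m_free[o].begin();
        m_free[o].erase(m_free[o].begin());
        // ...and split it in half until it snugly fits the request.
        while (o > order) {
            --o;
            m_free[o].insert(offset + (m_minSize << o)); // upper half stays free
        }
        return static_cast<long>(offset);
    }

    void deallocate(size_t offset, size_t size) {
        size_t order = orderFor(size);
        // Merge with the buddy as long as the buddy is also free.
        while (order + 1 < m_free.size()) {
            size_t buddy = offset ^ (m_minSize << order);
            auto it = m_free[order].find(buddy);
            if (it == m_free[order].end()) break;
            m_free[order].erase(it);
            offset = offset < buddy ? offset : buddy;
            ++order;
        }
        m_free[order].insert(offset);
    }

private:
    size_t orderFor(size_t size) const {
        size_t order = 0;
        while ((m_minSize << order) < size) ++order; // round up to a power of 2
        return order;
    }
    size_t m_minSize;
    std::vector<std::set<size_t>> m_free; // free block offsets, per size
};
```

A real buddy allocator would also have to remember each allocation's size (or store it in a header) so that callers don't need to pass it back to `deallocate`.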
I learned about virtual addressing as well, but I don't have much to say about it at this point. The one thing I found interesting is that it is possible to have a "miss" with it, similar to having a cache miss. The CPU keeps recently used page mappings in a small cache called the translation lookaside buffer (TLB); when the mapping for the requested page isn't there, the result is a "TLB miss", and the mapping has to be fetched from the page tables instead.
These are some resources which I found very helpful:
Alternatives to malloc and new
The Memory Management Reference: Beginner's Guide
Start Pre-allocating And Stop Worrying
Virtual Addressing 101
Reposted from http://invisiblegdev.blogspot.com/