Zipster

Heap and Memory Management Questions


Just a few questions that have been on my mind:

1) I'm writing an application (C++) that dynamically links against several DLL modules. Each one uses dynamic memory allocation with new/delete, and they're all linked against the DLL runtimes (MSVCR71.dll or MSVCR71D.dll). My question is, since they're all linked to the same runtime, does that mean they all use the same heap? Furthermore, does this linkage determine which heap each module uses, or is it always the process heap, with the difference being in the allocation structures (say, for instance, I chose to statically link one of the modules against the runtime instead)? All my memory management is localized to each module; I'm just curious how the above works.

2) I keep hearing that frequent allocation and deallocation of memory (new/delete, malloc/free, HeapAlloc/HeapFree) can be a bottleneck if done too often per frame, but I've never written an application that allocates on a large enough scale to see the effect for myself. I was wondering just how slow these allocations are (relative to other common operations), what affects their speed, and what scale people mean by "frequent". The reason I ask is that I'm planning to write my own memory manager that acts as a layer between the application and the CRT, keeping track of allocated memory and using a paginated approach to minimize allocations. However, if the overhead of my memory manager exceeds what I gain from minimizing frequent allocations, it wouldn't be worth it. I know this depends on how the memory manager is implemented, but do I stand to gain anything from writing my own memory manager, or are allocations simply not that slow? Secondly, would I gain more from better management at the object level instead?
An example of object-level management would be keeping two lists of objects, a live list and a dead list, and recycling a dead object whenever a new one is needed, or allocating a new one (via CRT new) if the dead list is empty. The benefit of object-level management is that I don't have to write a complicated memory manager; the downside is that I have to maintain a static list for each type of object (more easily implemented with a templated base class, but still not pretty). The benefit of using my own memory manager is that I can just go along allocating and releasing objects and memory all willy-nilly; the downside is that the memory manager might be slower in the long run. Again, I really don't have much experience thinking about this in such detail, so I'm looking for what people have done in the past. Lastly, thanks for reading this monster of a post! [smile]
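The live-list/dead-list scheme described above can be sketched in a few lines. `Pool` is a hypothetical illustrative name, not a real library; note that in this minimal version only the dead list needs actual storage, since live objects are already owned by whoever acquired them.

```cpp
#include <cstddef>
#include <vector>

// Minimal object-recycling pool: recycled objects sit on a "dead list"
// and are handed back out before any new CRT allocation is made.
template <typename T>
class Pool {
public:
    ~Pool() {
        for (T* p : dead_) delete p;   // free whatever is still recyclable
    }
    T* acquire() {
        if (dead_.empty())
            return new T();            // dead list empty: fall back to CRT new
        T* p = dead_.back();           // otherwise recycle a dead object
        dead_.pop_back();
        return p;
    }
    void release(T* p) {
        dead_.push_back(p);            // goes on the dead list, not back to the CRT
    }
    std::size_t deadCount() const { return dead_.size(); }

private:
    std::vector<T*> dead_;             // the "dead list" of recyclable objects
};
```

A released object is returned verbatim by the next `acquire()`, so the steady-state cost per object is a vector push/pop rather than a heap round trip; the trade-off is that recycled objects keep their old state, so the caller has to reinitialize them.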

1. Yes, they will share the same heap. The CRT creates a private heap within each module it is statically linked into, but it constructs these arenas from the base process heap.

2. I don't know enough to say one way or the other. Do try out Doug Lea's malloc replacement, though -- a variant of it is the basis of the glibc allocator on Linux. I'd think that any sort of object recycling scheme would be far simpler to implement than a fast, general-purpose memory manager.

