
Tool to measure memory fragmentation


Ed Welch

Does anyone know of a tool to display the memory layout of a c++ program?

Memory fragmentation is a problem that can occur in long-lived apps that allocate and free memory a lot. The problem is that the developer may never know he even has a problem unless there is a tool that displays the heap layout. It would need a graphical display of the entire heap and the "holes" in memory that aren't getting reallocated.

frob

On a PC or other modern workstation that uses virtual memory, those holes don't really mean much. They are only logical holes, since the OS will page out lesser-used blocks of memory and page blocks back in when needed.

If you come close to exhausting your address space (e.g. 2GB or 3GB in a 32-bit program) and need another large allocation, then those holes matter, but for most PC programmers it hasn't been an issue for two decades.

If you're on a console, an embedded system, or any other system without virtual memory, then you will likely be using your own custom allocator. Inside your custom allocator you can gather statistics on memory fragmentation.

If you're on a system where fragmentation is a real potential problem, you'll need a custom allocator. Pull long-term allocations from one side of memory and operate on a stack discipline: allocate on entering a level of the stack, release on exiting that level, and make sure every system follows that push/pop design. Pull short-term allocations from the opposite side of memory, with strict rules about when you can and cannot allocate. Google can find lots of examples of the rules and requirements for such systems; the details need to match your system's design.
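The two-sided scheme above can be sketched roughly as follows. This is a minimal illustration, not frob's actual implementation; all names (`DoubleEndedArena` and its methods) are made up, and a real version would also handle alignment and per-system tagging:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical sketch of a two-ended arena: long-term, stack-ordered
// allocations grow from the front of one fixed buffer, short-term
// allocations grow from the back. Fragmentation is impossible by
// construction because each side releases in strict LIFO order.
// NOTE: a production version must also align returned pointers.
class DoubleEndedArena {
public:
    DoubleEndedArena(void* buffer, std::size_t size)
        : base_(static_cast<std::uint8_t*>(buffer)),
          size_(size), front_(0), back_(size) {}

    // Long-term allocations pull from the front of the buffer.
    void* AllocLongTerm(std::size_t bytes) {
        if (front_ + bytes > back_) return nullptr;  // arena exhausted
        void* p = base_ + front_;
        front_ += bytes;
        return p;
    }

    // Save a marker on entering a "level of the stack", and release
    // everything allocated since by restoring it on exit.
    std::size_t FrontMarker() const { return front_; }
    void ReleaseToFrontMarker(std::size_t marker) { front_ = marker; }

    // Short-term allocations pull from the opposite side of memory.
    void* AllocShortTerm(std::size_t bytes) {
        if (back_ - front_ < bytes) return nullptr;
        back_ -= bytes;
        return base_ + back_;
    }
    void ResetShortTerm() { back_ = size_; }

private:
    std::uint8_t* base_;
    std::size_t size_, front_, back_;
};
```

The point of the push/pop rule is that `ReleaseToFrontMarker` frees a whole level at once, so there is never a "hole" in the middle of the buffer.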

Ed Welch

I think you need to analyse the memory usage first.

Then, based on the results of the memory fragmentation analysis you can decide whether you need to use a custom allocator or not. Does that make sense?

wintertime

It's pretty easy to replace new and delete and #define malloc/realloc/free, then log all the data and call the real malloc/free or some OS function. It can change how the memory is fragmented, though.

Or you could map all memory using VirtualQuery, but that probably won't help you much, as the C/C++ allocation functions can preallocate memory and keep freed blocks around.
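As a minimal sketch of the first suggestion (assuming a plain C++ program, not any particular library), the global allocation operators can be replaced with pass-through versions that log each call before deferring to malloc/free:

```cpp
#include <cstdio>
#include <cstdlib>
#include <new>

// Logging pass-through replacements for the global allocation operators.
// fprintf is used rather than iostreams so the logging itself cannot
// recurse back into operator new.
void* operator new(std::size_t size) {
    void* p = std::malloc(size);
    if (!p) throw std::bad_alloc();
    std::fprintf(stderr, "new    %zu bytes -> %p\n", size, p);
    return p;
}

void operator delete(void* p) noexcept {
    std::fprintf(stderr, "delete %p\n", p);
    std::free(p);
}

// A real tracker would also replace operator new[]/delete[] and record
// (address, size) pairs so each free can be matched to its allocation.
```

As noted above, routing everything through malloc and adding logging can itself change the fragmentation pattern you are trying to observe.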

Hodgman
If you want this quickly, then just buy Elephant and Goldfish :)
http://www.juryrigsoftware.com/Elephant/Goldfish
I haven't used it, but I very nearly bought it when starting my current game project.

All the companies that I've worked for have had in-house allocators and tracking and visualisation tools, similar to the above. It's not that hard to implement yourself, but anything allocation-related takes a lot of testing before you can be confident you've not created subtle but dangerous bugs ;)

I think you need to analyse the memory usage first.
Then, based on the results of the memory fragmentation analysis you can decide whether you need to use a custom allocator or not. Does that make sense?

You need a custom allocator to implement tracking/logging though :lol:
To begin with, you can just do the logging and pass through to malloc/etc. instead of implementing a full allocator yourself.

If you start to come close to exhausting your memory space (e.g. 2GB or 3GB on a 32-bit program) and need another large allocation then those are important, but for most PC programmers it hasn't been an issue for two decades.

I wish! Lots of companies are still making the switch from x86 to 64bit even now :( My previous employer shipped their first 64bit PC game this year... And only because they were forced to by the address space limit. Regular 32bit x86 address space is really tight for modern games these days.

Ed Welch

Thanks Hodgman,

I'll have a look at that.

Incidentally, I made a suggestion to Apple to add memory fragmentation analysis to Instruments, and they seemed receptive to the idea (iOS is the platform I develop for).

Stainless

If you think it is going to be a major issue, then I would advise you to write your own memory manager and add a metrics system.

The way we work is to have multiple memory managers designed for different scenarios. From a very simple scratchpad to a complex self-defragging render pool, we have loads of them in the game.

As you can imagine, this can get very confusing and complicated.

To deal with that we have a metrics system. It is very simple in principle: every time you allocate or free memory, it creates a structure and sends it to a manager.

The manager has a ring buffer, and whenever the buffer is full it sends it over the network to an external app. The external app then matches allocations with frees and monitors the state of memory.

It is incredibly useful for finding memory leaks and keeping a very tight view of the memory use of a game, but it slows the game down a lot, so you have to use it with caution.

If you're a casual coder working on a hobby game I don't think it's worthwhile, but if you want to do a full AAA game, it's essential.
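The event-plus-ring-buffer scheme described above can be sketched like this. All names are hypothetical and the network flush is left as a stub, since the wire format depends entirely on the external matching app:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// One record per allocation or free, as in the metrics scheme described
// above. Keeping it small and fixed-size makes it cheap to buffer and ship.
struct AllocEvent {
    std::uintptr_t address;
    std::uint32_t  size;   // 0 marks a free
    std::uint32_t  tag;    // which pool/system made the request
};

template <std::size_t N>
class EventRing {
public:
    // Records an event; returns true when the push filled the buffer
    // (i.e. a flush to the external app just happened).
    bool Push(const AllocEvent& e) {
        events_[count_++] = e;
        if (count_ == N) {
            Flush();
            return true;
        }
        return false;
    }

private:
    // Stub: a real implementation would send events_ over the network
    // to the external app before resetting the count.
    void Flush() { count_ = 0; }

    std::array<AllocEvent, N> events_{};
    std::size_t count_ = 0;
};
```

Batching through the ring buffer keeps the per-allocation cost down to a struct copy, but as noted above, the overall overhead is still large enough that you only want this enabled in instrumented builds.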

SmkViper

I wish! Lots of companies are still making the switch from x86 to 64bit even now :( My previous employer shipped their first 64bit PC game this year... And only because they were forced to by the address space limit. Regular 32bit x86 address space is really tight for modern games these days.


I think they're making the switch now because the two major consoles are x64 only. So if you want to write a game for them, you're going to be writing 64-bit code whether you want to or not.

Companies have shown little interest in moving to 64-bit based purely on address space limitations, mostly because very few AAA devs are PC-only and therefore still had to struggle under the 512MB memory limitation of previous-gen consoles. So if you're already limited to a quarter of the memory a 32-bit process can use on your major platforms, why go through all the effort of converting your code base to 64-bit on PC, where your assets aren't going to be big enough in the first place to make use of the extra memory?

Bregma


I wish! Lots of companies are still making the switch from x86 to 64bit even now. My previous employer shipped their first 64bit PC game this year... And only because they were forced to by the address space limit. Regular 32bit x86 address space is really tight for modern games these days.

I had a job porting stuff to 64-bit Unix and Linux systems in 1995. That's 20 years ago. Of course, at the time most Windows games were still 16-bit and slowly being ported to 32-bit protected mode, and 8-bit consoles were still available in dwindling numbers. Ironically, I now spend a lot of time trying to shoehorn stuff into tiny SoCs. Thank goodness we never have to worry about portability and efficiency any more.

Hodgman

I think they're making the switch now because the two major consoles are x64 only. So if you want to write a game for them, you're going to be writing 64-bit code whether you want to or not. Companies have shown little interest in moving to 64-bit based purely on address space limitations, mostly because very few AAA devs are PC-only and therefore still had to struggle under the 512MB memory limitation of previous-gen consoles. So if you're already limited to a quarter of the memory a 32-bit process can use on your major platforms, why go through all the effort of converting your code base to 64-bit on PC, where your assets aren't going to be big enough in the first place to make use of the extra memory?

Nah, in this particular case they ported to the new consoles first, but stayed with x86 on Windows. I pushed hard for 64bit Windows when RenderDoc was unable to perform a GPU capture (as doing so copies all your GPU resources, doubling your address space requirements!). Seeing as the new consoles have >3GB RAM, the art team very quickly broke through the old 256MB GPU RAM limit.
Compatibility with debugging tools like this, and occasional D3D "out of memory" errors, are I think what pushed them.
Their toolchain stayed on x86 for longer, but some middleware vendors stopped supplying x86 binaries, which was initially dealt with by splitting the tool over multiple processes and using IPC! :lol: The huge job of fixing that only happened recently. The big factors there were that the IPC was dodgy, but also address space issues. The main tool codebase was C#, which calls out to many different bits of middleware. The texture-processing middleware likes to work in 32bit precision, so an 8k texture occupies a gig of RAM. Even with lots of hints and explicit collection calls to the C# GC, trying to malloc that much RAM often failed randomly...

SmkViper

I think they're making the switch now because the two major consoles are x64 only. So if you want to write a game for them, you're going to be writing 64-bit code whether you want to or not. Companies have shown little interest in moving to 64-bit based purely on address space limitations, mostly because very few AAA devs are PC-only and therefore still had to struggle under the 512MB memory limitation of previous-gen consoles. So if you're already limited to a quarter of the memory a 32-bit process can use on your major platforms, why go through all the effort of converting your code base to 64-bit on PC, where your assets aren't going to be big enough in the first place to make use of the extra memory?

Nah, in this particular case they ported to the new consoles first, but stayed with x86 on Windows. I pushed hard for 64bit Windows when RenderDoc was unable to perform a GPU capture (as doing so copies all your GPU resources, doubling your address space requirements!). Seeing as the new consoles have >3GB RAM, the art team very quickly broke through the old 256MB GPU RAM limit.
Compatibility with debugging tools like this, and occasional D3D "out of memory" errors, are I think what pushed them.
Their toolchain stayed on x86 for longer, but some middleware vendors stopped supplying x86 binaries, which was initially dealt with by splitting the tool over multiple processes and using IPC! :lol: The huge job of fixing that only happened recently. The big factors there were that the IPC was dodgy, but also address space issues. The main tool codebase was C#, which calls out to many different bits of middleware. The texture-processing middleware likes to work in 32bit precision, so an 8k texture occupies a gig of RAM. Even with lots of hints and explicit collection calls to the C# GC, trying to malloc that much RAM often failed randomly...


Ah - always interesting to see how other studios do things. My post was accurate to how things have gone over at my studio :) (Even though we still have some 32-bit tools, usually due to middleware vendors not updating libraries to 64-bit, if they're updated at all.)

I'm mildly surprised you'd use IPC instead of, say, command line parameters and files, but I guess that depends on the tool and how it is expected to integrate.

