Memory Management Tips

Started by Emerican_360 · 9 comments, last by Emerican_360 16 years, 10 months ago
I've written a game, but after loading a bunch of models and textures the memory usage is extremely high for the little things my game does. Could someone please give me some tips or tricks on how to manage memory effectively? Thanks
Are you loading the same texture twice? If so, find a way around that so different objects share the same texture. Apart from that, the only solutions are lower-resolution textures and texture compression. I'm afraid I don't know how to do the latter automatically, if it's possible at all, but it can always be done manually with a shader that unpacks the compressed values.
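To illustrate the sharing idea, a minimal cache along these lines keeps one copy of each texture per file path. This is just a sketch; Texture and loadTextureFromFile are hypothetical stand-ins for whatever your engine actually uses.

```cpp
#include <map>
#include <memory>
#include <string>

struct Texture { /* GPU handle, dimensions, ... */ };

// Hypothetical loader - replace with your real one (D3DX, stb_image, ...).
std::shared_ptr<Texture> loadTextureFromFile(const std::string& path)
{
    return std::make_shared<Texture>();
}

class TextureCache {
public:
    // Returns the already-loaded texture for this path, or loads it once
    // and remembers it so every caller shares the same copy.
    std::shared_ptr<Texture> get(const std::string& path)
    {
        auto it = cache_.find(path);
        if (it != cache_.end())
            return it->second;
        auto tex = loadTextureFromFile(path);
        cache_.emplace(path, tex);
        return tex;
    }

private:
    std::map<std::string, std::shared_ptr<Texture>> cache_;
};
```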
If for some reason you're making a lot of small memory allocations, consider a pooler.
In some systems I've made, this halved memory consumption (and also resulted in a nice performance increase)!

Previously "Krohm"

I'm sorry if this sounds stupid, but what's a pooler?
Sorry, I'm not sure that's the right name.
An object pool. A suballocator. An object manager. Call it what you want.
In short, it's a layer on top of new and delete that exploits application-specific knowledge only your code can have, by allocating objects many at a time.

For example, if you have a "plane" object and your world is made of solids, you know there will always be a need for at least 4 planes.
If you know your average world eats - say - 1000 planes, you tell the suballocator to grab 80 planes from the heap in each batch.
The suballocator then provides functions to "allocate" managed objects and "free" them.

Because the heap stores a bit of metadata for each allocation, a few large allocations are more efficient than many small ones.
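A minimal sketch of such a suballocator (untested; the batch size of 80 follows the example above, and a production pool would construct objects in place rather than default-construct a whole batch):

```cpp
#include <cstddef>
#include <vector>

// Hands out objects one at a time from batches of ChunkSize grabbed from
// the heap in a single allocation each, so per-allocation metadata and
// bookkeeping costs are paid once per batch instead of once per object.
template <typename T, std::size_t ChunkSize = 80>
class Pool {
public:
    ~Pool()
    {
        for (T* chunk : chunks_)
            delete[] chunk;
    }

    T* allocate()
    {
        if (free_.empty())
            grow();                     // one heap allocation per ChunkSize objects
        T* p = free_.back();
        free_.pop_back();
        return p;
    }

    void release(T* p) { free_.push_back(p); }

private:
    void grow()
    {
        T* chunk = new T[ChunkSize];    // the one big grab from the heap
        chunks_.push_back(chunk);
        for (std::size_t i = 0; i < ChunkSize; ++i)
            free_.push_back(&chunk[i]);
    }

    std::vector<T*> chunks_;            // owned batches, freed in the destructor
    std::vector<T*> free_;              // slots currently available
};
```

Usage would be `Pool<Plane> planes;` and then `planes.allocate()` / `planes.release(p)` in place of new and delete.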

Previously "Krohm"

Memory optimisation is just the same as any other kind of optimisation - profile first and figure out where the low-hanging fruit lies. As a start you might consider tagging all your memory allocations so you can get a rough breakdown of where most of it is being spent, although that won't necessarily show everything if memory is being allocated and managed inside external APIs.
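For instance, one crude (hypothetical) way to tag allocations is to route them through a helper that keeps a per-tag byte count, then dump the totals:

```cpp
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <map>
#include <string>

// Running total of bytes requested per tag. A real tracker would also
// hook frees so the numbers reflect live memory rather than total traffic.
static std::map<std::string, std::size_t> g_bytesByTag;

void* trackedAlloc(std::size_t bytes, const char* tag)
{
    g_bytesByTag[tag] += bytes;
    return std::malloc(bytes);
}

void dumpMemoryReport()
{
    for (const auto& entry : g_bytesByTag)
        std::printf("%-12s %zu bytes\n", entry.first.c_str(), entry.second);
}

// Usage (hypothetical call sites):
//   void* pixels = trackedAlloc(width * height * 4, "textures");
//   void* verts  = trackedAlloc(numVerts * sizeof(Vertex), "geometry");
//   dumpMemoryReport();  // rough breakdown of where the memory goes
```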
Quote:Original post by Emerican_360
I've written a game, but after loading a bunch of models and textures the memory usage is extremely high for the little things my game does.

Could someone please give me some tips or tricks on how to manage memory effectively?

Thanks

Memory pools don't seem to be the answer here unless the OP suffers from some heavy fragmentation. Memory pools (great introduction here) can usually improve performance, but memory consumption often goes up because of the unused memory kept in the pools.

As OrangyTang said, you need to determine where all your memory goes. If it's down to sloppy programming, then you need to improve your code. If you simply have too many resources, consider compressing rarely used resources in memory and evicting very rarely used ones to disk when there's no room for them, then reloading them when they're needed again (this approach is known as caching). Whatever you choose, remember to find your problem before trying to fix it.
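A rough sketch of that caching idea, with Resource and loadFromDisk as hypothetical stand-ins for your own types: the cache holds at most a fixed number of resources and evicts the least recently used one when full.

```cpp
#include <cstddef>
#include <cstdint>
#include <map>
#include <memory>
#include <string>

struct Resource { /* decompressed model or texture data */ };

// Hypothetical loader - replace with your real one.
std::shared_ptr<Resource> loadFromDisk(const std::string& path)
{
    return std::make_shared<Resource>();
}

// Keeps at most maxEntries resources in memory; when full, the least
// recently used entry is dropped and simply re-loaded on its next request.
class ResourceCache {
public:
    explicit ResourceCache(std::size_t maxEntries) : maxEntries_(maxEntries) {}

    std::shared_ptr<Resource> get(const std::string& path)
    {
        auto it = entries_.find(path);
        if (it != entries_.end()) {
            it->second.lastUsed = ++clock_;       // mark as recently used
            return it->second.resource;
        }
        if (entries_.size() >= maxEntries_)
            evictOldest();
        Entry e{ loadFromDisk(path), ++clock_ };
        entries_.emplace(path, e);
        return e.resource;
    }

private:
    struct Entry {
        std::shared_ptr<Resource> resource;
        std::uint64_t lastUsed;
    };

    void evictOldest()
    {
        auto oldest = entries_.begin();
        for (auto it = entries_.begin(); it != entries_.end(); ++it)
            if (it->second.lastUsed < oldest->second.lastUsed)
                oldest = it;
        entries_.erase(oldest);
    }

    std::map<std::string, Entry> entries_;
    std::size_t maxEntries_;
    std::uint64_t clock_ = 0;
};
```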

Quote:In short, it's a layer on top of new and delete that exploits application-specific knowledge only your code can have, by allocating objects many at a time.

Overloading new and delete for pool allocation is generally a bad idea: every call to new and delete then pays for extra conditional checks and bookkeeping even when pooling isn't needed. Having the user explicitly specify when a memory pool is needed, its size, and its block size is usually the best approach, and it's also flexible enough to let other libraries and the actual application take advantage of it. This is also the approach taken by the memory pooling library Boost.Pool, which is great for integrating pooling into a project.
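For the record, explicit pooling with Boost.Pool looks roughly like this (assuming Boost is available; Plane is just an example type):

```cpp
#include <boost/pool/object_pool.hpp>

struct Plane { float a, b, c, d; };

int main()
{
    // The pool owns its memory: anything not destroyed explicitly is
    // released in one go when the pool itself goes out of scope.
    boost::object_pool<Plane> planePool;

    Plane* p = planePool.construct();   // allocate + construct from the pool
    // ... use p ...
    planePool.destroy(p);               // destruct + hand the slot back

    return 0;
}
```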

Quote:Because the heap stores a bit of metadata for each allocation, a few large allocations are more efficient than many small ones.

It does? Can you point me to a source supporting that? As far as I know that isn't the case unless you have some kind of heavy memory debugging enabled. Small allocations are slow and cause fragmentation, and that is usually the reason for using pooling.
A likely scenario, if you're relatively new to graphics programming, is loading a texture for every instance of an object instead of loading it just once for all instances to share. The same goes for any audio you load.

It's also entirely possible that you have a memory leak. Does your memory footprint keep increasing if you leave the game running for a while?

-me
Quote:Original post by CTar
but memory consumption often goes up because of the unused memory kept in the pools.
No. It goes up because of wrongly sized pools. That's considerably different.
Quote:Original post by CTar
Overloading new and delete for pool allocation is generally a bad idea: every call to new and delete then pays for extra conditional checks and bookkeeping even when pooling isn't needed.
It isn't overloaded (read the message carefully), and I'm fairly sure the extra checks aren't there either. The last compiler I remember producing if statements to resolve overriding dates back to around 1993! Overloading is a compile-time feature, while overriding is resolved with an indirect jump through the vtable: on today's x86 it costs about as much as an ordinary function call.
Quote:Original post by CTar
Quote:Because the heap stores a bit of metadata for each allocation, a few large allocations are more efficient than many small ones.

It does? Can you point me to a source supporting that? As far as I know that isn't the case unless you have some kind of heavy memory debugging enabled. Small allocations are slow and cause fragmentation, and that is usually the reason for using pooling.
Which is exactly what I suggested looking into. What are you trying to say here? The way the heap manages its free lists is implementation-specific. It could use this method, a bit-pattern-driven one, or whatever else.

I figure we're saying the same thing. (???)

Going back to the topic:
1) What is the "unexpected" memory usage?
2) How many assets are you loading?
3) Could you estimate how much memory you should be using?


Consider that a 100 kB JPEG can easily inflate to several megabytes of memory: a 1024×1024 image decompressed to 32-bit RGBA takes 1024 × 1024 × 4 bytes = 4 MiB, plus roughly a third more if mipmaps are generated.

Previously "Krohm"

Quote:Original post by Krohm
No. It goes up because of wrongly sized pools. That's considerably different.

Of course, if you size the pools so small that you never have unused blocks, the waste will be unnoticeable, but the performance gains will also be very small compared to pools sized closer to the average consumption.

The important thing when optimizing something like a game is to improve worst-case performance, not best-case performance. A player doesn't care about getting 2000 FPS instead of 1800 FPS while looking at the menu; he cares about getting 55 FPS instead of 45 FPS when engaging in a heavy fight. To really improve the worst case we need pools with more room in them, maybe even ones that grow and shrink dynamically.

Quote:It isn't overloaded (read the message carefully), and I'm fairly sure the extra checks aren't there either. The last compiler I remember producing if statements to resolve overriding dates back to around 1993! Overloading is a compile-time feature, while overriding is resolved with an indirect jump through the vtable: on today's x86 it costs about as much as an ordinary function call.

OK, I assumed you meant overloading the global new and delete. This approach is considerably better and I would accept it in most cases, but I still think it's inferior to having the user explicitly create pools. For the implicit approach to work well you need to guess how many of each object the user will allocate, and it will always remain a guess. It is much more efficient to let the user specify how many he or she actually intends to allocate.

Quote:Which is exactly what I suggested looking into.

Memory pooling is usually used to improve performance, but the OP complained that his memory usage was high. Techniques like caching and sharing would be much more appropriate for decreasing consumption.

Quote:What are you trying to say here? The way the heap manages its free lists is implementation-specific. It could use this method, a bit-pattern-driven one, or whatever else.

It is, but how many modern implementations use a bit pattern in release mode? Because it's implementation-defined, it could also be implemented so that large objects are more expensive than many small objects, but I just don't know of any implementation that does that.

