Is memory management a must have?

Started by
14 comments, last by Andrew Kesterson 11 years, 6 months ago

That sounds like the worst kind of micro-optimization. If you're worrying about things down to individual bytes and cycles, then you're worrying about the wrong things. There are rarely meaningful performance gains to be had from that kind of optimization. At the same time, if you have a struct with 5 ints but you only use one of them, the big question is - why on earth do you have 5 ints in the struct? If there's no reason for the other 4 to be there - get rid of them. But that's from a code-cleanliness perspective rather than anything else.


I think they're referring more to a design issue, where those 5 ints are conceptually coupled in some way (they're parameters of a "vehicle" or such), but the particular function you want to optimise only uses one of them (the "speed", for example). In that case it might be bad for memory throughput and the cache to work on such a sparse array.
Still a micro-optimisation though, and nothing you should worry about until performance measurements show you need it.

It's good to know about these "tricks" or "gems", but one should not confuse them with coding guidelines, and should not worry about them in daily work (unless you are a performance optimisation specialist). A sketch of the layout idea follows below.
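To make the "vehicle" example concrete, here is a minimal C++ sketch (the field names are hypothetical, following the example above): iterating an array-of-structs for one hot field drags the cold fields through the cache, while a struct-of-arrays layout keeps the hot field densely packed.

    #include <vector>

    // Array-of-structs: reading only "speed" still pulls the four
    // cold fields into the cache line alongside it.
    struct Vehicle {
        int speed;
        int fuel, gear, damage, weight; // cold fields, unused below
    };

    // Struct-of-arrays: each field lives in its own contiguous array.
    struct VehiclesSoA {
        std::vector<int> speed;
        std::vector<int> fuel, gear, damage, weight;
    };

    int totalSpeedAoS(const std::vector<Vehicle>& v) {
        int sum = 0;
        for (const Vehicle& x : v) sum += x.speed; // strides over cold data
        return sum;
    }

    int totalSpeedSoA(const VehiclesSoA& v) {
        int sum = 0;
        for (int s : v.speed) sum += s; // dense, cache-friendly traversal
        return sum;
    }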

That sounds like the worst kind of micro-optimization. If you're worrying about things down to individual bytes and cycles, then you're worrying about the wrong things. There are rarely meaningful performance gains to be had from that kind of optimization.

I wouldn't call it a micro-optimization at all, but rather a design issue, and one that is becoming more important every year. There are two issues at stake. The first is that fetching from RAM is slow and, relative to CPU speed, will continue to get slower. Therefore one of the most important questions is how you fetch and cache the data you operate on, and there is really no reason not to think about this... it usually makes the code much easier to read. On current-generation platforms (and likely future ones) it is extremely important, as you can't afford to DMA a bunch of data that you don't need to work on. So I wouldn't call this "the worst kind of optimization" but rather "the best kind of design".

The second issue is that when data is separated out and designed like this, it usually goes hand in hand with being much easier to parallelize. You aren't passing around a large object with the kitchen sink inside (where realistically anything could be called or changed)... you are able to pass large contiguous blocks of memory that hold the specific pieces of data to be worked on.
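A rough sketch of that second point, assuming a hypothetical particle-style update where positions and velocities live in flat contiguous arrays: each thread can be handed a disjoint range of the block, with no shared "kitchen sink" object and therefore no locking.

    #include <cstddef>
    #include <thread>
    #include <vector>

    // Integrate positions in parallel. The names and thread count are
    // illustrative; the point is that contiguous data splits cleanly
    // into independent ranges.
    void integrate(std::vector<float>& pos, const std::vector<float>& vel,
                   float dt) {
        const std::size_t n = pos.size();
        const std::size_t threads = 4;
        std::vector<std::thread> pool;
        for (std::size_t t = 0; t < threads; ++t) {
            std::size_t begin = n * t / threads;
            std::size_t end = n * (t + 1) / threads;
            pool.emplace_back([&, begin, end] {
                for (std::size_t i = begin; i < end; ++i)
                    pos[i] += vel[i] * dt; // ranges are disjoint: no locks
            });
        }
        for (auto& th : pool) th.join();
    }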


That sounds like the worst kind of micro-optimization. If you're worrying about things down to individual bytes and cycles, then you're worrying about the wrong things. There are rarely meaningful performance gains to be had from that kind of optimization. At the same time, if you have a struct with 5 ints but you only use one of them, the big question is - why on earth do you have 5 ints in the struct? If there's no reason for the other 4 to be there - get rid of them. But that's from a code-cleanliness perspective rather than anything else.


In my game I have on-the-fly mesh loading that streams from the HDD. A mesh of 100,000 vertices needs about 8 arrays, each megabytes in size, to dispatch to GPU RAM. If I were allocating those temporary byte arrays, sending them to the GPU, and freeing them each time, my game would stutter badly. Instead I started using preallocated memory for those temporary large arrays, and now I can load several 100,000-vertex models into the scene without a noticeable frame hitch (without textures, of course; those are preloaded for the whole world).

This was just an example; we are not talking about preallocating 5 ints. Dismissing preallocated memory as a useless optimization really misses the point.
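
As a rough sketch of the same idea (the class and the graphics call mentioned in the comments are hypothetical stand-ins, not the poster's actual code): allocate one worst-case staging buffer at startup and reuse it for every upload, so the per-load new[]/delete[] churn disappears.

    #include <cstddef>
    #include <vector>

    // One staging buffer sized for the worst-case mesh, allocated once
    // and reused for every upload.
    class MeshStaging {
    public:
        explicit MeshStaging(std::size_t worstCaseBytes)
            : buffer(worstCaseBytes) {}          // the only allocation

        // The caller fills this and hands it to the graphics API
        // (e.g. glBufferSubData); the buffer itself never reallocates.
        unsigned char* data() { return buffer.data(); }
        std::size_t capacity() const { return buffer.size(); }

    private:
        std::vector<unsigned char> buffer;
    };
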
As the other posters have pointed out, the gem refers to having a pool of memory that you allocate up front, as opposed to allocating on demand. This is the way that the JVM works; at startup time, it requests (from the operating system) the maximum amount of memory the program is configured to use (via its configuration flags), and then does its own allocations out of that memory later. This way it doesn't have to wait on the OS scheduler, kernel, whatever, to do the job for it, and it can arrange its memory however is optimal for that specific program. The previously mentioned boost::pool does the same thing. There are C libraries that do the same, etc., ad infinitum.

See the Wikipedia article on memory pools for more general information: http://en.wikipedia.org/wiki/Memory_pool
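For concreteness, here is a minimal fixed-size-block pool in C++, in the spirit of the post above and of boost::pool. It is only a sketch, not production code: one up-front allocation, O(1) acquire/release through an intrusive free list, no thread safety, and no growth.

    #include <cassert>
    #include <cstddef>
    #include <vector>

    class FixedPool {
    public:
        FixedPool(std::size_t blockSize, std::size_t blockCount)
            : storage(blockSize * blockCount), blockSize(blockSize) {
            // The free list is stored inside the blocks themselves,
            // so each block must be able to hold an aligned pointer.
            assert(blockSize >= sizeof(void*));
            assert(blockSize % alignof(void*) == 0);
            // Thread every block onto the free list.
            for (std::size_t i = 0; i < blockCount; ++i)
                release(storage.data() + i * blockSize);
        }

        void* acquire() {            // pop a block, or null if exhausted
            if (!freeList) return nullptr;
            void* block = freeList;
            freeList = *static_cast<void**>(freeList);
            return block;
        }

        void release(void* block) {  // push the block back onto the list
            *static_cast<void**>(block) = freeList;
            freeList = block;
        }

    private:
        std::vector<unsigned char> storage; // the single up-front allocation
        std::size_t blockSize;
        void* freeList = nullptr;
    };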

As the other posters have pointed out, the gem refers to having a pool of memory that you allocate up front, as opposed to allocating on demand. This is the way that the JVM works;


It is a mechanism I am hesitant about. It reminds me of eating lunch at a place like McDonald's: if it looks like the tables are not going to suffice, people start claiming tables before ordering their food. It is also like Microsoft Windows today: many applications take a long time to start, so they add pre-loaders that run at system startup or login.

Of course, there may be a speed benefit. But it can also result in everyone losing. Please excuse me for associations in tangent space.
Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/

It is a mechanism I am hesitant about. ... Of course, there may be a speed benefit. But it can also result in everyone losing. Please excuse me for associations in tangent space.


It's not suitable for every situation, certainly, but there are times when you know you are better off allocating everything up front, rather than piecemeal. YMMV.

This topic is closed to new replies.
