Heap Management

36 comments, last by ill 12 years, 4 months ago
"Premature optimization" is the wrong warning phrase. I said as much.


In any case, I strongly disagree that people should be (in general) left to decide whether or not to reinvent a wheel. Unless you are exceptionally good, or have very sound evidence that you can do better, existing solutions are almost always the right thing to use.

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]


Rule of thumb: When you feel the need to claim “premature optimization”, there is usually a better way to explain why you feel someone should not undertake said task.


And yet, in this thread where you moan about it, the first person to use the term goes on to explain why.... so, what was your point again?


While I have had a beef with that phrase for a long while, perhaps the reason this thread was the straw that broke the camel's back is that, as rightly mentioned in this article, memory managers are not something you can one day realize you need and then simply bolt on.
It is a mature optimization, in that if you ever need one, you need one from the very start.

Since the needs of each person vary, it is not our place to say he does not need a memory manager. And the frank fact is that if we say he doesn't, but later down the road he does, we have done him a huge disservice, as there is no recovery from that situation.


Yes, he explained why he said that. And in doing so he suggested that you wait until the memory system proves itself to be a bottleneck (#1), trivialized the gains in performance you could potentially get (#2), and suggested that the problem is higher-level (#3).
#1: Again perpetuating the flawed view many programmers have of the meaning of the phrase. No, you most certainly do not wait until your memory system proves itself to be a bottleneck.
#2: The gains can be huge. Not only in performance but in added debugging.
#3: It certainly could be a higher-level problem. But custom memory managers exist and are widely used for a reason. The simple fact is that improving your overall engine design might actually give you even more of a reason to write a custom memory manager. You can add features that help your overall engine design, and again this needs to be something you lay down early in the project, not later.
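The "added debugging" gain mentioned in #2 can be sketched with a minimal tracking wrapper over the system heap. This is purely illustrative (the `TrackedHeap` name and size-stashing layout are invented for this example, not taken from any engine in the thread), but it shows the kind of leak and usage accounting a custom manager can bake in from day one:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <new>

// Illustrative sketch: a counting wrapper over malloc/free that tracks
// live allocations. A real engine allocator would add alignment control,
// tagging, and per-subsystem pools on top of this idea.
struct TrackedHeap {
    std::size_t liveBlocks = 0;
    std::size_t liveBytes  = 0;

    void* allocate(std::size_t n) {
        // Stash the size just before the user pointer so deallocate() can
        // recover it. (Ignores alignment concerns for brevity.)
        void* raw = std::malloc(n + sizeof(std::size_t));
        if (!raw) throw std::bad_alloc();
        *static_cast<std::size_t*>(raw) = n;
        ++liveBlocks;
        liveBytes += n;
        return static_cast<char*>(raw) + sizeof(std::size_t);
    }

    void deallocate(void* p) {
        if (!p) return;
        char* raw = static_cast<char*>(p) - sizeof(std::size_t);
        std::size_t n = *reinterpret_cast<std::size_t*>(raw);
        --liveBlocks;
        liveBytes -= n;
        std::free(raw);
    }
};
```

At shutdown, a nonzero `liveBlocks` immediately tells you something leaked, without any external tooling.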



One of my early game engines was slow partially because I didn’t have a custom memory manager. I didn’t have the ability to exploit certain features that would have been a huge gain to my overall performance, and I had to design around that.
On a later project I realized this problem and one of the first things I did was make a custom memory manager that met the needs of a superior engine design. It proved to be one of the most essential parts of my engine.

That is my point.


L. Spiro


"Memory manager" is a terrible term and I hereby propose that anyone who uses it henceforth in this thread deserves a swift kick in the face.


Writing a wrapper around an existing allocation system - or, better, deploying specialized allocation schemes instead of dumping everything on a general-purpose heap allocator - is a fine and dandy goal. I would go so far as to say that if you aren't mindful of allocation patterns to the point where you are controlling them via smarter allocation schemes, you're doing it wrong (on non-trivial projects at least; don't forget that plenty of games can run on modern hardware without being stupidly optimal).

If you write your own heap allocator in hopes of improving everything, you're also doing it wrong. If you write a heap allocator without first deploying better allocation schemes in general, you're... well, to keep it politically correct, you've got a hell of a lot to learn.
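To make the "specialized allocation schemes" point concrete, here is a minimal fixed-size free-list pool. All names (`FixedPool`, etc.) are invented for this sketch; it is one of the simplest such schemes, turning allocation and release into pointer pops and pushes on a preallocated slab instead of general-purpose heap calls:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of a fixed-size free-list pool: one block size, carved out of a
// single preallocated slab. Not thread-safe; alignment works here because
// the block size is a multiple of sizeof(void*).
class FixedPool {
public:
    FixedPool(std::size_t blockSize, std::size_t blockCount)
        : blockSize_(blockSize < sizeof(void*) ? sizeof(void*) : blockSize),
          storage_(blockSize_ * blockCount),
          freeHead_(nullptr) {
        // Thread every block onto the free list, last block first.
        for (std::size_t i = blockCount; i-- > 0; ) {
            void* block = storage_.data() + i * blockSize_;
            *static_cast<void**>(block) = freeHead_;
            freeHead_ = block;
        }
    }

    void* allocate() {
        if (!freeHead_) return nullptr;              // pool exhausted
        void* block = freeHead_;
        freeHead_ = *static_cast<void**>(block);     // pop
        return block;
    }

    void release(void* block) {
        *static_cast<void**>(block) = freeHead_;     // push
        freeHead_ = block;
    }

private:
    std::size_t       blockSize_;
    std::vector<char> storage_;
    void*             freeHead_;
};
```

Pools like this are typically deployed per object type (particles, messages, nodes) precisely because you know the allocation pattern, which is the whole point of the post above.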


Advocating smarter allocation schemes is good, and I support this. For 99.9% of programmers, this means using existing allocation scheme libraries - which, generally speaking, also get you access to debug instrumentation and inspection tools, which solves basically all of the problem.

Advocating custom heap allocators is bad, and I do not support it in any way, shape, or form - especially when the advocate is working with precisely zero information on the nature of the program and its allocation patterns in the first place.


Yes, a little bit of pre-emptive consideration can save you a lot of pain down the road. But deploying complex allocation strategy frameworks is not always a necessity. If you're writing PacMan or a Zelda clone for the contemporary PC, you have no need to waste your time on such things. If you're working on a mobile device or writing the next Halo, then yeah, you'd be silly not to do some memory planning up front.



Sometimes, the "right tool for the job" means not dropping a bomb on a village just to eradicate a couple of mice.



Carelessly broad generalizations are for people who are incapable of thinking for themselves.



Advocating custom heap allocators is bad, and I do not support it in any way, shape, or form....

Before I begin to violently disagree with you, could you clarify what you mean by "custom heap allocators?"

Hand-rolling a general-purpose heap allocator by yourself.

There do exist a few general-purpose heap allocation implementations that are very, very good - and I use them. (dlmalloc comes to mind readily, for instance.)

I should have been clearer: I'm not against replacing heap allocators if there is a need for it (and on many platforms, especially when pushing the limits of the system, there is a definite need). I'm against the average programmer striking out to reinvent that particular wheel, because it's bloody hard and I've seen far more horrifically bad attempts at it than good ones.


Ok, that makes a lot more sense than the first sentence.
On 11/29/2011 at 5:38 PM, ApochPiQ said:

"Premature optimization" is the wrong warning phrase.

I recognize that you have other, very valid, concerns about this as well, but the P.O. objection is perfectly valid.

Granted, it's possible to deploy a custom heap without introducing any custom requirements about how it's used, such that most engineers can proceed as usual without even being aware of it, and most existing code just works with it without modification. I've never seen one end up quite that clean in practice though. But I concede that, as opposed to the general case of premature optimization, replacing a heap allocator is likely to add relatively little complication to the rest of development.

More significantly, I've never seen a performance problem, whose root cause involved dynamic memory allocation, that wasn't addressed far more effectively with a little careful consideration of resource usage strategies. In other words, when allocation related performance problems do arise, improving heap allocation performance is rarely necessary or sufficient. Spending time and effort to improve it speculatively is, by definition... well, you get the idea.
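A tiny example of the "resource usage strategy" point above: instead of making the heap faster, stop hitting it. The frame/scratch-buffer framing below is invented for illustration, but the technique (reusing one buffer across iterations so steady-state work performs zero heap allocations) is the standard move:

```cpp
#include <cassert>
#include <vector>

// Reuse one scratch buffer across simulated "frames". clear() keeps the
// vector's capacity, so after warm-up no frame touches the heap at all.
void runFrame(std::vector<int>& scratch, int itemCount) {
    scratch.clear();               // size -> 0, capacity preserved
    for (int i = 0; i < itemCount; ++i)
        scratch.push_back(i);      // reallocates only if capacity is exceeded
}
```

Compare this with allocating a fresh vector inside the loop: the allocator's speed becomes largely irrelevant once it is called once instead of once per frame.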

I'm sure a heap manager is very useful on consoles and any other systems with terrible default allocators.

But testing nedmalloc, using the C++ tests it supplies:


Speed test:
vector<unsigned int>:
Appending each of 10,000,000 elements: 85.200315ms
Clearing 5,000,000 elements: 0.000769ms
Assigning 5,000,000 elements: 8.873440ms
Appending block of 5,000,000 elements: 36.902473ms
Popping 4,999,999 elements: 10.842796ms
Overallocation wastage: 56.249992%
Total time: 141.819793ms

vector<UIntish>:
Appending each of 10,000,000 elements: 94.803858ms
Clearing 5,000,000 elements: 0.000000ms
Assigning 5,000,000 elements: 13.504598ms
Appending block of 5,000,000 elements: 42.104648ms
Popping 4,999,999 elements: 10.760179ms
Overallocation wastage: 56.249992%
Total time: 161.173283ms

nedallocatorise<vector, UIntish, nedpolicy::typeIsPOD<true>>:
Appending each of 10,000,000 elements: 105.057961ms
Clearing 5,000,000 elements: 0.000384ms
Assigning 5,000,000 elements: 13.207177ms
Appending block of 5,000,000 elements: 46.086399ms
Popping 4,999,999 elements: 10.768633ms
Overallocation wastage: 56.249992%
Total time: 175.120553ms


The vector using the default allocator was the fastest.
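For anyone wanting to reproduce a comparison like the one quoted above, a minimal version of such a micro-benchmark is easy to sketch. This times only the default allocator (the function name is invented here, and absolute numbers will vary wildly by machine and build flags, so treat it as a sketch, not a verdict):

```cpp
#include <chrono>
#include <cstddef>
#include <vector>

// Time appending n elements to a vector<unsigned int> with the default
// allocator, in milliseconds. Run release builds and repeat several times
// before drawing any conclusions.
double appendMillis(std::size_t n) {
    auto start = std::chrono::steady_clock::now();
    std::vector<unsigned int> v;
    for (std::size_t i = 0; i < n; ++i)
        v.push_back(static_cast<unsigned int>(i));
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(stop - start).count();
}
```

Swapping in a custom allocator via the vector's allocator parameter (as the nedmalloc test does) gives the comparison column.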
I'm strongly against "Not Invented Here" syndrome, which is exactly what rolling your own allocation system amounts to, as many posters have suggested.

I'm a pretty competent programmer and in toy projects have tinkered with memory allocation, but when it comes to getting work done, I'd much rather abstract allocations, or resource loading or whatever somewhat and plug in a proven third party library than roll my own. If it becomes a problem when profiling then it's worth revisiting, but other than that, why bother?

Perhaps premature optimization is the wrong term. Maybe obsessive-compulsive optimization is more fitting? So many developers seem to lose sight of the point, which is making things go using all the resources at their disposal; instead, they worry about edge cases and optimizations that may or may not be required.

Surely the true skill is coding in such a way that you don't paint yourself into a corner if you do decide you need to write your own DDS loader or memory allocator?

