boost::any: how fast/how slow is it?

Started by
16 comments, last by rip-off 13 years, 7 months ago
That's a good point, Fire Lancer, thanks.
...
Are you really going to pass different renderers to the same sprite? Why can't the renderer just create sprites for you? That way, the sprite can store a pointer to the right graphics device internally. That way you'll very rarely have to cast anything. This is what I do.
Quote:Original post by theOcelot
Are you really going to pass different renderers to the same sprite? Why can't the renderer just create sprites for you? That way, the sprite can store a pointer to the right graphics device internally. That way you'll very rarely have to cast anything. This is what I do.


No. A sprite will only get one renderer. This is why the casting isn't necessary all the time.
...
Casting is your conscience. It screams 'this is wrong'.
I confess I'm quite confused now. I know that explicit casting and raw pointers must be avoided while we "think C++", but should my conscience scream when I use any_cast as well? It is certainly possible to completely avoid casts, but it would make some simple tasks a lot harder.
...
Casting in this way is generally a strong sign that you could solve your problem using polymorphism. Yes, it may make things look harder at first, but once you're familiar with the correct tool for the job, it's actually a lot easier and a lot more powerful.

I would say that any_cast in your situation is on par with using void*, albeit slightly safer - it indicates that you could probably get much better results by thinking about things from a different angle.


Don't worry if that new angle doesn't come naturally or easily, though; it can take a lot of time and a lot of mind-bending to get used to some of the approaches that are necessary in designing robust code [smile]

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

If I find myself reaching for a (cast) or dynamic_cast, I tend to think that I'm a) doing something wrong, b) have misunderstood something, c) could do it better, or d) all of the above.

I have this idea in my mind that explicit casts are horrendously bad for performance (whether they are or not, I don't know), so I just avoid them at all costs; I have never found this to be restrictive.
Quote:Original post by AndyEsser
I have this idea in my mind that explicit casts are horrendously bad for performance (whether they are or not, I don't know), so I just avoid them at all costs; I have never found this to be restrictive.

As a programmer, you need to strive to reduce this type of irrational behaviour. Even if it works in your favour in this specific case, it can become cargo-cult programming in the general case.

You should understand why casts (implicit and explicit) are best avoided, and then use that to inform your behaviour. You should also have a passing knowledge of the relative performance impact of the special casts: for example, some implementations of dynamic_cast<> involve string comparisons, which makes it generally unsuitable for innermost loops. But don't base all your decisions on the latter; you must keep the Pareto principle in mind when programming.

And basing such decisions on some unestablished performance metric indicates to me that you have no real idea how performance is gained in real systems. You'll hobble along with such arbitrary rules and end up with a system that:

  • Isn't particularly fast [1]

  • Is buggy

  • Is hard to maintain and extend

  • Is hard to optimise later


From my experience, unless you are very familiar with the problem domain you will not predict all the bottlenecks in advance [2]. Profiling is the only meaningful way to optimise a program; everything else is just wishful thinking. If you cannot measure it, don't try to optimise it.

[1] At best: misplaced optimisation can have a negative performance impact.

[2] We recently had an issue with our continuous integration server. The full builds - particularly the unit tests - had been getting slower and slower for weeks, and we couldn't figure out why. Everyone had a stab at what they suspected was the key (memory usage was a favourite contender). One guy was actually going to optimise the memory usage of a particular module he thought was the culprit. I spent some time measuring where the build was eating up time. It turned out to be a set of tests that were instantiating thousands of SecureRandom instances, which quickly exhausted the available pool of entropy in /dev/random or /dev/urandom. Changing the unit tests to use regular Random instances produced a speedup from 45 minutes to less than 20 minutes on average. A tiny change for massive results. Also, my colleague's memory optimisations would have totally failed to solve the problem. My lesson from that: always measure. Especially when you are "sure".

