boost::any: how fast/how slow is it?


Has anybody ever had performance problems with boost::any? I'm calling boost::any_cast about 3 times for every sprite I render. Would that be too much?

I've read that it is quite slow, but then again, if you're not using it in performance-critical parts of the code or accessing it hundreds of times per frame, it gets the job done.

Run a benchmark yourself on a test that resembles the code you're going to write, both with and without boost::any.

Google for other benchmarks people have run.
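
Something along these lines would do as a starting point - a minimal sketch, with a made-up Device type and arbitrary iteration counts, comparing an any_cast on every access against a pointer you cast once and cached:

#include <boost/any.hpp>
#include <ctime>
#include <iostream>

struct Device { int value; };

int main()
{
    Device device = { 1 };
    boost::any context = &device;      // what a GetRenderContext() call might hand back
    Device* cached = &device;          // what you'd keep if you cast once and stored the result

    const int kIterations = 10000000;  // arbitrary; use counts that match your sprite workload
    volatile long sink = 0;

    std::clock_t t0 = std::clock();
    for (int i = 0; i < kIterations; ++i)
        sink += boost::any_cast<Device*>(context)->value;  // any_cast on every "draw"
    std::clock_t t1 = std::clock();
    for (int i = 0; i < kIterations; ++i)
        sink += cached->value;                             // cached raw pointer
    std::clock_t t2 = std::clock();

    std::cout << "any_cast loop:       " << double(t1 - t0) / CLOCKS_PER_SEC << " s\n";
    std::cout << "cached pointer loop: " << double(t2 - t1) / CLOCKS_PER_SEC << " s\n";
    return 0;
}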

Quote:
Original post by Antheus
Are these transgender sprites having an identity crisis? Why are they hiding in the any closet?

They are sprites, why not declare them as such.


They aren't. I was just experimenting with a modular rendering plug-in system. I now cache all the API-specific information before rendering, so I avoid using any_cast during the render loop, and my frame rate increased considerably.

I'm making a class that abstracts all sprite rendering, no matter what API it uses. It is something like this:

#include <boost/any.hpp>
#include <boost/shared_ptr.hpp>

class Renderer;
typedef boost::shared_ptr<Renderer> RendererPtr; // assuming a shared_ptr typedef; not shown in the original

class Renderer
{
public:
    virtual ~Renderer() {}
    // ... ... ...
    // Hands back the API-specific render context (e.g. a device pointer) wrapped in boost::any.
    virtual boost::any GetRenderContext() = 0;
};

class Sprite
{
public:
    virtual ~Sprite() {}
    // ...
    virtual bool Draw() = 0;
    virtual bool CreateSprite(RendererPtr renderer) = 0;
};




This way I can implement a D3D9Renderer and a D3D9Sprite and use GetRenderContext to give me the device pointer.

The sprite holds a reference to a generic renderer because, depending on which graphics API we're using, the sprite needs API-specific data, like a device or other context objects.
I'm not sure if it's the best way to go, but it has worked for me so far.
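
To make that concrete, a rough sketch of what I mean for the Direct3D 9 pair might look like this (the names and members here are only illustrative, and the any_cast happens once at load time, not per frame):

#include <d3d9.h>
#include <boost/any.hpp>

class D3D9Renderer : public Renderer
{
public:
    explicit D3D9Renderer(IDirect3DDevice9* device) : m_device(device) {}

    // Hands the raw device pointer out behind boost::any.
    virtual boost::any GetRenderContext() { return m_device; }

private:
    IDirect3DDevice9* m_device;
};

class D3D9Sprite : public Sprite
{
public:
    virtual bool CreateSprite(RendererPtr renderer)
    {
        // any_cast once, at load time; this throws boost::bad_any_cast if the
        // renderer is not actually the Direct3D 9 implementation.
        m_device = boost::any_cast<IDirect3DDevice9*>(renderer->GetRenderContext());
        return m_device != 0;
    }

    virtual bool Draw()
    {
        // Per-frame work uses the cached m_device; no any_cast here.
        return true;
    }

private:
    IDirect3DDevice9* m_device;
};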

Why does it need to be "anything"? In almost every case I can think of, I was able to use some parent interface (e.g. ISprite, which then has D3D9Sprite and D3D11Sprite implementations), which also meant I could use fast static casts (with double checks via dynamic_cast in debug builds).

Why can you not make an IRenderContext? If you're just returning, say, an IDirect3DDevice9, you could also use the COM interfaces (IUnknown and QueryInterface), or you could use a void* with an unsafe cast and check the type from somewhere else (e.g. a GetRendererType method which returns an enum identifying the implementation: D3D9, D3D10, OPENGL, etc.).
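
For reference, the "static_cast in release, dynamic_cast check in debug" idea can be wrapped in a small helper; something like this sketch (checked_cast is a made-up name, not a standard facility):

#include <cassert>

// Downcast helper: cheap static_cast in release builds; when NDEBUG is not
// defined, the assert verifies the cheap cast against an RTTI-checked one.
// Derived is passed as a pointer type, e.g. checked_cast<D3D9Sprite*>(sprite).
template <typename Derived, typename Base>
Derived checked_cast(Base* base)
{
    assert(dynamic_cast<Derived>(base) == static_cast<Derived>(base));
    return static_cast<Derived>(base);
}

// Hypothetical usage with the interfaces mentioned above:
// ISprite* sprite = /* ... */;
// D3D9Sprite* d3d9Sprite = checked_cast<D3D9Sprite*>(sprite);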

Quote:
Original post by Zahlman
What is a RenderContext, then? What types will you be any_cast<>ing to? And why aren't those types related by polymorphism?


RenderContext, in the Direct3D implementation, will be the device.
The Renderer object takes care of general render states and window-related operations.

The Renderer represents the device as a polymorphic class itself. But for some APIs, such as Direct3D, we still need a pointer to the device instance in order to load resources and buffers, which means other objects need access to it.

Quote:
Original post by Fire Lancer
Why does it need to be "anything"? In almost every case I can think of, I was able to use some parent interface (e.g. ISprite, which then has D3D9Sprite and D3D11Sprite implementations), which also meant I could use fast static casts (with double checks via dynamic_cast in debug builds).

Why can you not make an IRenderContext? If you're just returning, say, an IDirect3DDevice9, you could also use the COM interfaces (IUnknown and QueryInterface), or you could use a void* with an unsafe cast and check the type from somewhere else (e.g. a GetRendererType method which returns an enum identifying the implementation: D3D9, D3D10, OPENGL, etc.).


I thought it would be nice to let it be anything because then it would be capable of supporting any graphics API. It also won't let the user pass an OpenGL Renderer to create a Direct3D 9 Sprite, because the any_cast would fail. That would be a silly mistake to make, but still, the code is safer. Am I being overprotective? Anyway, it doesn't affect performance since I do all of it at load time.

I don't want to return IUnknown because then I would HAVE to include the Microsoft headers (please correct me if I'm wrong), and that kills my cross-platform portability.

I still don't see why you cannot have a common interface.

E.g. I have an "IGraphicsDevice" interface (with implementations for each renderer). My D3D9GraphicsDevice class holds the IDirect3DDevice9 object and provides a "GetDirect3DDevice()" method. Any class which operates with Direct3D at a low level and needs the underlying IDirect3DDevice9 can downcast IGraphicsDevice to D3D9GraphicsDevice to get the device (I generally use static_cast for release builds; the chances of getting it wrong are fairly low, and there are debug builds for that). I can do similar things for getting an IDirect3DTexture9 from my ITexture interface.

In this way I have a set of common interfaces, which is all that maybe 90% of my game code needs, and I can get at the low-level underlying DirectX objects where really needed.
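
Roughly, in code (a sketch; apart from GetDirect3DDevice(), the methods shown are just guesses at what such an interface might contain):

#include <d3d9.h>

class IGraphicsDevice
{
public:
    virtual ~IGraphicsDevice() {}
    // API-neutral operations live on the interface.
    virtual void Clear() = 0;
    virtual void Present() = 0;
};

class D3D9GraphicsDevice : public IGraphicsDevice
{
public:
    explicit D3D9GraphicsDevice(IDirect3DDevice9* device) : m_device(device) {}

    virtual void Clear()   { m_device->Clear(0, 0, D3DCLEAR_TARGET, 0, 1.0f, 0); }
    virtual void Present() { m_device->Present(0, 0, 0, 0); }

    // Escape hatch for the small amount of code that genuinely needs D3D9.
    IDirect3DDevice9* GetDirect3DDevice() { return m_device; }

private:
    IDirect3DDevice9* m_device;
};

// Low-level, D3D9-only code downcasts the shared interface where needed:
// D3D9GraphicsDevice* d3d = static_cast<D3D9GraphicsDevice*>(graphicsDevice);
// IDirect3DDevice9* dev = d3d->GetDirect3DDevice();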



Also, what's wrong with my GetRendererType suggestion? As long as you make sure each of your implementations uses a different return value, you're sorted, as it effectively forms your own RTTI system. A given implementation doesn't have to know about any others: if your Direct3D 9 implementation uses a value of 2, and when you try to create a Direct3D 9 sprite the renderer returns a type value of 5 (perhaps OpenGL), you know it is wrong. (This obviously assumes that casting the void pointer to your type will work and that the pointer does not need adjusting due to multiple inheritance or other complexities.)
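
A sketch of what I mean, reusing the Renderer/Sprite idea from earlier but with a type tag and a void* instead of boost::any (the enum values and names are made up):

enum RendererType { RENDERER_D3D9, RENDERER_D3D10, RENDERER_OPENGL };

class Renderer
{
public:
    virtual ~Renderer() {}
    virtual RendererType GetRendererType() const = 0;
    virtual void* GetRenderContext() = 0; // what this points at depends on the type above
};

// Inside the D3D9 sprite's creation code:
// if (renderer->GetRendererType() != RENDERER_D3D9)
//     return false;                             // wrong renderer for this sprite type
// m_device = static_cast<IDirect3DDevice9*>(renderer->GetRenderContext());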

Are you really going to pass different renderers to the same sprite? Why can't the renderer just create sprites for you? That way the sprite can store a pointer to the right graphics device internally, and you'll very rarely have to cast anything. This is what I do.
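
A sketch of that approach, assuming a SpritePtr typedef and a D3D9Sprite constructor that takes the device (neither of which exists in the code posted above):

#include <string>
#include <boost/shared_ptr.hpp>

typedef boost::shared_ptr<Sprite> SpritePtr; // assumed typedef, mirroring RendererPtr

class D3D9Renderer : public Renderer
{
public:
    // The renderer already knows its own device, so the sprite is handed it
    // directly; client code never sees the device and never casts.
    SpritePtr CreateSprite(const std::string& textureFile)
    {
        return SpritePtr(new D3D9Sprite(m_device, textureFile));
    }

private:
    IDirect3DDevice9* m_device;
};

// Client code:
// SpritePtr sprite = renderer->CreateSprite("player.png");
// sprite->Draw();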

Quote:
Original post by theOcelot
Are you really going to pass different renderers to the same sprite? Why can't the renderer just create sprites for you? That way the sprite can store a pointer to the right graphics device internally, and you'll very rarely have to cast anything. This is what I do.


No, a sprite will only ever get one renderer. That's why the cast isn't needed all the time.

I confess I'm quite confused now. I know that explicit casts and raw pointers should be avoided when we "think in C++", but should my conscience scream when I use any_cast as well? It is certainly possible to completely avoid casts, but it would make some simple tasks a lot harder.

Casting in this way is generally a strong sign that you could solve your problem using polymorphism. Yes, it may make things look harder at first, but once you're familiar with the correct tool for the job, it's actually a lot easier and a lot more powerful.

I would say that any_cast in your situation is on par with using void*, albeit slightly safer - it indicates that you could probably get much better results by thinking about things from a different angle.


Don't worry if that new angle doesn't come naturally or easily, though; it can take a lot of time and a lot of mind-bending to get used to some of the approaches that are necessary in designing robust code [smile]

I tend to think that if I find myself reaching for a (cast) or dynamic_cast, I'm a) doing something wrong, b) have misunderstood something, c) could do it better, or d) all of the above.

I have this idea in my mind that explicit casts are horrendously bad for performance (whether they are or not, I don't know), so I just avoid them at all costs, but I have never found this to be restrictive.

Quote:
Original post by AndyEsser
I have this idea in my mind that explicit casts are horrendously bad for performance (whether they are or not, I don't know), so I just avoid them at all costs, but I have never found this to be restrictive.

As a programmer, you need to strive to reduce this type of irrational behaviour. Even if it works in your favour in this specific case, it can become cargo-cult programming in the general case.

You should understand why casts (implicit and explicit) are best avoided, and then use that to inform your behaviour. You should also have a passing knowledge of the relative performance impact of specific casts: for example, some implementations of dynamic_cast<> involve string comparisons, which makes it generally unsuitable for innermost loops. But don't base all your decisions on the latter; keep the Pareto principle in mind when programming.

And basing such decisions on some unsubstantiated performance assumption indicates to me that you have no real idea how performance is gained in real systems. You'll hobble along with such arbitrary rules and end up with a system that:

  • Isn't particularly fast [1]

  • Is buggy

  • Is hard to maintain and extend

  • Is hard to optimise later


From my experience, unless you are very familiar with the problem domain, you will not predict all the bottlenecks in advance [2]. Profiling is the only meaningful way to optimise a program; everything else is just wishful thinking. If you cannot measure it, don't try to optimise it.


[1] At best - misplaced optimisation can have a negative performance impact.

[2] We recently had an issue with our continuous integration server. The full builds - particularly the unit tests - had been getting slower and slower for weeks and we couldn't figure out why. Everyone had a stab at what they suspected was the key (memory usage was a favourite contender). One guy was actually going to optimise the memory usage of a particular module he thought was the culprit. I spent some time measuring where the build was eating up time. It turned out to be a set of tests that were instantiating thousands of SecureRandom instances, which quickly exhausted the available pool of entropy in /dev/random or /dev/urandom. Changing the unit tests to use regular Random instances cut the build from 45 minutes to less than 20 minutes on average. A tiny change for massive results. Also, my colleague's memory optimisations would have completely failed to solve the problem. My lesson from that: always measure. Especially when you are "sure".

