downward_spiral

Ok to use a mix of floats and doubles?

I'm building a simulator which needs precision from 1000 km down to 1 cm, which unfortunately seems to be just beyond what a standard float can represent. I can't be bothered using sectors, and I'm willing to live with the memory cost of switching to doubles. I have two questions though:
1) How much speed is really lost from using doubles instead of floats?
2) There is a lot of data which doesn't need to be stored as doubles. If I pass floats into functions which take doubles, is there a performance hit from the conversion?
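To illustrate the precision problem: 1000 km at 1 cm resolution is 10^8 distinct steps, or about 27 bits, while a float only carries 24 bits of mantissa. A minimal standalone check (my own sketch; exact output may vary by platform):

#include <cmath>
#include <cstdio>

int main() {
    // A position 1000 km from the origin, stored in metres.
    float  posF = 1.0e6f;
    double posD = 1.0e6;

    // The gap to the next representable value (one ULP) is the finest
    // resolution the type can offer at this magnitude.
    printf("float  resolution at 1000 km: %g m\n",
           std::nextafter(posF, 2.0e6f) - posF);
    printf("double resolution at 1000 km: %g m\n",
           std::nextafter(posD, 2.0e6) - posD);
    return 0;
}

On a typical IEEE 754 system the float spacing at 1,000,000 m comes out to 0.0625 m, i.e. coarser than the 1 cm target, while double has precision to spare.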

Well, the answer to this is architecture dependent, I guess: some machines perform float calculations better than doubles, and some handle doubles better than floats.
Nowadays, most machines perform calculations faster with double types.

Quote:
Original post by downward_spiral
1) How much speed is really lost from using doubles instead of floats?


I'm not sure about this, but I believe the x87 FPU does all its arithmetic at a single internal precision regardless of the operand type, so the calculations themselves cost about the same. However, since the data bus on most systems is 32-bit, it might take longer to shuffle around a large number of doubles instead of floats.
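You can actually ask the compiler what it does with intermediate results. A minimal check (assuming a C99-style <cfloat>; the macro may be missing on older compilers):

#include <cfloat>
#include <cstdio>

int main() {
    // FLT_EVAL_METHOD reports the precision of intermediate results:
    // 0 = each operation in its operand's own type (typical with SSE),
    // 1 = float operations widened to double,
    // 2 = everything widened to long double (typical with the x87 FPU).
    printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);
    printf("sizeof(float)=%u, sizeof(double)=%u, sizeof(long double)=%u\n",
           (unsigned)sizeof(float), (unsigned)sizeof(double),
           (unsigned)sizeof(long double));
    return 0;
}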

Quote:
Original post by downward_spiral
2) There is a lot of data which doesn't need to be stored as doubles. If I pass floats into functions which take doubles, is there a performance hit from the conversion?


Possibly. However, as with point 1, things like this only affect performance if you're really crunching numbers. For most things, a few CPU cycles either way won't make any difference on a system that isn't a dinosaur.
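The conversion in question looks like this (a contrived sketch; the function names are made up):

#include <cstdio>

// Takes doubles, as in the question. Passing floats forces an implicit
// float -> double conversion at each call site.
static double accumulate(double sum, double value) {
    return sum + value;
}

int main() {
    float samples[4] = { 1.5f, 2.5f, 3.5f, 4.5f };
    double total = 0.0;
    for (int i = 0; i < 4; ++i) {
        // samples[i] is widened to double right here; with SSE2 that is
        // typically a single cvtss2sd instruction per call.
        total = accumulate(total, samples[i]);
    }
    printf("total = %f\n", total);
    return 0;
}

So the cost per call is tiny; it only adds up if the call sits in a hot inner loop.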

Quote:
Original post by TEUTON
Well, the answer to this is architecture dependent, I guess: some machines perform float calculations better than doubles, and some handle doubles better than floats.


Architecture as in processor, OS, or compiler dependent? I'm writing for 32-bit x86 Win32 with GCC, and probably won't ever need to port it, as it's for a specific purpose.

Quote:
Original post by TEUTON
Nowadays, most machines perform calculations faster with double types.


Reassuring, thanks. Maybe I'm just being too retro and should just fire in with whatever works?

Remember DOPE: (D)on't (O)ptimize (P)rematurely, (E)ver! Get it correct before you get it fast, because most of the time it's fast enough, and the times when it isn't, it's still better to be right than to be fast.

Quote:
Original post by kSquared
Remember DOPE: (D)on't (O)ptimize (P)rematurely, (E)ver! Get it correct before you get it fast, because most of the time it's fast enough, and the times when it isn't, it's still better to be right than to be fast.


I disagree in the original poster's context.

In games it is typical that numeric computation accounts for around 90% of the CPU budget, spent on scene graph traversal, physics simulation, collision, and path finding, while the other 10% is spread thinly across core game logic. In simulations the share taken by numeric computation may be even higher.

It is imperative that you spend a good amount of time optimizing and designing for speed early. Otherwise, if you don't pick the right data structures and formats, it will be difficult to refactor those core structures for optimization at the end.

My suggestion would be to use a typedef (usually the name 'real' is chosen) for double and use that everywhere. If you later find that performance is a problem, you can switch the typedef and code in some kind of sectors/space partitioning, or just replace the critical sections.
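Something along these lines (illustrative only; the names are made up):

// One central switch for the simulation's scalar type. Flipping this
// single line moves the whole codebase between double and float.
typedef double real;

struct Vec3 {
    real x, y, z;
};

inline real dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

Literal constants are worth wrapping too, e.g. real(0.5), so they don't silently force double arithmetic when real is switched to float.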

Quote:
Original post by downward_spiral
I'm building a simulator which needs precision from 1000 km down to 1 cm, which unfortunately seems to be just beyond what a standard float can represent.
I can't be bothered using sectors, and I'm willing to live with the memory cost of switching to doubles. I have two questions though:
1) How much speed is really lost from using doubles instead of floats?
2) There is a lot of data which doesn't need to be stored as doubles. If I pass floats into functions which take doubles, is there a performance hit from the conversion?


Look at CPU benchmarks. (Benchmax would show you SSE2 instruction speeds.)
It's actually compiler/CPU dependent. Often the only difference is memory usage.

If in doubt, remember: doubles mean round circles.

For your second question, the easy half-day answer is: test it. I remember Charles Bloom complaining about his compiler adding an unnecessary load/store when the value should have stayed in a register.
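A crude harness for that test might look like this (my own sketch; substitute your real simulation kernel and build with your actual compiler flags before trusting the numbers):

#include <cstdio>
#include <ctime>

// The same arithmetic in both precisions; the 'volatile' sinks below
// keep the compiler from optimizing the loops away entirely.
template <typename T>
static T burn(int n) {
    T acc = T(0);
    for (int i = 1; i <= n; ++i)
        acc += T(1) / T(i);
    return acc;
}

int main() {
    const int N = 50000000;

    std::clock_t t0 = std::clock();
    volatile float  f = burn<float>(N);
    std::clock_t t1 = std::clock();
    volatile double d = burn<double>(N);
    std::clock_t t2 = std::clock();

    printf("float : %.2f s\n", double(t1 - t0) / CLOCKS_PER_SEC);
    printf("double: %.2f s\n", double(t2 - t1) / CLOCKS_PER_SEC);
    (void)f; (void)d;
    return 0;
}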

Quote:
Original post by ph33r
Quote:
Original post by kSquared
Remember DOPE: (D)on't (O)ptimize (P)rematurely, (E)ver! Get it correct before you get it fast, because most of the time it's fast enough, and the times when it isn't, it's still better to be right than to be fast.


I disagree in the original poster's context.

In games it is typical that numeric computation accounts for around 90% of the CPU budget, spent on scene graph traversal, physics simulation, collision, and path finding, while the other 10% is spread thinly across core game logic. In simulations the share taken by numeric computation may be even higher.

It is imperative that you spend a good amount of time optimizing and designing for speed early. Otherwise, if you don't pick the right data structures and formats, it will be difficult to refactor those core structures for optimization at the end.

Of course it's a good idea to take performance into account in your design. But what you described isn't premature optimization. Premature optimization is when you design for speed without having any idea of how fast it will be, how fast it needs to be, or how fast it should be.

Quote:
Original post by ph33r
Otherwise, if you don't pick the right data structures and formats, it will be difficult to refactor those core structures for optimization at the end.


This would not be a problem if designed properly (use interfaces, typedefs, etc).
