fuzzymoochicken

Are flops a "worthless metric" in relation to 3D physics calculations?



Someone I know claims that floating-point operations per second (FLOPS) have no bearing at all on the performance of real-time 3D physics calculations and are therefore a "worthless metric".

How true is his claim? Do flops matter at all?



So would ~25 GFLOPS from a CPU be kind of irrelevant these days?


Irrelevant in what regard? That is around the performance of many current CPUs. You're missing the point: it depends entirely on the application. 10 FLOPS is perfectly fine for a calculator, while 25 GFLOPS would cripple a supercomputer running a weather simulation. A 25 GFLOPS CPU would run any modern game without issue, even with tasks running in the background.
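
As a rough back-of-envelope sketch of that point (every number below is a made-up ballpark assumption, not a measurement), here is how little of a 25 GFLOPS budget a simple rigid-body integration step might need:

#include <cstdio>

int main()
{
    // Hypothetical workload: integrate 10,000 rigid bodies at 60 Hz,
    // assuming roughly 200 floating-point operations per body per step.
    const double bodies       = 10000.0;
    const double flopsPerBody = 200.0;
    const double stepsPerSec  = 60.0;
    const double cpuPeak      = 25.0e9;   // the 25 GFLOPS figure discussed above

    const double needed = bodies * flopsPerBody * stepsPerSec; // ~120 MFLOPS
    std::printf("physics needs ~%.0f MFLOPS, about %.2f%% of peak\n",
                needed / 1.0e6, 100.0 * needed / cpuPeak);
    return 0;
}

Under those assumptions a game's physics tick uses a fraction of a percent of the theoretical peak, while a weather simulation over a planet-sized grid would need many orders of magnitude more, which is exactly why the requirement depends on the application.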


So would ~25 GFLOPS from a CPU be kind of irrelevant these days?

No, that's not what I said. You need to look at more than the FLOPS rating to guess performance, and better than guessing is measuring actual performance. What matters is how many GFLOPS your particular bit of code can actually achieve on the hardware, which depends on the hardware architecture, the software architecture, the theoretical FLOPS rating of the CPU, the memory bandwidth, the cache architecture, etc...

A 25 GFLOPS CPU would run any modern game without issue, even with tasks running in the background.

Well, that depends. If the architecture has been completely changed in order to achieve that FLOPS score, then no modern game will run well on it.
e.g. there are GPUs now that easily hit the 500 GFLOPS range, but if you ran single-threaded code like an old video game on one, you'd be lucky to reach about 1 GFLOPS... or the PS3's CELL CPU had well over 200 GFLOPS in theory, but in practice most games probably got closer to 50 GFLOPS out of it, simply because real-world code performs very differently from the theoretical maximum the CPU can do. Real programs have to spend time doing other boring things, like moving memory around.

...so the hardware's theoretical rating matters less than what your specific software can make that hardware do.
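
A minimal sketch of what "measuring actual performance" can look like, assuming a plain single-threaded loop (the 25 GFLOPS "peak" below is just the figure from this thread, not a real rating for any particular CPU):

#include <chrono>
#include <cstdio>
#include <vector>

int main()
{
    const std::size_t n = std::size_t(1) << 24;  // ~16 million elements
    std::vector<float> a(n, 1.0f), b(n, 2.0f);

    const auto start = std::chrono::steady_clock::now();
    float sum = 0.0f;
    for (std::size_t i = 0; i < n; ++i)
        sum += a[i] * b[i];                      // 2 FLOPs per iteration (mul + add)
    const auto end = std::chrono::steady_clock::now();

    const double seconds     = std::chrono::duration<double>(end - start).count();
    const double gflops      = 2.0 * n / seconds / 1.0e9;
    const double assumedPeak = 25.0;             // hypothetical theoretical rating

    std::printf("sum=%f, achieved %.2f GFLOPS vs an assumed peak of %.1f\n",
                sum, gflops, assumedPeak);
    return 0;
}

On typical hardware a naive loop like this lands far below the theoretical number, because the serial dependency on sum and the trips to main memory dominate; it only approaches peak once the code is vectorised, unrolled, and kept inside the cache.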

Also, 25 GFLOPS isn't that much any more. A good laptop from two years ago will have a 25 GFLOPS CPU :D



Also, 25 GFLOPS isn't that much any more. A good laptop from two years ago will have a 25 GFLOPS CPU :D


I'm running an AMD FX-8150 (~21 GFLOPS) and can play most games with ease. The reason is that my GPU is far more powerful and most modern games are not as CPU-bound as earlier games were. So what I posted was really in line with your point: by itself, it is a pointless metric.


It's a pointless metric because it doesn't really give any useful information about performance. It's like trying to compare cars by the RPM their engines spin at: one might sound better and yet be a much less useful car because of other bottlenecks or design problems.
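
As a hardware-dependent sketch of why the raw number misleads (an illustration, not a rigorous benchmark): both loops below perform exactly the same floating-point additions, yet the cache-hostile one usually takes several times longer.

#include <chrono>
#include <cstdio>
#include <vector>

// Sum every element exactly once, either sequentially (stride == 1) or in a
// strided order that defeats the cache and the hardware prefetcher.
static double timedSum(const std::vector<float>& data, std::size_t stride)
{
    const auto start = std::chrono::steady_clock::now();
    float sum = 0.0f;
    for (std::size_t offset = 0; offset < stride; ++offset)
        for (std::size_t i = offset; i < data.size(); i += stride)
            sum += data[i];
    const auto end = std::chrono::steady_clock::now();
    std::printf("stride %zu: sum=%f\n", stride, sum);
    return std::chrono::duration<double>(end - start).count();
}

int main()
{
    std::vector<float> data(std::size_t(1) << 26, 1.0f);  // roughly 256 MB
    const double sequential = timedSum(data, 1);
    const double strided    = timedSum(data, 4096);
    std::printf("sequential %.3fs vs strided %.3fs for the same FLOP count\n",
                sequential, strided);
    return 0;
}

Same arithmetic, very different wall-clock time, which is the RPM problem in miniature: the FLOP count alone tells you nothing about the bottleneck that actually limits the program.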
