eating potatoe


6 replies to this topic

#1 RoundPotato   Members   -  Reputation: 126


Posted 05 August 2014 - 02:25 PM





#2 SeanMiddleditch   Members   -  Reputation: 7170


Posted 05 August 2014 - 03:08 PM

Such statements need a lot of context to make sense of. In general, yes, floats are faster on the GPU. Ints can be faster on the CPU, but the gap is much smaller today than it used to be. However, an operation that is naturally a floating-point operation is usually much, much faster on the CPU when done with floats than when contorted into integer form with a bunch of specialized math.

The overwhelming rule of thumb with performance advice is that it's all lies. What people learned 5 years ago is _wrong_ today. What I just told you above will probably be _wrong_ soon (if not already). If you want to know which is faster _for your target platforms_ (you have no reason to care about other platforms), try both, profile them, and see which performs better.

That said, given how close the performance is, and given that you're not exactly writing the next Crytek or in a position where squeezing out every last tenth of a percent of performance matters, just use floats where they seem appropriate and ints where they seem appropriate. The speed differences are so minor in most cases that it's a complete waste of time to contort naturally floating-point operations into integer math or vice versa.

#3 frob   Moderators   -  Reputation: 22731


Posted 05 August 2014 - 03:22 PM

What is the reason behind the question?  What problem are you trying to solve?

 

Yes, the GPU can be used for processing generally. General-purpose GPU programming goes by the rather utilitarian acronym GPGPU.

 

Basically you are making a tradeoff. You give up the benefits of the CPU, which is very versatile, in exchange for a massively parallel processing system designed for dense grids of data.

 

This is why things like bitcoin miners love running code on the GPU. They get away from a 6-processor or 8-processor system that has gigabytes of memory and other resources, and exchange it for thousands of tiny processors. Their processing task is small but needs to be repeated on an enormous collection of data. They can load all the data up into a texture and run a series of shaders to get the desired results.

 

 

Crossing the boundary between the systems is relatively expensive.

 

So in order to help give you good information, why are you asking the questions about integer and floating point performance?


Check out my book, Game Development with Unity, aimed at beginners who want to build fun games fast.

Also check out my personal website at bryanwagstaff.com, where I write about assorted stuff.


#4 RoundPotato   Members   -  Reputation: 126


Posted 05 August 2014 - 04:01 PM

 

Edited by RoundPotato, 23 August 2014 - 05:37 PM.


#5 Aardvajk   Crossbones+   -  Reputation: 6201


Posted 06 August 2014 - 03:10 AM

Short answer: Yes, under all circumstances I am aware of, shaders run on the GPU.

 

Longer answer: Maybe there are some obscure debugging setups where the shaders can be run on the CPU, but I find this unlikely in practice, as it would take a very long time to render a screenful of pixels if the pixel shader had to run on the CPU. The GPU supports massive parallelisation which cannot be matched on the CPU, i.e. it can run the same shader hundreds of times simultaneously.
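To make the "same shader, many times at once" point concrete, here is a crude CPU analogue using a handful of threads. All names are hypothetical; a GPU does this with thousands of hardware lanes, not four software threads, which is exactly why software rendering is so slow.

```cpp
#include <algorithm>
#include <thread>
#include <vector>

// Toy "pixel shader": invert one pixel value.
static unsigned char shade(unsigned char p) { return 255 - p; }

// Split the pixel buffer into slices and run the same shader function on
// each slice in a separate thread. This mimics, at a tiny scale, how a
// GPU runs one shader program across many pixels in parallel.
void shade_parallel(std::vector<unsigned char>& pixels, unsigned nthreads = 4) {
    std::vector<std::thread> pool;
    size_t chunk = (pixels.size() + nthreads - 1) / nthreads;
    for (unsigned t = 0; t < nthreads; ++t) {
        size_t begin = t * chunk;
        size_t end = std::min(pixels.size(), begin + chunk);
        pool.emplace_back([&pixels, begin, end] {
            for (size_t i = begin; i < end; ++i)
                pixels[i] = shade(pixels[i]);
        });
    }
    for (auto& th : pool) th.join();
}
```

Each pixel is shaded independently, so the slices never touch each other's data and no locking is needed.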



#6 Ryan_001   Prime Members   -  Reputation: 1458


Posted 06 August 2014 - 04:11 AM

I was under the impression that on modern NVidia and ATI cards, most ops (both float and int) that don't use the special-function unit are single-cycle execution. 64-bit operations and divide, pow, trig, etc. take more, though. As far as Intel chips, I'm not too sure.



#7 frob   Moderators   -  Reputation: 22731


Posted 06 August 2014 - 10:44 AM

I was under the impression that on modern NVidia and ATI cards, most ops (both float and int) that don't use the special-function unit are single-cycle execution. 64-bit operations and divide, pow, trig, etc. take more, though. As far as Intel chips, I'm not too sure.

 

It is somewhat difficult to guarantee your customers are running those specific cards. 

 

Most broad-consumer games look for shader model 2.0 (2002) or 3.0 (2004). It is fairly rare for mainstream games to require more modern hardware.






