
Why are GPUs based on floating point operations?



#1 lomateron   Members   -  Reputation: 300


Posted 05 March 2013 - 03:12 AM

Everything I have done on the GPU doesn't need those extra bits, and I think GPUs could do more things, and support more, if they were based on unsigned integers.

 

For example, a texture used as an array could have a load that works the same way read() does in C++.

Or blending could support the bitwise operations: AND, OR, XOR.

The best consequence is that it would save a lot of GPU memory.

 

 

 



#2 SimonForsman   Crossbones+   -  Reputation: 5761


Posted 05 March 2013 - 03:39 AM

Quote (lomateron, post #1):
Everything I have done on the GPU doesn't need those extra bits, and I think GPUs could do more things, and support more, if they were based on unsigned integers. For example, a texture used as an array could have a load that works the same way read() does in C++. Or blending could support the bitwise operations: AND, OR, XOR. The best consequence is that it would save a lot of GPU memory.

 

GPUs are based on floating point operations because that is what most 3D graphics work requires. It is possible to squeeze an almost insane number of shader units onto a small chip simply because each shader unit is very basic. Support for integer operations would have to be added on top of the floating point support (it cannot replace it), making each unit larger and more expensive, and thus reducing the number of units you can fit on a single GPU. (The GeForce GTX 690 has 2x1536 shader units; that number would not be achievable if each unit were significantly bigger than it is.)


Edited by SimonForsman, 05 March 2013 - 03:41 AM.

I don't suffer from insanity, I'm enjoying every minute of it.
The voices in my head may not be real, but they have some good ideas!

#3 Hodgman   Moderators   -  Reputation: 27585


Posted 05 March 2013 - 04:33 AM

DX10+ GPUs do support both integer and float operations.
Earlier GPUs gave more precedence to float (and fixed-point pretending to be float) because those are more useful for implementing "hardware transform and lighting" pipelines.

#4 NightCreature83   Crossbones+   -  Reputation: 2670


Posted 05 March 2013 - 04:43 AM

Quote (SimonForsman, post #2):
GPUs are based on floating point operations because that is what most 3D graphics work requires. It is possible to squeeze an almost insane number of shader units onto a small chip simply because each shader unit is very basic. Support for integer operations would have to be added on top of the floating point support (it cannot replace it), making each unit larger and more expensive, and thus reducing the number of units you can fit on a single GPU.

As of SM4 (DX10 and above), GPUs also support integer instructions. And while you could express those blend operations with XOR, AND, and OR, it is generally not done that way; blending is implemented with arithmetic operations and has been for a long time. These equations were first implemented in fixed-function hardware and accessed by setting blend functions through the API; nowadays we just specify most of these operations in shader code, since we can do more than the fixed set of things FF hardware allowed. A sketch of the fixed-function route is below.
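As a minimal sketch of what "setting blend functions in the API" looks like (standard D3D11 structures and enums; the device pointer and variable names are assumed for illustration), this configures the classic arithmetic blend out = src*srcAlpha + dst*(1 - srcAlpha):

#include <d3d11.h>

// Fixed-function alpha blending: the output merger evaluates the blend
// equation in hardware; no bitwise operations involved.
D3D11_BLEND_DESC blendDesc = {};
blendDesc.RenderTarget[0].BlendEnable           = TRUE;
blendDesc.RenderTarget[0].SrcBlend              = D3D11_BLEND_SRC_ALPHA;
blendDesc.RenderTarget[0].DestBlend             = D3D11_BLEND_INV_SRC_ALPHA;
blendDesc.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
blendDesc.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ZERO;
blendDesc.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

ID3D11BlendState* blendState = nullptr;
device->CreateBlendState(&blendDesc, &blendState);   // 'device' is an assumed ID3D11Device*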


 


Worked on titles: CMR:DiRT2, DiRT 3, DiRT: Showdown, GRID 2, Mad Max

#5 phantom   Moderators   -  Reputation: 6786


Posted 05 March 2013 - 04:45 AM

Also, DX11.1-level GPUs have support for logical operations when blending into the frame buffer (this includes all of AMD's GCN architecture; I'm not sure about NV's support level, as they have done their usual 'we don't support it so D3D11.1 doesn't matter' thing).
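A minimal sketch of how that is exposed in D3D11.1 (standard d3d11_1.h structures and enums; the device pointer is assumed, and note that logic ops apply to UINT render-target formats):

#include <d3d11_1.h>

// Logic-op blending: dst = src XOR dst, instead of an arithmetic blend equation.
D3D11_BLEND_DESC1 blendDesc = {};
blendDesc.RenderTarget[0].LogicOpEnable         = TRUE;
blendDesc.RenderTarget[0].LogicOp               = D3D11_LOGIC_OP_XOR;
blendDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

ID3D11BlendState1* logicBlendState = nullptr;
device1->CreateBlendState1(&blendDesc, &logicBlendState);   // 'device1' is an assumed ID3D11Device1*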

#6 NightCreature83   Crossbones+   -  Reputation: 2670


Posted 05 March 2013 - 04:48 AM

Quote (phantom, post #5):
Also, DX11.1-level GPUs have support for logical operations when blending into the frame buffer (this includes all of AMD's GCN architecture; I'm not sure about NV's support level).

Yeah, this seems to be a vendor-specific thing: historically AMD doesn't really care about GL and NVIDIA doesn't really care about D3D. It's a shame.


Worked on titles: CMR:DiRT2, DiRT 3, DiRT: Showdown, GRID 2, Mad Max

#7 Hodgman   Moderators   -  Reputation: 27585


Posted 05 March 2013 - 04:59 AM

Quote (phantom, post #5):
Also, DX11.1-level GPUs have support for logical operations when blending into the frame buffer.

This is one of those features that's been supported on and off for ages. Some of nVidia's DX9-level cards actually support this (only through GL though)!
I wish it had become a standard feature back then...

#8 phantom   Moderators   -  Reputation: 6786


Posted 05 March 2013 - 05:40 AM

Yeah, it's a shame it's taken this long to become part of a standard of some sort :|

#9 lomateron   Members   -  Reputation: 300


Posted 05 March 2013 - 06:32 AM

Working in 3D space with floats wastes memory too.

Your simulation has a set scale. If I have a point and I move it somewhere using a vector:

- With a float, if the value is too small it gets lost, and the same happens if it is too big. (So the ability to represent very big or very small values costs bits, but those values are useless.)

- With an integer, you understand your scale more easily, so you know your limits, can adjust the scale of the simulation, and don't waste memory.



#10 Hodgman   Moderators   -  Reputation: 27585


Posted 05 March 2013 - 06:35 AM

For vertices and textures, we do often use integer formats to save memory though... They're only expanded to float in order to be processed in registers.

While they're in memory, they can be compact integers.

This doesn't even require a modern GPU with integer support; you can do it in DX9 — see the sketch below.
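A minimal sketch of that in DX9 (the declaration types are standard D3D9; the particular layout and the device pointer are assumed for illustration):

#include <d3d9.h>

// Vertex data stored as compact integers in memory; the GPU expands each
// attribute to float as it is fetched into registers.
D3DVERTEXELEMENT9 elements[] =
{
    // Position as 4 signed 16-bit normalized ints (8 bytes instead of 16 for a float4).
    { 0, 0, D3DDECLTYPE_SHORT4N, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
    // Normal as 4 unsigned bytes mapped to [0,1] (4 bytes instead of 12 for a float3).
    { 0, 8, D3DDECLTYPE_UBYTE4N, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   0 },
    D3DDECL_END()
};

IDirect3DVertexDeclaration9* decl = nullptr;
device->CreateVertexDeclaration(elements, &decl);   // 'device' is an assumed IDirect3DDevice9*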


Edited by Hodgman, 05 March 2013 - 06:55 AM.


#11 lomateron   Members   -  Reputation: 300


Posted 05 March 2013 - 06:49 AM

So, in general, if the GPU had the same support for ints as it has for floats, ints would be used more — and that's why I asked the question.

It looks like the reason GPUs use floats has to do with the way GPUs started, doesn't it?



#12 Hodgman   Moderators   -  Reputation: 27585


Posted 05 March 2013 - 07:18 AM

The way it used to work:

* We store ints and floats (as necessary) in RAM.

* The processor implements float registers only to save complexity, so shader math must be float.

* Int data is converted to float when loaded, and back to int when stored.

 

The way it works now:

* As above, but we also have int registers, so we can now do integer math (bitwise operations, etc.) in shaders — see the sketch below.
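A minimal sketch of that integer math in SM4+ HLSL (the buffer name and packing scheme are made up for illustration):

Buffer<uint> packedColours : register(t0);   // e.g. 8:8:8:8 colours packed into uints

float4 UnpackColour(uint index)
{
    uint c = packedColours[index];
    // Bitwise shifts and masks run on the integer ALU, not as float math.
    uint r = (c >> 0)  & 0xFF;
    uint g = (c >> 8)  & 0xFF;
    uint b = (c >> 16) & 0xFF;
    uint a = (c >> 24) & 0xFF;
    return float4(r, g, b, a) / 255.0;
}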



#13 lomateron   Members   -  Reputation: 300


Posted 05 March 2013 - 07:29 AM

"to save complexity"

Is that lose of complexity caused by the use of less bits?(a 16bit int uses 16 bits, a 16 bit float uses 11 bits--> fraction part with sign--->the important part) 


Edited by lomateron, 05 March 2013 - 07:54 AM.


#14 Hodgman   Moderators   -  Reputation: 27585


Posted 05 March 2013 - 07:51 AM

No, it's more of a RISC vs CISC issue.

If you only have one data type, then your processor can get away with a smaller instruction set, which means you can build it out of fewer transistors, which means you can fit more processors onto a chip and/or produce them cheaper and smaller.

 

16-bit floats use 16 bits, otherwise they wouldn't be 16-bit floats!
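For reference, a minimal sketch (plain C++; the function name is made up) showing that every one of the 16 bits of an IEEE 754 binary16 "half" is used — 1 sign bit, 5 exponent bits, 10 mantissa bits:

#include <cstdint>
#include <cstdio>

// Split a binary16 bit pattern into its fields.
void DumpHalf(uint16_t h)
{
    unsigned sign     = (h >> 15) & 0x1;
    unsigned exponent = (h >> 10) & 0x1F;
    unsigned mantissa =  h        & 0x3FF;
    std::printf("sign=%u exponent=%u mantissa=%u\n", sign, exponent, mantissa);
}

int main()
{
    DumpHalf(0x3C00);   // 1.0 in binary16: sign=0, exponent=15 (bias 15), mantissa=0
    return 0;
}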



#15 Eidetic Ex   Members   -  Reputation: 133


Posted 05 March 2013 - 10:07 AM

I think it has to do with the things we do to render a game. In order to render an object we need a way to represent its position on the screen, but when we write our code we don't know what the resolution of the user's screen will be, so it makes sense to represent the screen in a common coordinate system and abstract the screen size away. We also need a way to represent the game world's coordinates, which can cover any range your game calls for. Once we have a coordinate system, we need ways to move things around, rotate them, scale them, skew them, and apply whatever other transforms you may come up with. Once you examine all of these requirements and have a rudimentary understanding of vectors and matrices, you realize that's really the only way to keep things general enough to cover the majority of use cases while still being efficient enough to process at high speed.

 

I remember my days of stubbornly clinging to integer coordinate systems; I came up with all sorts of wild things to handle transforms in arbitrary order. I facepalmed hard when I finally accepted a challenge to learn how to work with vectors and matrices: tons of code was replaced by just a couple hundred lines that could be reused over and over again.



#16 mhagain   Crossbones+   -  Reputation: 7422


Posted 05 March 2013 - 11:21 AM

I fail to see where the OP is coming from here.

 

GPUs use floating point because floating point is well established as the most suitable solution for what a GPU needs to do. The whole "wasting memory" angle is completely bogus; memory usage is a very poor arbiter of performance, and it's sometimes even the case that burning a little extra memory will get you an order of magnitude more performance. That's not "wasting"; it's a tradeoff (and a damn good one too). It's not the 1970s any more.

 

Put it this way: decades of research backed by billions of dollars have established that this is the way that works, versus Random Dude on the internet with outdated theories about "wasting memory". I don't think we need an explanation for why things are the way they are with this one.


It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.


#17 D_Tr   Members   -  Reputation: 362


Posted 05 March 2013 - 04:14 PM

Floating point hardware is just inevitable in today's GPUs, which offer such a great degree of programmability. The transistor budget is there, and it seems impossible to me to make robust physics simulations, or sophisticated shaders in general, with integer hardware without significant limitations. To overcome those limitations you would find yourself inventing hacks to approximate floating point behaviour, and you would end up with slower and less robust code. Suddenly floating point hardware would seem like a no-brainer...



#18 Hodgman   Moderators   -  Reputation: 27585


Posted 05 March 2013 - 04:34 PM

Quote (mhagain, post #16):
The whole "wasting memory" angle is completely bogus; memory usage is a very poor arbiter of performance, and it's sometimes even the case that burning a little extra memory will get you an order of magnitude more performance. That's not "wasting"; it's a tradeoff (and a damn good one too). It's not the 1970s any more.

It's bogus because the central assertion is false -- GPUs aren't based around storing floats in RAM. Most of your textures should be 8-bit int, and most of your vertices 16-bit int or half-float.
You will have terrible performance if you convert all your textures and vertex attributes to float, because bandwidth becomes the bottleneck.
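As a rough worked example (assuming a 1920x1080 target): storing RGBA as 8-bit ints takes 1920 x 1080 x 4 bytes, about 8 MB, while storing the same data as 32-bit floats per channel takes about 32 MB — four times the data to move on every read and write.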

#19 TheChubu   Crossbones+   -  Reputation: 3698


Posted 05 March 2013 - 05:44 PM

Hm, is half-float widely supported on GPUs? If so, what DX/GL version?

 

Last I heard it was some nVidia extension.


"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

 

My journals: dustArtemis ECS framework and Making a Terrain Generator


#20 Hodgman   Moderators   -  Reputation: 27585


Posted 05 March 2013 - 06:30 PM

Half float vertex and pixel formats should be standard as of DX10 and GL3.

In DX9 they're supported from roughly GeForce 5(FX) and Radeon X800 onwards.

In GL2, support is a bit harder to come by for some reason.

 

If you want 16-bit vertex attributes and half-float isn't supported, though, you can always use 16-bit integers plus a scaling value in the shader — a sketch is below.
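A minimal sketch of that fallback in HLSL (the constant-buffer names are made up; the attribute is assumed to be stored in a 16-bit SNORM format, so it arrives in the shader as floats in [-1, 1]):

cbuffer PerMesh : register(b0)
{
    float3 positionScale;    // half-extent of the mesh's bounding box
    float3 positionOffset;   // centre of the mesh's bounding box
};

float3 DecodePosition(float3 packedPosition)
{
    // Undo the quantisation: map [-1, 1] back to the mesh's object-space range.
    return packedPosition * positionScale + positionOffset;
}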


Edited by Hodgman, 05 March 2013 - 06:32 PM.




