Why are GPUs based on floating point operations?

19 comments, last by TheChubu 11 years, 1 month ago

Everything I have done on the GPU doesn't need those extra bits, and I think GPUs could do more things and would have support for more if they were based on unsigned integers.

For example, a texture that is an array could have a load that does the same thing read() from C++ does.

Or blending could have the bitwise operations: and, or, xor.

The best consequence is that it would save lots of GPU memory.

GPUs are based on floating point operations because that is what most 3D graphics operations require. It is possible to squeeze an almost insane number of shader units onto a small chip simply because each shader unit is very basic. Support for integer operations would have to be added on top of the floating point support (it cannot replace it), making each unit larger and more expensive, and thus reducing the number of units you can fit on a single GPU. (The GeForce GTX 690 has 2x1536 shader units; that number would not be achievable if each unit were significantly bigger than it is.)

[size="1"]I don't suffer from insanity, I'm enjoying every minute of it.
The voices in my head may not be real, but they have some good ideas!

DX10+ GPUs do support both integer and float operations.
Earlier GPUs gave more precedence to float (and fixed-point pretending to be float) because those are more useful for implementing "hardware transform and lighting" pipelines.
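
To make that concrete, here is a minimal sketch (assuming D3D11, an existing ID3D11Device pointer, and with error handling omitted; the function name is made up) of creating a texture with a plain unsigned-integer format. A Shader Model 4+ shader can declare it as Texture2D<uint> and fetch texels exactly with Load(int3(x, y, mip)), much like indexing an array in C++:

    // Sketch: an unsigned-integer texture whose texels a SM4+ shader can read back
    // exactly, one at a time, via Texture2D<uint>.Load() - no floats involved.
    // Assumes an existing ID3D11Device*; error handling omitted for brevity.
    #include <d3d11.h>

    ID3D11Texture2D* CreateUintTexture(ID3D11Device* device, UINT width, UINT height)
    {
        D3D11_TEXTURE2D_DESC desc = {};
        desc.Width            = width;
        desc.Height           = height;
        desc.MipLevels        = 1;
        desc.ArraySize        = 1;
        desc.Format           = DXGI_FORMAT_R32_UINT;  // plain 32-bit unsigned integers
        desc.SampleDesc.Count = 1;
        desc.Usage            = D3D11_USAGE_DEFAULT;
        desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;

        ID3D11Texture2D* texture = nullptr;
        device->CreateTexture2D(&desc, nullptr, &texture);
        return texture;
    }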

As of SM4 (i.e. DX10 and above), GPUs also support integer instructions. While you could do all of those operations with XOR, AND and OR, blending is generally not done that way; instead it is implemented with arithmetic operations, and has been for a long time. These equations were first implemented in fixed-function hardware and accessed by setting blend functions through the API. Nowadays, however, we just write most of these operations in shader code, since that lets us do more than the fixed set of things the FF hardware allowed.
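
For reference, the classic fixed-function alpha-blend equation written out as plain arithmetic (just a sketch of what the blend stage computes per colour channel; the function name is illustrative):

    // Sketch: the standard "source over" alpha blend computed by fixed-function
    // blend hardware, per colour channel, with all values in [0, 1]:
    //   result = src * srcAlpha + dst * (1 - srcAlpha)
    float BlendSourceOver(float src, float dst, float srcAlpha)
    {
        return src * srcAlpha + dst * (1.0f - srcAlpha);
    }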


Worked on titles: CMR:DiRT2, DiRT 3, DiRT: Showdown, GRID 2, theHunter, theHunter: Primal, Mad Max, Watch Dogs: Legion

Also, DX11.1-level GPUs support logical operations when blending into the framebuffer (this includes all of AMD's GCN architecture; I'm not sure about NV's support level, as they have done their usual 'we don't support it, so D3D11.1 doesn't matter' thing).
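
As a rough sketch of how the D3D11.1 API exposes this (assuming a device that supports ID3D11Device1 and output-merger logic ops; untested, error handling omitted):

    // Sketch: enabling a bitwise logic op (XOR) in the output-merger stage via
    // the D3D11.1 API. Assumes an ID3D11Device1* and hardware that reports
    // support for output-merger logic ops; error handling omitted.
    #include <d3d11_1.h>

    ID3D11BlendState1* CreateXorBlendState(ID3D11Device1* device1)
    {
        D3D11_BLEND_DESC1 desc = {};
        D3D11_RENDER_TARGET_BLEND_DESC1& rt = desc.RenderTarget[0];
        rt.BlendEnable           = FALSE;                // the logic op replaces arithmetic blending
        rt.LogicOpEnable         = TRUE;
        rt.LogicOp               = D3D11_LOGIC_OP_XOR;   // dst = src XOR dst
        rt.SrcBlend              = D3D11_BLEND_ONE;      // blend fields still need valid values
        rt.DestBlend             = D3D11_BLEND_ZERO;
        rt.BlendOp               = D3D11_BLEND_OP_ADD;
        rt.SrcBlendAlpha         = D3D11_BLEND_ONE;
        rt.DestBlendAlpha        = D3D11_BLEND_ZERO;
        rt.BlendOpAlpha          = D3D11_BLEND_OP_ADD;
        rt.RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

        ID3D11BlendState1* state = nullptr;
        device1->CreateBlendState1(&desc, &state);
        return state;
    }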

Yeah, this seems to be a vendor-specific thing where AMD historically doesn't really care about GL and NVIDIA doesn't really care about D3D. It's a shame.

Worked on titles: CMR:DiRT2, DiRT 3, DiRT: Showdown, GRID 2, theHunter, theHunter: Primal, Mad Max, Watch Dogs: Legion

Also, DX11.1-level GPUs support logical operations when blending into the framebuffer

This is one of those features that's been supported on and off for ages. Some of nVidia's DX9-level cards actually support this (though only through GL)!
I wish it had become a standard feature back then...

Yeah, it's a shame it's taken this long to become part of a standard of some sort :|

Working in 3D space with floats wastes memory too.

You have set a scale for your simulation. If I have a point and I move it somewhere using a vector:

- Using a float, if the value is too small it will be lost, and the same happens if it is too big (see the sketch below).

(So the ability to represent very big or very small values costs bits, but those values are useless to you.)

- Using an integer, you understand your scale more easily, so you know your limits, can adjust the scale of the simulation, and don't waste memory.
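
A minimal sketch of the precision issue being described, in plain C++ (values chosen purely for illustration):

    // Sketch: a 32-bit float has about 24 bits of mantissa, so a small step added
    // to a large coordinate can be rounded away, while a 32-bit unsigned integer
    // represents every unit step across its whole range.
    #include <cstdio>
    #include <cstdint>

    int main()
    {
        float bigF   = 16777216.0f;   // 2^24: floats stop representing every integer here
        float movedF = bigF + 1.0f;   // the +1 is rounded away
        std::printf("float:   %.1f + 1 = %.1f\n", bigF, movedF);                 // 16777216.0

        std::uint32_t bigI   = 16777216u;
        std::uint32_t movedI = bigI + 1u;                                        // exact
        std::printf("integer: %u + 1 = %u\n", (unsigned)bigI, (unsigned)movedI); // 16777217
        return 0;
    }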

For vertices and textures, though, we do often use integer formats to save memory... They're only expanded to float in order to be processed in registers.

While they're in memory, they can be compact integers.

This doesn't even require a modern GPU with integer support; you can do this in DX9.
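
As a rough sketch of the kind of saving being described (the particular packed types are just examples; they correspond to normalised integer vertex formats such as D3D9's SHORT4N / UBYTE4N / USHORT2N or the DXGI *_SNORM / *_UNORM formats):

    // Sketch: the same vertex stored as full floats versus as compact integers that
    // the GPU expands (normalised) to float only when reading them into registers.
    #include <cstdint>

    struct VertexFloat             // 8 floats = 32 bytes per vertex
    {
        float position[3];
        float normal[3];
        float uv[2];
    };

    struct VertexPacked            // 16 bytes per vertex: half the memory and bandwidth
    {
        std::int16_t  position[4]; // e.g. SHORT4N: normalised to [-1, 1], w unused
        std::uint8_t  normal[4];   // e.g. UBYTE4N: normals remapped from [-1, 1] to [0, 1]
        std::uint16_t uv[2];       // e.g. USHORT2N: normalised to [0, 1]
    };

    static_assert(sizeof(VertexFloat)  == 32, "full-float vertex");
    static_assert(sizeof(VertexPacked) == 16, "packed integer vertex");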

This topic is closed to new replies.
