lomateron

why are GPUs based on floating point operations?

20 posts in this topic

Everything I have done on the GPU doesn't need those extra bits, and I think GPUs could do more things, and support more, if they were based on unsigned integers.

 

For example, a texture that is an array could have a load that does the same thing read() does in C++.

Or blending could support the bitwise operations: AND, OR, XOR.

The best consequence is that it would save lots of GPU memory.
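For instance, the XOR blend I mean would just be this (a C++ sketch of the idea; blend_xor is an illustrative name, not a real API):

    #include <cstdint>

    // Combine the incoming fragment with the frame buffer using XOR instead
    // of arithmetic; per-bit, so per-channel for packed 8-bit channels.
    uint32_t blend_xor(uint32_t src_rgba8, uint32_t dst_rgba8)
    {
        return src_rgba8 ^ dst_rgba8;
    }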

 

 

 



GPUs are based on floating point operations since that is what most 3D graphics operations require. It is possible to squeeze an almost insane number of shader units onto a small chip precisely because each shader unit is very basic; support for integer operations would have to be added on top of the floating point support (it cannot replace it), making each unit larger and more expensive, and thus reducing the number of units you can fit on a single GPU. (The GeForce GTX 690 has 2x1536 shader units; that number would not be achievable if each unit were significantly bigger than it is.)

Edited by SimonForsman
DX10+ GPUs do support both integer and float operations.
Earlier GPUs gave precedence to float (and fixed-point pretending to be float) because floats are more useful for implementing "hardware transform and lighting" pipelines.
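For instance, integer support makes bit-exact packing possible, which float-only hardware could not guarantee, since a 32-bit float holds only 24 bits of integer precision. A C++ sketch (SM4-level shader languages have the same operators):

    #include <cstdint>

    // Pack two 16-bit values into one 32-bit word and unpack them exactly.
    uint32_t pack(uint16_t hi, uint16_t lo) { return ((uint32_t)hi << 16) | lo; }
    uint16_t unpack_hi(uint32_t v)          { return (uint16_t)(v >> 16); }
    uint16_t unpack_lo(uint32_t v)          { return (uint16_t)(v & 0xffff); }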


As of SM4 (i.e. DX10 and above), GPUs also support integer instructions. While you could do all of those blend operations with XOR, AND, and OR, they are generally not done that way; instead they are implemented with arithmetic operations, and have been for a long time. Those equations were first implemented in hardware and accessed by setting blend functions through the API; nowadays we just specify most of these operations in shader code, since we can do more than the fixed set of things fixed-function hardware allowed.
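As an illustration of the arithmetic blend equations meant here, standard alpha blending, sketched in C++ (in a shader this would be one lerp per channel):

    // result = src * srcAlpha + dst * (1 - srcAlpha), per channel.
    struct RGBA { float r, g, b, a; };

    RGBA alpha_blend(RGBA src, RGBA dst)
    {
        RGBA out;
        out.r = src.r * src.a + dst.r * (1.0f - src.a);
        out.g = src.g * src.a + dst.g * (1.0f - src.a);
        out.b = src.b * src.a + dst.b * (1.0f - src.a);
        out.a = src.a + dst.a * (1.0f - src.a);
        return out;
    }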


 

Also, DX11.1-level GPUs have support for logical operations when it comes to blending things together in the frame buffer (this includes all of AMD's GCN architecture; I'm not sure about NV's support level, as they have done their usual 'we don't support it, so D3D11.1 doesn't matter' thing)
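For reference, enabling a logical blend op through D3D11.1 looks roughly like this (a sketch; assumes you already have an ID3D11Device1 and that hardware support has been checked):

    #include <d3d11_1.h>

    // Create a blend state that XORs the fragment into the render target.
    ID3D11BlendState1* create_xor_blend(ID3D11Device1* device1)
    {
        D3D11_BLEND_DESC1 desc = {};
        desc.RenderTarget[0].LogicOpEnable = TRUE;
        desc.RenderTarget[0].LogicOp = D3D11_LOGIC_OP_XOR;
        desc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

        ID3D11BlendState1* blendState = nullptr;
        device1->CreateBlendState1(&desc, &blendState);  // check the HRESULT in real code
        return blendState;
    }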


Yeah, this seems to be a vendor-specific thing: historically AMD doesn't really care about GL and NVIDIA doesn't really care about D3D. It's a shame.


Also, DX11.1-level GPUs have support for logical operations when it comes to blending things together in the frame buffer

This is one of those features that has been supported on and off for ages. Some of NVIDIA's DX9-level cards actually support this (only through GL, though)!
I wish it had become a standard feature back then...

Working in 3D space with floats wastes memory too.

You have set a scale for your simulation. If I have a point and I move it to a place using a vector:

-using a float, if the value is too small it will be lost, and the same if it is too big (see the sketch below)

(so the ability to represent very big or very small values costs bits, but those values are useless to you)

-using an integer, you can understand your scale more easily, so you know your limits, can adjust the scale of the simulation, and don't waste memory
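A quick C++ demonstration of the precision loss being described (a minimal sketch):

    #include <cstdio>

    int main()
    {
        // Above 2^24 a 32-bit float can no longer represent every integer,
        // so a small movement is silently lost.
        float big = 16777216.0f;       // 2^24
        printf("%.1f\n", big + 1.0f);  // prints 16777216.0 - the +1 vanished

        // A 32-bit integer keeps uniform precision across its whole range.
        int ibig = 16777216;
        printf("%d\n", ibig + 1);      // prints 16777217
        return 0;
    }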


For vertices and textures, we do often use integer formats to save memory, though... They're only expanded to float in order to be processed in registers.

While they're in memory, they can be compact integers.

This doesn't even require a modern GPU with integer support; you can do this in DX9.

Edited by Hodgman
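As a sketch of what such a compact format looks like, here's a 16-bit signed-normalized (SNORM-style) round trip in C++ (exact rounding rules vary by API, so treat it as illustrative):

    #include <algorithm>
    #include <cstdint>

    // Encode a float in [-1, 1] as a 16-bit signed normalized integer,
    // the convention used by SNORM vertex formats.
    int16_t encode_snorm16(float v)
    {
        v = std::max(-1.0f, std::min(1.0f, v));
        return (int16_t)(v < 0.0f ? v * 32767.0f - 0.5f : v * 32767.0f + 0.5f);
    }

    // Decode back to float; the GPU does this for free on attribute fetch.
    float decode_snorm16(int16_t v)
    {
        return std::max(-1.0f, v / 32767.0f);
    }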

So then, in general, if the GPU had the same support for ints as it has for floats, ints would be used more, and that's why I ask that question.

It looks like the reason it uses floats has to do with the way GPUs started, doesn't it?


The way it used to work:

* We store ints and floats (as necessary) in RAM.

* The processor implements only float registers, to save complexity, so all shader math must be float.

* Int data is converted to float when loaded, and back to int when stored.

 

The way it works now:

* As above, but we also have int registers, so we can do integer math (bitwise operations, etc) in shaders now.


"to save complexity"

Is that lose of complexity caused by the use of less bits?(a 16bit int uses 16 bits, a 16 bit float uses 11 bits--> fraction part with sign--->the important part) 

Edited by lomateron

No, it's more of a RISC vs CISC issue.

If you only have one data type, then your processor can get away with a smaller instruction set, which means you can build it out of fewer transistors, which means you can fit more of them onto a chip and/or produce them cheaper/smaller.

 

16-bit floats use 16 bits, otherwise they wouldn't be 16-bit floats!
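To make the layout concrete: a 16-bit float stores 1 sign bit, 5 exponent bits and 10 mantissa bits, giving 11 significant bits once you count the implicit leading 1. A C++ sketch that decodes the normal-number case:

    #include <cmath>
    #include <cstdint>

    // Decode an IEEE 754 half: 1 sign bit, 5 exponent bits, 10 mantissa bits.
    // Normal numbers only; subnormals, infinities and NaNs omitted for brevity.
    float half_to_float(uint16_t h)
    {
        int sign     = (h >> 15) & 0x1;
        int exponent = (h >> 10) & 0x1f;
        int mantissa =  h        & 0x3ff;
        float value  = (1.0f + mantissa / 1024.0f) * std::ldexp(1.0f, exponent - 15);
        return sign ? -value : value;
    }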


I think it has to do with the things we do to render a game. In order to render an object we need a way to represent its position on the screen, but when we're writing our code we don't know the resolution of the user's screen, so it makes sense to represent that screen in a common coordinate system and abstract the screen size away. We also need a way to represent the game world's coordinates, which can cover any range your game calls for. Once we have a coordinate system, we need a way to move things around, rotate them, scale them, skew them, and apply whatever other transforms you may come up with. Once you examine all of these requirements and have a rudimentary understanding of vectors and matrices, you realize that's really the only way to keep things generalized enough to cover the majority of use cases while still being efficient enough to process at high speed.

I remember my days of stubbornly clinging to integer coordinate systems; I came up with all sorts of wild things to cover transforms in arbitrary order. I facepalmed hard when I finally accepted a challenge to learn how to work with vectors and matrices: tons of code was replaced by just a couple hundred lines that could be reused over and over again.
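The generalization being described, as a C++ sketch: one routine covers translation, rotation, scale and skew, composed in whatever order the matrix was built.

    struct Vec4 { float v[4]; };
    struct Mat4 { float m[4][4]; };  // row-major here; conventions vary

    // Apply a 4x4 transform to a homogeneous point.
    Vec4 transform(const Mat4& a, const Vec4& p)
    {
        Vec4 out = { { 0.0f, 0.0f, 0.0f, 0.0f } };
        for (int row = 0; row < 4; ++row)
            for (int col = 0; col < 4; ++col)
                out.v[row] += a.m[row][col] * p.v[col];
        return out;
    }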


Floating point hardware is simply inevitable in today's GPUs, which offer such a great degree of programmability. The transistor budget is there, and it seems impossible to me to build robust physics simulations, or generally sophisticated shaders, on integer hardware without important limitations. To overcome those limitations you would find yourself inventing hacks to approximate floating point behaviour, and you would end up with slower, less robust code. Suddenly floating point hardware would seem like a no-brainer...


The whole "wasting memory" angle is completely bogus; memory usage is a very poor arbiter of performance and it's sometimes even the case that burning a little extra memory will get you an order of magnitude more performance. That's not "wasting"; it's tradeoff (and a damn good one too). It's not the 1970s any more.

It's bogus because the central assertion is false -- GPUs aren't based around storing floats in RAM. Most of your textures should be 8bit int, and most of your vertices 16bit int or half-float.
You will have terrible performance if you convert all your textures and vertex attributes to float, due to bandwidth becoming a bottleneck.
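The bandwidth gap is easy to put numbers on (a quick C++ back-of-the-envelope):

    #include <cstdio>

    int main()
    {
        // One 1920x1080 texture, touched once per frame:
        const double pixels = 1920.0 * 1080.0;
        printf("RGBA8:   %5.1f MB\n", pixels * 4.0  / 1e6);  // ~ 8.3 MB
        printf("RGBA32F: %5.1f MB\n", pixels * 16.0 / 1e6);  // ~33.2 MB, 4x the traffic
        return 0;
    }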

Half-float vertex and pixel formats should be standard as of DX10 and GL3.

In DX9 they're supported from roughly the GeForce 5 (FX) and Radeon X800 onwards.

In GL2, support is a bit harder to come by for some reason.

If you want 16-bit vertex attributes and half-float isn't supported, though, you can always try 16-bit integers plus a scaling value in the shader (sketched below).

Edited by Hodgman
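A sketch of that fallback in C++ (the shader side is written as C++ too, just to show the math; 'bound' would come in as a shader constant):

    #include <cstdint>

    // CPU side: quantize a position within a known bound to a 16-bit integer.
    int16_t quantize(float v, float bound)
    {
        return (int16_t)(v / bound * 32767.0f);
    }

    // Shader side: rescale after the attribute is fetched.
    float dequantize(int16_t q, float bound)
    {
        return q / 32767.0f * bound;
    }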
