GPU precision

Started by Daishi · 5 comments, last by AndyTX 18 years, 4 months ago
I've been trying to find any information that I can on how GPUs (NVIDIA and ATI) handle floating point precision, because from what I gather, they don't follow the standard IEEE floating point specification. Could anyone give me some insight into how floating point precision is managed on GPUs, or point me to where I can find out more? Thanks.

I know only that which I know, but I do not know what I know.
Calculations on NV hardware can be run in 16-bit or 32-bit mode, depending on whether you use half or float types. IIRC the 32-bit mode is IEEE compliant, and the 16-bit mode is similar to IEEE. ATI always runs its calculations at 24 bits internally.

Floating point buffers are stored as IEEE floats at 32 bits per channel, or IEEE-ish floats at 16 bits per channel. (I think.)
SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.
I don't think any of them are completely IEEE correct. The many rules around NaN/INF/QNaN etc. tend to be glossed over a little bit [wink]

-- The rest is D3D-specific, but I'd imagine it'd still be fairly relevant --

Have a look at the D3DFORMAT enumeration page - it lists some basic information on how the data types are arranged - s10e5 and so on.
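For reference, s10e5 means 1 sign bit, 5 exponent bits (bias 15) and 10 mantissa bits - essentially the same layout as the IEEE-style half. Here's a rough, unofficial C++ sketch of decoding such a value, just to illustrate the packing rather than any particular API:

[code]
#include <cmath>
#include <cstdint>
#include <cstdio>

// Decode an s10e5 ("half") value into a 32-bit float.
// Layout: 1 sign bit, 5 exponent bits (bias 15), 10 mantissa bits.
float HalfToFloat(std::uint16_t h)
{
    int sign     = (h >> 15) & 0x1;
    int exponent = (h >> 10) & 0x1F;
    int mantissa =  h        & 0x3FF;

    float value;
    if (exponent == 0)                 // zero or denormal: no implicit leading 1
        value = std::ldexp(static_cast<float>(mantissa), -24);
    else if (exponent == 31)           // all-ones exponent: infinity or NaN
        value = (mantissa == 0) ? INFINITY : NAN;
    else                               // normalized: implicit leading 1
        value = std::ldexp(static_cast<float>(mantissa | 0x400), exponent - 25);

    return sign ? -value : value;
}

int main()
{
    std::printf("%g\n", HalfToFloat(0x3C00)); // 1.0
    std::printf("%g\n", HalfToFloat(0x0001)); // smallest denormal, ~5.96e-8
    std::printf("%g\n", HalfToFloat(0x7C00)); // +infinity
}
[/code]

The encode direction is the fiddly part (rounding a 24-bit mantissa down to 11 bits), which is exactly where the "IEEE-ish" caveats come in.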

As a future note, this is one of the things that they're trying to define more strictly in D3D10 - so it's hopefully going to get better rather than worse [smile]

hth
Jack

Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]

Check out NV_half_float for some info on NVIDIA's 16-bit implementation. GPU Gems 2 also has a chapter on GPGPU stuff that covers 16-bit and 32-bit floats for both NVIDIA and ATI.

Regarding 32-bit floats, I think that NVIDIA's support specials and perhaps denorms, but ATI's do not. In 16-bit, I'm SURE that NVIDIA's support both denorms and specials, but again I don't think that ATI's do.
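To put rough numbers on what denorm support buys you in the 16-bit format: the smallest normalized half is 2^-14 and denormals extend that down to 2^-24, so hardware that flushes denormals to zero simply loses everything in between. A quick sketch of those boundary values (plain C++, nothing vendor-specific):

[code]
#include <cmath>
#include <cstdio>

int main()
{
    // s10e5 half precision: 5 exponent bits (bias 15), 10 mantissa bits.
    double smallestNormal   = std::ldexp(1.0, -14);  // ~6.10e-5
    double smallestDenormal = std::ldexp(1.0, -24);  // ~5.96e-8

    std::printf("smallest normalized half: %g\n", smallestNormal);
    std::printf("smallest denormal half:   %g\n", smallestDenormal);

    // Hardware that flushes denormals treats every magnitude below
    // smallestNormal as exactly zero, so those ~3 decades of range vanish.
}
[/code]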

Of course my ATI info hasn't been updated to the X1x00 series, so you might want to check the programming guide for those.
Yes, NV cards support specials in both half and float; not sure about denorms though. I guess my question, more specifically, is whether the actual floating point math is done differently than CPU floating point math. Is there a difference in rounding that might cause different results between the two processors?

I know only that which I know, but I do not know what I know.
Quote: Original post by Daishi
I guess my question, more specifically, is whether the actual floating point math is done differently than CPU floating point math. Is there a difference in rounding that might cause different results between the two processors?

Yes, from what I've read on the subject this is the *big* sticking point. The CPU and GPU both do floating point operations, but you can't guarantee that the results will be the same for the same operation(s).
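A tiny CPU-side illustration of the general effect - nothing GPU-specific, just showing that the point at which an intermediate result gets rounded can flip the final answer (the volatile is only there to stop the compiler keeping the sum in a wider register):

[code]
#include <cstdio>

int main()
{
    // At 1.0e8 the gap between adjacent 32-bit floats is 8, so adding 1.0f
    // and rounding straight back to float loses the 1 completely.
    float a = 1.0e8f;
    float b = 1.0f;

    volatile float sum32 = a + b;                       // forced to round to 32 bits
    double         sum64 = static_cast<double>(a) + b;  // kept at 64 bits

    std::printf("rounded to 32 bits first: %g\n", sum32 - a);  // prints 0
    std::printf("kept at 64 bits:          %g\n", sum64 - a);  // prints 1
}
[/code]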

Even if you could guarantee it for a particular chipset, you couldn't guarantee it across multiple chips even from the same IHV, much less universally across all available GPUs [smile]

GPUs tend to optimize for performance and for graphics (where even intermediate results are likely to be in a fairly "sensible" range), whereas a CPU, being general purpose, has to cover all possible calculations and so can't cut as many corners (unless you start changing the FPU control word, as D3D does).
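For reference, that control-word change can be reproduced by hand with _controlfp_s on MSVC. This is a rough sketch and only applies to 32-bit x86 builds that actually generate x87 code (e.g. /arch:IA32) - the precision-control bits don't exist for SSE/x64 paths:

[code]
#include <float.h>   // _controlfp_s, _PC_24, _MCW_PC (MSVC-specific)
#include <cstdio>

int main()
{
    unsigned int original = 0;
    _controlfp_s(&original, 0, 0);            // read the current control word

    // Drop x87 internal precision from extended to 24-bit single - roughly
    // what D3D9 does at device creation unless you pass D3DCREATE_FPU_PRESERVE.
    unsigned int unused = 0;
    _controlfp_s(&unused, _PC_24, _MCW_PC);

    volatile double one = 1.0, three = 3.0;   // volatile defeats constant folding
    double x = one / three;
    std::printf("%.17g\n", x);                // ~0.33333334, i.e. float-level precision

    // Restore the caller's precision-control setting.
    _controlfp_s(&unused, original & _MCW_PC, _MCW_PC);
}
[/code]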

hth
Jack

Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]

I suspect you could find more info at GPGPU.org.

