Lord_Vader

OpenGL About Space

This topic is 4268 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.


Recommended Posts

How big is the OpenGL coordinate space, and what precision does it have? Does it use single-precision floats, i.e. 32 bits in total: a 23-bit mantissa, an 8-bit exponent, and 1 sign bit? In other words, what is the maximum possible distance between two objects, and more importantly, what is the precision of the space? What does this have to do with the depth buffer? How many bits does the depth buffer have (NVIDIA GeForce FX 5200)? I read somewhere that for given zNear/zFar values, log2(zFar/zNear) bits of precision are lost.

Quote:
Original post by Lord_Vader
How big is the OpenGL coordinate space, and what precision does it have? Does it use single-precision floats, i.e. 32 bits in total: a 23-bit mantissa, an 8-bit exponent, and 1 sign bit?

In other words, what is the maximum possible distance between two objects, and more importantly, what is the precision of the space?

What does this have to do with the depth buffer? How many bits does the depth buffer have (NVIDIA GeForce FX 5200)? I read somewhere that for given zNear/zFar values, log2(zFar/zNear) bits of precision are lost.

OpenGL has *tons* of options, and you can supply values in a huge variety of data types, each with its own resolution, range, and limitations. For practical purposes on modern video cards, however, you are correct to assume f32 (single-precision floating point), with the layout you described, as the *default*. Virtually all computations in modern 3D video cards are performed in f32. That said, a great many programs store vertex/texture/vector/other attributes in whatever common integer or floating-point format packs nicely into structures whose size is (usually) a multiple of 32 bytes. So RGBA color is often stored as four u08 elements (u08 = unsigned 8-bit integer), texture coordinates as u16 elements (u16 = unsigned 16-bit integer), height fields as a single u08 element, and vectors usually as three or four f32 elements, though for some purposes they are stored as f16, s16, or u16 values.
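To make the f32 numbers concrete, here is a plain-Python sketch (not OpenGL code) of how the 23-bit mantissa translates into absolute spacing between adjacent single-precision values at different magnitudes:

```python
import math

def f32_spacing(x):
    """Gap between adjacent single-precision floats near x
    (normalized range): 2**(exponent of x - 23)."""
    e = math.floor(math.log2(abs(x)))
    return 2.0 ** (e - 23)

# Relative precision of f32 is roughly 2**-23 ~ 1.2e-7,
# so absolute precision degrades as magnitudes grow:
for x in (1.0, 1000.0, 300_000_000.0):
    print(f"near {x:>12g}, adjacent f32 values differ by {f32_spacing(x):.3g}")
```

Near 1.0 the spacing is about 1.2e-7, but near 3x10^8 adjacent f32 values are 32 apart, which is exactly why single precision alone struggles at planetary scales.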

However, whenever any of these values is fetched from a VBO/vertex buffer, normal map, or texture map, it is converted to f32 before being stored in the GPU register set --- and computations are performed in f32 except in special cases where that simply doesn't make sense.
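For *normalized* unsigned integer attributes, that conversion follows a fixed convention: the full integer range maps linearly onto [0, 1]. A small sketch of the rule (the divisors shown are the standard ones for normalized u08/u16 data):

```python
def unorm_to_f32(value, bits):
    """Normalized unsigned integer -> float: the integer range
    [0, 2**bits - 1] maps linearly onto [0.0, 1.0]."""
    return value / float((1 << bits) - 1)

# A u08 color component of 255 becomes 1.0; 128 becomes ~0.502:
c = unorm_to_f32(128, 8)
```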

As for the depth buffer, each video card [driver] supports a limited set of formats. The most common practice for a while was to pack a 24-bit [unsigned or signed] integer depth value PLUS an 8-bit integer stencil value into one 32-bit word. However, this practice is gradually shifting towards an f32 depth buffer, with a separate 8-bit stencil buffer if necessary. That transition will probably take several years.
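As an illustration of the packed 24+8 idea (the actual bit layout is driver-internal; this sketch just assumes depth in the high 24 bits and stencil in the low 8):

```python
def pack_d24s8(depth, stencil):
    """Quantize depth in [0,1] to 24 bits and pack it with an
    8-bit stencil into one 32-bit word (illustrative layout)."""
    d = int(round(depth * ((1 << 24) - 1))) & 0xFFFFFF
    return (d << 8) | (stencil & 0xFF)

def unpack_d24s8(word):
    return (word >> 8) / float((1 << 24) - 1), word & 0xFF

word = pack_d24s8(0.5, 0xAB)
depth, stencil = unpack_d24s8(word)
# depth comes back within one 24-bit step of 0.5; stencil is intact
```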

What you may need to know to feel comfortable is this: usually you need not be concerned about the storage format except at ONE point - when you choose your data structures (vertex structure, normal maps, texture maps, etc.) and write the code that tells the API/GPU/driver the format of each component. Once you do this, the rest of your code can more or less treat each element as f32. This is especially true in OpenGL, but also true of DX to a large extent.

To understand all the subtle znear/zfar issues, you probably cannot avoid studying AT GREAT LENGTH the view/NDC/projection matrices you choose (which most people just copy out of some documentation somewhere). The simple "tips" you read about this issue may be enough for most programmers, but *nobody* has written a completely thorough, comprehensible discussion of this topic that I have ever seen (though authors may *imagine* they did). So to totally answer those questions you probably need to write a program that feeds sets of values through transformations by your chosen/candidate matrices - then plot the consequences (resolution over the whole range, non-linearities, etc). Most authors and programmers would rather not "go there" - it is painful. But yes, somebody should write a very thorough paper on that someday. Maybe you? :-)
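In that spirit, here is a minimal sketch of such an experiment, assuming the standard glFrustum-style depth mapping and a 24-bit integer depth buffer (substitute your own matrices and buffer depth):

```python
import math

def window_depth(d, n, f):
    """Eye-space distance d -> window-space depth in [0, 1] under the
    standard perspective projection (NDC z remapped from [-1, 1])."""
    ndc_z = (f + n - 2.0 * f * n / d) / (f - n)
    return (ndc_z + 1.0) / 2.0

def depth_step(d, n, f, bits=24):
    """Eye-space distance spanned by ONE quantization step of a
    `bits`-bit depth buffer at distance d (analytic derivative)."""
    levels = (1 << bits) - 1
    dzw_dd = f * n / (d * d * (f - n))  # d(window_depth)/d(distance)
    return (1.0 / levels) / dzw_dd

n, f = 1.0, 300_000_000.0
print(f"log2(zFar/zNear) = {math.log2(f / n):.1f} bits of precision lost")
for d in (n, 1000.0, f):
    print(f"at d = {d:g}: one 24-bit depth step spans {depth_step(d, n, f):.3g} units")
```

With zNear = 1 and zFar = 3x10^8, one depth step near the far plane spans billions of units while the near plane gets sub-micron resolution - the non-linearity that those "tips" allude to.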

Thanks,
I asked this question because I found some articles and applications about generating real-scale planetary objects, but as far as I could see none of them was truly real-scale. When we talk about real scale (for relatively small planets), we mean everything between the height of a person (~1 meter) and the Earth-Moon distance (~300,000,000 meters), which means that, unless we resort to other methods, 28-29 bits of precision are needed - which, as you said, is well beyond the capabilities of the depth buffer...
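That 28-29 bit estimate checks out: resolving ~1 m features across one linear depth range reaching the Moon needs log2(range/feature) bits (using the figures from the post above):

```python
import math

# Distinguishing ~1 m features across a single linear depth range
# out to the Earth-Moon distance (~3e8 m):
bits_needed = math.log2(300_000_000.0 / 1.0)
print(f"{bits_needed:.1f} bits")  # ~28.2, matching the 28-29 bit estimate
```

This is why real-scale planetary renderers typically split the scene into several depth ranges (or use similar tricks) rather than relying on one depth buffer pass.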

