OpenGL precision for large coordinates

Started by Crestfall
5 comments, last by V-man 15 years, 5 months ago
I'm writing an OpenGL application that requires double-precision coordinates. While the pipeline offers appropriate functions (glVertex3d etc.), it seems the double I pass as an argument is cast to a float by OpenGL anyway (or to something with lower precision). I don't really understand why the pipeline accepts doubles when it's limited to float precision (to make the GPU do the cast?).

Note that my problem isn't the range of the coordinates seen through the viewport; it's that the positions of the camera and objects get so large that they lose too much precision when cast to float (any values above roughly 2,000,000).

I've come up with two possible solutions, an easy one and a harder one. The easy solution would be to leave my camera at or near the origin (rather than adjusting the projection matrix as I do now) and computationally translate the coordinates of everything that needs to be rendered by the camera position. This would bring the coordinates close to (0,0,0) and avoid the loss of precision floats suffer at large values. The downside is that every coordinate has to be translated (and cast to float) every frame, because the camera is constantly moving.

The second solution would be to pick a new origin only once the camera gets too far from the current one, recalculate coordinates for everything, store them separately in memory, and draw with those new coordinates until the camera again reaches the point where floats no longer deliver sufficient precision, then recalculate once more. Obviously this would perform better than the first solution, but it might not be worth the trouble.

Any ideas, comments, or experiences about dealing with this kind of issue in OpenGL?
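To make the first idea concrete, a minimal immediate-mode sketch (the names are made up and this is only an illustration of the approach, not code from the actual application):

#include <GL/gl.h>

struct Vec3d { double x, y, z; };

// Draw with the camera fixed at the origin: the view matrix holds only the
// rotation, and the translation is applied here in double precision.
void drawVertices(const Vec3d* verts, int count, const Vec3d& camera)
{
    glBegin(GL_TRIANGLES);
    for (int i = 0; i < count; ++i)
    {
        // Subtract in double precision first; the result is small, so the
        // cast to float afterwards loses almost nothing.
        glVertex3f((float)(verts[i].x - camera.x),
                   (float)(verts[i].y - camera.y),
                   (float)(verts[i].z - camera.z));
    }
    glEnd();
}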
Chris Thorne has written a paper regarding this issue.

The NASA World Wind folks have discussed this issue here and here.

X3D has this to say.

I've written about it at the Insight3D blog.

I hope this gives you some ideas.

Quote:Original post by Crestfall
I'm writing an OpenGL application that requires double-precision coordinates. While the pipeline offers appropriate functions (glVertex3d etc.), it seems the double I pass as an argument is cast to a float by OpenGL anyway (or to something with lower precision). I don't really understand why the pipeline accepts doubles when it's limited to float precision (to make the GPU do the cast?)


Wishful thinking for 64-bit GPUs?

Quote:Original post by jsderon
Chris Thorne has written a paper regarding this issue.

The NASA World Wind folks have discussed this issue here and here.

X3D has this to say.

I've written about it at the Insight3D blog.

I hope this gives you some ideas.


Thanks a bunch. I actually searched around quite a bit but couldn't find any easy solution; I figured I was just doing something wrong. Apparently it's more of a real problem to think about than I thought.

What I'd still like to know is whether there's any difference between using glVertex3d and using glVertex3f with the same arguments explicitly cast to float. If not, what's the point of having glVertex3d at all?
My current engine features infinite terrain. Basically, every object has two sets of coordinates (a rough sketch follows below):

1. 3x float for its relative position within the chunk it belongs to
2. 2x int that identifies the sector

This system allows for terrains of size ±8,796,093,022,208.0 units, which corresponds to ±219,902,325,555.19998 kilometers, with the precision known from first-person shooters.
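For illustration, a rough sketch of how that split could look. The struct, the function, and the 4,096-unit sector size are my own assumptions rather than code from the engine above, although 32-bit sector indices with 4,096-unit sectors do give exactly the ±8,796,093,022,208 range quoted:

const double SECTOR_SIZE = 4096.0;  // assumed width of one sector in world units

struct Position
{
    int   sectorX, sectorY;        // which sector the object lives in
    float localX, localY, localZ;  // small offset inside that sector
};

// Build a float position relative to the camera for rendering. The sector
// difference is a small integer, so the double arithmetic is exact, and the
// final values are small enough to survive the cast to float.
void toCameraRelative(const Position& obj, const Position& cam, float out[3])
{
    out[0] = (float)((obj.sectorX - cam.sectorX) * SECTOR_SIZE + (obj.localX - cam.localX));
    out[1] = obj.localY - cam.localY;  // assuming sectors tile only the ground plane
    out[2] = (float)((obj.sectorY - cam.sectorY) * SECTOR_SIZE + (obj.localZ - cam.localZ));
}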
http://www.8ung.at/basiror/theironcross.html
Quote:Original post by Crestfall

Thanks a bunch. I actually searched around quite a bit but couldn't find any easy solution; I figured I was just doing something wrong. Apparently it's more of a real problem to think about than I thought.

What I'd still like to know is whether there's any difference between using glVertex3d and using glVertex3f with the same arguments explicitly cast to float. If not, what's the point of having glVertex3d at all?


glVertex3d will cast the values to floats. I imagine this was done for the convenience of the user whose data might be in doubles. Likewise, the other varieties of that command for integers and shorts will cast values to floats.
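A small illustration, assuming (as above) that the driver stores vertex data as 32-bit floats, which is the common case:

#include <GL/gl.h>

void precisionExample()
{
    double x = 2000000.1;

    glBegin(GL_POINTS);
    // These two vertices end up identical inside the pipeline, because the
    // double is converted to a 32-bit float either way:
    glVertex3d(x, 0.0, 0.0);
    glVertex3f((float)x, 0.0f, 0.0f);
    glEnd();

    // Near 2,000,000 a float can only represent steps of 0.125, so the .1
    // above rounds to 2000000.125; that rounding is exactly the precision
    // problem described in the original post.
}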
Don't use doubles and other unsupported formats:

http://www.opengl.org/wiki/index.php/Common_Mistakes#Unsupported_formats_.231
Sig: http://glhlib.sourceforge.net
an open source GLU replacement library. Much more modern than GLU.
float matrix[16], inverse_matrix[16];
glhLoadIdentityf2(matrix);
glhTranslatef2(matrix, 0.0, 0.0, 5.0);
glhRotateAboutXf2(matrix, angleInRadians);
glhScalef2(matrix, 1.0, 1.0, -1.0);
glhQuickInvertMatrixf2(matrix, inverse_matrix);
glUniformMatrix4fv(uniformLocation1, 1, GL_FALSE, matrix);
glUniformMatrix4fv(uniformLocation2, 1, GL_FALSE, inverse_matrix);

