Crestfall

OpenGL precision for large coordinates


I'm writing an OpenGL application that requires double-precision coordinates. While the pipeline offers appropriate functions (Vertex3d etc.), it seems the double I pass as an argument is cast to a float (or something with lower precision) by OpenGL anyway. I don't really understand why the pipeline accepts doubles when it's limited to float precision (to make the GPU do the cast?).

Note that my problem isn't the range of the coordinates seen through the viewport; rather, the coordinates of the camera and object positions get so large that they lose too much precision when cast to a float (any values above 2,000,000 at least). I could come up with two solutions, an easy one and a hard one.

The easy solution would be to simply leave my camera at or near the origin (rather than adjusting the projection matrix as I do now) and translate the coordinates of everything that needs to be rendered relative to the camera position on the CPU. This would bring the coordinates close to (0,0) and avoid the loss of precision that floating-point values suffer at high magnitudes. The downside is that, since the camera is constantly moving, every coordinate has to be translated (and cast to a float) on every iteration.

The second solution would be to choose a new origin only once the camera gets too far from the current one, compute new coordinates for everything, store them separately in memory, and draw with those until the camera again exceeds the point where floats no longer deliver sufficient precision, then recompute the coordinates. Obviously this would perform better than the first solution, but it might not be worth the trouble.

Any ideas, comments, or experiences with dealing with this kind of issue in OpenGL?
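Here's a rough sketch of what I mean by the first (easy) solution, assuming legacy fixed-function OpenGL; the Vec3d type and drawObject function are just illustrative, not my actual code:

```cpp
#include <GL/gl.h>

// Keep the camera at the origin: world positions stay in doubles on the CPU,
// and only the small camera-relative offsets are cast to float and handed to GL.
struct Vec3d { double x, y, z; };

void drawObject(const Vec3d& objectPos, const Vec3d& cameraPos)
{
    // Subtract in double precision first, then cast: the difference is small
    // near the camera, so little precision is lost in the float conversion.
    float rx = static_cast<float>(objectPos.x - cameraPos.x);
    float ry = static_cast<float>(objectPos.y - cameraPos.y);
    float rz = static_cast<float>(objectPos.z - cameraPos.z);

    glPushMatrix();                // modelview holds only the camera rotation
    glTranslatef(rx, ry, rz);      // no large translation ever reaches OpenGL
    // ... submit the object's local (small) vertex data here ...
    glPopMatrix();
}
```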

Quote:
Original post by Crestfall
I'm writing an OpenGL application that requires double-precision coordinates. While the pipeline offers appropriate functions (Vertex3d etc.), it seems the double I pass as an argument is cast to a float (or something with lower precision) by OpenGL anyway. I don't really understand why the pipeline accepts doubles when it's limited to float precision (to make the GPU do the cast?)


Wishful thinking for 64-bit GPUs?

Quote:
Original post by jsderon
Chris Thorne has written a paper regarding this issue.

The NASA World Wind folks have discussed this issue here and here.

X3D has this to say.

I've written about it at the Insight3D blog.

I hope this gives you some ideas.


Thanks a bunch. I actually searched around quite a bit but couldn't find any easy solution. I figured I was just doing something wrong; apparently it's more of a real problem to think about than I thought.

What I'd still like to know is whether there's any difference between using Vertex3d and using Vertex3f with the same arguments explicitly cast to float. If not, what's the point of having a Vertex3d at all?

My current engine features infinite terrain.
Basically, all objects have two sets of coordinates:

1. 3x float for the relative position within the chunk the object belongs to
2. 2x int identifying the sector

This system allows for terrain in the range of ±8,796,093,022,208.0 units, which corresponds to ±219,902,325,555.19998 kilometers, with the precision known from first-person shooters.
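Roughly, the two-part coordinate looks something like this (the struct name, sector size, and helper function are just illustrative, not my actual code):

```cpp
// A world position split into an integer sector and a small float offset.
struct SectorCoord {
    int   sectorX, sectorZ;        // 2x int identifying the sector
    float localX, localY, localZ;  // 3x float, position relative to the sector origin
};

const double SECTOR_SIZE = 4096.0; // world units per sector (arbitrary choice here)

// Camera-relative position: the sector difference stays an exact integer,
// so the float result only has to represent the small remaining offset.
void relativeToCamera(const SectorCoord& obj, const SectorCoord& cam, float out[3])
{
    out[0] = static_cast<float>((obj.sectorX - cam.sectorX) * SECTOR_SIZE
                                + (obj.localX - cam.localX));
    out[1] = obj.localY - cam.localY;   // no vertical sectors in this sketch
    out[2] = static_cast<float>((obj.sectorZ - cam.sectorZ) * SECTOR_SIZE
                                + (obj.localZ - cam.localZ));
}
```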

Quote:
Original post by Crestfall

Thanks a bunch. I actually searched around quite a bit but couldn't find any easy solution. I figured I was just doing something wrong; apparently it's more of a real problem to think about than I thought.

What I'd still like to know is whether there's any difference between using Vertex3d and using Vertex3f with the same arguments explicitly cast to float. If not, what's the point of having a Vertex3d at all?


glVertex3d will cast the values to floats. I imagine this was done for the convenience of the user whose data might be in doubles. Likewise, the other varieties of that command for integers and shorts will cast values to floats.
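In other words, assuming the driver stores vertex data as single-precision floats, the two calls below end up submitting the same value; the number here is just an example with more significant digits than a float can hold:

```cpp
#include <GL/gl.h>

// Both calls feed the pipeline the same single-precision value: the double
// argument of glVertex3d is converted before it reaches the GPU.
void submitPoint()
{
    double x = 123456789.123456;   // more significant digits than a float keeps

    glBegin(GL_POINTS);
    glVertex3d(x, 0.0, 0.0);                        // converted to float by the driver
    glVertex3f(static_cast<float>(x), 0.0f, 0.0f);  // explicit cast, same stored value
    glEnd();
}
```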

