OK... here's a problem.
I'm writing a space sim, and I'm using my own 96-bit data type to store galactic coordinates, which gives a range of thousands of millions of light years with a resolution of 1 millimetre!
But I can't get my head around how to convert these coordinates into something that can be used by OpenGL, which ultimately converts everything to floats. I'll give you an example of what I mean:
The number 1584552368456, squeezed into a single-precision float, gets rounded to roughly 1.584552e12 = 1584552000000, because a float only carries about 7 significant decimal digits.
Therefore if you fly long distances, the positional accuracy of every object decreases. This is CRAP!
One solution would be to alter the positions of all objects relative to the user, i.e. subtract the user's coordinates from the coordinates of every other object. But this would be very slow, especially for thousands of stars, each with its own planetary system. And bearing in mind I'm using my own data type, this will be slower than normal!
Does anyone have any ideas how I can achieve the effect I''m after, without the subtraction every frame?
Please feel free to make a suggestion! :-)
The "subtraction of the camera position from every object" happens anyway when you transform your vertices by the camera matrix. If you do your own transformations, you can do the subtraction with your own data type, which, if you optimize enough, shouldn't be that big a deal (especially if you use a good LOD scheme: you don't need to transform stars that are light years away, just draw a point in their general direction). Then you can use a more "normal" (even hardware) transform for the rest.