Coordinate translation in huge world
Hi all,
I'm visualizing a huge terrain using DirectX. The scale of my world is 1 meter per coordinate unit. Since the float data type is only precise to about 7 significant digits, how can I accommodate the coordinates of my terrain vertices, which can span more than 7 digits?
My idea is to dynamically translate the world reference coordinate so that my objects' coordinates won't get too large. What I don't know is how to perform this kind of translation. Or is there some other approach?
Thanks a lot.
Quote:Original post by budipro
Hi all,
I'm visualizing a huge terrain using DirectX. The scale of my world is 1 meter per coordinate unit. Since the float data type is only precise to about 7 significant digits, how can I accommodate the coordinates of my terrain vertices, which can span more than 7 digits?
My idea is to dynamically translate the world reference coordinate so that my objects' coordinates won't get too large. What I don't know is how to perform this kind of translation. Or is there some other approach?
Thanks a lot.
I'm not sure I understand you correctly. You have a data set or model with 1 vertex for every square meter, spanning a total size of more than 7 digits? That would mean your terrain is 10,000,000 meters long (and wide). That's a data set of 100,000,000,000,000 vertices. You must have quite a hard disk :)
Obviously, that can never work. You must mean something else. Probably 1-meter precision is not required in your case.
Tom
By "digits" he means the digits after the decimal point, as in "0.1234567".
In Quake engines the player height is 72.0f, and 128 units is about 2.5-3 meters.
I once had a problem evaluating a plane equation because the precision of floats decreases as the values get larger. Doubles would help a little, but another solution is to operate in the local space of an octree node: at runtime, when you render the scene, you translate the scene according to the player's position.
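The render-time translation described above can be sketched as follows: absolute positions live in doubles, and each frame the camera position is subtracted before casting down to float for the GPU. The type and function names here are illustrative, not from DirectX or any engine.

```cpp
// World positions kept in double precision; the renderer only ever
// sees small camera-relative floats.
struct Vec3d { double x, y, z; };
struct Vec3f { float x, y, z; };

// Hypothetical helper: localize a world position against the camera
// before it is handed to the renderer.
Vec3f localize(const Vec3d& world, const Vec3d& camera)
{
    // The difference is small when the object is near the camera, so
    // the double -> float cast loses nothing that matters at render scale.
    return Vec3f{ float(world.x - camera.x),
                  float(world.y - camera.y),
                  float(world.z - camera.z) };
}
```

Everything downstream (matrices, vertex buffers) then works with coordinates that never stray far from the origin.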
Perhaps you could split your world into zones. These would be square areas (or cubes) with a set size, evenly spaced (perhaps 1 km x 1 km each?). You could combine this with some kind of streaming system, so that only the zone you are in and the adjacent zones are loaded into memory.
Richard.
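As a rough sketch of this zone scheme (1 km zones assumed, all names hypothetical), a world position can be mapped to a zone index plus a float offset that is always small:

```cpp
#include <cmath>

const double ZONE_SIZE = 1000.0; // assumed 1 km x 1 km zones

struct ZoneCoord {
    int   zx, zz;  // which zone the point falls in
    float lx, lz;  // position local to that zone's origin
};

// Map an absolute (double) position to zone index + local offset.
ZoneCoord toZone(double wx, double wz)
{
    int zx = int(std::floor(wx / ZONE_SIZE));
    int zz = int(std::floor(wz / ZONE_SIZE));
    // The local offset stays in [0, ZONE_SIZE), well within float precision.
    return ZoneCoord{ zx, zz,
                      float(wx - zx * ZONE_SIZE),
                      float(wz - zz * ZONE_SIZE) };
}
```

A streaming system would then key loaded zones by (zx, zz) and fetch neighbours as the player crosses zone boundaries.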
Quote:Original post by budipro
The scale of my world is 1 meter per coordinate unit. Since the float data type is only precise to about 7 significant digits, how can I accommodate the coordinates of my terrain vertices, which can span more than 7 digits?
Well, I ran into this problem in a similar way to the guys from Dungeon Siege (read the article @smr pointed to, it's really good!). Only they over-complicated the solution, and they admitted that.
So what you need is a new type to use instead of float for data indicating position. Double should be fine; I'm using uint so I can do some nifty tricks to speed things up, but in the end I don't think it was worth it.
Terrain representation is quite problematic, as you probably don't want to keep each vertex position as an "expanded position". You can divide the terrain into square chunks and give each chunk its own expanded position, while all internal chunk data is kept local, in floats (so you can use the data directly in the rendering process). When accessing the terrain data with an expanded position, you first find the chunk and localize the position of interest, then use the standard accessing methods (for example, calculating the terrain normal under an entity).
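A minimal sketch of that chunk lookup, with assumed chunk dimensions and a nearest-vertex height sample standing in for the real accessors:

```cpp
#include <vector>

const int    CHUNK_VERTS  = 64;   // vertices per chunk edge (assumed)
const double VERT_SPACING = 1.0;  // 1 m per vertex

struct Chunk {
    double originX, originZ;     // expanded (double) chunk origin
    std::vector<float> heights;  // internal data stays in plain floats
    Chunk(double ox, double oz)
        : originX(ox), originZ(oz),
          heights(CHUNK_VERTS * CHUNK_VERTS, 0.0f) {}
    // Standard float-based accessor: nearest vertex, no filtering.
    float heightAt(float lx, float lz) const {
        int ix = int(lx / VERT_SPACING);
        int iz = int(lz / VERT_SPACING);
        return heights[iz * CHUNK_VERTS + ix];
    }
};

// Expanded-position access: localize into the chunk first, then use
// the ordinary local accessor.
float sampleTerrain(const Chunk& c, double wx, double wz)
{
    float lx = float(wx - c.originX);
    float lz = float(wz - c.originZ);
    return c.heightAt(lx, lz);
}
```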
Generally, each object in the world should have its expanded position, but all other data should be in standard form (floats or whatever you like) in local object coordinates. Well, just like we all do when it comes to 3D models.
Rendering:
Before ANYTHING is sent to the GPU, or any matrix is constructed, all positions being used should be localized according to the camera position. This also includes light sources and any special position-type parameters used in shaders, for example.
Simply put, you need more bits per position. Splitting the world into regular chunks actually has this effect: you go from representing positions with just a float (32 bits) to a uint grid index + float (64 bits).
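That split representation might look like this (the cell size is an assumption; expansion to double happens only when an absolute coordinate is genuinely needed):

```cpp
#include <cstdint>

const double CELL = 512.0; // assumed size of one grid cell, in meters

// 64 bits per axis: a grid index plus a local float offset in the cell.
struct SplitPos {
    std::uint32_t cell;   // which cell along this axis
    float         local;  // offset inside the cell, in [0, CELL)
};

// Expand to an absolute double, e.g. for distances between far-apart objects.
double expand(SplitPos p)
{
    return double(p.cell) * CELL + double(p.local);
}
```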
Thanks for all the replies.
I had read the article about Dungeon Siege, and the part about the Precision Issue is exactly the problem I'm talking about. But I didn't quite understand how to solve it after reading it.
Quote:
Before ANYTHING is sent to the GPU, or any matrix is constructed, all positions being used should be localized according to the camera position. This also includes light sources and any special position-type parameters used in shaders, for example.
What is meant by "localized according to the camera position"? Does this mean we just subtract the camera position from a position?
The way Dungeon Siege solved this problem was to have nodes of geometry that represent their vertices in an internal coordinate space. Each node then has a vector to the nodes around it.
Thus, to establish one world space for rendering, you start with the node the camera is in. Say this node is at "0,0,0". Then, for each link to another node, add the translation vector and set the center position of the new node.
Basically, starting from (0,0,0), if this node contains a link to a node 5 units to the right, you go to that node and tell it that its center is (5,0,0), and continue this outwards through the surrounding nodes. The final world coordinates of the vertices are formed by adding the node's position to the vertex positions relative to the node's center. This way there is no single world space; the world space, and with it the precision, moves around with the user.
Hope that clarified a bit.
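The node walk described above can be sketched as a flood-fill from the camera's node; the node/link layout here is a guess at the idea, not Dungeon Siege's actual data structures:

```cpp
#include <queue>
#include <vector>

// Each node knows only relative offsets to its neighbours.
struct Link { int target; float dx, dy, dz; };

struct Node {
    std::vector<Link> links;
    float cx = 0, cy = 0, cz = 0; // center in this frame's render space
    bool  placed = false;
};

// Assign render-space centers by flooding outwards from the camera's
// node, which becomes the temporary origin (0,0,0).
void placeNodes(std::vector<Node>& nodes, int cameraNode)
{
    nodes[cameraNode].placed = true;
    std::queue<int> open;
    open.push(cameraNode);
    while (!open.empty()) {
        int cur = open.front();
        open.pop();
        for (const Link& l : nodes[cur].links) {
            Node& t = nodes[l.target];
            if (t.placed) continue;
            // Neighbour's center = this node's center + link offset.
            t.cx = nodes[cur].cx + l.dx;
            t.cy = nodes[cur].cy + l.dy;
            t.cz = nodes[cur].cz + l.dz;
            t.placed = true;
            open.push(l.target);
        }
    }
}
```

Vertices in each node then render at node center + local vertex position, so precision is always best near the camera.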
As crazedfool mentioned, you have to decide where the center of the spatial system will be every time you render. This could be the camera position; I'm doing just that in my engine. But I found it non-intuitive, so I think it would be better to have special points in the world, 'candidates' for the "temporary center". The origin of the nearest node (zone, area, chunk, whatever you call it) should be excellent.
Now, what do I mean by localizing? Example:
class Entity
{
public:
    Model3D *m_pModel;
    Vector3  m_Pos;
    // ...
public:
    void Render(void);
};
Now, what do you do in the Render function?
- if you have a standard (3 x float) vector
- if you have an extended (say, 3 x double) vector
First case:
void Render(void)
{
    matrix m = matrix::translation(m_Pos);
    g_WorldStack.Push();
    g_WorldStack.MultMatrix(m); // (maybe also an orientation matrix)
    {
        m_pModel->Render();
    }
    g_WorldStack.Pop();
}
In the second case, we have a temporary center, which we know is not too far away from the object (otherwise we would not be rendering it):
void Render(const Vector3 & center)
{
    Vector3F posRelative = m_Pos - center;
    matrix m = matrix::translation(posRelative);
    g_WorldStack.Push();
    g_WorldStack.MultMatrix(m); // (maybe also an orientation matrix)
    {
        m_pModel->Render();
    }
    g_WorldStack.Pop();
}
The result of the subtraction is cast from double to float, but it is small, so we do not lose precision.
Generally, you have to change the rendering methods so that they take the temporary center as a parameter.
You have to set the view matrix with a similar technique, and that's all.
Cheers.
~def