Planet Size Sphere Double Precision Issue


I generate a unit sphere that represents a sphere about the size of Earth (a radius of around 6.7 million meters) once it has been multiplied by its scaling matrix (the matrix contains the actual radius, so I can reuse the same unit sphere for many other things).

1.0f is equal to 1 meter in my project.

I've recorded this video displaying my problem.

When I get close to the sphere, so that I'm moving around near the surface at 1 meter per second or slower, I get these jumping issues.

There are already several things that I am doing:

- I store all positions of objects on the CPU with doubles (the camera and the sphere)

- I treat the camera's position as the origin of the world when it comes to rendering, so each frame I convert every object's world coordinates to be relative to the camera before building its world matrix (this also means my view matrix is built with the camera at (0,0,0))
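In other words, each frame I do something roughly like this (just a simplified sketch, the names are made up, and I'm using GLM types here so it stands on its own):

#include <glm/glm.hpp>

// Rough sketch of the camera-relative rebasing described above. Positions live
// on the CPU as doubles; the matrix that goes to the GPU is float.
glm::mat4 BuildCameraRelativeWorld(const glm::dvec3& objectPosWorld,
                                   const glm::dvec3& cameraPosWorld,
                                   const glm::dmat4& scaleRotation)
{
    // Subtract in double precision; the result is small when the object is near
    // the camera, so casting it to float loses nothing visible.
    glm::dvec3 relative = objectPosWorld - cameraPosWorld;

    glm::mat4 world(scaleRotation);                   // scale/rotation demoted to float
    world[3] = glm::vec4(glm::vec3(relative), 1.0f);  // GLM is column-major: translation in column 3
    return world;                                     // paired with a view matrix whose eye is (0,0,0)
}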

I do not know where my problem is and I have tried searching a lot of different places.

I'm converting all the float input data that goes into my shaders into doubles and doing the necessary transformations and calculations with doubles on the GPU, just to see if that gets rid of the issue, but it doesn't.

I'm pretty certain that my issue has something to do with the point in my shader where a vertex is transformed from object space (very small numbers, since it's a unit sphere in object space) to world space, where the values jump to actual planet size of millions of meters and precision is taken away from the smaller distances.
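A quick standalone check of float spacing at that magnitude seems to back this up (plain C++, nothing from my engine):

#include <cmath>
#include <cstdio>

int main()
{
    // 6.7 million lies between 2^22 and 2^23, so adjacent 32-bit floats at that
    // magnitude are 0.5 apart: half a meter of unavoidable snapping.
    float radius = 6.7e6f;
    float step = std::nextafterf(radius, INFINITY) - radius;
    std::printf("float spacing near %.0f m: %g m\n", radius, step);  // prints 0.5 m
    return 0;
}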

I am stumped.


There are already several things that I am doing:

- I store all positions of objects on the CPU with doubles (the camera and the sphere)

Storing your camera position and sphere origin as doubles is not enough if you have a single mesh representing a gigantic object. Is your vertex data also being stored in double precision? After you apply your scaling to the sphere, are you left with 32-bit or 64-bit vertex positions? It is not typically necessary to store mesh data as doubles if you keep your object size in check.

I treat the camera's position as the origin of the world when it comes to rendering, so each frame I convert every object's world coordinates to be relative to the camera before building its world matrix (this also means my view matrix is built with the camera at (0,0,0))

Camera at the origin is just called view space or camera space or eye space, so call it one of those. Don't confuse it with world space by mixing up the terminology. The trick is indeed to send matrices to the GPU in view space, instead of world space (as most tutorials and textbooks will demonstrate). This involves some extra calculation on the CPU side, but it's really not a problem.

Some helpful reading on the topic: http://blogs.agi.com/insight3d/index.php/2008/09/03/precisions-precisions/

And here's a snippet of my code that handles building the matrices in eye space to be sent to the shaders.


dmat4 modelToWorld = thisItem.toWorld * nodeTransform;

// typical matrices sent to GPU, jitters far from origin
//mat4 modelView(viewMat * modelToWorld);
//mat4 mvp(projMat * modelView);
//mat4 normalMat(transpose(inverse(mat3(modelView))));

// transform world space to eye space on CPU in double precision, then send single to GPU
dmat4 modelViewWorld(viewMat * modelToWorld);

dvec4 nodeTranslationWorld(modelToWorld[0][3], modelToWorld[1][3], modelToWorld[2][3], 1.0);
vec3 nodeTranslationEye(nodeTranslationWorld * modelViewWorld);

mat4 modelViewEye(modelViewWorld);
modelViewEye[0][3] = nodeTranslationEye.x;
modelViewEye[1][3] = nodeTranslationEye.y;
modelViewEye[2][3] = nodeTranslationEye.z;

mat4 mvp(projMat * modelViewEye);
mat4 normalMat(transpose(inverse(mat3(modelViewEye))));

I'm converting all the float input data that goes into my shaders into doubles and doing the necessary transformations and calculations with doubles on the GPU, just to see if that gets rid of the issue, but it doesn't.

You really don't need to resort to double precision on the GPU side to solve the problem. The problem is likely in the way you are representing your sphere... I guess as a single mesh with no internal "scene graph" of its own. Consider breaking up your single large sphere into smaller patches that each fit within single precision range.
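To make that concrete, here's a rough sketch of what I mean by patches; this isn't taken from a real engine and the names are made up:

#include <glm/glm.hpp>
#include <vector>

// Each patch keeps a double-precision origin on the planet, and its vertices as
// floats *relative to that origin*, so every vertex stays small enough for
// single precision no matter how big the planet is.
struct TerrainPatch
{
    glm::dvec3 originWorld;            // patch centre in world space, doubles
    std::vector<glm::vec3> vertices;   // meters from originWorld, floats are plenty
};

// At draw time the only double-precision work is one subtraction per patch:
// vec3(patch.originWorld - cameraPosWorld) becomes the patch's camera-relative
// translation, built into a float model matrix exactly like the eye-space snippet above.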

My vertex data is being stored in single precision. The scaling happens when I multiply each object-space vertex by the world matrix, which contains the object's scale, rotation, and translation; this all occurs on the GPU (in my domain shader, in my case) in single precision.

I don't think having the camera at the origin qualifies as view space, because the view matrix accounts not only for the camera's position but also for the camera's right, up, and look vectors. When you multiply a world-space vertex by that view matrix, you get a view-space vertex. My world-space origin just happens to be the camera instead of some random spot by the time I do anything on the GPU.

Allow me to elaborate a bit more on what I am doing.

I send the vertex data of the vertices that make up my unit sphere to the GPU. Before I apply the world matrix, I feed the coordinates of each vertex to a noise function and offset the vertex based on the result to get height differences.

What I think I'm going to try: every frame, on the CPU, convert all the vertices of the unit sphere from object space into world space (relative to the usual origin, not the camera) in doubles, then convert them from being relative to the usual origin to having the camera as the origin, in floats, so the vertices close to the camera keep good precision. The problem, beyond the performance cost of all those matrix multiplications on the CPU, is that on the GPU I'll eventually need the vertices I submitted in object space so I can offset them correctly, but I've already done a little thinking on that and I know how I'll manage it.
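Roughly this, per vertex (just a sketch of the idea; noise() is a stand-in for my actual noise function and returns a height offset as a fraction of the radius):

#include <glm/glm.hpp>
#include <cmath>
#include <vector>

// Stand-in noise function: returns a small relative height offset for a point
// on the unit sphere (fraction of the radius).
double noise(const glm::dvec3& p)
{
    return 0.001 * std::sin(p.x * 40.0) * std::cos(p.z * 40.0);
}

// Per frame: displace and scale each unit-sphere vertex in double precision,
// then rebase it onto the camera so the float version stays precise up close.
std::vector<glm::vec3> BuildCameraRelativeVertices(const std::vector<glm::vec3>& unitSphere,
                                                   const glm::dmat4& sphereToWorld,
                                                   const glm::dvec3& cameraPosWorld)
{
    std::vector<glm::vec3> result;
    result.reserve(unitSphere.size());

    for (const glm::vec3& v : unitSphere)
    {
        glm::dvec3 unitPos(v);
        glm::dvec3 displaced = unitPos * (1.0 + noise(unitPos));            // height offset on the unit sphere
        glm::dvec4 worldPos = sphereToWorld * glm::dvec4(displaced, 1.0);   // planet scale, still in doubles
        result.push_back(glm::vec3(glm::dvec3(worldPos) - cameraPosWorld)); // small near the camera -> safe as float
    }
    return result;
}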

I will do some testing and report back here.

Thanks for the reply.

My vertex data is being stored in single precision. The scaling happens when I multiply each object-space vertex by the world matrix, which contains the object's scale, rotation, and translation; this all occurs on the GPU (in my domain shader, in my case) in single precision.

I don't think having the camera at the origin qualifies as view space, because the view matrix accounts not only for the camera's position but also for the camera's right, up, and look vectors. When you multiply a world-space vertex by that view matrix, you get a view-space vertex. My world-space origin just happens to be the camera instead of some random spot by the time I do anything on the GPU.

Fair enough on the view-space point. I think I even call it "eye" space in my code to distinguish it, since the transform only differs from world space in translation and not in rotation, so it's not truly view space, as you said. Eye space is still a bit of a misnomer; perhaps camera space makes the most sense.

Short of doing all of that, simply scaling in double precision in the domain shader would be a step in the right direction.

Okay, now I've tried doing everything in the domain shader in double precision.

Here's the code:

[screenshot of the domain shader code]

I still get the jerking. This is interesting, because now I have no idea what is causing it: I would think that if all these calculations are done in doubles, and I only return the final clip-space value as a float, the jerking would stop, since no significant precision would be lost.

I did try converting all the mesh vertices from object space to world space on the CPU before sending them to the GPU, but that did not stop the wobbling. That was also very strange, because the vertices close to the camera had good precision going into the GPU, so again I'm not sure why I had issues.

I prefer the method you suggested of simply scaling in double precision in the domain shader (which is what I'm trying now and what I talked about first in this post) over the more CPU-heavy method, which I can't even get to work.

So at this point I am just trying to find out why the jerk is still happening. I am not looking for an efficient solution yet; I just want to find the source, as in, which specific multiplication is the main operation where precision is lost when dealing with large meshes. I thought it would be any operation involving the scale matrix, because that is where I store the sphere's large radius, but as I said above, I've altered the domain shader to do all those matrix multiplications and vector transformations with doubles.

One final note: in the picture I linked, I use functions I wrote called "mul_4x4d_4x4d" for the double matrix multiplication and "mul_4d_4x4d" for the vector transformation, because I don't know whether Microsoft's mul() intrinsic actually supports double precision, so I do it by hand just in case. Please correct me if I'm wrong, though.

EDIT: I just realized this: how do I even know that the GPU is actually doing all this in doubles? Does my card have to support it, and if it doesn't, does it just default to floats?
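From what I can tell, the card does have to advertise support for doubles in shaders, and in D3D11 you can query for it; something like this (untested sketch):

#include <d3d11.h>

// Query whether the device supports double-precision ops in shaders (D3D11).
// As far as I can tell, if DoublePrecisionFloatShaderOps is FALSE the device
// simply can't run double math in HLSL, rather than quietly falling back to float.
bool DeviceSupportsShaderDoubles(ID3D11Device* device)
{
    D3D11_FEATURE_DATA_DOUBLES support = {};
    HRESULT hr = device->CheckFeatureSupport(D3D11_FEATURE_DOUBLES,
                                             &support, sizeof(support));
    return SUCCEEDED(hr) && support.DoublePrecisionFloatShaderOps != FALSE;
}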

