Terrain mapping - more "earthquake" jitter the closer you get to the ground

35 comments, last by Green_Baron 4 years, 7 months ago

Ok, I updated my code by switching to all 64-bit vertices, etc., converting to 32-bit floats for the vertex/rendering buffer with a 32-bit model/view/projection matrix. The distortion is now gone, but vertices are still shaky. LOD determination is now stable, and rendering is faster and smoother than before. I believe there are still issues on the GPU side due to float errors in shader processing (a limitation of 32-bit floats). A few hundred meters above the ground everything looks stable, but landing on the ground (a few meters above it) becomes shaky - an "earthquake effect".


Do some basic divide and conquer:

  • Change clipping so you only see close terrain; don't worry if you can't see distant terrain. You are just trying to confirm it is not related to depth precision.
  • Don't generate the colour from a texture; just set it based on the texture coordinates or something. This is to confirm it is not a sampling precision issue.
  • Render in wireframe so you can see if vertices are moving or if it's the texture sample that is moving around.
  • Modify your camera code so it can only pan and not move (position it close to the surface). This is just to make sure the camera physics isn't the problem.
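For the second bullet, a minimal debug fragment shader could colour each fragment directly from its texture coordinates (a sketch - the `texCoord` input matches the vertex shader posted later in this thread, the rest is illustrative):

```glsl
#version 420

in vec2 texCoord;
out vec4 fragColor;

void main()
{
    // Visualize the texture coordinates directly; any swimming you see
    // here comes from vertex/interpolation precision, not from sampling.
    fragColor = vec4(texCoord, 0.0, 1.0);
}
```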

 

I wrote this 10 years ago, so it's going to be a bit primitive, but I talk about some issues you may face:

http://www.malleegum.com/timspages/Misc/Mega.html

 

Ok, I will work on the clipping test tomorrow. My camera moves and pans very smoothly using my keyboard as the movement controls. Here are my video clips to watch.

Update: I decided to test clipping. I set the far plane to 1.0 and saw nothing as a result. Good. I commented a line out in the fragment shader code and saw nothing as a result. Should I set a simple solid color to use as a test?

https://youtu.be/-GHSpStO5Pc

https://youtu.be/pAnQTJtRrC8

I still don't get how you generate the vertex values. I don't mean how they are stored, but what they are before applying the matrix operations. The videos imply you are using a grid of different-sized chunks? Do you apply a world-space operation to make the values near the camera small, with a better choice of units like 10 meters?

You can definitely render a world with a 6500 km radius at 1 meter triangle size using 32-bit floating point, but you have to have close terrain fit the best range for floating-point precision.

It doesn't matter if distant terrain is at 6000.0000 km or 6000.0001 km if it's 6000 km from the camera.

From https://www.h-schmidt.net/FloatConverter/IEEE754.html

6500.0000 stores as 6500.0000

6500.0001 stores as 6500.0000

6500.0006 stores as 6500.00048828125

41 minutes ago, TeaTreeTim said:

You can definitely render a world with 6500km radius at 1 meter triangle size using 32 bit floating points but you have to have close terrain fit the best range for floating point precision.

I realize you make awesome stuff, but how is this possible? Positions 1 m apart at a radius of 6,500,000 m have the same representation in a 32-bit float with its ~7 decimal digits of precision. Jittering can originate from multiplying these huge values by the small rotation and translation values in the transformation matrices. Rendering relative to eye will only help if the original representation of the positions is precise enough to hold these values with some slack.

The LOD in the videos suffers from popping and cracks, even outside of LOD level transitions. From the two basic shaders I also can't say whether the view matrix has been stripped of its translation and the vertex positions corrected by the camera position to simulate something like relative-to-eye (which would probably bring some calmness to the "dynamic" display, besides the necessary adjustment of the depth planes). But I think this is not the case, and even with this technique 10 or even 100 m is the best one can get with single precision. The video is jittering at km scale, and the vertices seem a km apart, not meters, judging from the textures. And yet it is step dancing.

As you say, I too think it would help to know what exactly is going on. Yesterday I found an old thread in here about a similar problem, but I haven't saved the link ...

OK, I figured out why... Yes, I assigned tiles at 6371 km from the origin. I tried normalized coordinates from the planet origin and that did not resolve the problem. I tested both on the IEEE 754 float converter linked above, and both failed due to float errors. It is a float-related problem. I think the problem is applying world space relative to the planet origin, which runs straight into float errors.

In Scene::render


prm.mproj = glm::perspective(glm::radians(OFS_DEFAULT_FOV), float(gl.getWidth()) / float(gl.getHeight()), DIST_NEAR, DIST_FAR);
prm.mview = glm::transpose(glm::toMat4(prm.crot));

In TerrainManager::render


prm.model = glm::translate(glm::transpose(prm.obj.orot), prm.obj.cpos);
prm.mvp = prm.mproj * prm.mview * prm.model;

I now see the problem above: it is using a local world space relative to the planet center (origin). As a possible solution, I have to split the model (world-space) matrix from the view and projection matrices, and apply world space relative to each tile's center (origin) when rendering. I will try that and see what happens...

(planet position + tile position) - camera position.

See?

I experimented with this a few months ago (there is an example in my blog). You can use the planet's core as the origin; this is even advisable, as other calculations will be greatly simplified.

One solution is called rendering relative to eye (RTE): use double precision on the CPU for the world coordinates of objects and the camera, convert each double to two floats, a high and a low part, and pass them in as two vertex attributes to the shader. The camera position is likewise converted and passed in. In the vertex shader, subtract the camera coordinates from the world coordinates, high and low parts separately (a few additions; I'll let you go research DSFUN90, and if you find nothing I can help out with a little code). As the view matrix, use the regular view matrix but stripped of its translation part (use the upper-left 3*3 matrix in the case of glm).

So you can render RTE without much ado and have almost full double precision (a little loss because of the conversion) in the world coordinates. You'll still be limited by single precision within your field of view, and the depth problem is not solved by this.

(Source: Cozzi/Ring: 3D Engine Design for Virtual Globes)

I am very interested in other approaches!

Well, I just split the model matrix from the view and projection matrices and ran it. Good news! The dynamic shakiness is gone!!! Now static errors occur. I will now work on the per-tile matrix. Yes, I have the book "3D Engine Design for Virtual Globes".

 


#version 420

// vertex buffer objects
layout (location=0) in vec3 vPosition;
layout (location=1) in vec3 vNormal;
//layout (location=2) in vec3 vColor;
layout (location=2) in vec2 vTexCoord;

uniform mat4 gWorld;
uniform mat4 gViewProj;

out vec4 myColor;
out vec2 texCoord;

void main()
{
    gl_Position = gViewProj * gWorld * vec4(vPosition, 1.0);
    myColor = vec4(0.7, 0.7, 0.7, 1.0); // vec4(vColor, 1.0);
    texCoord = vTexCoord;
}

prm.model = glm::translate(glm::transpose(prm.obj.orot), prm.obj.cpos);
prm.mvp = prm.mproj * prm.mview;

GLint mwLoc = glGetUniformLocation(pgm->getID(), "gWorld");
glUniformMatrix4fv(mwLoc, 1, GL_FALSE, glm::value_ptr(prm.model));
GLint mvpLoc = glGetUniformLocation(pgm->getID(), "gViewProj");
glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, glm::value_ptr(prm.mvp));

 

I switched all transform matrices to 64-bit in my code (CPU side) and convert to 32-bit matrices for rendering. I also implemented a per-tile world matrix, built before rendering each tile.


void TerrainTile::setWorldMatrix(renderParameter &prm)
{
	int    nlat = 1 << lod;
	int    nlng = 2 << lod;
	double lat = PI * double(ilat) / double(nlat);
	double lng = PI*2 * (double(ilng) / double(nlng)) - PI;
	
	double dx = /* prm.obj.orad * */ sin(lat) * cos(lng);
	double dy = /* prm.obj.orad * */ cos(lat);
	double dz = /* prm.obj.orad * */ sin(lat) * -sin(lng);

	// Determine offsets from object center for tile center
	prm.dtWorld = prm.obj.orot;
	prm.dtWorld[3][0] = dx*prm.obj.orot[0][0] + dy*prm.obj.orot[0][1] + dz*prm.obj.orot[0][2] + prm.obj.cpos.x;
	prm.dtWorld[3][1] = dx*prm.obj.orot[1][0] + dy*prm.obj.orot[1][1] + dz*prm.obj.orot[1][2] + prm.obj.cpos.y;
	prm.dtWorld[3][2] = dx*prm.obj.orot[2][0] + dy*prm.obj.orot[2][1] + dz*prm.obj.orot[2][2] + prm.obj.cpos.z;
}

I tried dtWorld with the radius included and ran it. The planet exploded in all directions! I commented 'prm.obj.orad' out and it went back to normal. I brought the camera a few centimeters above the ground. Everything is very stable (no shaking), but 32-bit float errors are clearly visible. The terrain is not flat: when I move the camera around, the terrain jumps up and down in blocky movements. I am still figuring out why the terrain is not flat but looks like a saw-tooth wave, while the camera movement and orientation are very smooth. I tried a per-tile matrix (tile center as origin) for higher accuracy, but it did not work. I am looking for another way to use the tile center as the origin of the world matrix.

On 9/9/2019 at 11:41 AM, Green_Baron said:

See?

I experimented with this a few months ago (there is an example in my blog). You can use the planet's core as the origin; this is even advisable, as other calculations will be greatly simplified.

Where is the blog that you mention? I have the book "3D Engine Design for Virtual Globes".

This topic is closed to new replies.
