
3D Finite difference normal calculation for sphere

Recommended Posts

Hi everyone,

I'm currently adapting a planar quadtree terrain system to handle spherical terrain (i.e. I want to render planets as opposed to just an endless terrain stretching in the X-Z plane).

At the moment, I use this algorithm to calculate the normal for a given point on the terrain from the height information (the heights are generated using simplex noise):

public Vector3 GetNormalFromFiniteOffset(float x, float z, float sampleOffset)
{
    // Sample the heightfield on either side of the point along X and Z.
    float hL = GetHeight(x - sampleOffset, z);
    float hR = GetHeight(x + sampleOffset, z);
    float hD = GetHeight(x, z - sampleOffset);
    float hU = GetHeight(x, z + sampleOffset);

    // Central-difference normal for a Y-up heightfield.
    Vector3 normal = new Vector3(hL - hR, 2, hD - hU);
    normal.Normalize();
    return normal;
}

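For reference, that formula comes from crossing the two central-difference tangent vectors of the heightfield. With the sample spacing written out, the middle component is 2 * sampleOffset rather than a constant 2, so a version whose slope scale doesn't depend on the offset would look something like this (just a sketch using the same GetHeight as above; the method name is only for illustration):

public Vector3 GetScaledNormalFromFiniteOffset(float x, float z, float sampleOffset)
{
    float hL = GetHeight(x - sampleOffset, z);
    float hR = GetHeight(x + sampleOffset, z);
    float hD = GetHeight(x, z - sampleOffset);
    float hU = GetHeight(x, z + sampleOffset);

    // The surface tangents are (2*s, hR - hL, 0) along X and (0, hU - hD, 2*s) along Z;
    // their cross product (taken in the upward-facing order) is proportional to
    // (hL - hR, 2*s, hD - hU), where s is the sample offset.
    Vector3 normal = new Vector3(hL - hR, 2 * sampleOffset, hD - hU);
    normal.Normalize();
    return normal;
}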
The above works fine for my planar quadtree, but of course it won't work for spherical terrain, because it assumes the Y direction is always up. I guess I need to move the calculation into the plane tangent to the sphere at the point I'm evaluating. I was wondering if anyone knows of a good or proven way to transform this calculation for any point on a sphere, or if there's a better way to calculate the normal for a point on a sphere that has been radially displaced by a noise-based height field.

I'm using XNA by the way. Thanks very much for looking!


A quick addition - I tried creating my own adaptation:

public Vector3 GetNormalFromFiniteOffset(Vector3 location, float sampleOffset)
{
    // The radial direction acts as the local "up" at this point on the sphere.
    Vector3 normalisedLocation = Vector3.Normalize(location);

    // Pick a helper axis that can't be (nearly) parallel to the radial direction.
    Vector3 arbitraryUnitVector = Math.Abs(normalisedLocation.Y) > 0.999f ? Vector3.UnitX : Vector3.UnitY;

    // Build an orthonormal basis for the tangent plane.
    Vector3 tangentVector1 = Vector3.Cross(arbitraryUnitVector, normalisedLocation);
    tangentVector1.Normalize();
    Vector3 tangentVector2 = Vector3.Cross(tangentVector1, normalisedLocation);
    tangentVector2.Normalize();

    // Central differences along the two tangent directions.
    float hL = GetHeight(location - tangentVector1 * sampleOffset);
    float hR = GetHeight(location + tangentVector1 * sampleOffset);
    float hD = GetHeight(location - tangentVector2 * sampleOffset);
    float hU = GetHeight(location + tangentVector2 * sampleOffset);

    // Same finite-difference normal as the planar version, with the radial direction
    // playing the role of the Y axis.
    Vector3 normal = 2 * normalisedLocation + (hL - hR) * tangentVector1 + (hD - hU) * tangentVector2;
    normal.Normalize();
    return normal;
}

I can't test it yet, but does this look like a decent approach, or are there obvious issues with how I'm doing it?

Thanks again!


I assume you'll get discontinuities because of the arbitrary tangents. It's impossible to move an orientation over the surface of a sphere without singularities.

If you use the normals at discrete offsets (e.g. per vertex or per face) the discontinuity might not matter, but if you do it per pixel I'd expect a visible seam.

In the latter case the solution is to account for the singularities when calculating the normal - similar to a cubemap, where at a corner you need to sample from 3 texels instead of 4, and the projected angles become 120 degrees instead of 90. (The orthogonality you assume in your code breaks down at the corners.)


Thanks for the reply - yes, I thought about what would happen if I used the same arbitrary vectors for every position, which is why I choose between the X and Y directions depending on the Y component of the position. I was thinking this would ensure that the tangents always form an orthonormal basis around the point on the sphere.

It will result in different tangents for different positions, but I was thinking that wouldn't matter as long as I take samples along two perpendicular directions in the tangent plane.


The tangents are orthogonal, but if you rotate them slightly you get slightly different results, because you sample different neighbouring positions. That's exactly what happens when the branch alternates between X and Y: you get a large discontinuity in the tangent directions at the branch point, and that can cause a visible seam.

The problem is unavoidable - you can't cover a sphere with a net of orthonormal isolines without introducing singularities, so you can't construct an orthonormal basis everywhere on the sphere without discontinuities in orientation. You could, however, sidestep the issue by ensuring the jumps in orientation are always exactly 90 degrees. (That might be more efficient than special cases for edges / corners, but the math won't be super easy.)
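As a rough illustration of keeping the branch predictable (this is not the full 90-degree trick, and the helper name is made up), you could pick the reference axis from the dominant component of the position, cubemap style, so the tangent basis is stable within each cube face and only jumps at the face edges:

// Sketch: choose the helper axis per cube face (dominant component of the normalised
// position). Inside a face the basis varies smoothly; it still jumps at the face edges,
// but the jumps happen in predictable places instead of at an arbitrary latitude.
public static void GetFaceAlignedTangents(Vector3 normalisedLocation,
                                          out Vector3 tangent1, out Vector3 tangent2)
{
    float ax = Math.Abs(normalisedLocation.X);
    float ay = Math.Abs(normalisedLocation.Y);
    float az = Math.Abs(normalisedLocation.Z);

    Vector3 referenceAxis;
    if (ax >= ay && ax >= az)
        referenceAxis = Vector3.UnitY;   // +/-X faces: Y can never be parallel to the position here
    else if (ay >= az)
        referenceAxis = Vector3.UnitZ;   // +/-Y faces
    else
        referenceAxis = Vector3.UnitX;   // +/-Z faces

    tangent1 = Vector3.Normalize(Vector3.Cross(referenceAxis, normalisedLocation));
    tangent2 = Vector3.Normalize(Vector3.Cross(tangent1, normalisedLocation));
}

You would then use tangent1 / tangent2 in place of the arbitrary basis in your adapted function. The seams move to the cube edges, where you'd still have to decide whether the error is acceptable.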

You would be surprised how much impact this problem has on topics like surface parametrization, remeshing etc.

But as I said, chances are that in practice you won't notice the seam - I'm just nitpicking. See how your solution works out for you...

