Vertex Textures and Per-Pixel Lighting

8 comments, last by Ivo Leitao
Hi! I'm working on a tiled terrain engine and I'm using vertex textures to displace the geometry. Since I'm using vertex textures I only need to send a position in the 0 to 1 range to the vertex shader, because I'm scaling and offsetting each tile in there. Everything is working fine, but now I want to add per-pixel lighting. I understand that I have two main options:

- Object Space Normal Mapping
- Tangent Space Normal Mapping

From what I have read, Tangent Space Normal Mapping is the way to go (correct me if I'm wrong). The problem now is that I'm not sending the terrain normals to the vertex shader, and I need them to implement Tangent Space Normal Mapping. Two solutions (more?):

- Calculate the normals in the vertex shader (slow?)
- Send an additional map with the normals (three textures then: the elevation texture, the normal map, and the terrain normal map)

Which one is better, or are there more solutions to this problem that I'm not aware of? Thanks in advance.

[Edited by - Ivo Leitao on June 14, 2007 10:09:08 AM]
As far as I know, you use tangent space normal mapping when your normals are defined in your object's local coordinates. If you have your normals in world space, just use a pre-calculated normal map and use the normals directly in the pixel shader.

Are you displacing your vertices in the vertex shader, or do you have a predefined heightmap and use that to create patches at load? I couldn't really understand that.

regards/thallish
I don't care if I'm known, I'd rather people know me
Quote: Original post by thallish
As far as I know, you use tangent space normal mapping when your normals are defined in your object's local coordinates. If you have your normals in world space, just use a pre-calculated normal map and use the normals directly in the pixel shader.

Are you displacing your vertices in the vertex shader, or do you have a predefined heightmap and use that to create patches at load? I couldn't really understand that.


I have a heightmap with the elevations, a normal map that I have built with the NVIDIA Photoshop plugin, and a diffuse map, and I'm displacing the geometry with the heightmap. Since I'm not passing the normals to the vertex shader, your suggestion is to build another map (the pre-calculated normal map that you mention) with the normals in world space?
Well, you still have to pass the face normals to the vertex shader, and just pass them straight through to the pixel shader. Then in the pixel shader you perturb the face normal with the normal you get from the normal map.

But I need to get this right: are you building your terrain on the fly in the vertex shader, or do you create the mesh at load time?

If you are creating the mesh by displacing the vertices each frame then you will have to calculate the normals in the vertex shader.

Otherwise you calculate the vertex normals at creation of the mesh and then they are available in the vertex shader, where you just pass them on.
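For the world-space case I mentioned earlier, the pixel-shader end of it could look roughly like this (just a sketch; the sampler and variable names are made up):

// Rough sketch (ps_2_0-level HLSL); sampler and variable names are made up.
sampler2D NormalMap;   // per-pixel world-space normals, packed into [0,1]
sampler2D DiffuseMap;
float3 LightDir;       // normalized, world space, pointing towards the light

float4 TerrainPS(float2 uv : TEXCOORD0) : COLOR
{
    // Unpack the stored normal from [0,1] back to [-1,1]
    float3 N = normalize(tex2D(NormalMap, uv).rgb * 2.0 - 1.0);
    float  diffuse = saturate(dot(N, LightDir));
    return tex2D(DiffuseMap, uv) * diffuse;
}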

regards/thallish
I don't care if I'm known, I'd rather people know me
Quote: Original post by thallish
Well, you still have to pass the face normals to the vertex shader, and just pass them straight through to the pixel shader. Then in the pixel shader you perturb the face normal with the normal you get from the normal map.

But I need to get this right: are you building your terrain on the fly in the vertex shader, or do you create the mesh at load time?

If you are creating the mesh by displacing the vertices each frame then you will have to calculate the normals in the vertex shader.

Otherwise you calculate the vertex normals at creation of the mesh and then they are available in the vertex shader, where you just pass them on.


No, I'm building the terrain on the fly. To say it another way: I'm not passing individual height values per vertex to the vertex shader. I'm only sending x and z values between 0 and 1. The y value (height) is calculated each frame in the vertex shader using tex2Dlod, and the x and z values are transformed with shader parameters (which I have called bias and scale). The main advantage is that the vertex buffer only contains one tile's worth of vertices. For example, for a tile of 5x5 I only send 25 vertices to the vertex shader.

So I have a very small vertex buffer that is not associated with specific height values, and what I would like to know is whether I can avoid sending the face normals, by calculating them in the vertex shader or by some other method...
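In HLSL terms the vertex shader is roughly this (a simplified sketch, not my exact code; the parameter names, and TerrainSize in particular, are just placeholders):

// Simplified sketch of the displacement (vs_3_0, since it uses vertex
// texture fetch). Parameter names approximate my real ones.
sampler2D Heightmap;
float4x4  WorldViewProj;
float2    Scale;         // tile size in world units
float2    Bias;          // tile offset in world units
float2    TerrainSize;   // world extent of the whole terrain (assumed to start at the origin)
float     HeightScale;

struct VS_OUT { float4 Pos : POSITION; float2 UV : TEXCOORD0; };

VS_OUT TerrainVS(float2 gridPos : POSITION)   // gridPos components are in [0,1]
{
    VS_OUT o;
    float2 worldXZ = gridPos * Scale + Bias;   // place this tile in the world
    float2 uv = worldXZ / TerrainSize;         // UV into the shared heightmap
    float  h  = tex2Dlod(Heightmap, float4(uv, 0, 0)).r * HeightScale;
    o.Pos = mul(float4(worldXZ.x, h, worldXZ.y, 1.0), WorldViewProj);
    o.UV  = uv;
    return o;
}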

[Edited by - Ivo Leitao on June 14, 2007 2:32:40 PM]
You have two possibilities:
1) Use a normal map in the pixel shader. Create this normal map with the same size as the heightmap, and use it as you usually would use vertex normals.

2) Compute the normals in the vertex shader. This is a bit more expensive, but is more mathematically correct. In a nutshell, you have to read the heightmap values of the vertices to the right of and below the current vertex, something like this:

float OneOverHeightmapSize = 1.0 / 512.0; // change this according to the heightmap size

// Height at the current vertex, plus its two neighbours: one texel
// below (n2) and one texel to the right (n3).
float elevation = tex2Dlod(heightmap, float4(coords, 0, 0)).r;
float n2 = tex2Dlod(heightmap, float4(coords.x, coords.y + OneOverHeightmapSize, 0, 0)).r;
float n3 = tex2Dlod(heightmap, float4(coords.x + OneOverHeightmapSize, coords.y, 0, 0)).r;

// 0.05 is the world-space spacing between two heightmap samples.
// n3 is the x-neighbour and n2 the z-neighbour, so pair them accordingly.
float3 tangent  = float3(0.05, n3 - elevation, 0.0);
float3 binormal = float3(0.0,  n2 - elevation, 0.05);
OUT.Normal = -normalize(cross(tangent, binormal));


This should do the trick.

For better results, you can combine the two techniques, blending vertex normals and the normal map directly in the pixel shader, using linear interpolation based on camera distance:
// CameraPos is... well, just the camera position :)
// IN.worldPos contains the vertex position in world space
// 230.0 is just an arbitrary value, based on my world scale system :)
float3 n1 = 2.0 * (tex2D(normal_map, IN.UV).rgb - 0.5);   // unpack the normal map
float3 V  = CameraPos - IN.worldPos;
float  Vl = saturate(length(V) / 230.0);
float3 N  = normalize(lerp(n1, IN.Normal, Vl));           // re-normalize after the lerp


Hope this helps!
Quote: Original post by b3rs3rk
You have two possibilities:
1) Use a normal map in the pixel shader. Create this normal map with the same size as the heightmap, and use it as you usually would use vertex normals.

2) Compute the normals in the vertex shader. This is a bit more expensive, but is more mathematically correct. In a nutshell, you have to read the heightmap values of the vertices to the right of and below the current vertex, something like this:

*** Source Snippet Removed ***

This should do the trick.

For better results, you can combine the two techniques, blending vertex normals and the normal map directly in the pixel shader, using linear interpolation based on camera distance:
*** Source Snippet Removed ***

Hope this helps!


Hmm, so you recommend normal calculation in the vertex shader. What is the trade-off in terms of performance?

Trade-offs? Well, computing normals in the vertex shader means that you have to do a total of 3 vertex texture fetches (1 for the elevation + 2 for the neighbouring points).

You know, vertex texture fetches are "expensive", but it all depends on the hardware you're using.
You could always write two code paths, using both techniques :)
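As a sketch of what the two paths could look like side by side in a D3DX effect file (all names are made up, and the tile scale/bias from your setup is left out for brevity):

// Sketch of the "two code paths" idea as one .fx with two techniques.
float4x4  WorldViewProj;
sampler2D Heightmap;
sampler2D NormalMap;               // baked world-space normals, packed into [0,1]
float3    LightDir;                // world space, pointing towards the light
float     OneOverHeightmapSize = 1.0 / 512.0;

struct VS_OUT
{
    float4 Pos    : POSITION;
    float2 UV     : TEXCOORD0;
    float3 Normal : TEXCOORD1;
};

// Path A: displace and compute normals in the vertex shader.
VS_OUT VS_ComputedNormals(float2 grid : POSITION)
{
    VS_OUT o;
    float h     = tex2Dlod(Heightmap, float4(grid, 0, 0)).r;
    float right = tex2Dlod(Heightmap, float4(grid.x + OneOverHeightmapSize, grid.y, 0, 0)).r;
    float below = tex2Dlod(Heightmap, float4(grid.x, grid.y + OneOverHeightmapSize, 0, 0)).r;
    o.Normal = -normalize(cross(float3(0.05, right - h, 0.0),
                                float3(0.0,  below - h, 0.05)));
    o.Pos = mul(float4(grid.x, h, grid.y, 1.0), WorldViewProj);
    o.UV  = grid;
    return o;
}

float4 PS_ComputedNormals(float2 uv : TEXCOORD0, float3 normal : TEXCOORD1) : COLOR
{
    return saturate(dot(normalize(normal), LightDir)).xxxx;
}

// Path B: displacement only; normals come from the baked map.
VS_OUT VS_BakedNormals(float2 grid : POSITION)
{
    VS_OUT o;
    float h = tex2Dlod(Heightmap, float4(grid, 0, 0)).r;
    o.Pos = mul(float4(grid.x, h, grid.y, 1.0), WorldViewProj);
    o.UV  = grid;
    o.Normal = float3(0, 1, 0);    // unused in this path
    return o;
}

float4 PS_BakedNormals(float2 uv : TEXCOORD0, float3 normal : TEXCOORD1) : COLOR
{
    float3 N = normalize(tex2D(NormalMap, uv).rgb * 2.0 - 1.0);
    return saturate(dot(N, LightDir)).xxxx;
}

technique ComputedNormals
{
    pass P0
    {
        VertexShader = compile vs_3_0 VS_ComputedNormals();
        PixelShader  = compile ps_3_0 PS_ComputedNormals();
    }
}

technique BakedNormals
{
    pass P0
    {
        VertexShader = compile vs_3_0 VS_BakedNormals();
        PixelShader  = compile ps_3_0 PS_BakedNormals();
    }
}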

Quote: Original post by b3rs3rk
Trade-offs? Well, computing normals in the vertex shader means that you have to do a total of 3 vertex texture fetches (1 for the elevation + 2 for the neighbouring points).

You know, vertex texture fetches are "expensive", but it all depends on the hardware you're using.
You could always write two code paths, using both techniques :)


OK, thank you for all the help. I think I will follow your advice and implement the two code paths.

One more question; this should be simple but I'm not getting it:
How do I build this map of vertex normals? I only need to iterate over the heightfield, calculating the weighted vertex normals and storing the normalized result in a texture, right? Also, these weighted vertex normals need to be calculated in world space, right?
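Something like this one-time pass that renders into the normal-map texture, maybe (a rough, untested sketch; CellSize and the other names are placeholders)?

// Rough sketch: full-screen pass that bakes world-space normals from the
// heightmap into a render target, using central differences.
sampler2D Heightmap;
float TexelSize;     // 1.0 / heightmap size
float CellSize;      // world-space distance between two heightmap samples
float HeightScale;

float4 BakeNormalsPS(float2 uv : TEXCOORD0) : COLOR
{
    // Heights of the four neighbours (left, right, down, up)
    float hl = tex2D(Heightmap, uv - float2(TexelSize, 0)).r * HeightScale;
    float hr = tex2D(Heightmap, uv + float2(TexelSize, 0)).r * HeightScale;
    float hd = tex2D(Heightmap, uv - float2(0, TexelSize)).r * HeightScale;
    float hu = tex2D(Heightmap, uv + float2(0, TexelSize)).r * HeightScale;

    // Normal of the heightfield: proportional to (-dh/dx, 1, -dh/dz)
    float3 n = normalize(float3(hl - hr, 2.0 * CellSize, hd - hu));

    // Pack [-1,1] into [0,1] for storage in the texture
    return float4(n * 0.5 + 0.5, 1.0);
}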

Thanks in advance

