
Building an object space normal map


Ivo Leitao    172
How can I build an object space normal map from a height field? I simply don't understand how to do it, and I haven't found any tool that can. I have tried almost every tool available: Polybump, Melody, xNormal, the NVIDIA plugin, the GIMP plugin and many more. All of these programs generate only tangent space normal maps (at least they are bluish in look). Am I missing something here? Thanks for any help... this is driving me crazy :-(

darrenc182    393
I think an object space normal map stores the same kind of data as a tangent space normal map; I believe the one difference is the space the normals are expressed in, either object space or tangent space. That said, one tool you can use to create a normal map from a height map is an Adobe Photoshop plug-in from NVIDIA. It can be found on the NVIDIA website, although you may need to do some digging to find it.

darrenc182    393
Here is the normal map plug-in I was talking about.

http://developer.nvidia.com/object/photoshop_dds_plugins.html

The download link is at the bottom of the page.

MattWhite06    146
There's an excellent "shortcut" for this presented by Jason Shankel in Game Programming Gems 3.

To summarise, the normal for each pixel of the heightmap can be found by a simple convolution: the (unnormalised) normal is the vector [(h3 - h1), (h4 - h2), 2] (taken straight from the gem), where h1, h2, h3 and h4 are the height values of the neighbouring pixels, laid out like this:


(P being the pixel location in the heightmap)

     h2
     |
h3 - P - h1
     |
     h4


This vector can then be normalised to produce the required normal. The main advantages of this approach are that it can be run as a pre-process to produce a normal map for the heightmap, and that it is fast enough to compute normals on the fly should you require dynamic terrain (height values that change).

For pixels at the edges, the "missing" values can either be taken from the corresponding pixels on the opposite edge of the heightmap (thus tiling your terrain, see the wrap-around sketch after the code below), or be set to the same height as P for non-tiled terrain.

Pseudo-code for this looks something like the following (assuming your heightmap is an 8-bit greyscale image):


BYTE heightmap[WIDTH][HEIGHT];   // 8-bit height values, indexed [x][y]
Vector normalmap[WIDTH][HEIGHT]; // one normal per heightmap texel
float h1, h2, h3, h4;

for (int y = 0; y < HEIGHT; y++)
{
    for (int x = 0; x < WIDTH; x++)
    {
        // Sample the four neighbours, rescaled to 0..1. At the edges,
        // fall back to the centre pixel's height (non-tiled terrain).
        h1 = (x < WIDTH - 1)  ? heightmap[x + 1][y] / 255.0f : heightmap[x][y] / 255.0f;
        h3 = (x > 0)          ? heightmap[x - 1][y] / 255.0f : heightmap[x][y] / 255.0f;
        h2 = (y < HEIGHT - 1) ? heightmap[x][y + 1] / 255.0f : heightmap[x][y] / 255.0f;
        h4 = (y > 0)          ? heightmap[x][y - 1] / 255.0f : heightmap[x][y] / 255.0f;

        // [(h3 - h1), (h4 - h2), 2], normalised (Z-up).
        normalmap[x][y] = Normalize(Vector(h3 - h1, h4 - h2, 2.0f));
    }
}
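For the tiled case mentioned above, the edge clamping can be swapped for wrap-around indexing so the normals tile along with the terrain. A minimal sketch (the wrap helper is my naming, not from the gem):

// Wrap an index into [0, size) so edge pixels sample the opposite edge.
inline int wrap(int i, int size)
{
    return (i % size + size) % size; // handles i == -1 and i == size
}

// The neighbour samples in the loop then become:
h1 = heightmap[wrap(x + 1, WIDTH)][y] / 255.0f;
h3 = heightmap[wrap(x - 1, WIDTH)][y] / 255.0f;
h2 = heightmap[x][wrap(y + 1, HEIGHT)] / 255.0f;
h4 = heightmap[x][wrap(y - 1, HEIGHT)] / 255.0f;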




Hope that helps!

MattWhite06    146
Oh, just to note too that [(h3 - h1), (h4 - h2), 2] works if your terrain's "up" is the positive Z axis. For an up direction along the positive Y axis, use [(h3 - h1), 2, (h4 - h2)] instead.
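In the pseudo-code above that just means swapping components when the normal is built, e.g. (same assumed Vector/Normalize helpers):

// Y-up variant: the constant 2 moves to the Y (up) component.
normalmap[x][y] = Normalize(Vector(h3 - h1, 2.0f, h4 - h2));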

Ivo Leitao    172
Quote:
Original post by MattWhite06
Oh, just to note too that [(h3 - h1), (h4 - h2), 2] works if your terrain's "up" is the positive Z axis. For an up direction along the positive Y axis, use [(h3 - h1), 2, (h4 - h2)] instead.

So, I just need to store each normal as a texture, converting each element to the range 0 to 255, right?
Another question is how to use it in my vertex and pixel shaders. In my terrain (this is for a terrain engine) I'm not sending the vertex normal to the shader. As I have one light positioned in world space, I know that with a tangent space normal map I would need the TBN matrix to transform the light into tangent space for the lighting calculations.
As this is an object space normal map, my doubt is whether I need to transform the light vector at all, and if so, how to do it.

By the way, thanks a lot for all the answers.

MattWhite06    146
Yeah. To convert the normal (or in fact ANY normalised vector) to a 0-255 RGB format, you can simply add 1.0 to each element (XYZ) then divide by two. Because a normalised vector's elements will always be between -1 and 1, this operation rescales them to between 0 and 1. You can then multiply by 255, typecast to BYTE and store them as 3 BYTEs, i.e. an RGB format (a normal map texture). To convert from RGB back to a vector, perform the reverse operation (multiply by 2 then subtract 1.0).
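As a quick illustration, here is a minimal sketch of that encode/decode on the CPU side (the function names are mine, and I'm assuming the same BYTE and Vector types as the earlier snippet):

// Encode a normalised vector into an RGB triple (one BYTE per channel).
void EncodeNormal(const Vector& n, BYTE rgb[3])
{
    rgb[0] = (BYTE)((n.x + 1.0f) * 0.5f * 255.0f);
    rgb[1] = (BYTE)((n.y + 1.0f) * 0.5f * 255.0f);
    rgb[2] = (BYTE)((n.z + 1.0f) * 0.5f * 255.0f);
}

// Decode an RGB triple back into a (roughly) normalised vector.
Vector DecodeNormal(const BYTE rgb[3])
{
    return Vector(rgb[0] / 255.0f * 2.0f - 1.0f,
                  rgb[1] / 255.0f * 2.0f - 1.0f,
                  rgb[2] / 255.0f * 2.0f - 1.0f);
}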

This normal map can then be used by your shaders in various ways. Either pass the normal directly to the vertex shader through the vertex declaration, or, if you create a normal map texture, pass it to the pixel shader via texture coordinates. The way I would do it is to compute the light - vertexPos vector per vertex in the vertex shader, normalise it, and encode it (as mentioned previously) into one set of the vertex's texture coords (or the COLOR property if it's free). In a second set of texture coords, store the vertex's position (X and Z) divided by the terrain's actual size; this gives you two values between 0 and 1 which select the texel to sample from the normal map for that vertex. In the pixel shader, decode the toLight vector and the normal sampled from the normal map using this second set of texture coords. You can then dot product these two vectors for your lighting etc.

I'll try some sample code to explain this...

Vertex Shader

float3 lightPos;    // light position (same space as VertexPos)
float2 terrainSize; // terrain width and depth

float3 toLight = normalize(lightPos - VertexPos);

// Encode the -1..1 vector into 0..1 so it survives interpolation.
toLight += float3(1.0, 1.0, 1.0);
toLight *= 0.5;
Output.Tex0 = toLight;

// Normal map lookup coords: XZ position rescaled to 0..1.
Output.Tex1.x = VertexPos.x / terrainSize.x;
Output.Tex1.y = VertexPos.z / terrainSize.y; // .y stores the depth



Pixel Shader

sampler2D normalMap;

// Decode the interpolated toLight vector back to -1..1.
float3 toLight = Input.Tex0;
toLight *= 2.0;
toLight -= float3(1.0, 1.0, 1.0);

// Sample the object space normal and decode it the same way.
float3 normal = tex2D(normalMap, Input.Tex1).rgb;
normal *= 2.0;
normal -= float3(1.0, 1.0, 1.0);

// Renormalise; linear interpolation shortens the vectors.
normal = normalize(normal);
toLight = normalize(toLight);

Output.color = dot(normal, toLight); // or a much better lighting calc :)



There are many variations and optimisations of this, but I hope it gets the gist of the idea across.

And yes, because you're storing the normals in object space, you don't need to worry about a TBN matrix. Tangent/texture space normals are useful for objects which rotate, as the orientation of the faces can change. Because we don't usually rotate terrain, object space is sufficient, and faster, for storing the normals, as it skips the TBN matrix part completely.

Anyways, hope this makes sense and helps.

Ivo Leitao    172
Quote:
Original post by MattWhite06
[full reply quoted above]

Cannot thank you enough, this was exactly what I was looking for. Many thanks!

