GLSL , Calculate Normals in Vertex and Fragment Shaders

Started by
5 comments, last by mynameisnafe 11 years, 4 months ago
Hi all.

I've been posting quite a lot recently, so thanks for not losing patience with me.

Heres the quo:

I have, in C++, created a 1D array of vertices with a nested for loop, where x = x, z = z, and y = 0. This is my terrain grid.

In the Vertex shader, I use a heightmap to offset the y value.
In the Pixel shader, I use the same calculation on the same texture for height based colour.

Height based colour/texture is limited without a lot of blending, and quite a few textures.
I would instead prefer to implement slope based texturing.

This requires each fragment to have a normal, so that it may be tested in calculating the slope value - I understand the concepts of this.

Once this is working, Blinn-Phong lighting can be considered (as I will have normal info)

The problem here, is that I have no normals for my mesh.

My understanding is that I can calculate the normal of a vertex by looking at its neighbours to determine the rotation of the triangle, and therefore the normal of the triangle. It is probably worth mentioning - I am familiar with the concept of normals being averaged where a vert shares tris. I'd like to do this to my normals where relevant.

Please could somebody point me in the direction of a nice, KISS, tutorial or explanation of how this is acheived? I am using shader model #330.

I would prefer to do the calculations per fragment as opposed to per vertex so that implementing phong shading is a matter of just working it out. If I am getting confused here please inform me.

Thank you muchly in advance.

Nathan
Advertisement
This is a variant of bump mapping. See a solution for this at http://stackoverflow.com/questions/5281261/generating-a-normal-map-from-a-height-map.

Use google, and you'll find lots of good answers for these things.

If you calculate normals in the CPU, instead of the shader, the common way is to have one normal for every vertex, even if the three normals are the same for all vertices in a triangle. This extra cost is smaller if you use indexed drawing.
[size=2]Current project: Ephenation.
[size=2]Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/

If you calculate normals in the CPU, instead of the shader, the common way is to have one normal for every vertex, even if the three normals are the same for all vertices in a triangle. This extra cost is smaller if you use indexed drawing.


I am indeed using indexed drawing, however my terrain doesn't become terrain 'til it hit's the vertex shader, so that's where normals need to be calculated.

My tutor has said about sampling neighbouring pixels from the heightmap texture, calculating tangent and bitangent positions, and therefore calculating position of the triangle.. anybody know about this?

I get the theory, but I don't know what's meant by tangent and bitangent.. off to google I go..?
Hi again,

I've been googling around and I've implemented this, but it's not working.. any ideas?

I'm trying to implement some slope-based texturing, before lighting.

In the vertex shader, I generate the normal:


//
// Calculate normal of the vertex
//
float step = 1/1024; //where 1024 is texture width, i.e. heightmap width
//calculate the neigbour positions
vec4 prev_x = vec4(texCoord.x, 0.0, texCoord.y, 1.0);
vec4 prev_z = vec4(texCoord.x, 0.0, texCoord.y, 1.0);
vec4 next_x = vec4(texCoord.x, 0.0, texCoord.y, 1.0);
vec4 next_z = vec4(texCoord.x, 0.0, texCoord.y, 1.0);
prev_x.x -= step; prev_z.z -= step;
next_x.x += step; next_z.z += step;
//calculate neighbour heights / positions, with scaling
vec4 q;
q = texture2D(hmap_texture, prev_x.xz); prev_x.y = ( (q.x + q.y + q.z) / 3.0 ) * y_scale;
q = texture2D(hmap_texture, next_x.xz); next_x.y = ( (q.x + q.y + q.z) / 3.0 ) * y_scale;
q = texture2D(hmap_texture, prev_z.xz); prev_z.y = ( (q.x + q.y + q.z) / 3.0 ) * y_scale;
q = texture2D(hmap_texture, next_z.xz); next_z.y = ( (q.x + q.y + q.z) / 3.0 ) * y_scale;
//apply xz scaling
prev_x.xz *= xz_scale; next_x.xz *= xz_scale;
prev_z.xz *= xz_scale; next_z.xz *= xz_scale;
//
// we now have four neighbouring vertices, positions calculated. Now we need a normal.
vec3 tangent = next_x - prev_x;
vec3 bitangent = next_z - prev_z;
vec3 normal = normalize( cross(tangent, bitangent) );


In the fragment shader, I test the normal:



// The slope for this pixel is simply 1 - the input normal Y ( in Direct X this is as simple as float slope = 1 - input.normal.y )
float gradient = 1 - vertexNormal.x;

if(gradient <= 0.2) {
height_colour = sand_colour;
}

if(gradient < 0.7 && gradient > 0.2) {
height_colour = rock_colour;
}


if(gradient > 0.7) {
height_colour = grass_colour;
}


I saw this way of testing the normal here: http://www.rastertek.com/tertut14.html

However I am guessing which axis of the normal to test, in my code, as obv DirectX has its axis twisted.

Please help?

Also, if anyone knows how draw the normal, that would be sweet!

Thanks again

Nathan
You might consider doing this in a geometry shader instead. This is available in 330 and would enable you to calculate normals on the fly from your final positions that you're actually going to draw with, so it's also amenable to any hypothetical future LoD schemes you may consider implementing.

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.


Posted Today, 04:17 PM
You might consider doing this in a geometry shader instead. This is available in 330 and would enable you to calculate normals on the fly from your final positions that you're actually going to draw with, so it's also amenable to any hypothetical future LoD schemes you may consider implementing.


I also think this would be wise, from the sound of it, however I have no experience of geometry shaders on any API, and it kind of goes beyond the scope of my coursework - this is for uni. Nice idea though, for sure.

I think my issue is with the normal I generate.

vec3 normal = normalize( cross(tangent, bitangent) );

this must be transformed by a few matrices before I ship it off to the fragment shader: the inverse transpose of the modelMatrix/worldMarix, and the viewMatrix.
So now I need to get these matrices into the shader - I'll update as soon as I've implemented a UBO, and hence won't be passing in one pre-calulated ModelViewProjectionMatrix.

Would I calculate the normals for this terrain in world coords or eye coords, given the terrain model matrix is the ID matrix, considering lighting?

To aid my debugging, how does one draw a line of a vector (i.e.) the normal, in a fragment shader?

Thanks again
Sorry to bump, and second forget about drawing the normals.
I've figured I'm in world space as my model matrix for the terrain is the ID matrix = world matrix.

I'm pretty sure they're teaching us this for LoD techniques, but keeping it simple for us in terms of setting up shaders.

Heres my complete vertex shader, which uses heightmap coords to look up the y of a given vertex, before the given vertex is scaled. This process is done to the actual vertex we pass out, and to the neighbour vertices before they are used in the normal calculation.

The heightmap is 1024*1024 pixels, the grid is 1024*1024 verts.

Please help me just find the error. At some point in the future I'll write a geometry shader I promise, but for now, working on the vertex shader is all I've got.


#version 330
//
// This demo uses a texture to provide a basic heightmap, with scaling features.
//
// It also uses bump mapping techniques (?) to calculate the interpolated normal of a vertex
// - given its neighbouring vertices' positions in xyz space, to calculate a tangent and bitangent,
// from which a normalized cross can be deduced.
//
uniform mat4 worldMatrix;
uniform mat4 invTransposeWorldMatrix;
uniform mat4 viewProjMatrix;

uniform CameraMatricesBlock {
mat4 worldMx;
mat4 viewMx;
mat4 projectionMx;
} cam;
uniform sampler2D hmap_texture;
uniform float y_scale;
uniform float xz_scale;
//
// input vertex packet
//
layout (location=0) in vec4 vertexPos;
layout (location=1) in vec2 vertexTexCoord;
//
// output vertex packet
//
out vec2 texCoord;
out vec3 vertexNormal;
//-------------------------------------------------------------------
void ScaleVertex(inout vec4 vertex, in float _xz_scale, in float _y_scale){
vertex.x *= _xz_scale;
vertex.z *= _xz_scale;
vertex.y *= _y_scale;
}
//-------------------------------------------------------------------
void OffsetVertexXPosition(inout vec4 vertex, in float offsetAmount){
vertex.x += offsetAmount;
}
//-------------------------------------------------------------------
void OffsetVertexZPosition(inout vec4 vertex, in float offsetAmount){
vertex.z += offsetAmount;
}
//-------------------------------------------------------------------
void SetYFromHeightMap(inout vec4 vertex, in vec2 textureCoordinate){
vec4 pixel_colour = texture2D(hmap_texture, textureCoordinate);

float yOffset = ( ( pixel_colour.x + pixel_colour.y + pixel_colour.z ) / 3.0 );

vertex.y = yOffset;
}


//
// main
//
void main(void) {

// Yoink
texCoord = vertexTexCoord;

//
// Calculate vertex coordinates for this vertex and its neighbouring vertices
//
//
// Now we set up the X and Z components of our vertex, + the neighbour vertices
// At first, initialise them all to the postition of this vertex, so we can offset them easily.
float step = 1 / 1024; //where 1024 is texture width, i.e. heightmap width
vec4 v_Pos = vertexPos;
vec4 prev_vertex_x = vertexPos;
vec4 prev_vertex_z = vertexPos;
vec4 next_vertex_x = vertexPos;
vec4 next_vertex_z = vertexPos;
//next we offset our neighbours in their relative directions
OffsetVertexXPosition(prev_vertex_x, -(step) );
OffsetVertexXPosition(next_vertex_x, step );
OffsetVertexZPosition(prev_vertex_z, -(step) );
OffsetVertexZPosition(next_vertex_z, step );
//
// now we know the xz components of all our vertices, we can sample
// the heightmap texture at their texture coordinate to ascertain their Y component value.
//
//
// Our actual out vertex's y component
//
SetYFromHeightMap(v_Pos, texCoord);

// Neighbours on X
vec2 _tc = texCoord;
_tc.x = prev_vertex_x.x;
_tc.y = prev_vertex_x.z;
SetYFromHeightMap( prev_vertex_x, _tc );
_tc.x = next_vertex_x.x;
_tc.y = next_vertex_x.z;
SetYFromHeightMap( next_vertex_x, _tc );
// Neighbours on Z
_tc.x = prev_vertex_z.x;
_tc.y = prev_vertex_z.z;
SetYFromHeightMap( prev_vertex_z, _tc );
_tc.x = next_vertex_z.x;
_tc.y = next_vertex_z.z;
SetYFromHeightMap( next_vertex_z, _tc );
//
// Apply xz and y scaling to all vertices
//
// Our actual vertex
ScaleVertex(v_Pos, xz_scale, y_scale);
// Our neighbours on X
ScaleVertex(prev_vertex_x, xz_scale, y_scale);
ScaleVertex(next_vertex_x, xz_scale, y_scale);
// Our neighbours on Z
ScaleVertex(prev_vertex_z, xz_scale, y_scale);
ScaleVertex(next_vertex_z, xz_scale, y_scale);
//
// We now have four neighbouring vertices, positions calculated, and scaled.
// Now we need a normal, from the bitangent and tangent of these neighbour positions.
//
vec3 tangent;
tangent = next_vertex_x - prev_vertex_x;

//tangent.x = next_vertex_x.x - prev_vertex_x.x;
//tangent.y = next_vertex_x.y - prev_vertex_x.y;
//tangent.z = next_vertex_x.z - prev_vertex_x.z;
vec3 bitangent;
bitangent = next_vertex_z - prev_vertex_z;
//bitangent.x = next_vertex_x.x - prev_vertex_x.x;
//bitangent.y = next_vertex_x.y - prev_vertex_x.y;
//bitangent.z = next_vertex_x.z - prev_vertex_x.z;

vec3 normal;
normal = normalize( cross(tangent, bitangent) );
//normal.x = ( tangent.y * bitangent.z ) - ( tangent.z * bitangent.y );
//normal.y = ( tangent.z * bitangent.x ) - ( tangent.x * bitangent.z );
//normal.z = ( tangent.x * bitangent.y ) - ( tangent.y * bitangent.x );
//normal = normalize( normal );

//
//
// Output
//
// texCoord - the texture coordinate, i.e, the vertex position with no z component.
// vertexNormal - the vertex's normal
// glPosition - the vertex position

vertexNormal = invTransposeWorldMatrix * vec4( normal, 1.0 );
gl_Position = worldMatrix * viewProjMatrix * v_Pos; // set the position for the fragment shader
}

This topic is closed to new replies.

Advertisement