recalculating normals in geometry shader

Started by sgsrules. 7 comments, last by idinev 14 years, 5 months ago
I've been messing around with displacement maps and other types of vertex movement in the vertex shader. The problem I keep running into is that once the vertices are moved, the normals are no longer correct. So I figured it wouldn't be too difficult to recalculate the normals in the geometry shader, since you have access to all the vertices that make up the face. Here's the geometry shader I created to do this. It acts on GL_TRIANGLES as input, and gl_Position isn't multiplied by gl_ModelViewProjectionMatrix until it's output by the geometry shader.

#version 120
#extension GL_EXT_geometry_shader4 : enable

varying vec3 eyeSpaceNormal; //output normal for frag shader


void main( void )
{
	// one normal for the whole face, from two edge vectors
	eyeSpaceNormal = gl_NormalMatrix * normalize(cross(gl_PositionIn[1].xyz - gl_PositionIn[0].xyz, gl_PositionIn[2].xyz - gl_PositionIn[0].xyz));

	for (int i = 0; i < gl_VerticesIn; i++)
	{
		gl_Position = gl_ModelViewProjectionMatrix * gl_PositionIn[i]; // multiply position by MVP matrix
		EmitVertex();
	}
	EndPrimitive();
}



This works fine, but it only calculates the normal once per face, so you get a faceted look. I thought of two possible solutions:

a) Use TRIANGLES_ADJACENCY as the input for the geometry shader, so you have access to the adjacent vertices. You could then calculate the normals of the adjacent faces and average them together to get smooth normals. I haven't been able to do this; beyond the OpenGL specs there is almost no info on geometry shaders available on the web. It also seems like a lot of work, especially since my models use tangent and bitangent information, which I'd also need to recalculate. (A sketch of this approach follows at the end of this post.)

b) Rotate the original normals (which are already smoothed) using the new face normal that was calculated, and rotate the tangent and bitangent vectors with the same matrix. But honestly, I studied graphic design, so my math skills are not up to par, and I'm not even sure this would work. Here's what I've come up with so far, but I'm getting weird results:

#version 120
#extension GL_EXT_geometry_shader4 : enable

varying in vec3 normalIn[]; // original smoothed normals from the vertex shader
varying vec3 eyeSpaceNormal; // fixed normals for the frag shader

void main( void )
{
	vec3 p1 = (gl_PositionIn[1] - gl_PositionIn[0]).xyz;
	vec3 p2 = (gl_PositionIn[2] - gl_PositionIn[0]).xyz;
	vec3 X = normalize(p1);
	vec3 Y = normalize(cross(p1, p2));
	vec3 Z = cross(X, Y);
	mat3 rot = gl_NormalMatrix * mat3(X, Y, Z); // rotation matrix built from the face's basis

	for (int i = 0; i < gl_VerticesIn; i++)
	{
		gl_Position = gl_ModelViewProjectionMatrix * gl_PositionIn[i];
		eyeSpaceNormal = normalIn[i] * rot; // rotate the original per-vertex normal
		EmitVertex();
	}
	EndPrimitive();
}



Part of the problem seems to be that every other polygon's normal is flipped, probably because of the winding. I also think that multiplying the rot matrix by gl_NormalMatrix might not be working, but I really have no clue at this point. Any help would be awesome. [Edited by - sgsrules on November 9, 2009 5:30:34 PM]
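For option (a), a rough, untested sketch of what the adjacency version could look like. It assumes the mesh is drawn with GL_TRIANGLES_ADJACENCY_EXT indices, and it can only see the three edge-adjacent faces, so triangles that share a vertex but no edge are still missed:

#version 120
#extension GL_EXT_geometry_shader4 : enable

varying vec3 eyeSpaceNormal;

// Unnormalized face normal, so larger faces get more weight in the average.
vec3 faceNormal(vec3 a, vec3 b, vec3 c)
{
	return cross(b - a, c - a);
}

void main( void )
{
	// With GL_TRIANGLES_ADJACENCY input there are six vertices:
	// 0, 2, 4 form the triangle itself; 1, 3, 5 are the vertices of
	// the neighbouring triangles across each edge.
	vec3 p0 = gl_PositionIn[0].xyz;
	vec3 p1 = gl_PositionIn[1].xyz;
	vec3 p2 = gl_PositionIn[2].xyz;
	vec3 p3 = gl_PositionIn[3].xyz;
	vec3 p4 = gl_PositionIn[4].xyz;
	vec3 p5 = gl_PositionIn[5].xyz;

	vec3 center = faceNormal(p0, p2, p4); // this face
	vec3 adjA   = faceNormal(p0, p1, p2); // neighbour across edge 0-2
	vec3 adjB   = faceNormal(p2, p3, p4); // neighbour across edge 2-4
	vec3 adjC   = faceNormal(p4, p5, p0); // neighbour across edge 4-0

	// The adjacency ordering traverses the neighbours opposite to
	// their own winding, so flip any normal that disagrees with the
	// center face.
	if (dot(adjA, center) < 0.0) adjA = -adjA;
	if (dot(adjB, center) < 0.0) adjB = -adjB;
	if (dot(adjC, center) < 0.0) adjC = -adjC;

	// Average this face with the two neighbours touching each corner.
	vec3 nrm[3];
	nrm[0] = normalize(center + adjA + adjC); // vertex 0
	nrm[1] = normalize(center + adjA + adjB); // vertex 2
	nrm[2] = normalize(center + adjB + adjC); // vertex 4

	for (int i = 0; i < 3; i++)
	{
		eyeSpaceNormal = gl_NormalMatrix * nrm[i];
		gl_Position = gl_ModelViewProjectionMatrix * gl_PositionIn[i * 2];
		EmitVertex();
	}
	EndPrimitive();
}

The dot-product checks also guard against the inconsistent-winding flips mentioned above.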
Your first solution is okay, although you're computing one normal per primitive; you should compute one normal for every vertex, so compute your normal inside the for loop, not outside it.

I'm not sure if you can write it like this (also, with this solution you need to make sure that your model contains JUST triangles and no other primitives), as I still don't have much experience with geometry shaders (but I'm learning them :P).
#version 120
#extension GL_EXT_geometry_shader4 : enable

varying vec3 eyeSpaceNormal;

void main( void )
{
	eyeSpaceNormal = gl_NormalMatrix * normalize(cross(gl_PositionIn[1].xyz - gl_PositionIn[0].xyz, gl_PositionIn[2].xyz - gl_PositionIn[0].xyz));
	gl_Position = gl_ModelViewProjectionMatrix * gl_PositionIn[0];
	EmitVertex();

	eyeSpaceNormal = gl_NormalMatrix * normalize(cross(gl_PositionIn[2].xyz - gl_PositionIn[1].xyz, gl_PositionIn[0].xyz - gl_PositionIn[1].xyz));
	gl_Position = gl_ModelViewProjectionMatrix * gl_PositionIn[1];
	EmitVertex();

	eyeSpaceNormal = gl_NormalMatrix * normalize(cross(gl_PositionIn[0].xyz - gl_PositionIn[2].xyz, gl_PositionIn[1].xyz - gl_PositionIn[2].xyz));
	gl_Position = gl_ModelViewProjectionMatrix * gl_PositionIn[2];
	EmitVertex();

	EndPrimitive();
}

As I'm not on a PC that can debug geometry shaders, I can't test whether this really emits good vertices and distributes eyeSpaceNormal per vertex rather than per primitive. This MIGHT work; I can't be sure, but just give it a try.

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

It's pointless to compute the normals inside the loop; you still get the same normal, since you're using the same three points of the face. That's why I did it once outside the loop, to speed things up.
You could sample the displacement map around the current vertex, to estimate the normal from that in the vertex shader (a sketch follows at the end of this post).
Using the adjacency-information would probably provide a reasonable approximation as you say (you get four surrounding vertices for each vertex), though there's no guarantee there aren't additional triangles sharing a vertex, apart from those you know about from the adjacency input.
Depending on how expensive you're willing to make the shader, you could combine the two methods, using adjacent vertices to estimate what area of the displacement map should be sampled.

(I'm not sure what GLSL version is necessary to do that, but I assume it should be available in the geometry shader).
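For illustration, a minimal, untested sketch of that vertex-shader approach, assuming a height-style displacement map and precomputed tangent attributes; displacementMap, heightScale, texelSize, tangent and bitangent are all hypothetical names:

#version 120

uniform sampler2D displacementMap; // hypothetical: height map driving the displacement
uniform float heightScale;         // hypothetical: displacement strength
uniform vec2 texelSize;            // hypothetical: 1.0 / texture resolution

attribute vec3 tangent;            // hypothetical: precomputed smooth tangent
attribute vec3 bitangent;          // hypothetical: precomputed smooth bitangent

varying vec3 eyeSpaceNormal;

void main( void )
{
	vec2 uv = gl_MultiTexCoord0.xy;

	// Displace along the original smooth normal.
	float h = texture2DLod(displacementMap, uv, 0.0).r;
	vec4 displaced = gl_Vertex + vec4(gl_Normal * h * heightScale, 0.0);

	// Central differences on the height map give the local slope.
	float hl = texture2DLod(displacementMap, uv - vec2(texelSize.x, 0.0), 0.0).r;
	float hr = texture2DLod(displacementMap, uv + vec2(texelSize.x, 0.0), 0.0).r;
	float hd = texture2DLod(displacementMap, uv - vec2(0.0, texelSize.y), 0.0).r;
	float hu = texture2DLod(displacementMap, uv + vec2(0.0, texelSize.y), 0.0).r;

	// Perturb the normal in tangent space; the z term controls how
	// strongly the slope tilts the normal and would need tuning
	// against the actual world-space texel density.
	vec3 tsNormal = normalize(vec3(hl - hr, hd - hu, 2.0 * texelSize.x / heightScale));

	// Back to object space through the precomputed TBN, then to eye
	// space for the fragment shader.
	mat3 tbn = mat3(normalize(tangent), normalize(bitangent), normalize(gl_Normal));
	eyeSpaceNormal = gl_NormalMatrix * (tbn * tsNormal);

	gl_Position = gl_ModelViewProjectionMatrix * displaced;
}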
I'm sure the adjacency information would work, but I'm trying to keep the computations down. Calculating all the extra normals and averaging them, and doing the same thing for the bitangents and tangents, would be pretty costly. Plus, I still haven't been able to successfully compile anything using triangles with adjacency as an input.

I guess what I mostly want to be able to do is rotate the existing TBN (tangent, bitangent, normal) vectors using some sort of matrix, if that's possible. I've already got all the smoothed TBN data precalculated on the CPU. It'd be nice if I could adjust it with a rotation matrix once per polygon.
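For what it's worth, this is possible in principle: build the rotation that carries the face's old normal onto its new one (Rodrigues' formula) and apply it to the whole frame. A rough, untested sketch; origPosition, normalIn and tangentIn are hypothetical varyings passed through from the vertex shader, and the bitangent can be rebuilt as cross(normal, tangent). Since the rotation is computed per face, adjacent faces still disagree slightly, so some faceting can remain:

#version 120
#extension GL_EXT_geometry_shader4 : enable

varying in vec3 origPosition[]; // object-space position BEFORE displacement
varying in vec3 normalIn[];     // smooth per-vertex normal
varying in vec3 tangentIn[];    // smooth per-vertex tangent

varying vec3 eyeSpaceNormal;
varying vec3 eyeSpaceTangent;

// Rotation matrix taking unit vector 'from' onto unit vector 'to'
// (Rodrigues' formula). Degenerates when from == -to.
mat3 rotationBetween(vec3 from, vec3 to)
{
	vec3 v = cross(from, to);
	float c = dot(from, to);
	float k = 1.0 / (1.0 + c);
	return mat3(v.x * v.x * k + c,   v.y * v.x * k + v.z, v.z * v.x * k - v.y,
	            v.x * v.y * k - v.z, v.y * v.y * k + c,   v.z * v.y * k + v.x,
	            v.x * v.z * k + v.y, v.y * v.z * k - v.x, v.z * v.z * k + c);
}

void main( void )
{
	// Face normals before and after displacement.
	vec3 oldN = normalize(cross(origPosition[1] - origPosition[0],
	                            origPosition[2] - origPosition[0]));
	vec3 newN = normalize(cross(gl_PositionIn[1].xyz - gl_PositionIn[0].xyz,
	                            gl_PositionIn[2].xyz - gl_PositionIn[0].xyz));

	// One rotation per polygon, applied to the whole smooth frame.
	mat3 rot = rotationBetween(oldN, newN);

	for (int i = 0; i < gl_VerticesIn; i++)
	{
		eyeSpaceNormal  = gl_NormalMatrix * (rot * normalIn[i]);
		eyeSpaceTangent = gl_NormalMatrix * (rot * tangentIn[i]);
		gl_Position = gl_ModelViewProjectionMatrix * gl_PositionIn[i];
		EmitVertex();
	}
	EndPrimitive();
}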
Exactly why do you need this? Can't you combine the normal and displacement maps so that the changed normal is already present in the normal map?
With all the displacement mapping I've seen, the displacement is calculated from the normal map, to get the actual geometry to match the normal, which is already faking the deformation caused by the displacement map.
I might be misunderstanding what you are trying to do. =)
I'm not just using this for displacement maps but also for different types of object warps and deformations. I also use the TBN vectors for other effects, not just normal mapping.
With an arbitrary deformation of vertices, where each vertex can be connected to an unknown number of other vertices, there's not really any easy way to do this in a shader. If you deform by a mathematical function, then you can use that function's derivative to re-calculate the normals (a sketch follows below). Google gave me, for example, http://http.developer.nvidia.com/GPUGems/gpugems_ch42.html and http://www.ozone3d.net/tutorials/mesh_deformer.php. If you instead deform by a displacement map, then sampling it to approximate a local function would work the same way.
If your deformations allow more or less random displacements of different vertices, then you probably can't use a matrix, since the matrix would be different for each vertex, and calculating that matrix would require the same work as re-calculating the normals (adjacent surfaces etc.).
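To illustrate the function-derivative idea (in the spirit of the mesh_deformer tutorial linked above), a minimal sketch of a sine-wave deformer acting on a height-field-like mesh lying in the xz plane; amplitude, frequency and time are hypothetical uniforms:

#version 120

uniform float amplitude; // hypothetical deformer parameters
uniform float frequency;
uniform float time;

varying vec3 eyeSpaceNormal;

void main( void )
{
	vec4 p = gl_Vertex;

	// Deform: ripple along x, displacing y.
	float phase = frequency * p.x + time;
	p.y += amplitude * sin(phase);

	// The derivative of the displacement gives the surface slope,
	// from which the new normal follows directly (here for a mesh
	// lying in the xz plane).
	float slope = amplitude * frequency * cos(phase);
	vec3 newNormal = normalize(vec3(-slope, 1.0, 0.0));

	eyeSpaceNormal = gl_NormalMatrix * newNormal;
	gl_Position = gl_ModelViewProjectionMatrix * p;
}

For a general mesh the same idea applies, but the corrected normal comes from the Jacobian of the deformation rather than a single slope.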
Do it in three passes.
First, render to an RGBA32F FBO with one target. Here you calculate the vertex displacements (you can use transform feedback instead). Either way, cast the resulting data to a DynVBO1 (data: struct { vec4 position; };).
Second pass: render into another RGBA32F target that has at least NumVerts pixels. Clear it to black and switch to additive blending. A geometry shader takes triangles as input, formed solely from DynVBO1. It calculates the triangle normal and generates three primitives: three 1-pixel points. Aside from the gl_Position of each point (at coordinates that map the given vertex ID onto the FBO), a varying vec4 normalAccum = vec4(nx, ny, nz, 1) is sent to the fragment shader, which writes it straight to gl_FragColor.
This all simply makes the GPU do the scatter-write for you.
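A rough sketch of what that pass-2 geometry shader could look like. The names are hypothetical; the vertex shader is assumed to pass the untransformed DynVBO1 position to gl_Position and float(gl_VertexID) through as a varying (which needs GL_EXT_gpu_shader4), the output type is configured as GL_POINTS, and the viewport matches targetSize:

#version 120
#extension GL_EXT_geometry_shader4 : enable

uniform vec2 targetSize;      // hypothetical: FBO size in pixels
varying in float vertexID[];  // float(gl_VertexID), passed by the vertex shader
varying out vec4 normalAccum; // written to gl_FragColor by the fragment shader

// Clip-space position of the pixel that stores this vertex's normal.
vec4 vertexPixelPos(float id)
{
	float x = mod(id, targetSize.x);
	float y = floor(id / targetSize.x);
	vec2 ndc = ((vec2(x, y) + 0.5) / targetSize) * 2.0 - 1.0; // pixel center in [-1,1]
	return vec4(ndc, 0.0, 1.0);
}

void main( void )
{
	// One unnormalized normal for the whole triangle; the length
	// weights larger faces more strongly in the accumulated sum.
	vec3 n = cross(gl_PositionIn[1].xyz - gl_PositionIn[0].xyz,
	               gl_PositionIn[2].xyz - gl_PositionIn[0].xyz);

	// Scatter it to the three pixels belonging to the triangle's
	// vertices; additive blending sums the contribution of every
	// face sharing each vertex. The w of 1 counts those faces.
	for (int i = 0; i < 3; i++)
	{
		normalAccum = vec4(n, 1.0);
		gl_Position = vertexPixelPos(vertexID[i]);
		EmitVertex(); // with point output, each vertex is its own primitive
	}
}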
Now cast that RGBA32F target to a DynVBO2 (data: struct { vec4 accumNormal; };).
accumNormal is not normalized; you can either normalize it in another pass, or simply have the next vertex shaders that use DynVBO1 and DynVBO2 normalize it.

Attach DynVBO1 for the vertex position, DynVBO2 for the vertex normal, and StaticVBO1 for the static data with its 1-14 attributes. Inside the frag shader, you can easily recompute the TBN out of the normal and dFdx/dFdy.
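A sketch of that last step, reconstructing a tangent frame in the fragment shader from the interpolated normal and screen-space derivatives (the standard cotangent-frame construction); eyeSpacePos and uv are assumed to be passed in from the vertex shader:

#version 120

varying vec3 eyeSpacePos;    // eye-space position from the vertex shader
varying vec3 eyeSpaceNormal; // the accumulated normal, normalized per vertex
varying vec2 uv;             // model texture coordinates

// Build a tangent frame from the normal and the screen-space
// derivatives of position and uv.
mat3 computeTBN(vec3 N, vec3 p, vec2 st)
{
	// edge vectors of the pixel's triangle in eye space and uv space
	vec3 dp1 = dFdx(p);
	vec3 dp2 = dFdy(p);
	vec2 duv1 = dFdx(st);
	vec2 duv2 = dFdy(st);

	// solve for the tangent and bitangent
	vec3 dp2perp = cross(dp2, N);
	vec3 dp1perp = cross(N, dp1);
	vec3 T = dp2perp * duv1.x + dp1perp * duv2.x;
	vec3 B = dp2perp * duv1.y + dp1perp * duv2.y;

	// scale-invariant frame
	float invmax = inversesqrt(max(dot(T, T), dot(B, B)));
	return mat3(T * invmax, B * invmax, N);
}

void main( void )
{
	mat3 tbn = computeTBN(normalize(eyeSpaceNormal), eyeSpacePos, uv);
	// ... sample the normal map through tbn, light, etc.
	gl_FragColor = vec4(tbn[2] * 0.5 + 0.5, 1.0); // debug: visualize the normal
}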


You can merge pass 1 and pass 2 if FBO switching is more expensive than recalculating the vertex displacements roughly three times.

