OpenGL recalculating normals in geometry shader

sgsrules
I've been messing around with displacement maps and other types of vertex movement in the vertex shader. The problem I keep running into is that once the vertices are moved, the normals are no longer correct. I figured it wouldn't be too difficult to recalculate the normals in the geometry shader, since there you have access to all the vertices that make up the face. Here's the geometry shader I created to do this. It takes GL_TRIANGLES as input, and gl_Position isn't multiplied by gl_ModelViewProjectionMatrix until it's output by the geometry shader.
#version 120
#extension GL_EXT_geometry_shader4 : enable

varying vec3 eyeSpaceNormal; //output normal for frag shader


void main( void )
{
 eyeSpaceNormal = gl_NormalMatrix * normalize(cross(gl_PositionIn[1].xyz - gl_PositionIn[0].xyz,
                                                    gl_PositionIn[2].xyz - gl_PositionIn[0].xyz)); //calculate the face normal in eye space

 for (int i = 0; i < gl_VerticesIn; i++)
 {
  gl_Position = gl_ModelViewProjectionMatrix * gl_PositionIn[i]; //multiply position by the MVP matrix
  EmitVertex();
 }
 EndPrimitive();
}



This works fine, but it only calculates the normal once per face, so you get a faceted look. I thought of two possible solutions:

a) Use TRIANGLES_ADJACENCY as the input for the geometry shader, so you have access to the adjacent vertices. You could then calculate the normals for the adjacent faces and average them together to get smooth normals. I haven't been able to do this; beyond the OpenGL specs there is almost no info on geometry shaders available on the web. It also seems like a lot of work, especially since my models use tangent and bitangent information which I would also need to recalculate.

b) Rotate the original normals (which are already smoothed) using the info from the newly calculated face normal. You could also rotate the tangent and bitangent vectors with it. But honestly, I studied graphic design, so my math skills are not up to par, and I'm not even sure this would work. Here's what I've come up with so far, but I'm getting weird results:
#version 120
#extension GL_EXT_geometry_shader4 : enable

varying in vec3 normalIn[]; //original normals (geometry shader input syntax under EXT_geometry_shader4)
varying vec3 eyeSpaceNormal; // fixed normals for frag shader

void main( void )
{
 vec3 p1 = (gl_PositionIn[1] - gl_PositionIn[0]).xyz;
 vec3 p2 = (gl_PositionIn[2] - gl_PositionIn[0]).xyz;
 vec3 X = normalize(p1);
 vec3 Y = normalize(cross(p1, p2));
 vec3 Z = cross(X, Y);
 mat3 rot = gl_NormalMatrix * mat3(X, Y, Z); //rotation matrix

 for (int i = 0; i < gl_VerticesIn; i++)
 {
  gl_Position = gl_ModelViewProjectionMatrix * gl_PositionIn[i];
  eyeSpaceNormal = normalIn[i] * rot; //multiply original normal by rotation matrix
  EmitVertex();
 }
 EndPrimitive();
}



Part of the problem seems to be that every other polygon's normal is flipped, probably because of the winding. I also think that multiplying the rot matrix by gl_NormalMatrix might not be working, but I really have no clue at this point. Any help would be awesome.
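For reference, here is a minimal sketch targeting those two specific symptoms (untested, and it doesn't address whether the face basis above is the correct rotation to begin with): flip the face normal whenever it disagrees with the smoothed input normals, so the winding can't flip it, and apply the matrix as rot * v, since in GLSL v * rot multiplies by the transpose, i.e. the inverse rotation.

#version 120
#extension GL_EXT_geometry_shader4 : enable

varying in vec3 normalIn[]; //original normals
varying vec3 eyeSpaceNormal; //fixed normals for frag shader

void main( void )
{
 vec3 p1 = (gl_PositionIn[1] - gl_PositionIn[0]).xyz;
 vec3 p2 = (gl_PositionIn[2] - gl_PositionIn[0]).xyz;

 vec3 faceN = normalize(cross(p1, p2));
 vec3 avgN = normalize(normalIn[0] + normalIn[1] + normalIn[2]);
 if (dot(faceN, avgN) < 0.0)
  faceN = -faceN; //make the face normal winding-independent

 vec3 X = normalize(p1);
 vec3 Y = faceN;
 vec3 Z = cross(X, Y);
 mat3 rot = mat3(X, Y, Z);

 for (int i = 0; i < gl_VerticesIn; i++)
 {
  gl_Position = gl_ModelViewProjectionMatrix * gl_PositionIn[i];
  eyeSpaceNormal = gl_NormalMatrix * (rot * normalIn[i]); //rot * v, not v * rot
  EmitVertex();
 }
 EndPrimitive();
}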

Vilem Otte
Your first solution is okay, although you're computing one normal per primitive when you should compute one normal per vertex.
So you should compute your normal inside the for loop, not outside it.

I'm not sure you can write it like this (also, with this solution you need to make sure that your model contains JUST triangles and no other primitives), as I still don't have much experience with geometry shaders (but I'm learning them :P).

#version 120
#extension GL_EXT_geometry_shader4 : enable

varying vec3 eyeSpaceNormal;


void main( void )
{
eyeSpaceNormal = gl_NormalMatrix * normalize(cross(gl_PositionIn[1].xyz - gl_PositionIn[0].xyz, gl_PositionIn[2].xyz - gl_PositionIn[0].xyz));
gl_Position = gl_ModelViewProjectionMatrix * gl_PositionIn[0];
EmitVertex();

eyeSpaceNormal = gl_NormalMatrix * normalize(cross(gl_PositionIn[2].xyz - gl_PositionIn[1].xyz, gl_PositionIn[0].xyz - gl_PositionIn[1].xyz));
gl_Position = gl_ModelViewProjectionMatrix * gl_PositionIn[1];
EmitVertex();

eyeSpaceNormal = gl_NormalMatrix * normalize(cross(gl_PositionIn[0].xyz - gl_PositionIn[2].xyz, gl_PositionIn[1].xyz - gl_PositionIn[2].xyz)); // [0], not [3] - triangle input only has indices 0..2
gl_Position = gl_ModelViewProjectionMatrix * gl_PositionIn[2];
EmitVertex();

EndPrimitive();
}


As I'm not on a PC that can debug geometry shaders, I can't test whether this really emits good vertices and distributes eyeSpaceNormal per vertex rather than per primitive.
This MIGHT work, but I can't be sure. Just give it a try.

sgsrules
There's no point computing the normals inside the loop; you still get the same normal, since all three calculations use the same three points of the face. I do it once outside the loop to speed things up.
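For the record, the identity behind this: with a = P1 - P0 and b = P2 - P0, all three cyclic edge cross products reduce to the same vector:

\begin{aligned}
(P_1 - P_0)\times(P_2 - P_0) &= a\times b\\
(P_2 - P_1)\times(P_0 - P_1) &= (b - a)\times(-a) = a\times b\\
(P_0 - P_2)\times(P_1 - P_2) &= (-b)\times(a - b) = a\times b
\end{aligned}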

Erik Rufelt
You could sample the displacement map around the current vertex and estimate the normal from that in the vertex shader (see the sketch below).
Using the adjacency information would probably provide a reasonable approximation, as you say (you get four surrounding vertices for each vertex), though there's no guarantee there aren't additional triangles sharing a vertex apart from those you know about from the adjacency input.
Depending on how expensive you're willing to make the shader, you could combine the two methods, using adjacent vertices to estimate what area of the displacement map should be sampled.

(I'm not sure what GLSL version is necessary to do that, but I assume it should be available in the geometry shader.)
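A rough vertex-shader sketch of the first suggestion (untested; displacementMap, displacementScale, texelSize and the tangent/bitangent attributes are made-up names, displacement is assumed to be along the smoothed normal, and the gradient scale depends on how the UVs map to world units):

#version 120

uniform sampler2D displacementMap; //hypothetical height map
uniform float displacementScale;   //hypothetical height scale
uniform float texelSize;           //1.0 / texture width, square map assumed

attribute vec3 tangentIn;   //hypothetical precomputed tangent
attribute vec3 bitangentIn; //hypothetical precomputed bitangent
varying vec3 eyeSpaceNormal;

void main( void )
{
 vec2 uv = gl_MultiTexCoord0.xy;

 //central differences on the height map around this vertex
 float hl = texture2DLod(displacementMap, uv - vec2(texelSize, 0.0), 0.0).r;
 float hr = texture2DLod(displacementMap, uv + vec2(texelSize, 0.0), 0.0).r;
 float hd = texture2DLod(displacementMap, uv - vec2(0.0, texelSize), 0.0).r;
 float hu = texture2DLod(displacementMap, uv + vec2(0.0, texelSize), 0.0).r;
 vec2 grad = vec2(hr - hl, hu - hd) * displacementScale / (2.0 * texelSize);

 //tilt the smoothed normal by the gradient, expressed in the tangent frame
 vec3 n = normalize(gl_Normal - tangentIn * grad.x - bitangentIn * grad.y);
 eyeSpaceNormal = gl_NormalMatrix * n;

 //displace along the smoothed normal, as before
 float h = texture2DLod(displacementMap, uv, 0.0).r;
 gl_Position = gl_ModelViewProjectionMatrix * (gl_Vertex + vec4(gl_Normal * h * displacementScale, 0.0));
}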

sgsrules
I'm sure the adjacency information would work, but I'm trying to keep the computations down. Calculating all the extra normals and averaging them, and then doing the same thing for the bitangents and tangents, would be pretty costly. Plus, I still haven't been able to successfully compile anything using triangles with adjacency as an input.

I guess what I mostly want to be able to do is rotate the existing TBN (tangent, bitangent, normal) vectors using some sort of matrix, if that's possible. I've already got all the smoothed TBN data precalculated on the CPU. It'd be nice if I could adjust it using a rotation matrix once per polygon.
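In case it helps, one standard way to build such a matrix is the rotation that takes one unit vector onto another (a compact form of Rodrigues' formula). In the geometry shader, 'from' could be the average of the three smoothed normals (as a stand-in for the undeformed face normal) and 'to' the freshly computed face normal; then rotate T, B and N by it once per triangle. Untested sketch; tangentIn/bitangentIn are hypothetical inputs:

//rotation taking unit vector 'from' onto unit vector 'to'
//(breaks down when from == -to; guard for that case if it can occur)
mat3 rotationBetween(vec3 from, vec3 to)
{
 vec3 v = cross(from, to);
 float c = dot(from, to);
 float k = 1.0 / (1.0 + c);
 return mat3(v.x*v.x*k + c,   v.x*v.y*k + v.z, v.x*v.z*k - v.y,  //column 0
             v.x*v.y*k - v.z, v.y*v.y*k + c,   v.y*v.z*k + v.x,  //column 1
             v.x*v.z*k + v.y, v.y*v.z*k - v.x, v.z*v.z*k + c);   //column 2
}

//in main(), once per triangle:
// mat3 rot = rotationBetween(normalize(normalIn[0] + normalIn[1] + normalIn[2]),
//                            normalize(cross(p1, p2)));
//then per vertex: rot * normalIn[i], rot * tangentIn[i], rot * bitangentIn[i]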

Erik Rufelt
Why exactly do you need this? Can't you combine the normal and displacement maps so that the changed normal is already present in the normal map?
In all the displacement mapping I've seen, the displacement is calculated from the normal map, to get the actual geometry to match the normal map, which is already faking the deformation caused by the displacement map.
I might be misunderstanding what you are trying to do. =)

sgsrules
I'm not just using this for displacement maps, but also for different types of object warps and deformations. I also use the TBN vectors for other types of effects, not just normal mapping.

Erik Rufelt
With an arbitrary deformation of vertices, where each vertex can be connected to an unknown number of other vertices, there isn't really any easy way to do this in a shader. If you deform by a mathematical function, you can use that function's derivatives to recalculate the normals; Google gave me, for example, http://http.developer.nvidia.com/GPUGems/gpugems_ch42.html and http://www.ozone3d.net/tutorials/mesh_deformer.php. If you instead deform by a displacement map, sampling it to approximate a local function could work the same way.
If your deformations allow more or less random displacements of different vertices, then you probably can't use a matrix, since the matrix would be different for each vertex, and calculating that matrix would require the same work as recalculating the normals (adjacent surfaces etc.).
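As a toy example of the "deform by a mathematical function" case (untested; assumes the undeformed surface is a plane with normal +Y, and amp/freq/time are made-up uniforms):

#version 120

uniform float time; //animation parameter
uniform float amp;  //wave amplitude
uniform float freq; //wave frequency
varying vec3 eyeSpaceNormal;

void main( void )
{
 vec4 p = gl_Vertex;
 p.y += amp * sin(freq * p.x + time); //the deformation function

 //for a height field y = f(x), the normal is normalize(vec3(-f'(x), 1, 0));
 //here f'(x) = amp * freq * cos(freq * x + time)
 float slope = amp * freq * cos(freq * gl_Vertex.x + time);
 eyeSpaceNormal = gl_NormalMatrix * normalize(vec3(-slope, 1.0, 0.0));

 gl_Position = gl_ModelViewProjectionMatrix * p;
}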

idinev
Do it in 3 passes.

First, render to an RGBA32F FBO with one target. Here you calculate the vertex displacements. (You can use transform feedback instead.) Then cast the data to a DynVBO1 (data: struct { vec4 position; };).

Second pass: render into another RGBA32F target that has at least NumVerts pixels. Clear to black and switch to additive blending. A geometry shader takes triangles as input, formed solely from DynVBO1. It calculates the triangle normal and generates 3 primitives: 3 one-pixel points. Aside from the gl_Position of those points (at coordinates that map each vertex ID to its pixel on the FBO), a varying "vec4 normalAccum = vec4(nx, ny, nz, 1)" is emitted and passed straight through to the fragment shader for gl_FragColor.
This all simply makes the GPU do the scatter-write for you.
Now cast that RGBA32F target to DynVBO2 (data: struct { vec4 accumNormal; };).
accumNormal is not normalized; you can either normalize it in another pass, or simply have the next vertex shaders that use DynVBO1 and DynVBO2 normalize it.

Then attach DynVBO1 for vertex positions, DynVBO2 for vertex normals, and StaticVBO1 for the static data with 1-14 attribs. Inside the fragment shader you can easily recompute the TBN out of the normal and dFdx/dFdy.

You can merge pass 1 and pass 2 if FBO switching is more expensive than recalculating the vertex displacements ~3 times.
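A guess at what that second-pass geometry shader could look like (untested; with EXT_geometry_shader4 the triangle-input / point-output types are set from the API via glProgramParameteriEXT, vertexId/texWidth are made-up names, and the vertex ID is assumed forwarded from the vertex shader, e.g. from gl_VertexID with EXT_gpu_shader4):

#version 120
#extension GL_EXT_geometry_shader4 : enable

varying in float vertexId[]; //per-vertex index, forwarded by the vertex shader
varying out vec4 normalAccum; //additively blended into the RGBA32F target
uniform float texWidth;       //width of the accumulation target, >= NumVerts

void main( void )
{
 //face normal; the w = 1 counts how many faces touch each vertex
 vec3 faceN = normalize(cross(gl_PositionIn[1].xyz - gl_PositionIn[0].xyz,
                              gl_PositionIn[2].xyz - gl_PositionIn[0].xyz));

 for (int i = 0; i < 3; i++)
 {
  //map vertex i's ID to its pixel in a 1-pixel-tall target, in NDC
  float x = (vertexId[i] + 0.5) / texWidth * 2.0 - 1.0;
  gl_Position = vec4(x, 0.0, 0.0, 1.0);
  normalAccum = vec4(faceN, 1.0);
  EmitVertex();
  EndPrimitive(); //each point is its own primitive
 }
}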
