TheStudent111

Calculating normals for Cube


Recommended Posts

I have been trying to implement diffuse lighting, but have been having trouble calculating the normals. This is what I have so far.



    const GLfloat vertices[] =
    {
        -0.5f,  0.5f,  0.0f,  // Top Left
        -0.5f, -0.5f,  0.0f,  // Bottom Left
         0.5f, -0.5f,  0.0f,  // Bottom Right
         0.5f,  0.5f,  0.0f,  // Top Right

        -0.5f,  0.5f, -1.0f,  // Top Left     (back)
        -0.5f, -0.5f, -1.0f,  // Bottom Left  (back)
         0.5f, -0.5f, -1.0f,  // Bottom Right (back)
         0.5f,  0.5f, -1.0f   // Top Right    (back)
    };

    const GLfloat color[] =
    {
        0.0f, 0.0f, 1.0f,
        0.0f, 0.0f, 1.0f,
        0.0f, 0.0f, 1.0f,
        0.0f, 0.0f, 1.0f,

        0.0f, 0.0f, 1.0f,
        0.0f, 0.0f, 1.0f,
        0.0f, 0.0f, 1.0f,
        0.0f, 0.0f, 1.0f
    };

    const GLuint indices[] =
    {
        0, 1, 3,
        1, 2, 3,

        4, 5, 6,
        4, 6, 7,

        4, 5, 1,
        1, 0, 4,

        3, 6, 2,
        7, 6, 3,

        7, 4, 3,
        4, 0, 3,

        1, 2, 5,
        2, 5, 6
    };

    // Two edge vectors of the front-face triangle (vertices 1, 2, 3),
    // both taken relative to vertex 3
    glm::vec3 V1 = glm::vec3(vertices[3] - vertices[9], vertices[4] - vertices[10], vertices[5] - vertices[11]);
    glm::vec3 V2 = glm::vec3(vertices[6] - vertices[9], vertices[7] - vertices[10], vertices[8] - vertices[11]);

    // Cross product of the two edges
    glm::vec3 V3 = glm::vec3(V1.y * V2.z - V1.z * V2.y, V1.z * V2.x - V1.x * V2.z, V1.x * V2.y - V1.y * V2.x);

    float Magnitude = glm::length(V3);

    glm::vec3 nV = glm::vec3(V3.x / Magnitude, V3.y / Magnitude, V3.z / Magnitude);

    const int SZ = 3;
    GLfloat normalizedVec[SZ];
    normalizedVec[0] = nV.x;
    normalizedVec[1] = nV.y;
    normalizedVec[2] = nV.z;

 

From my understanding, in order to find a normal, one has to:

* Find two vectors (edges of the face/triangle).
* Use the cross product to find the perpendicular vector.
* Normalize the perpendicular vector.
* Send the vector to OpenGL.
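Those steps can be written out as a minimal sketch in plain C++ (the `Vec3` struct and `triangleNormal` helper are stand-ins of mine, not thread code; `glm::cross` and `glm::normalize` would collapse steps 2 and 3 into single calls):

```cpp
#include <cassert>
#include <cmath>

// Stand-in for glm::vec3.
struct Vec3 { float x, y, z; };

Vec3 triangleNormal(Vec3 p0, Vec3 p1, Vec3 p2) {
    // 1. Two edge vectors of the triangle.
    Vec3 e1 = {p1.x - p0.x, p1.y - p0.y, p1.z - p0.z};
    Vec3 e2 = {p2.x - p0.x, p2.y - p0.y, p2.z - p0.z};
    // 2. The cross product gives a vector perpendicular to both edges.
    Vec3 n = {e1.y * e2.z - e1.z * e2.y,
              e1.z * e2.x - e1.x * e2.z,
              e1.x * e2.y - e1.y * e2.x};
    // 3. Normalize it to unit length.
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return {n.x / len, n.y / len, n.z / len};
}
```

Called with the front face's top-left, bottom-left, and top-right corners from the vertex array above, this returns (0, 0, 1), i.e. a normal pointing out of the screen toward the viewer.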

 

Here is how it's being sent:

        // Normal Vertex
        glBindBuffer(GL_ARRAY_BUFFER, VBO[2]);
        glBufferData(GL_ARRAY_BUFFER, sizeof(normalizedVec), normalizedVec, GL_STATIC_DRAW);
        glVertexAttribPointer(NORMALS_POS, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
        glEnableVertexAttribArray(NORMALS_POS);

Then here is the vertex shader, followed by the fragment shader.

#version 400 core

layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec4 color;
layout (location = 2) in vec3 VertexNormal;

out vec4 out_color;
out vec3 LightIntensity;

uniform vec4 LightPosition;
uniform vec3 Kd;
uniform vec3 Ld;

uniform mat4 ModelViewMatrix;
uniform mat3 NormalMatrix;
uniform mat4 ProjectionMatrix;

uniform mat4 rotation_matrix;
uniform mat4 projection_matrix;
uniform mat4 PVM;

void main()
{
    // Convert normal and position to eye coords
    vec3 tnorm     = normalize(NormalMatrix * VertexNormal);
    vec4 eyeCoords = ModelViewMatrix * vec4(VertexPosition, 1.0);

    vec3 s = normalize(vec3(LightPosition - eyeCoords));

    // The diffuse shading equation
    LightIntensity = Ld * Kd * max(dot(s, tnorm), 0.0);

    out_color = color;
    gl_Position = PVM * rotation_matrix * vec4(VertexPosition, 1.0);
}

Fragment Shader

#version 400 core

layout (location = 0) out vec4 FragColor;

in vec3 LightIntensity;
in vec4 out_color;
out vec4 OUTPUT_COLOR;

void main()
{
    OUTPUT_COLOR = out_color;
    FragColor = vec4(LightIntensity, 1.0);
}

 

I'm not exactly sure whether I'm calculating the normals correctly.


In principle that is how you determine the surface normal of a triangle, yes. However, there are several things to consider.

 

  1. OpenGL vertex shaders work on a per-vertex basis (hence the name), not per triangle. That means you have to provide one normal for each vertex. You seem to generate only one, which leaves the rest of the normals undefined.
  2. You use indexed rendering: there are only 8 vertices in your cube, and they are shared among its faces. That means even if you had one normal for each vertex, the normals would be shared as well. This can be fine in some cases (smooth surfaces with averaged normals), but since cubes usually have sharp edges, it is not what you want. You want 24 vertices, each with its own normal. That is, have six quads (one for each cube face) of 4 vertices each. There just isn't much benefit to rendering a simple cube with indices.
  3. The winding order determines the direction of the normal vector, so make sure all triangle vertices are correctly ordered.
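Points 1 and 2 together amount to "un-indexing" the mesh. A hedged sketch of that step in plain C++ (the `Vec3` struct and all helper names here are mine, standing in for glm; it expands an indexed triangle list into a flat vertex array where every vertex of a triangle carries that triangle's face normal):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Expand an indexed triangle list into a non-indexed vertex array,
// assigning each triangle's face normal to all three of its vertices.
void expandWithFaceNormals(const std::vector<Vec3>& positions,
                           const std::vector<unsigned>& indices,
                           std::vector<Vec3>& outPositions,
                           std::vector<Vec3>& outNormals)
{
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
        Vec3 a = positions[indices[i]];
        Vec3 b = positions[indices[i + 1]];
        Vec3 c = positions[indices[i + 2]];
        // Face normal from two triangle edges; direction depends on winding.
        Vec3 n = normalize(cross(sub(b, a), sub(c, a)));
        for (int k = 0; k < 3; ++k) {
            outPositions.push_back(positions[indices[i + k]]);
            outNormals.push_back(n);
        }
    }
}
```

Fed the original cube's 8 positions and 36 indices, this produces 36 vertices with 12 per-triangle normals (two coplanar triangles per face share the same normal), which is exactly the "sharp edges" result the reply describes.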

 

And if you want my advice as a pragmatist, don't bother with this kind of calculation. Let a 3D modelling package of your choice deal with it and just write an importer (and an exporter plugin if necessary).


As Kolrabi points out, if you want normals then you need more vertices (4 per side). Once you have them, it is quite trivial by comparison to define your normals manually, since the normals for each face will be the same. They will also be axis-aligned, so they are very easy to define, e.g. (1, 0, 0), (0, 1, 0), (0, 0, -1), etc.

 

As Kolrabi says, you could write an importer, but depending on your final goal that might end up being overkill. Cubes are quite trivial, and I think it is worth making one in code. Many game engines offer basic cube primitives rather than expecting someone to go off, model one up, and import it. It is also a good learning opportunity for you.

 

To make life easier when defining your vertices you can do something like this:

struct Vector
{
    Vector() : x(0), y(0), z(0) {}
    Vector(GLfloat xIn, GLfloat yIn, GLfloat zIn) : x(xIn), y(yIn), z(zIn) {}
    GLfloat x, y, z;
};

struct Vertex
{
    Vector position;
    Vector normal;
};

Vertex vertices[24];

Vector normal(0, 0, 1);
vertices[0].position = Vector(-0.5f, 0.5f, 0.0f);
vertices[0].normal = normal;

// When accessing for OpenGL you do something like:
GLfloat* verts = (GLfloat*)vertices;
// Your positions and normals are now 'interleaved', so you'd have to account for that when
// setting up your buffers. You can get the 'stride' with sizeof(Vertex) and do a similar thing
// to get the normal offset: (void*)sizeof(Vector). There may be some subtleties in terms of
// memory layout/alignment that I am unaware of, but I've yet to have any issues with this
// approach and it makes life so much easier.
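As a sanity check on the stride/offset arithmetic in those comments, here is a hedged sketch, assuming `GLfloat` is a plain `float` (which it is on desktop GL); the attribute-location names in the commented GL calls are my own assumptions, and `offsetof` is a slightly more robust way to express the normal offset than a bare `sizeof(Vector)`:

```cpp
#include <cassert>
#include <cstddef>

// Mirrors the Vertex struct above, with float standing in for GLfloat.
struct Vector { float x = 0, y = 0, z = 0; };
struct Vertex { Vector position; Vector normal; };

// The interleaved attribute setup would then look something like
// (POSITION_LOC / NORMAL_LOC are assumed attribute locations):
//
//   glVertexAttribPointer(POSITION_LOC, 3, GL_FLOAT, GL_FALSE,
//                         sizeof(Vertex), (void*)offsetof(Vertex, position));
//   glVertexAttribPointer(NORMAL_LOC,   3, GL_FLOAT, GL_FALSE,
//                         sizeof(Vertex), (void*)offsetof(Vertex, normal));
//
// With this layout the stride is 24 bytes and the normal offset is 12 bytes,
// since the two Vector members pack back-to-back with no padding here.
```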


So, as was previously mentioned, you have two different types of normals: face normals (which you were talking about) and vertex normals (what the shader actually uses). You calculate the face normals (one for each face) outside of the shader, as you mentioned, but that info is of limited use on its own. However, if you average the face normals of the 3 faces touching each vertex, you can calculate a vertex normal for each of the 8 vertices in a cube.
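That averaging step, as a minimal sketch (plain structs standing in for glm; `averageNormals` is a helper name of my own, not from the thread):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Sum the unit normals of the faces meeting at a vertex, then renormalize
// the sum to get a smooth per-vertex normal.
Vec3 averageNormals(const Vec3* faceNormals, int count) {
    Vec3 sum = {0, 0, 0};
    for (int i = 0; i < count; ++i) {
        sum.x += faceNormals[i].x;
        sum.y += faceNormals[i].y;
        sum.z += faceNormals[i].z;
    }
    float len = std::sqrt(sum.x * sum.x + sum.y * sum.y + sum.z * sum.z);
    return {sum.x / len, sum.y / len, sum.z / len};
}
```

At a cube corner where faces with normals (1, 0, 0), (0, 1, 0), and (0, 0, 1) meet, this yields the diagonal (1, 1, 1)/√3, which is exactly what makes a smooth-shaded cube start to look like a ball.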

 

The next problem is that with smooth shading this will attempt to turn your cube into a ball, with all-around poor results.

 

Back before shaders, the fixed-function pipeline would let you choose between flat shading and smooth shading. Smooth shading uses vertex normals to help hide the fact that the models are made up of triangles. It interpolates the values of the three vertices in the rasterizer while it is specifying the pixels it feeds to the fragment shader. So the normal values are interpolated (averaged) across the face of the triangle, and so are color values and such. This gives you an interpolated "pixel" normal in the fragment shader, so that each pixel has a normal. This is what makes the triangles smooth out and appear more rounded, and you need vertex normals for this to work. Otherwise, you would have one normal for the entire face.

 

In your case, you have a cube; you want it to appear faceted. You want what is essentially a face normal rather than a vertex normal. There are a couple of ways to achieve this with modern shaders.

 

First, the easy way: just use more vertices. If none of the faces share vertices, then you can put face normals into each triangle in place of the vertex normals. If all the vertex normals share directions, then there is nothing to average or interpolate. The average of 3, 3, and 3 is 3.

 

This works, but it's pretty wasteful in that you use a lot more vertices. Still, I haven't figured out a way to use triangle fans or triangle strips with a full model with 3,000 polys in it. So, it may not be as crazy as it sounds.

 

There is a more efficient way, though. But before I go into that, let me cover the most complicated way, which actually will let you use 8 vertices and flat shade them. First off, I have an example in HLSL somewhere of how to calculate a face normal inside the shader. In my Gouraud shading video, I show off a flat shader. Unfortunately, I kind of breeze over it, not getting into the flat shading code. I mostly just show it to show the difference before I get into Gouraud smooth shading. But the actual HLSL code should be available on my website under the HLSL example. That should be all the code from the videos, including the flat shader.

 

I'm not 100% certain that has the complex version. It probably does, since that was an XNA example and XNA used DX9 with Shader Model 3 (I think it was), which forced you to use the complicated method of calculating a face normal. I think it's done in the pixel/fragment shader. And as I recall, you are being sneaky and asking for the values of the pixels next to the current pixel, then creating two vectors and a cross product to produce a normal. I believe you ask for the position, in world space, of the pixels adjacent to the one you are calculating. That gives you 3 points in space from which you can use the cross product method to calculate a normal for a single pixel. Since the surface of the triangle actually is flat, that should result in flat shading. It works, anyway. This was a really ugly way of doing it, but it was the only way in that Shader Model without creating more vertices. This method does allow you to produce a flat cube with 8 vertices. I don't fully remember it or where I got the code from, but the code for it is probably in my HLSL example in the Flat.fx shader code.

 

Now in MonoGame or DX11, you can use Shader Model 5. I think you can do that in more recent versions of OGL as well. In Shader Model 5, you can add an additional flag/word to your vertex normal definition that is something like "nointerpolate", and it will turn interpolation off, with the result being that it is flat shaded.

 

https://msdn.microsoft.com/en-us/library/windows/desktop/bb509668%28v=vs.85%29.aspx

 

I believe GLSL has the same kind of no-interpolation modifiers. This is dependent on the Shader Model, not GLSL or HLSL. So, as long as your version is modern, it should be an option to turn off interpolation for the vertex normal, resulting in flat shading. That's just a switch, either on or off. There's nothing in your code other than that to change. I believe it's calculating a face normal behind the scenes for you. You can send it smooth-shading vertex normals and it will come out flat shaded anyway. This makes it super easy to flat shade. And for faceted objects like cubes, this is what you want.
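For reference, GLSL's version of this switch is the `flat` interpolation qualifier (core since GLSL 1.30); a sketch of how it would look in the shaders above:

```glsl
// Vertex shader output: 'flat' disables interpolation across the triangle.
flat out vec3 LightIntensity;

// Fragment shader input must be declared flat to match.
flat in vec3 LightIntensity;
```

One caveat: `flat` does not compute a face normal behind the scenes. Every fragment of a triangle simply receives the value from the triangle's provoking vertex (by default, the last one), so with 8 shared vertices the lighting uses whichever vertex normal that happens to be. It still produces one uniform value per face, which is usually close enough for a faceted look.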

Edited by BBeck


I wrote my own implementation of a generic cube in C++ and DirectX recently, and I stumbled upon the same problem (shared vertices for 3 faces, and thus edge smoothing). Now I know that I can simply triple my cube's vertices and hence triple the number of normals.

 

I was wondering if there is a way of doing the same thing whilst saving the "memory" for the tripled amount of vertices and normals in the vertex buffer. I know, though, that in this case the surplus is not noticeable performance-wise.

 

Could one create a vertex buffer specifically for cubes, in which each vertex has its position and 3 normals, and then make the vertex shader use the "right" normal for the current face, depending on the face's indices?

 

Edit: Took me so long to finish this post that I did not read BBeck's comment before posting.

Edited by Moongoose


I was wondering if there is a way of doing the same thing whilst saving the "memory" for the tripled amount of vertices and normals in the vertex buffer. I know, though, that in this case the surplus is not noticeable performance-wise.


For flat-shading you can add a geometry shader that calculates the normals per-triangle.  You save the memory but at the expense of more ALU ops on the GPU and, of course, having the geometry shader stage active (which will likely perform worse than just burning the extra memory and being done with it).
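A minimal sketch of such a geometry shader (the `eyePos` input is an assumed name for eye-space positions forwarded from the vertex shader; in a real pipeline you would light with this normal in the fragment shader):

```glsl
#version 400 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

// Eye-space positions forwarded from the vertex shader (assumed name).
in vec3 eyePos[];

// One normal per triangle; 'flat' so the rasterizer doesn't interpolate it.
flat out vec3 faceNormal;

void main() {
    // Face normal from two triangle edges: one cross product per triangle.
    vec3 n = normalize(cross(eyePos[1] - eyePos[0], eyePos[2] - eyePos[0]));
    for (int i = 0; i < 3; ++i) {
        faceNormal = n;
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
```

This keeps the 8-vertex indexed buffer and drops the normal attribute entirely, trading the saved memory for a per-triangle cross product and an extra pipeline stage, as the reply above notes.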
