Megamorph

OpenGL Normal buffer troubles again... Strange... Any advice?


Hello, everyone. I'm trying to render a simple GL_FLAT-shaded cube using a vertex buffer, as shown here: The cube I'm trying to render. The vertices I'm using are in the following array, which is fine (the geometry itself renders fine):
GLfloat vertices[24] =  {0.5,  0.5,  0.5,
                        -0.5,  0.5,  0.5,
                        -0.5, -0.5,  0.5,
                         0.5, -0.5,  0.5, 
                         0.5, -0.5, -0.5,
                         0.5,  0.5, -0.5,
                        -0.5,  0.5, -0.5,
                        -0.5, -0.5, -0.5};

The index array, which is also fine, is as follows:
GLubyte indices[24] =  {0,1,2,3,   // 24 indices
			0,3,4,5,
			0,5,6,1,
			1,6,7,2,
			7,4,3,2,
			4,7,6,5};

The normals array is being calculated automatically (for each face here, not each vertex, since the model is faceted, not smooth). It also seems to be fine:
normals[18] = { 0.0,  0.0,  1.0,    // face 0
                1.0,  0.0,  0.0,    // face 1
                0.0,  1.0,  0.0,    // face 2
               -1.0,  0.0,  0.0,    // face 3
                0.0, -1.0,  0.0,    // face 4
                0.0,  0.0, -1.0 };  // face 5
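(The normal-generation code isn't shown here; roughly speaking, each face normal comes from the normalized cross product of two edges of the quad, along these lines - a simplified sketch, not the exact code:)

// Simplified sketch: compute one face normal from the first three vertices
// of a quad (v0, v1, v2 each point at an x,y,z triple).
static void FaceNormal(const GLfloat *v0, const GLfloat *v1,
                       const GLfloat *v2, GLfloat out[3])
{
    GLfloat e1[3] = { v1[0] - v0[0], v1[1] - v0[1], v1[2] - v0[2] };
    GLfloat e2[3] = { v2[0] - v0[0], v2[1] - v0[1], v2[2] - v0[2] };

    // cross product e1 x e2
    out[0] = e1[1] * e2[2] - e1[2] * e2[1];
    out[1] = e1[2] * e2[0] - e1[0] * e2[2];
    out[2] = e1[0] * e2[1] - e1[1] * e2[0];

    // normalize (sqrtf from <math.h>)
    GLfloat len = sqrtf(out[0] * out[0] + out[1] * out[1] + out[2] * out[2]);
    if (len > 0.0f) { out[0] /= len; out[1] /= len; out[2] /= len; }
}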
And here is the code I have for the actual rendering:
//...
glTranslatef(0.0f, 0.0f, -3.0f);
glRotatef(45.0f, 0.0f, 1.0f, 0.0f);

// bind the vertex buffer and the index buffer
g_VBE.glBindBufferARB(GL_ARRAY_BUFFER_ARB, vertBuffer);
g_VBE.glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, indBuffer);
glEnableClientState(GL_VERTEX_ARRAY);   // activate vertex coords array
glVertexPointer(3, GL_FLOAT, 0, 0);     // last param is an offset, not a pointer

// bind the normals buffer
g_VBE.glBindBufferARB(GL_ARRAY_BUFFER_ARB, normBuffer);
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, 0, 0);

glDrawElements(GL_QUADS, 24, GL_UNSIGNED_BYTE, 0);

glDisableClientState(GL_NORMAL_ARRAY);  // deactivate normals array
glDisableClientState(GL_VERTEX_ARRAY);  // deactivate vertex array
//...

I have a light set up at (0.0f, 0.0f, 1.0f, 1.0f) and it seems to be working fine. It sits right behind the camera, and since the cube is right in front, the light should be hitting it dead on. However, here is what I'm seeing (this is the cube turned to a 45-degree angle): The cube I'm rendering. As you can see, the two visible sides aren't lit equally brightly, which means either the normals or the way I'm using them is messed up. If I do a full 360-degree turnaround, the faces aren't lit consistently, and the lighting changes from face to face in a weird way. Any suggestions?
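(For reference, the light setup itself isn't shown above; a typical fixed-function setup for a point light at that position looks something like this - a generic sketch, not necessarily my exact code:)

GLfloat lightPos[4] = {0.0f, 0.0f, 1.0f, 1.0f};  // w = 1.0 -> positional (point) light

glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glShadeModel(GL_FLAT);                           // faceted shading, as described above
glLightfv(GL_LIGHT0, GL_POSITION, lightPos);     // transformed by the current modelview matrix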

Is the light directional or point? (or spot?)
If the cube is not 45° rotated, then that's how it's supposed to look.

I'm not into OpenGL (D3D guy here), but check that the normals are being transformed (rotated along with the cube), and that the light position isn't being transformed.

Quote:

If I do a full 360-degree turnaround, (...) and the lighting changes from face to face in a weird way.


That could be either a flat-shading artifact or specular lighting being turned on.

Cheerio
Dark Sylinc

It seems that you're positioning your light in the wrong space (camera vs. world space). A position of (0,0,1) is behind your camera only if the light position is given in camera space. If the position is given in world space, it will most likely be transformed by your modelview matrix (translate -> rotate) as well, which puts the light in front of your cube and lights only one side of it.

Quick test: put your light at (1,0,1) or (-1,0,1); if my assumption is right, this should shade the visible faces equally.
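To illustrate the difference (a generic sketch, not your code): GL_POSITION is transformed by whatever modelview matrix is current at the moment glLightfv is called, so the call site decides which space the light ends up in.

GLfloat lightPos[4] = {0.0f, 0.0f, 1.0f, 1.0f};

// Light fixed in eye space: specify it while the modelview matrix is identity,
// before any camera or object transforms are applied.
glLoadIdentity();
glLightfv(GL_LIGHT0, GL_POSITION, lightPos);

// Light specified in world space: specify it after the camera transform,
// so it gets moved along with the rest of the scene.
glLoadIdentity();
gluLookAt(0.0, 1.5, 0.0,  0.0, 0.0, -3.0,  0.0, 1.0, 0.0);
glLightfv(GL_LIGHT0, GL_POSITION, lightPos);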

--
Ashaman


I wish that were true [bawling], but I've already tested for it...
When I set the light to (-1,0,1) or (1,0,1), the picture doesn't change (the sides are still lit unevenly, pretty much the same). The light positioning isn't inside the glPushMatrix/glPopMatrix block that renders the cube, so the cube is translated and rotated separately from the rest of the scene.

Quote:
Is the light directional or point? (or spot?)

By default in OpenGL it's a point/omni light (the position I'm passing has w = 1.0, which makes it positional rather than directional).

Also, as the cube rotates a full 360 degrees, one side is lit brightly, and the side opposite to it also lights up brightly as it comes around, but the other two sides stay dim... which is weird [attention]

Here, see for yourself:
Executable

As evidence, here is the cube at some point in the rotation, viewed from a top/front angle set up with gluLookAt:
Cube from top/front.

As you can see, not only are the side faces sporadically lit, but the top face is also lit, which shouldn't be happening.

I've also changed my code a bit (I thought I might have a problem with the buffer offsets in memory, or that I was perhaps letting one buffer overwrite another, something like that):

/* An offset is defined as (char *)pointer - (char *)NULL in OpenGL,
   so we do the reverse to convert an offset to a pointer. */
#define BUFFER_OFFSET(i) ((i) + (char *)NULL)

...

// sets up the objects for rendering
void SetupScene()
{
    //...

    // generate the face normals
    GenerateNormalArray(vertices, 24, indices, normals, GL_QUADS);

    // pack positions (24 floats) and normals (18 floats) into one block
    geometry = (GLfloat *)calloc(42, sizeof(GLfloat));
    memcpy(geometry, vertices, 24 * sizeof(GLfloat));
    memcpy(&(geometry[24]), normals, 18 * sizeof(GLfloat));

    g_VBE.glGenBuffersARB(1, &geomBuffer);   // one buffer name for the combined geometry buffer
    g_VBE.glBindBufferARB(GL_ARRAY_BUFFER_ARB, geomBuffer);
    g_VBE.glBufferDataARB(GL_ARRAY_BUFFER_ARB, 42 * sizeof(GLfloat), geometry, GL_STREAM_DRAW_ARB);
    free(geometry);

    g_VBE.glGenBuffersARB(1, &indBuffer);
    g_VBE.glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, indBuffer);
    g_VBE.glBufferDataARB(GL_ELEMENT_ARRAY_BUFFER_ARB, sizeof(indices), indices, GL_STREAM_DRAW_ARB);
}
/* Renders one frame */
void Render()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    gluLookAt(0.0, 1.5, 0.0, 0.0, 0.0, -3.0, 0.0, 1.0, 0.0);
    glPushMatrix();

    glTranslatef(0.0f, 0.0f, -3.0f);
    glRotatef(angle, 0.0f, 1.0f, 0.0f);

    // bind the combined vertex/normal buffer and the index buffer
    g_VBE.glBindBufferARB(GL_ARRAY_BUFFER_ARB, geomBuffer);
    g_VBE.glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, indBuffer);
    glVertexPointer(3, GL_FLOAT, 0, BUFFER_OFFSET(0));                  // last param is an offset, not a pointer
    glNormalPointer(GL_FLOAT, 0, BUFFER_OFFSET(24 * sizeof(GLfloat)));  // normals start after the 24 position floats

    glEnableClientState(GL_VERTEX_ARRAY);   // activate vertex coords array
    glEnableClientState(GL_NORMAL_ARRAY);   // activate normal coords array

    // draw 6 quads using the offset into the index buffer
    glDrawElements(GL_QUADS, 24, GL_UNSIGNED_BYTE, 0);

    glDisableClientState(GL_VERTEX_ARRAY);  // deactivate vertex array
    glDisableClientState(GL_NORMAL_ARRAY);  // deactivate normal array
    glPopMatrix();

    angle += 0.2f;
    if (angle >= 360.0f)
    {
        angle = 0.0f;
    }
    glLoadIdentity();
    glFlush();

    // bring the back buffer to the foreground
    SwapBuffers(g_hdc);
}





[Edited by - Megamorph on June 10, 2009 9:33:36 AM]

Try removing the
glTranslatef(0.0f, 0.0f, -3.0f);
glRotatef(45.0f, 0.0f, 1.0f, 0.0f);

i.e. position the cube at (0,0,0) in identity space.

Quote:

Try removing the
glTranslatef(0.0f, 0.0f, -3.0f);
glRotatef(45.0f, 0.0f, 1.0f, 0.0f);

i.e. position the cube at (0,0,0) in identity space.


I did just that... The lighting is just as messed up as it was... And how was that supposed to help me, exactly? I think we've established that the light wasn't rotating with the cube, or at least that that wasn't the real problem.

I don't mean to be mean to people who are trying to help me; I think I've just watched too much House M.D., so my sarcasm center is a bit overexcited. My frustration center is also overstimulated...

Is anyone willing to look at the code rather than the pictures? I mean, I know it's such a pretty cube, but [disturbed]...

To debug, I read the data back out of the VBO (using glMapBufferARB) and got exactly what I put in, meaning the uploaded data appears to be correct, unless I'm missing something.
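(For reference, the read-back is essentially this pattern - a simplified sketch, assuming glMapBufferARB/glUnmapBufferARB are loaded into the same g_VBE struct as the other buffer entry points:)

// Map the buffer and dump what the driver actually has in it (printf from <stdio.h>).
g_VBE.glBindBufferARB(GL_ARRAY_BUFFER_ARB, geomBuffer);
GLfloat *mapped = (GLfloat *)g_VBE.glMapBufferARB(GL_ARRAY_BUFFER_ARB, GL_READ_ONLY_ARB);
if (mapped)
{
    for (int i = 0; i < 42; ++i)    // 24 position floats + 18 normal floats
        printf("%d: %f\n", i, mapped[i]);
    g_VBE.glUnmapBufferARB(GL_ARRAY_BUFFER_ARB);
}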

Quote:
And how was that supposed to help me, exactly?

Always simplify the problem as much as possible.

You have 18 normals, yet 24 verts.

Quote:
Always simplify the problem as much as possible.


I agree; that's why I started out with a cube. But moving the whole object's coordinates around doesn't really help with the lighting/rendering issues...

Quote:
You have 18 normals, yet 24 verts.


LOL, I was waiting for someone to bring that up... How ironic.

I have 8 verts (24 coordinates), which compose a cube with 6 faces. Six quads need 6 normals, each normal is represented by 3 floats, and 6 * 3 yields 18, hence the normals array contains 18 floats.

Correct me if I'm wrong here, but since we're going for flat rendering, those should be face normals, not vertex normals. In this decision I was going off an earlier gamedev.net post about another normals issue and the suggestion therein, as well as tutorials, such as this one, which use old-style glBegin/glEnd blocks to render a cube with normals (scroll down).

That does raise some suspicion, though... How would I then render a smooth-shaded mesh? (How does OpenGL know whether to expect vertex normals or face normals?)

Perhaps I'm wrong and all I need is to calculate 8 vertex normals instead. In that case, would a vertex normal simply be the average of the normals of all adjacent faces, i.e. (sum of adjacent face normals) / (number of adjacent faces)?
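In other words, something like this (untested sketch)?

// Average face normals into per-vertex normals for smooth shading.
// faceNormals: 6 x,y,z triples (one per quad), indices: the 24-entry quad
// index array, vertexNormals: receives 8 x,y,z triples.
static void AverageNormals(const GLfloat *faceNormals, const GLubyte *indices,
                           GLfloat *vertexNormals)
{
    int face, corner, v;

    for (v = 0; v < 8 * 3; ++v)
        vertexNormals[v] = 0.0f;

    // accumulate the normal of every face that touches each vertex
    for (face = 0; face < 6; ++face)
        for (corner = 0; corner < 4; ++corner)
        {
            v = indices[face * 4 + corner];
            vertexNormals[v * 3 + 0] += faceNormals[face * 3 + 0];
            vertexNormals[v * 3 + 1] += faceNormals[face * 3 + 1];
            vertexNormals[v * 3 + 2] += faceNormals[face * 3 + 2];
        }

    // renormalize the sums (equivalent to averaging, then normalizing)
    for (v = 0; v < 8; ++v)
    {
        GLfloat *n = &vertexNormals[v * 3];
        GLfloat len = sqrtf(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
        if (len > 0.0f) { n[0] /= len; n[1] /= len; n[2] /= len; }
    }
}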

Since you want hard edges, each face of the cube is made of 4 unique verts, and no two faces share any verts. Remember that a vert is a unique combination of position + normal (and any other attributes you add in the future, like UVs, color, etc.).

You only have one index buffer, and it indexes into the position array and the normal array with the same index. 6 indices per face * 6 faces = 36 indices. 4 verts per face * 6 faces = 24 verts.

That means your index array should be sized at 36 (assuming a triangle list), and your position and normal arrays should each have 24 entries (24 float3's = 72 floats). Yes, positions and normals will be duplicated, but each of your verts will be unique.

Arrays of positions and normals with different sizes don't make sense if you only have one index buffer.

Smooth edges share more verts, so you wouldn't need 24 unique verts to describe a cube; you could do it with 8.
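For example, the first two faces of the cube would look like this (a sketch only; the remaining four faces follow the same pattern, using the positions from the original post):

// 4 unique verts per face; positions and normals duplicated as needed.
GLfloat flatPositions[] = {
    // front face (+Z): original verts 0,1,2,3
     0.5f,  0.5f,  0.5f,   -0.5f,  0.5f,  0.5f,   -0.5f, -0.5f,  0.5f,    0.5f, -0.5f,  0.5f,
    // right face (+X): original verts 0,3,4,5
     0.5f,  0.5f,  0.5f,    0.5f, -0.5f,  0.5f,    0.5f, -0.5f, -0.5f,    0.5f,  0.5f, -0.5f,
    // ...four more faces, 24 verts (72 floats) total
};
GLfloat flatNormals[] = {
    // front face: the same face normal repeated for each of its 4 verts
    0.0f, 0.0f, 1.0f,   0.0f, 0.0f, 1.0f,   0.0f, 0.0f, 1.0f,   0.0f, 0.0f, 1.0f,
    // right face
    1.0f, 0.0f, 0.0f,   1.0f, 0.0f, 0.0f,   1.0f, 0.0f, 0.0f,   1.0f, 0.0f, 0.0f,
    // ...four more faces, 24 normals (72 floats) total
};
GLubyte flatIndices[] = {
    0, 1, 2,   0, 2, 3,    // front face as two triangles
    4, 5, 6,   4, 6, 7,    // right face
    // ...36 indices total
};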

Well, if you're doing face normals, then surely you only need 6 normals, one for each face.

Also, since OpenGL is a state machine, normals can be specified per face or per vertex. Using immediate mode as an example:

This gives per-face normals:

glNormal3f(0.0, 0.0, 1.0);
glBegin(GL_QUADS);
    glVertex3f(0.0, 0.0, 0.0);
    glVertex3f(0.0, 1.0, 0.0);
    glVertex3f(1.0, 1.0, 0.0);
    glVertex3f(1.0, 0.0, 0.0);
glEnd();

This gives per-vertex normals:

glBegin(GL_QUADS);
    glNormal3f(0.0, 0.0, 1.0);
    glVertex3f(0.0, 0.0, 0.0);
    glNormal3f(0.0, 1.0, 1.0);
    glVertex3f(0.0, 1.0, 0.0);
    glNormal3f(1.0, 0.0, 1.0);
    glVertex3f(1.0, 1.0, 0.0);
    glNormal3f(1.0, 1.0, 0.0);
    glVertex3f(1.0, 0.0, 0.0);
glEnd();


The numbers are arbitrary; it's the function calls that are important in this example.

Quote:
Original post by Megamorph
Correct me if I'm wrong here, but since we're going for flat rendering, those should be face normals, not vertex normals. In this decision I was going off an earlier gamedev.net post about another normals issue and the suggestion therein, as well as tutorials, such as this one, which use old-style glBegin/glEnd blocks to render a cube with normals (scroll down).

That does raise some suspicion, though... How would I then render a smooth-shaded mesh? (How does OpenGL know whether to expect vertex normals or face normals?)

Perhaps I'm wrong and all I need is to calculate 8 vertex normals instead. In that case, would a vertex normal simply be the average of the normals of all adjacent faces, i.e. (sum of adjacent face normals) / (number of adjacent faces)?


Yes, you are wrong. When using "face" normals, you really just specify the same normal for all the vertices of the face. In immediate mode (glVertex/glNormal), OpenGL simply uses the last specified normal for each vertex; it has no magic for recognizing whether you are using face or vertex normals.

GL_FLAT vs. GL_SMOOTH shading only affects how the computed lighting values are spread across the pixels of a face, and has nothing to do with face vs. vertex normals.

For VBOs/vertex arrays you need one normal per vertex; if several vertices share the same normal, just copy that normal to each of them. The same goes for all the other vertex attributes (texture coordinates, etc.).

So, for a cube you need 24 vertices, made by combining the 8 vertex positions with the 6 normal vectors (and maybe 4 texture coordinates). If you have the positions + position indices (8 positions + 36 indices) and the normals + normal indices (6 normals + 36 indices), you can whip up an algorithm that combines them into the 24 unique position+normal combinations and 36 indices. At that point the result can optionally be triangle-stripped and optimized for vertex cache locality. This data is then fed to the VBO + index buffer and rendered.

If you're using C++, std::map (or std::set) can prove helpful when writing the algorithm that interleaves multiple "planes" of vertex data + indices into one big interleaved vertex-data + index batch that's ready for your GPU to chew on.
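Something along these lines, for example (a rough sketch; all names here are just placeholders):

#include <cstddef>
#include <map>
#include <utility>
#include <vector>

// Final interleaved mesh: parallel position/normal arrays plus one index list.
struct Mesh {
    std::vector<float>        positions;  // 3 floats per unique vertex
    std::vector<float>        normals;    // 3 floats per unique vertex
    std::vector<unsigned int> indices;    // one entry per face corner
};

// srcPos/srcNrm are the raw position and normal "planes"; posIdx/nrmIdx are
// parallel per-corner index arrays (cornerCount entries each).
Mesh Interleave(const float *srcPos, const float *srcNrm,
                const unsigned int *posIdx, const unsigned int *nrmIdx,
                size_t cornerCount)
{
    typedef std::pair<unsigned int, unsigned int> Key;   // (position index, normal index)
    Mesh mesh;
    std::map<Key, unsigned int> seen;

    for (size_t c = 0; c < cornerCount; ++c)
    {
        Key key(posIdx[c], nrmIdx[c]);
        std::map<Key, unsigned int>::iterator it = seen.find(key);

        if (it == seen.end())
        {
            // first time this position+normal combination appears: emit a new vertex
            unsigned int newIndex = (unsigned int)(mesh.positions.size() / 3);
            seen[key] = newIndex;
            for (int i = 0; i < 3; ++i)
            {
                mesh.positions.push_back(srcPos[key.first * 3 + i]);
                mesh.normals.push_back(srcNrm[key.second * 3 + i]);
            }
            mesh.indices.push_back(newIndex);
        }
        else
        {
            mesh.indices.push_back(it->second);   // reuse the existing vertex
        }
    }
    return mesh;
}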

-riku

Quote:

Correct me if I'm wrong here, but since we're going for flat rendering, those should be face normals, not vertex normals.


In fact, you are quite right about GL_FLAT - it can indeed be used to render with face normals without duplicating too much data.
But that doesn't work with your memory layout. You just need to make sure that the first vertex in each quad carries the normal you want to use for that face.
In your example, you use the first normal (0,0,1) for the first three faces, so that doesn't work.
Either way, the memory you use for the normals has to have the same number of elements as the vertex array. You need 8 entries in both, even if some of them aren't used for shading!

Huh, that's a neat idea.

Thanks everyone for the advice.

To clarify...

So, LtJax, if I'm following you right, I just need to make sure I'm using the correct normals - 8 vertex normals instead of 6 face normals?

Or should I have a separate set of indices to denote which vertex should use which normal, as riku suggested? And if so, how does that work? (I wasn't aware that OpenGL even supported separate normal indexing.)

Or perhaps I should go with the most basic implementation, no clever algorithms: 24 normals and 24 vertices, one normal per vertex?

OpenGL does not support separate indices for normals. If you want that, you have to draw in immediate mode.
You will need 8 normals, yes - but that's mostly a technical requirement of working with buffers: the sizes have to match!
When you're using flat shading, only the first normal in each quad ever gets used, so you can basically "abuse" this to get face normals. Two of those 8 normals will never actually be used for rendering, though.
So what you need to do is add 2 dummy normals and rearrange the indices so that the first index in each quad points to the vertex whose normal you want to use for the whole face.

