newbird

OpenGL Update Normals using the Matrix
Hi everyone, I have a simple problem I was hoping someone could help me with. I have a simple OpenGL program where I need to recompute the normals every frame so that they point toward the camera. I'm using the well-known ArcBall algorithm to rotate the OpenGL object with mouse input. In my display function I translate the object away from the origin so that I can see the entire object, then rotate it according to the 4x4 ArcBall matrix via:

float* matrix = Spaceball.GetMatrix();
glMultMatrixf(matrix);

Now, for every point in my scene, I need to assign the x, y, z values of the new normal for that point (so that it continues to point toward the camera despite having been rotated). I'm sure it's a simple set of equations, but how to do it currently escapes me. Can anyone figure out how to assign the normals to point toward the camera, essentially using this modelview matrix? Below is my display function, which attempts to use the inverse transpose of the matrix to find the new normal, but it is obviously wrong and the actual solution may be simpler:

void glutDisplay(void)
{
    int i;
    unsigned short* pos;
    float SpaceballInvT[16];
    float normalDir[4];

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);

    glPushMatrix();

    /* translate camera to the coordinates provided by eye array */
    glTranslatef(-eye[0], -eye[1], -(eye[2] - zoom*eye[2]));
    glMultMatrixf(Spaceball.GetMatrix());
    Spaceball.GetInvMatrix(SpaceballInvT);
    TransposeMatr(SpaceballInvT);
    Spaceball.Update();

    // Update the normals for each voxel
    for(i = 0; i < numVerts; i++)
    {
        pos = (unsigned short*)&voxels[i*10+4];
        MatMult4f(normalDir, SpaceballInvT, (float)pos[0], (float)pos[1], (float)pos[2], 1.f);
        voxels[i*10+1] = (unsigned char)normalDir[0]; // x-component of normal
        voxels[i*10+2] = (unsigned char)normalDir[1]; // y-component of normal
        voxels[i*10+3] = (unsigned char)normalDir[2]; // z-component of normal
    }

    glTranslated(-DIM_SIZE/2, -DIM_SIZE/2, -DIM_SIZE/2);
    VolrSsplats(voxels, numVerts, dimSize);

    // Draw hairy splats that show the directions of the normals
    if(showNormals)
    {
        Disable_Splatting();
        for(i = 0; i < numVerts; i++)
        {
            glBegin(GL_LINES);
            pos = (unsigned short*)&voxels[i*10+4];
            glVertex3f(pos[0], pos[1], pos[2]);
            glVertex3f(pos[0] + 5.f*(float)voxels[i*10+1]/255.f,
                       pos[1] + 5.f*(float)voxels[i*10+2]/255.f,
                       pos[2] + 5.f*(float)voxels[i*10+3]/255.f);
            glEnd();
        }
        Enable_Splatting();
    }

    glPopMatrix();
    glFlush();
    glutSwapBuffers();
}
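For context, the textbook rule being attempted above: positions transform by the modelview matrix M, while normals transform by the inverse transpose of M; for a pure rotation R the two coincide, since (R^-1)^T = R. A minimal sketch of that rule, assuming hypothetical Mat4MulPoint and Mat3MulVec helpers (these are not from the program above):

/* positions: p' = M * p */
Mat4MulPoint(newPos, modelview, oldPos);
/* normals: n' = (M^-1)^T * n; for a pure rotation this is just R * n */
Mat3MulVec(newNormal, invTransposeModelview, oldNormal);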

It's the normalized negative of (the point's position minus the camera's position):

new_normal = -(point_pos - camera_pos);
new_normal = Normalize(new_normal);

Oh, and the point's position has to be in world space. You get that by multiplying the point by its transformation matrix.
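In C that might look something like this (a minimal sketch, assuming a column-major 4x4 matrix as OpenGL stores it, with the point's w component taken to be 1):

#include <math.h>

/* Transform an object-space point into world space, then build a unit
   normal that points from that point back toward the camera. */
static void CameraFacingNormal(const float m[16], const float point[3],
                               const float camera[3], float normal[3])
{
    float world[3], len;

    /* world = m * point (column-major, w assumed to be 1) */
    world[0] = m[0]*point[0] + m[4]*point[1] + m[8]*point[2]  + m[12];
    world[1] = m[1]*point[0] + m[5]*point[1] + m[9]*point[2]  + m[13];
    world[2] = m[2]*point[0] + m[6]*point[1] + m[10]*point[2] + m[14];

    /* new_normal = -(world - camera) = camera - world */
    normal[0] = camera[0] - world[0];
    normal[1] = camera[1] - world[1];
    normal[2] = camera[2] - world[2];

    /* normalize */
    len = sqrtf(normal[0]*normal[0] + normal[1]*normal[1] + normal[2]*normal[2]);
    if (len > 0.f) {
        normal[0] /= len;
        normal[1] /= len;
        normal[2] /= len;
    }
}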

A good idea, FlyingDemon; I had thought of trying that before. It seems simple, and yet I'm having difficulty. I get the modelview matrix and multiply the original point by it to get the transformed coordinate. However, if I then try to calculate the vector from that point to the camera (the line commented "Insta-nonrender"), my window shows up but it doesn't render anything... not even the black clear color. Perhaps you have another suggestion?

void glutDisplay(void)
{
    int i, j;
    unsigned short* pos;
    float SpaceballInv[16];
    float normalDir[4];

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);

    glPushMatrix();

    /* translate camera to the coordinates provided by eye array */
    glTranslatef(-view.eye[0], -view.eye[1], -(view.eye[2] - view.zoom*view.eye[2]));
    glMultMatrixf(Spaceball.GetMatrix());
    Spaceball.Update();

    glTranslated(-DIM_SIZE/2, -DIM_SIZE/2, -DIM_SIZE/2);

    if(updateNormals) {
        glGetFloatv(GL_MODELVIEW_MATRIX, SpaceballInv);
        for(i = 0; i < numVerts; i++) {
            pos = (unsigned short*)&voxels[i*10+4];
            MatMult4f(normalDir, SpaceballInv, (float)pos[0], (float)pos[1], (float)pos[2], 1.f);
            for(j = 0; j < 3; j++) normalDir[j] = -1.f*(255.f*(float)pos[j] - view.eye[j]);
            Normalize3(&normalDir[0]);

            voxels[i*10+1] = (unsigned char)(255.f*normalDir[0]); // x-component of normal
            voxels[i*10+2] = (unsigned char)(255.f*normalDir[1]); // y-component of normal
            voxels[i*10+3] = (unsigned char)(255.f*normalDir[2]); // z-component of normal
        }
    } // End update normals

    VolrSsplats(voxels, numVerts, dimSize);

    glPopMatrix();
    glFlush();
    glutSwapBuffers();
}

[Edited by - newbird on June 25, 2005 2:17:05 PM]

Make sure that you're not multiplying your verts twice. You use glMultMatrix at first, but later on you also multiply the verts again to get their new positions, I think.

A question: what are you trying to do? Have the object constantly lit on the side that faces the camera?

The glMultMatrix() call allows me to rotate the cubic volume that I have. The pos variable references each voxel's location within that volume, and glMultMatrix simply moves those points to the correct location based upon the ArcBall. Nothing changes the coordinates of each voxel within the volume itself; those are stored in voxels[i*10 + 4,5,6].

The second "MatMult4f(normalDir, SpaceballInvT, (float)pos[0], (float)pos[1], (float)pos[2], 1.f);" for each voxel simply allows me to store the OpenGL object-space coordinates for the current voxel. Given the current transformation matrix, the new object coordinates of each voxel should be stored in the normalDir vector.

Good question as to what precisely I want to do; sometimes I leave out the essence. I have a headlight, meaning the light is positioned at the same place as the camera. So yes, the side facing the camera is always lit. I had hoped to get the normals correctly pointing toward the camera at all times and then adjust the light so that I get a white specular highlight in the middle of every splat (so the splats appear round rather than blending together).
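For reference, the usual fixed-function headlight pattern looks roughly like this (a sketch of the standard technique, not the exact code from my program). Setting GL_POSITION while the modelview matrix is identity pins the light to the eye, no matter how the scene is rotated afterward:

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
{
    GLfloat lightPos[4] = { 0.f, 0.f, 0.f, 1.f }; /* at the eye */
    glLightfv(GL_LIGHT0, GL_POSITION, lightPos);  /* transformed by current modelview */
}
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);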

"The second "MatMult4f(normalDir, SpaceballInvT, (float)pos[0], (float)pos[1], (float)pos[2], 1.f);" for each voxel simply allows me to store the OpenGL object-space coordinates for the current voxel. Given the current transformation matrix, the new object coordinates of each voxel should be stored in the normalDir vector."


MatMult4f(normalDir, SpaceballInv, (float)pos[0], (float)pos[1], (float)pos[2], 1.f);
/*//Insta-nonrender
for(i=0;i<3;i++) normalDir[i]=-1.f*((float)pos[i] - view.eye[i]);

^^ You filled in the 'normalDir' using MatMult4f, then right after you gave them a different value? - Is that intended?

No, that's not what is intended. Obviously I'm a moron: pos[i] should be normalDir[i], since normalDir is the transformed location of pos. Even so, I still get the same behavior, with the window contents never rendering. Also, should I be using the MODELVIEW or the PROJECTION matrix? I appreciate your patience; feel free to tell me I'm a moron and move on with your life. ^_^

I'm out of ideas (I never had any) about why it's not rendering, even the clear color. Comment the entire thing out piece by piece until it starts to render again; that should give you an idea.

It's certainly the for i<3 loop that causes the program not to render. I don't know why it would cause that; I'll inspect the values and see if maybe they're odd enough to confuse the OpenGL state machine.

Well, let this be a lesson to you then!

Only declare variables in the scopes in which they're used...

You're blowing away the value of i from the outer loop when you do i=0; i<3.
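In other words, a contrived sketch of the bug (not the actual program):

/* Reusing the outer counter means the inner loop resets it every pass,
   so the outer loop never terminates and the display function never
   returns. Hence not even the clear color shows up. */
for (i = 0; i < numVerts; i++) {
    /* ... per-voxel work ... */
    for (i = 0; i < 3; i++) {   /* BUG: clobbers the outer i */
        /* ... per-component work ... */
    }
    /* i is now 3 here, every time through the outer loop */
}

/* The fix: give the inner loop its own counter. */
for (i = 0; i < numVerts; i++) {
    for (j = 0; j < 3; j++) {
        /* ... */
    }
}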

Lol, I swear I need to just give up this coding business; I can't even avoid reusing variables within a loop. Thanks for pointing out the obvious. Still, i remains in scope for the j<3 loop, so scope alone wasn't the issue here; the reuse was. But at any rate, this does nothing but set the normals along the local z direction, which points directly away from the camera once the object is rotated 180 degrees. Does anyone know how to get the normals to point at the camera regardless of rotation?

[Edited by - newbird on June 25, 2005 2:33:08 PM]

Actually, I think that pointing your normals toward the camera while also having the light positioned near the camera would look almost like having no lighting at all.

[Edited by - FlyingDemon on June 25, 2005 7:17:04 PM]

You're right; initially everything will reflect pretty much the same color. That's why I would then move the light closer to the objects, to get more uneven reflectance/shading that emphasizes the spherical nature of the objects. I find that random normals look quite good, but it still bothers me that I can't figure out how to set the normals to point toward the camera... it has to be simple.
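One approach that should work, assuming the ArcBall matrix is a pure rotation (so its inverse equals its transpose): take the world-space vector from the voxel to the camera, then rotate it back into object space before storing it as the normal, since OpenGL transforms submitted normals by the modelview matrix anyway. A sketch, not tested against the program above:

#include <math.h>

/* Object-space normal that points at the camera. m is the column-major
   4x4 ArcBall matrix (OpenGL convention), assumed to be rotation plus
   translation with no scaling. */
static void NormalTowardCamera(const float m[16], const float objPos[3],
                               const float camera[3], float normal[3])
{
    float w[3], d[3], len;

    /* world-space voxel position: w = m * objPos (w component = 1) */
    w[0] = m[0]*objPos[0] + m[4]*objPos[1] + m[8]*objPos[2]  + m[12];
    w[1] = m[1]*objPos[0] + m[5]*objPos[1] + m[9]*objPos[2]  + m[13];
    w[2] = m[2]*objPos[0] + m[6]*objPos[1] + m[10]*objPos[2] + m[14];

    /* world-space direction from the voxel toward the camera */
    d[0] = camera[0] - w[0];
    d[1] = camera[1] - w[1];
    d[2] = camera[2] - w[2];

    /* rotate back into object space: n = R^T * d (R^T = R^-1 here) */
    normal[0] = m[0]*d[0] + m[1]*d[1] + m[2]*d[2];
    normal[1] = m[4]*d[0] + m[5]*d[1] + m[6]*d[2];
    normal[2] = m[8]*d[0] + m[9]*d[1] + m[10]*d[2];

    /* normalize */
    len = sqrtf(normal[0]*normal[0] + normal[1]*normal[1] + normal[2]*normal[2]);
    if (len > 0.f) { normal[0] /= len; normal[1] /= len; normal[2] /= len; }
}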

