OpenGL dark regions in some parts of the mesh


Mari_p
Hi friends, I'm new to OpenGL. I loaded a .3ds model in my OpenGL code. There is no texture mapped on the model (only material color). The model contains thin triangles in some regions, and the problem is that these regions appear darker... (as if I had applied a super-smooth to the surface). In 3ds Max the model doesn't show these dark regions. It seems like a small shading problem... but I'm sure the normals were calculated correctly. Is there some OpenGL setting I should change to avoid that? Thanks in advance.

Mari_p
OK friends, here is an image of the problem. The left model was exported as a .3ds file and rendered using OpenGL (notice the dark regions). The right model was exported as an .x file and rendered using Direct3D (no shading problem there).

[attached image: models.jpg]

Ademan555
Most likely it's a case of shoddy normals. Are you using the normals supplied by the .3ds model? If not, make sure you're generating "good" normals. And finally, show us your rendering code (more importantly, your setup before you render).

cheers
-Dan

Mari_p
Hi Ademan555, the rendering code is practically the same as the 3DS Loader shown on the GameTutorials site.


LFace face;

for (int k = 0; k < GetMesh->GetFaceCount(); k++)
{
    face = GetMesh->GetFace(k);

    // One immediate-mode triangle per face, with a per-vertex normal.
    glBegin(GL_TRIANGLES);
    glNormal3f(face.normals[0].x, face.normals[0].y, face.normals[0].z);
    glVertex3f(face.vertices[0].x, face.vertices[0].y, face.vertices[0].z);

    glNormal3f(face.normals[1].x, face.normals[1].y, face.normals[1].z);
    glVertex3f(face.vertices[1].x, face.vertices[1].y, face.vertices[1].z);

    glNormal3f(face.normals[2].x, face.normals[2].y, face.normals[2].z);
    glVertex3f(face.vertices[2].x, face.vertices[2].y, face.vertices[2].z);
    glEnd();
}


LFace is a structure containing vertex and normal data:

struct LVector3
{
    float x;
    float y;
    float z;
};

struct LFace
{
    LVector3 vertices[3];
    LVector3 normals[3];
};


The function to calculate the normals is:

void LMesh::CalcNormals()
{
    LVector3 vertex;
    LVector3 normal;
    int i, k, j;

    if (m_vertexCount <= 0)
        return;

    m_normalCount = m_vertexCount;
    m_normals = (LVector3*) malloc(m_vertexCount * sizeof(LVector3));

    for (i = 0; i < m_vertexCount; i++)
    {
        normal.x = 0.0f;
        normal.y = 0.0f;
        normal.z = 0.0f;
        vertex = m_vertices[i];

        // Find all vertices with the same coordinates...
        for (k = 0; k < m_vertexCount; k++)
        {
            if ((fabs(vertex.x - m_vertices[k].x) < 0.0000001f) &&
                (fabs(vertex.y - m_vertices[k].y) < 0.0000001f) &&
                (fabs(vertex.z - m_vertices[k].z) < 0.0000001f))
            {
                // ...and accumulate the normal of every face that uses them.
                for (j = 0; j < m_triangleCount; j++)
                {
                    if ((m_triangles[j].a == (unsigned int)k) ||
                        (m_triangles[j].b == (unsigned int)k) ||
                        (m_triangles[j].c == (unsigned int)k))
                    {
                        LVector3 a, b, n;
                        a = SubtractVectors(m_vertices[m_triangles[j].b], m_vertices[m_triangles[j].a]);
                        b = SubtractVectors(m_vertices[m_triangles[j].b], m_vertices[m_triangles[j].c]);
                        n = CrossProduct(b, a);
                        n = NormalizeVector(n);
                        normal = AddVectors(normal, n);
                    }
                }
            }
        }
        // Average over all incident faces by normalizing the sum.
        m_normals[i] = NormalizeVector(normal);
    }
}

OrangyTang
It looks like you're doing a basic vertex normal by averaging all incident face normals. This is fine for smooth meshes (e.g. heightmaps), but for an object like this you actually want two discrete normals for certain verts (like those around the edges of the holes). The best option is to export those normals from your editor and load them.

Alternatively, you need to add the concept of a 'crease angle'. This basically means that if two incident normals are too dissimilar (i.e. you think there's a crease in the mesh) you duplicate the vert and only use the normals that are similar. This is fairly standard, so Google should give you a suitable algorithm.

tolaris
Quote:
Original post by OrangyTang
Alternatively, you need to add the concept of a 'crease angle'. This basically means that if two incident normals are too dissimilar (i.e. you think there's a crease in the mesh) you duplicate the vert and only use the normals that are similar. This is fairly standard, so Google should give you a suitable algorithm.

This seems to be the case in the screenshot: the shading goes weird because the normals are smoothing faces that are nearly at right angles to one another, and consequently the renderer is trying to interpolate between 'in light' and 'in shadow' along what's really a single, flat polygon.

Checking acos(dot_product(face_1_normal, face_2_normal)) against a defined threshold angle (in radians) would generally work, from what I can see.
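Something like this, as a minimal sketch; it assumes the LVector3 struct from above, and DotProduct() is a hypothetical helper in the style of the loader's other vector routines:

#include <cmath>

const float CREASE_ANGLE = 45.0f * 3.14159265f / 180.0f; // threshold in radians

float DotProduct(const LVector3& a, const LVector3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// True if two (normalized) face normals are similar enough to be smoothed
// together, i.e. the angle between them is below the crease threshold.
bool SameSmoothingRegion(const LVector3& n1, const LVector3& n2)
{
    float d = DotProduct(n1, n2);
    // Clamp to guard acos against tiny floating-point overshoot.
    if (d > 1.0f) d = 1.0f;
    if (d < -1.0f) d = -1.0f;
    return std::acos(d) < CREASE_ANGLE;
}

If the test fails you'd duplicate the vertex, as OrangyTang says, and only average the normals of the faces that pass.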

On a side note, testing every vertex against every vertex to see if they share a position isn't very fast... implementing some kind of hash_multimap that keeps references to the vertices, with the vertex position as the map key, might speed things up a lot; the test for each vertex would then be done against a greatly reduced sub-range of the whole set.
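A rough sketch of that idea, assuming C++ with std::unordered_multimap (the modern equivalent of hash_multimap) and the LVector3 struct from above; QuantizeKey() and the cell size are illustrative choices, not part of the loader:

#include <cmath>
#include <unordered_map>

// Quantize a position to a grid cell so nearly identical vertices hash to
// the same key. The cell size should be at least the weld epsilon; note that
// two vertices straddling a cell boundary can still land in different cells.
long long QuantizeKey(const LVector3& v)
{
    const float cell = 0.0001f;
    long long x = (long long)std::floor(v.x / cell);
    long long y = (long long)std::floor(v.y / cell);
    long long z = (long long)std::floor(v.z / cell);
    return (x * 73856093LL) ^ (y * 19349663LL) ^ (z * 83492791LL);
}

// Build the map once; afterwards each vertex is compared only against the
// few vertices in its own cell instead of the whole mesh.
std::unordered_multimap<long long, int> BuildVertexMap(const LVector3* verts, int count)
{
    std::unordered_multimap<long long, int> map;
    for (int i = 0; i < count; ++i)
        map.insert(std::make_pair(QuantizeKey(verts[i]), i));
    return map;
}

With the map built, the inner loop of CalcNormals() becomes an equal_range() lookup on the quantized key rather than a scan over every vertex.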

Dancin_Fool
It looks like you used a Boolean object when creating that model in 3ds Max; that can have some weird effects on meshes.

Besides that, try calculating normals per face and rendering them that way instead of calculating per vertex.

That will give you more of the look you have with the .x file rather than the smoothed look you have with the .3ds file.
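For example, a minimal sketch of a per-face normal, assuming the loader's SubtractVectors/CrossProduct/NormalizeVector helpers shown earlier, that SubtractVectors(p, q) returns p - q, and counter-clockwise winding:

// One normal per triangle, shared by all three of its vertices.
LVector3 FaceNormal(const LVector3& a, const LVector3& b, const LVector3& c)
{
    LVector3 e1 = SubtractVectors(b, a); // edge A->B
    LVector3 e2 = SubtractVectors(c, a); // edge A->C
    return NormalizeVector(CrossProduct(e1, e2));
}

You'd then call glNormal3f() once with that normal before the three glVertex3f() calls, instead of once per vertex.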

Mari_p
Thanks folks,

I solved the problem by importing the normals from an .x file into my OpenGL code (I wrote code to parse the .x file and feed it to OpenGL). I decided to use an .x file (instead of a .3ds file) because I'm already familiar with that file format.

Another reason I decided to import normals instead of calculating them in code: it is usually more flexible to change the smoothing in the mesh editor (3ds Max) than in code.

Thank you very much again for the help.

tolaris
Quote:
Original post by Mari_p
Another reason I decided to import normals instead of calculating them in code: it is usually more flexible to change the smoothing in the mesh editor (3ds Max) than in code.

True about the flexibility... that's why, ideally, for the .3ds format you'd read the smoothing group information stored in the file:

* Subchunks of 0x4000 (Object Description Block)
  * Subchunks of 0x4100 (Triangular Polygon List)
    (..)
    * 0x4150: Face Smoothing Group chunk. Stores one unsigned int per face; the bits of the int indicate which smoothing groups are enabled for that particular face.

... then, during the vertex check, instead of the dot product of face normals you do a bitwise AND: face_1_smoothgroup & face_2_smoothgroup. If the result is non-zero, the faces share at least one smoothing group and the normals for the points they share should be smoothed.
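As a small sketch of that test (the struct and field names are illustrative; the value is just the unsigned int read from the 0x4150 chunk for each face):

// Bitmask read from the 3ds 0x4150 chunk, one per face.
struct LFaceInfo
{
    unsigned int smoothGroups;
};

// Shared vertices of two faces are smoothed together only if the faces
// have at least one smoothing group bit in common.
bool ShouldSmooth(const LFaceInfo& f1, const LFaceInfo& f2)
{
    return (f1.smoothGroups & f2.smoothGroups) != 0;
}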

The angle test, on the other hand, is good for other formats that use it, Lightwave files for example.

[edit] Just for the heck of it, this thread gave me an excuse to do a small test of how many vertex checks are done with different methods... the results for the classic teapot with 5894 vertices (after unwelding):

* straight comparison (each vertex against each vertex): 34,739,236 tests
* 'smart' straight comparison (the tested vertex and the target share their normals at once; vertices that have already done so are skipped): 17,366,671 tests
* hash map, straight comparison: 118,354 tests
* 'smart' hash map, similar to method 2: 56,230 tests

From nearly 35 million down to 56 thousand... wish it were always possible to reduce the workload like that :s



