Help me with my OBJ file parsing code



#1 jdub   Members   -  Reputation: 419


Posted 18 October 2012 - 08:05 PM

Okay, so it seems the code I use to parse the faces from a .obj file is buggy. When I turn on back-face triangle culling, many triangles that should be drawn are culled, leaving gaps in my model. Here is the code (C#):

foreach (String line in fileLines)
{
    String[] rawData = line.Split(new char[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
    if (rawData[0] == "f")
    {
        List<Int32> faceIndices = new List<int>();
        if (rawData.Length == 4)
        {
            foreach (String vertex in rawData)
            {
                if (vertex == "f") // disregard the identifier
                    continue;
                String index = vertex.Split(new String[] { "/" }, StringSplitOptions.RemoveEmptyEntries)[0];
                faceIndices.Add(Int32.Parse(index));
            }
        }
        else if (rawData.Length == 5) // add a quad if there are four vertices in the face
        {
            faceIndices.AddRange(new Int32[]
            {
                Int32.Parse(rawData[1].Split(new String[] { "/" }, StringSplitOptions.RemoveEmptyEntries)[0]),
                Int32.Parse(rawData[2].Split(new String[] { "/" }, StringSplitOptions.RemoveEmptyEntries)[0]),
                Int32.Parse(rawData[3].Split(new String[] { "/" }, StringSplitOptions.RemoveEmptyEntries)[0]),
                Int32.Parse(rawData[2].Split(new String[] { "/" }, StringSplitOptions.RemoveEmptyEntries)[0]),
                Int32.Parse(rawData[3].Split(new String[] { "/" }, StringSplitOptions.RemoveEmptyEntries)[0]),
                Int32.Parse(rawData[4].Split(new String[] { "/" }, StringSplitOptions.RemoveEmptyEntries)[0])
            });
        }
        else
        {
            throw new Exception("Invalid Data!");
        }
        indexList.AddRange(faceIndices);
        // calculate the face normal
        Vector3 a = positionVectors[faceIndices[0]] - positionVectors[faceIndices[1]];
        Vector3 b = positionVectors[faceIndices[1]] - positionVectors[faceIndices[2]];
        a.Normalize();
        b.Normalize();
        Vector3 normal = Vector3.Cross(a, b);
        normal.Normalize();
        foreach (Int32 index in faceIndices)
            rawNormals[index].Add(normal);
    }
}

I'm guessing the problem is how I'm arranging the triangle/quad indices, but I'm not sure how they should be ordered. Any suggestions?
J.W.


#2 Bacterius   Crossbones+   -  Reputation: 9286


Posted 18 October 2012 - 10:48 PM

You are parsing every face identically, so any discrepancies between individual face normals come not from your code but from the model itself. This happens all the time when I'm working with refraction (where the normal actually does matter). What I usually do is feed the model through a 3D program like 3ds Max and "unify" the faces, which flips faces so that all the normals point in a consistent direction. The program doesn't know what "inside" or "outside" is, though, so you still have two possible interpretations, which you can test relatively easily (import your model into your game and check the back-face culling).

In your quad face code, shouldn't it be 1-2-3, 1-3-4 instead? Using 1-2-3, 2-3-4 wouldn't cover the whole face... although I could be misunderstanding the code.
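To illustrate, here is a minimal sketch of that fan triangulation. The helper name `TriangulateQuad` is made up for the example; it assumes the four corner indices are given in winding order, and splits the quad along the i0-i2 diagonal (the 1-2-3 / 1-3-4 pattern):

```csharp
using System;
using System.Collections.Generic;

class QuadSplit
{
    // Split a quad with corners (i0, i1, i2, i3), listed in winding order,
    // into two triangles sharing the diagonal i0-i2: (i0, i1, i2) and
    // (i0, i2, i3). Both triangles keep the quad's winding, so back-face
    // culling treats them consistently.
    static List<int> TriangulateQuad(int i0, int i1, int i2, int i3)
    {
        return new List<int> { i0, i1, i2, i0, i2, i3 };
    }

    static void Main()
    {
        List<int> tris = TriangulateQuad(1, 2, 3, 4);
        Console.WriteLine(string.Join(",", tris)); // 1,2,3,1,3,4
    }
}
```

The original 1-2-3, 2-3-4 split leaves the corner triangle (1, 3, 4) uncovered, which is exactly the kind of gap described in the question.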

Also, some .obj models include explicit vertex normals (the "vn" keyword), and those are probably correct, so you can always use them when they are available. They are per-vertex, though, not per-face.
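A sketch of picking those up during parsing, using the same line-splitting convention as the code above (the `normalVectors` list and the sample lines are made up for the example; in a face entry `v/vt/vn`, the third slash-separated field indexes into this list, 1-based):

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;

class VnParse
{
    static void Main()
    {
        var normalVectors = new List<float[]>();
        string[] fileLines = { "vn 0.0 1.0 0.0", "v 1 2 3", "vn 0.5 0.5 0.0" };

        foreach (string line in fileLines)
        {
            string[] rawData = line.Split(new char[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
            if (rawData.Length == 4 && rawData[0] == "vn")
            {
                // One normal per "vn" line; InvariantCulture avoids
                // locale-dependent decimal separators.
                normalVectors.Add(new float[]
                {
                    float.Parse(rawData[1], CultureInfo.InvariantCulture),
                    float.Parse(rawData[2], CultureInfo.InvariantCulture),
                    float.Parse(rawData[3], CultureInfo.InvariantCulture)
                });
            }
        }
        Console.WriteLine(normalVectors.Count); // 2
    }
}
```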

In addition, you don't need to normalize the a and b vectors - the cross product will have the same direction regardless.
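A quick numeric check of that point, using plain arrays instead of the post's `Vector3` type: the cross product of the raw vectors and of the pre-normalized vectors point the same way once the result is normalized.

```csharp
using System;

class CrossCheck
{
    // Standard 3D cross product.
    static float[] Cross(float[] u, float[] v) => new float[]
    {
        u[1] * v[2] - u[2] * v[1],
        u[2] * v[0] - u[0] * v[2],
        u[0] * v[1] - u[1] * v[0]
    };

    static float[] Normalize(float[] u)
    {
        float len = (float)Math.Sqrt(u[0] * u[0] + u[1] * u[1] + u[2] * u[2]);
        return new float[] { u[0] / len, u[1] / len, u[2] / len };
    }

    static void Main()
    {
        float[] a = { 3, 0, 0 }, b = { 0, 2, 0 };
        // Same direction either way; only the length differs before
        // the final normalization.
        float[] n1 = Normalize(Cross(a, b));
        float[] n2 = Normalize(Cross(Normalize(a), Normalize(b)));
        Console.WriteLine($"{n1[2]} {n2[2]}"); // 1 1
    }
}
```

Skipping the two input normalizations saves two square roots per face, which adds up on large models.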

Edited by Bacterius, 18 October 2012 - 10:50 PM.
