# [raytracing] Surface normal on triangle sometimes has the wrong direction


## Recommended Posts

In the image above, I have marked with a red rectangle two pixels that are rendered black when they should not be. I did some debugging and found that in those cases the surface normal points in the wrong direction. When I set n = -n, the pixel got its correct color (but if I do that for all pixels, all the correct ones become wrong).

This is the code in which the surface normal is calculated. It is in the Triangle class (the torus is made of triangles - I import the model).

The surface normal is calculated at the top. Any ideas why its direction is wrong? (It points in the opposite direction from what it should.)

```d
override bool hit(const ref Ray r, float p0, float p1, ref HitInfo hitInfo)
{
    // (e + td - a) . n = 0
    // ... t(d.n) + e.n - a.n = 0
    // we need to solve for t to find the solution
    // t = (dot(a, n) - dot(e, n)) / dot(d, n);
    // and because the dot product is distributive,
    // we can write t = dot(a-e, n) / dot(d, n)

    Vector3 temp0 = b-a, temp1 = c-a;
    Vector3 n = cross(temp0, temp1); // normal of the plane the triangle lies on
    n.normalize();

    // TODO: r.d may have to be normalized?

    // if the normal and the ray are parallel, there's no intersection
    float dDotN = dot(r.d, n);

    // TODO: this might be causing accuracy problems. I should check it out
    if( dDotN == 0 )
        return false;

    // find the intersection point P with the plane
    const Vector3 aMinusR = a - r.e;
    float t = dot(aMinusR, n) / dDotN;
    Vector3 p = r.e + r.d * t;

    // now we need to test if the point is inside the triangle

    // 1) test if it's in the negative subspace of vector ab
    Vector3 ab = b-a;
    Vector3 ap = p-a;
    Vector3 c1 = cross(ab, ap);
    c1.normalize();

    Vector3 ac = c-a;
    Vector3 c2 = cross(ab, ac);
    c2.normalize();

    immutable E = 0.01f;

    if( abs(c1.x - c2.x) > E || abs(c1.y - c2.y) > E || abs(c1.z - c2.z) > E )
        return false;

    // 2) test if it's in the negative subspace of vector bc
    Vector3 bc = c-b;
    Vector3 bp = p-b;
    c1 = cross(bc, bp);
    c1.normalize();

    Vector3 ba = b-a;
    c2 = cross(ba, bc);
    c2.normalize();

    if( abs(c1.x - c2.x) > E || abs(c1.y - c2.y) > E || abs(c1.z - c2.z) > E )
        return false;

    // 3) test if it's in the negative subspace of vector ca
    Vector3 ca = a-c;
    Vector3 cp = p-c;
    c1 = cross(ca, cp);
    c1.normalize();

    Vector3 cb = b-c;
    c2 = cross(ca, cb);
    c2.normalize();

    if( abs(c1.x - c2.x) > E || abs(c1.y - c2.y) > E || abs(c1.z - c2.z) > E )
        return false;

    hitInfo.t = t;
    hitInfo.hitPoint = p;
    hitInfo.surfaceNormal = n;
    hitInfo.hitSurface = this;
    hitInfo.ray = r.d;

    //import std.stdio;
    //writeln("in triangle hit");

    return true;
}
```


##### Share on other sites
Out of curiosity, are you sure the winding of the vertices in each triangle of the torus is consistent (that is, the verts in every triangle should all be ordered clockwise, or all counter-clockwise; triangles with different windings will have flipped normals)? Also, what are the values of the vertices (is there potential precision error)? There isn't anything obviously wrong with your normal calculation, which is why I'm asking these questions. This is the code for triangles I used in my own raytracer (in case it provides any inspiration):

```cpp
// When loading triangles, I set the normal as:
// Vector3f u = t.b - t.a; (note: t is the Triangle)
// Vector3f v = t.c - t.a;
// t.normal = u.cross(v);
// t.normal.normalize();

class Triangle : public Shape
{
public:
    Vector3f a, b, c;
    Vector3f normal; // normalized

    Triangle() : Shape(Shape::E_TRIANGLE)
    {
    }

    virtual bool intersects(const Ray& ray, float& t, Vector3f& n) const
    {
        // from http://softsurfer.com/Archive/algorithm_0105/algorithm_0105.htm (and Plane.intersect())
        n = normal;

        float num = normal.dot(a - ray.start);
        float den = normal.dot(ray.direction);

        if (num != 0 && den == 0)
        {
            return false;
        }
        else if (num == 0 && den == 0)
        {
            t = 0; // parallel in same plane
            return true;
        }
        else
        {
            t = num / den;
        }

        Vector3f u = b - a;
        Vector3f v = c - a;
        Vector3f w = ray.start + ray.direction * t - a;

        float uv = u.dot(v);
        float wv = w.dot(v);
        float vv = v.dot(v);
        float wu = w.dot(u);
        float uu = u.dot(u);

        float s1 = (uv * wv - vv * wu) / (uv * uv - uu * vv);
        float t1 = (uv * wu - uu * wv) / (uv * uv - uu * vv);

        return s1 >= 0 && t1 >= 0 && (s1 + t1) <= 1;
    }
};
```

Edited by Cornstalks

##### Share on other sites

Everything should be in right-hand coordinates (clockwise winding).

This is the triangle in question:

[Vector3(-20.4441, 9.89595, 84.9516), Vector3(-19.5101, 9.46233, 85.6803), Vector3(-21.4244, 11.3275, 87.3727)]

If I am not mistaken, this is wrong, correct?

##### Share on other sites

Actually, I changed the floats to doubles for everything geometry-related, and those black pixels disappeared! So yes, they were precision errors.
