Raytracing - circle looks "ugly"

Started by
10 comments, last by sheep19 11 years, 3 months ago
Hi, I'm implementing raytracing to become familiar with it.

So far, I create two spheres.
Sphere 1 is at (0, 0, 5), radius = 1
Sphere 2 is at (0, 10, 50), radius = 20

Camera is at (0, 0, -1)

This is the Sphere class [only the hit function is interesting]
class Sphere : Surface
{
    Material material;
    Vector3 center;
    float radius;

    this()
    {
    }

    this(const ref Vector3 _center, float _radius)
    {
        center = _center;
        radius = _radius;
    }

    bool hit(const Ray r, float p0, float p1, ref HitInfo hitInfo) const
    {
        Vector3 d = r.d, e = r.e, c = center;

        float discriminant = dot(d, e-c) * dot(d, e-c) - dot(d, d) * (dot(e-c, e-c) - radius * radius);
        if( discriminant >= 0 )
        {
            //float t1 = (dot(-d, e-c) + sqrt(discriminant)) / dot(d, d);
            float t2 = (dot(-d, e-c) - sqrt(discriminant)) / dot(d, d);

            // TODO: don't forget to change this if needed
            if( t2 < p0 || t2 > p1 )
                return false;

            hitInfo.t = t2;
            hitInfo.hitPoint = e + d * t2;
            hitInfo.ray = d;
            hitInfo.surfaceNormal = (hitInfo.hitPoint - c) * 2; // is this correct?
        }

        return discriminant >= 0; // TODO: implement
    }

    Box boundingBox() const
    {
        // extents are the full radius; a half-radius box would clip the sphere
        Vector3 min = {center.x - radius, center.y - radius, center.z - radius};
        Vector3 max = {center.x + radius, center.y + radius, center.z + radius};

        Box b = {min, max};

        return b;
    }

    Vector3 shade(HitInfo hitInfo) const
    {
        return material.shade(hitInfo);
    }
}
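The quadratic in hit comes from substituting the ray p(t) = e + t·d into the implicit sphere equation |p − c|² = r². A small self-contained C++ sketch of the same test (the thread's code is D; Vec3 and hitSphere here are illustrative names, not from the post):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Nearest intersection of the ray e + t*d with the sphere |p - c| = r,
// or -1 if the ray misses. Same discriminant as the D code, written as
// the standard quadratic A t^2 + B t + C = 0 (scaled by a factor of 4).
double hitSphere(Vec3 e, Vec3 d, Vec3 c, double r) {
    Vec3 ec = e - c;
    double A = dot(d, d);
    double B = 2.0 * dot(d, ec);
    double C = dot(ec, ec) - r * r;
    double disc = B * B - 4.0 * A * C;
    if (disc < 0.0) return -1.0;               // ray misses the sphere
    return (-B - std::sqrt(disc)) / (2.0 * A); // smaller root = entry point
}
```

For the scene above (camera at (0, 0, −1), Sphere 1 at (0, 0, 5) with radius 1), a ray straight down +z gives t = 5, i.e. a hit at z = 4, the front of the sphere.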
So, Spheres need to have a Material, that decides how they are going to be drawn.
In the simplest case, my material can be:
class SimpleColor : Material
{
    private Vector3 color;

    this(ubyte r, ubyte g, ubyte b)
    {
        color.x = r / 255.0f;
        color.y = g / 255.0f;
        color.z = b / 255.0f;
    }

    Vector3 shade(HitInfo hitInfo) const
    {
        return color;
    }
}
So, I chose red and green colours. Everything looks fine.
[image: http://i50.tinypic.com/35b7b49.png]



The next step was to make the shade function a bit more complicated:
Vector3 shade(HitInfo hitInfo) const
{
    Vector3 l = {0, 1, 0}; // let's say that the light is at this position. (It's hardcoded for now - just for the test)
    Vector3 lightVector = l - hitInfo.hitPoint; // vector from the hit point to the light

    // TODO: fix
    if( hitInfo.surfaceNormal.dot(lightVector) <= 0 )
    {
        return color * 0;
    }
    else
    {
        return color * hitInfo.surfaceNormal.dot(lightVector);
    }
}
So I just return the colour multiplied by the dot product of the surface normal and the light-direction vector. What I get is this:
[image: http://i45.tinypic.com/29zeek1.png]

Are all those normals negative? Or am I doing something wrong? (I bet I am :) )
I'm going to stick my neck out here, seeing as how I have never created a raytracer of any kind. What jumps out to me is that you are not normalizing either the normal or "lightVector".
Think about what spaces l, lightVector and hitInfo.hitPoint are in -- it might be that you actually do want to do what you're doing, but it seems unlikely to me.

A normalized l implies you want directional lighting (i.e. l is essentially at infinity for all purposes, like light from the sun), but taking a vector between l and hitPoint implies a local point light that exists in the same coordinate space as hitPoint.
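The distinction can be made concrete; a hedged C++ sketch (the thread's code is D, and these function names are illustrative):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 normalize(Vec3 v) {
    double len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Directional light: l is a direction, identical for every hit point.
Vec3 directionalLightVector(Vec3 l) { return normalize(l); }

// Point light: l is a position in the same space as hitPoint; the
// light vector changes from one hit point to the next.
Vec3 pointLightVector(Vec3 l, Vec3 hitPoint) { return normalize(l - hitPoint); }
```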

There are probably also normalization issues, and possibly signedness as well.

throw table_exception("(? ???)? ? ???");

I didn't fully read your post, but seeing how the outline of your object is visible while the inside isn't:
A) Culling?
B) Inverted normals?
I don't really know much about this area, but those seem to be the common issues.
[quote name='sheep19']
hitInfo.surfaceNormal = (hitInfo.hitPoint - c) * 2; // is this correct?
[/quote]

That surface normal is incorrect. The correct one is:

hitInfo.surfaceNormal = normalize(hitInfo.hitPoint - c);

Or, alternatively, to potentially trade some accuracy for speed:

hitInfo.surfaceNormal = (hitInfo.hitPoint - c) / radius;

You also need to normalize the vector between the light source and the hitpoint, in your shading code.

Remember the cardinal rule of ray-tracing: if in doubt, normalize. The vectors which need to have a non-unit magnitude are few and far between, and are usually obvious.

“If I understand the standard right it is legal and safe to do this but the resulting value could be anything.”


[quote name='Bacterius']
hitInfo.surfaceNormal = normalize(hitInfo.hitPoint - c);
[/quote]

Are you sure? That's what my book says (Fundamentals of Computer Graphics, 3rd edition).

[quote name='sheep19' timestamp='1355486778' post='5010571']
Are you sure? That's what my book says (Fundamentals of Computer Graphics, 3rd edition)
[/quote]
I'm positive. I mean, assuming your hitPoint function does what I think it does.



[quote name='Bacterius']
I'm positive. I mean, assuming your hitPoint function does what I think it does.
[/quote]

Actually, it's the same. I normalized them both and the result is the same.

OK, now I am normalizing the vectors (I had forgotten to do so), and things are better. Much better.

But there's a small problem:

[image: 20a8hw2.png]

I don't understand why this happens.

This is the new shade function:

Vector3 shade(HitInfo hitInfo) const
{
    Vector3 l = {0, 1, 0}; // let's say that the light is at this position
    Vector3 lightVector = l - hitInfo.hitPoint; // vector from the hit point to the light

    Vector3 I = {1.0f, 1.0f, 1.0f}; // color of the light

    lightVector.normalize();
    hitInfo.surfaceNormal.normalize();

    // TODO: fix
    if( hitInfo.surfaceNormal.dot(lightVector) <= 0 )
    {
        return I * color * 0;
    }
    else
    {
        return I * color * hitInfo.surfaceNormal.dot(lightVector);
    }
}


By the way, this is the loop in which the rays are cast:

for(int x = 0; x < SCREEN_WIDTH; ++x)
{
    for(int y = 0; y < SCREEN_HEIGHT; ++y)
    {
        //x = SCREEN_WIDTH / 2;
        //y = SCREEN_HEIGHT / 2;

        Vector3 p = {(x - SCREEN_WIDTH * 0.5f) / (SCREEN_WIDTH * 0.5f), (y - SCREEN_HEIGHT * 0.5f) / (SCREEN_HEIGHT * 0.5f), 0};
        Ray r = {cameraPos, p - cameraPos};

        HitInfo hitInfo;
        Surface closestObject; // default initialized to null
        float t = float.max;

        foreach(obj; scene.objects)
        {
            if( obj.hit(r, 0.1f, 1000, hitInfo) && hitInfo.t < t ) // hit?
            {
                t = hitInfo.t;
                closestObject = obj;
            }
        }

        if( closestObject !is null )
        {
            Vector3 color = closestObject.shade(hitInfo);

            writePixel(screen, x, SCREEN_HEIGHT - 1 - y, cast(ubyte)(color.x * 255), cast(ubyte)(color.y * 255), cast(ubyte)(color.z * 255));
        }

        //x = y = 1000;
    }
}


Oh, I forgot. Why do the spheres look like ovals?

[quote name='sheep19']
Actually, it's the same. I normalized them both and the result is the same.
[/quote]

Yes, the vector you had before has the correct direction (the [font=courier new,courier,monospace]hitpoint - c[/font] part), but the length was incorrect. That's why you need to normalize it (or, divide by the radius, which is equivalent if the hitpoint is on the sphere's surface). Normalizing just sets the vector's length to 1, leaving its direction unchanged, so if two vectors have the same direction but different lengths, normalizing them both will produce the same result.
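To make the equivalence concrete: for a point on the surface of a sphere of radius r, |hitPoint − c| = r, so normalize(hitPoint − c), (hitPoint − c) / r, and normalizing the book's 2·(hitPoint − c) all yield the same unit normal. A quick C++ check (the thread's code is D; function names here are illustrative):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
Vec3 normalize(Vec3 v) {
    double len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Three ways to get the unit surface normal at p on a sphere centred at c.
// The first two normalize away any scale factor; the third relies on
// |p - c| equaling the known radius r.
Vec3 sphereNormalScaled(Vec3 p, Vec3 c)             { return normalize((p - c) * 2.0); }
Vec3 sphereNormalUnit(Vec3 p, Vec3 c)               { return normalize(p - c); }
Vec3 sphereNormalByRadius(Vec3 p, Vec3 c, double r) { return (p - c) * (1.0 / r); }
```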

For your problem, look closely at your intersection code for every object. Notice that hitInfo is a storage variable which is overwritten at every ray-sphere intersection. The actual closest-intersection results are located in the variables t and closestObject. You correctly identify that in the loop, but upon shading (at line 26) you make a mistake and use hitInfo (which now contains the intersection data for the last sphere intersected, not necessarily the closest) instead of the correct variable t. You want to keep the closest intersection in its own HitInfo structure, instead of just the distance, since you need the other data (intersection point, normal, etc.) for shading too. So something like this:

HitInfo current, closest;
Surface closestObject; // default initialized to null
closest.t = float.max;

foreach(obj; scene.objects)
{
    if( obj.hit(r, 0.1f, 1000, current) && current.t < closest.t ) // hit?
    {
        closest = current;
        closestObject = obj;
    }
}


And then:

Vector3 color = closestObject.shade(closest);


[quote name='sheep19']
Oh, I forgot. Why do the spheres look like ovals?
[/quote]
Aspect ratio. You are rendering your 1024x768 (or whatever) bitmap as if it was a square, with (x, y) coordinates between -1 and 1, which stretches the spheres horizontally into ovals. To take aspect ratio into account, you need to scale x, like so:

float aspectRatio = cast(float)SCREEN_WIDTH / SCREEN_HEIGHT; // [w/h]

Vector3 p = {aspectRatio * (x - SCREEN_WIDTH * 0.5f) / (SCREEN_WIDTH * 0.5f), (y - SCREEN_HEIGHT * 0.5f) / (SCREEN_HEIGHT * 0.5f), 0};


This can be simplified, but you get the idea. You can also contract the height instead of expanding the width; these are conventions depending on your definition of "aspect ratio". Not something you want to worry about at this point.


Hello again. I noticed that, when there are two lights, it's like one of them overwrites the other's colour (for the specular highlights). Here's what I mean:

[image: 20ku0kk.jpg]

If I remove one of the two lights, it is shown correctly.

Here is my shade function. Is there anything wrong?


Vector3 shade(HitInfo hitInfo, ref Scene scene) const
{
    Vector3 finalColor = Vector3(0.0f, 0.0f, 0.0f); // the final color

    // normalize the surface normal
    hitInfo.surfaceNormal.normalize();

    foreach(light; scene.lights)
    {
        Vector3 lightVector = light.position - hitInfo.hitPoint; // vector from the hit point to the light

        lightVector.normalize();

        HitInfo hitInfo2;
        Ray ray = {hitInfo.hitPoint, light.position - hitInfo.hitPoint};
        ray.d.normalize();

        if( !scene.trace(ray, hitInfo2, 0.1f) )
        {
            // diffuse shading
            if( hitInfo.surfaceNormal.dot(lightVector) > 0 )
            {
                finalColor = finalColor + light.I * color * hitInfo.surfaceNormal.dot(lightVector);

                hitInfo.ray = -hitInfo.ray;
                hitInfo.ray.normalize();

                // specular shading
                Vector3 H = (lightVector + hitInfo.ray) * 0.5f; // find the half vector, H
                H.normalize();

                float specularDotProduct = dot(hitInfo.surfaceNormal, H);

                if( specularDotProduct > 0.0f )
                    finalColor = finalColor + light.I * std.math.pow(specularDotProduct, 10.0f);
            }
        }
        else
        {
            // no color is added, shadow is shown
        }
    }

    return finalColor;
}
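The thread closes without an answer, but two things in the code above are worth checking. First, hitInfo.ray is negated inside the loop, so its sign flips on each light and the half vector is computed with the wrong view direction for every second light. Second, the summed finalColor can exceed 1.0, and the earlier cast(ubyte)(color.x * 255) then overflows the byte instead of saturating at white. A hedged C++ sketch of the saturating quantization (toByte is an illustrative helper, not from the thread):

```cpp
#include <algorithm>
#include <cstdint>

// Clamp a summed light contribution to [0, 1] before quantizing, so
// multiple lights saturate at white instead of overflowing the byte cast.
std::uint8_t toByte(double channel) {
    double clamped = std::max(0.0, std::min(1.0, channel));
    return static_cast<std::uint8_t>(clamped * 255.0);
}
```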

By the way, I have triangles now! I will write an .obj loader soon, and I will have 3D meshes!!! :)

This topic is closed to new replies.
