# Raytracing - circle looks "ugly"


11 replies to this topic

### #1sheep19  Members   -  Reputation: 413


Posted 13 December 2012 - 06:17 PM

Hi, I'm implementing a raytracer to become familiar with the technique.

So far, I have created two spheres:
Sphere 1 is at (0, 0, 5), radius = 1
Sphere 2 is at (0, 10, 50), radius = 20

The camera is at (0, 0, -1)

This is the Sphere class [only the hit function is interesting]:

class Sphere : Surface
{
    Material material;
    Vector3 center;
    float radius;

    this()
    {
    }

    this(const ref Vector3 _center, float _radius)
    {
        center = _center;
        radius = _radius;
    }

    bool hit(const Ray r, float p0, float p1, ref HitInfo hitInfo) const
    {
        Vector3 d = r.d, e = r.e, c = center;

        float discriminant = dot(d, e - c) * dot(d, e - c) - dot(d, d) * (dot(e - c, e - c) - radius * radius);
        if( discriminant >= 0 )
        {
            //float t1 = (dot(-d, e - c) + sqrt(discriminant)) / dot(d, d);
            float t2 = (dot(-d, e - c) - sqrt(discriminant)) / dot(d, d);

            // TODO: don't forget to change this if needed
            if( t2 < p0 || t2 > p1 )
                return false;

            hitInfo.t = t2;
            hitInfo.hitPoint = e + d * t2;
            hitInfo.ray = d;
            hitInfo.surfaceNormal = (hitInfo.hitPoint - c) * 2; // is this correct?
        }

        return discriminant >= 0; // TODO: implement
    }

    Box boundingBox() const
    {
        // the box must extend a full radius from the center in each direction
        Vector3 min = {center.x - radius, center.y - radius, center.z - radius};
        Vector3 max = {center.x + radius, center.y + radius, center.z + radius};

        Box b = {min, max};

        return b;
    }
}
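As an aside, the discriminant in hit comes from substituting the ray e + t*d into the sphere equation |p - c|^2 = r^2 and solving the resulting quadratic for t. A minimal Python sketch of the same test (function and variable names are mine, not from the thread):

```python
import math

def hit_sphere(e, d, c, radius):
    """Return the nearer root t where the ray e + t*d hits the sphere, or None on a miss."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    ec = tuple(a - b for a, b in zip(e, c))            # e - c
    # reduced discriminant (b/2)^2 - a*c of (d.d) t^2 + 2 d.(e-c) t + (|e-c|^2 - r^2) = 0
    disc = dot(d, ec) ** 2 - dot(d, d) * (dot(ec, ec) - radius * radius)
    if disc < 0:
        return None
    # nearer root, matching t2 in the D code above
    return (-dot(d, ec) - math.sqrt(disc)) / dot(d, d)

# ray from the origin straight down +z at a unit sphere centred at (0, 0, 5):
t = hit_sphere((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 5.0), 1.0)  # t = 4, the front surface
```

The same reduced form avoids the factor of 2 in the classic quadratic formula, which is why no 2s appear in the D expression either.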

So, spheres need to have a Material, which decides how they are going to be drawn.
In the simplest case, my material can be:

class SimpleColor : Material
{
    private Vector3 color;

    this(ubyte r, ubyte g, ubyte b)
    {
        color.x = r / 255.0f;
        color.y = g / 255.0f;
        color.z = b / 255.0f;
    }

    Vector3 shade(HitInfo hitInfo) const
    {
        return color;
    }
}

So, I chose red and green colours. Everything looks fine.
[image: http://i50.tinypic.com/35b7b49.png]

The next step was to make the shade function a bit more complicated:
Vector3 shade(HitInfo hitInfo) const
{
    Vector3 l = {0, 1, 0}; // let's say that the light is at this position. (It's hardcoded for now - just for the test)
    Vector3 lightVector = l - hitInfo.hitPoint; // vector from the hit point to the light

    // TODO: fix
    if( hitInfo.surfaceNormal.dot(lightVector) <= 0 )
    {
        return color * 0;
    }
    else
    {
        return color * hitInfo.surfaceNormal.dot(lightVector);
    }
}

So I just return the colour multiplied by the dot product of the surface normal and the light direction vector. What I get is this:
[image: http://i45.tinypic.com/29zeek1.png]

Are all those normals negative? Or am I doing something wrong? (I bet I am.)
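For reference, the shading attempted here is Lambert's cosine law: the colour scaled by max(0, n · l). A tiny Python sketch of just that clamp (names are mine; it assumes unit-length vectors, which turns out to matter below):

```python
def lambert(normal, light_dir, color):
    """Diffuse term: colour scaled by max(0, n . l); assumes both vectors are unit length."""
    ndotl = sum(n * l for n, l in zip(normal, light_dir))
    return tuple(c * max(0.0, ndotl) for c in color)

# light hitting the surface head-on gives the full colour...
full = lambert((0.0, 0.0, -1.0), (0.0, 0.0, -1.0), (1.0, 0.0, 0.0))
# ...and light arriving from behind is clamped to black
dark = lambert((0.0, 0.0, -1.0), (0.0, 0.0, 1.0), (1.0, 0.0, 0.0))
```

If either vector is not unit length, the dot product scales the result arbitrarily, which is exactly the kind of artefact shown in the screenshot.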

### #2MarkS  Prime Members   -  Reputation: 975


Posted 13 December 2012 - 06:48 PM

I'm going to stick my neck out here, seeing as I have never created a raytracer of any kind, but what jumps out at me is that you are not normalizing either the normal or "lightVector".

### #3Ravyne  GDNet+   -  Reputation: 9633


Posted 13 December 2012 - 07:15 PM

Think about what spaces l, lightVector and hitInfo.hitPoint are in -- it might be that you actually do want to do what you're doing, but it seems unlikely to me.

A normalized l implies directional lighting (i.e. l is essentially at infinity for all purposes, like light from the sun), but taking a vector between l and hitPoint implies a local point light that exists in the same coordinate space as hitPoint.

There are probably also normalization issues, and possibly signedness as well.

Edited by Ravyne, 13 December 2012 - 07:16 PM.

### #4uglybdavis  Members   -  Reputation: 1047


Posted 13 December 2012 - 07:31 PM

I didn't fully read your post, but seeing how the outline of your object is visible while the inside isn't:
A) Culling?
B) Inverted normals?

### #5Bacterius  Crossbones+   -  Reputation: 9886


Posted 13 December 2012 - 07:35 PM

> hitInfo.surfaceNormal = (hitInfo.hitPoint - c) * 2; // is this correct?

That surface normal is incorrect. The correct one is:

hitInfo.surfaceNormal = normalize(hitInfo.hitPoint - c);

Or, alternatively, to potentially trade some accuracy for speed:

hitInfo.surfaceNormal = (hitInfo.hitPoint - c) / radius;

You also need to normalize the vector between the light source and the hitpoint, in your shading code.

Remember the cardinal rule of ray tracing: if in doubt, normalize. The vectors that need a non-unit magnitude are few and far between, and they are usually obvious.
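To make "if in doubt, normalize" concrete: normalizing just divides a vector by its own length, so the result always has magnitude 1. A hypothetical Python helper:

```python
import math

def normalize(v):
    """Scale v to unit length (a stand-in for the normalize() used in the thread)."""
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

n = normalize((3.0, 4.0, 0.0))   # a 3-4-5 triangle: length 5 before, length 1 after
```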

Edited by Bacterius, 13 December 2012 - 07:37 PM.

The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

- Pessimal Algorithms and Simplexity Analysis

### #6sheep19  Members   -  Reputation: 413


Posted 14 December 2012 - 06:06 AM

> hitInfo.surfaceNormal = normalize(hitInfo.hitPoint - c);

Are you sure? That's what my book says (Fundamentals of Computer Graphics, 3rd edition).

### #7Bacterius  Crossbones+   -  Reputation: 9886


Posted 14 December 2012 - 06:15 AM

> hitInfo.surfaceNormal = normalize(hitInfo.hitPoint - c);
>
> Are you sure? That's what my book says (Fundamentals of Computer Graphics, 3rd edition).

I'm positive. I mean, assuming your hitPoint function does what I think it does.

Edited by Bacterius, 14 December 2012 - 06:20 AM.


### #8sheep19  Members   -  Reputation: 413


Posted 14 December 2012 - 06:59 AM

> hitInfo.surfaceNormal = normalize(hitInfo.hitPoint - c);
>
> Are you sure? That's what my book says (Fundamentals of Computer Graphics, 3rd edition).
>
> I'm positive. I mean, assuming your hitPoint function does what I think it does.

Actually, it's the same. I normalized them both and the result is the same.

OK, now I am normalizing the vectors (I had forgotten to do so), and things are better. Much better.

But there's a small problem:

I don't understand why this happens.

This is the new shade function:
Vector3 shade(HitInfo hitInfo) const
{
    Vector3 l = {0, 1, 0}; // let's say that the light is at this position
    Vector3 lightVector = l - hitInfo.hitPoint; // vector from the hit point to the light

    Vector3 I = {1.0f, 1.0f, 1.0f}; // color of the light

    lightVector.normalize();
    hitInfo.surfaceNormal.normalize();

    // TODO: fix
    if( hitInfo.surfaceNormal.dot(lightVector) <= 0 )
    {
        return I * color * 0;
    }
    else
    {
        return I * color * hitInfo.surfaceNormal.dot(lightVector);
    }
}


By the way, this is the loop in which the rays are cast:
for(int x = 0; x < SCREEN_WIDTH; ++x)
{
    for(int y = 0; y < SCREEN_HEIGHT; ++y)
    {
        //x = SCREEN_WIDTH / 2;
        //y = SCREEN_HEIGHT / 2;

        Vector3 p = {(x - SCREEN_WIDTH * 0.5f) / (SCREEN_WIDTH * 0.5f), (y - SCREEN_HEIGHT * 0.5f) / (SCREEN_HEIGHT * 0.5f), 0};
        Ray r = {cameraPos, p - cameraPos};

        HitInfo hitInfo;
        Surface closestObject; // default initialized to null
        float t = float.max;

        foreach(obj; scene.objects)
        {
            if( obj.hit(r, 0.1f, 1000, hitInfo) && hitInfo.t < t ) // hit?
            {
                t = hitInfo.t;
                closestObject = obj;
            }
        }

        if( closestObject !is null )
        {
            Vector3 color = closestObject.shade(hitInfo);
            writePixel(screen, x, SCREEN_HEIGHT - 1 - y, cast(ubyte)(color.x * 255), cast(ubyte)(color.y * 255), cast(ubyte)(color.z * 255));
        }

        //x = y = 1000;
    }
}


Oh, I forgot. Why do the spheres look like ovals?

Edited by sheep19, 14 December 2012 - 07:02 AM.

### #9Bacterius  Crossbones+   -  Reputation: 9886


Posted 14 December 2012 - 07:15 AM

> Actually, it's the same. I normalized them both and the result is the same.

Yes, the vector you had before has the correct direction (the hitPoint - c part), but its length was wrong. That's why you need to normalize it (or divide by the radius, which is equivalent when the hit point lies on the sphere's surface). Normalizing just sets a vector's length to 1 while leaving its direction unchanged, so two vectors with the same direction but different lengths normalize to the same result.
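That equivalence is easy to check numerically: for a point actually on the sphere, |hitPoint - c| equals the radius, so normalizing and dividing by the radius give the same unit vector. A small Python sketch (the values are my own example):

```python
import math

# a point on a sphere of radius 2 centred at (0, 0, 5)
center, radius = (0.0, 0.0, 5.0), 2.0
hit_point = (0.0, 0.0, 3.0)                       # |hit_point - center| == radius

diff = tuple(p - c for p, c in zip(hit_point, center))
length = math.sqrt(sum(x * x for x in diff))      # equals the radius here

normalized = tuple(x / length for x in diff)      # normalize(hitPoint - c)
divided = tuple(x / radius for x in diff)         # (hitPoint - c) / radius
```

For a point off the surface the two expressions differ, which is why the divide-by-radius shortcut trades a little safety for speed.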

For your problem, look closely at your intersection code. Notice that hitInfo is a scratch variable which is overwritten at every ray-sphere intersection; the actual closest-intersection results live in the variables t and closestObject. You handle that correctly in the loop, but when shading you mistakenly use hitInfo (which now holds the data for the last sphere intersected, not necessarily the closest one) instead of the data belonging to t. You want to keep track of the closest intersection in its own HitInfo structure, not just its distance, since you also need the other data (intersection point, normal, etc.) for shading. So something like this:

HitInfo current, closest;
Surface closestObject; // default initialized to null
closest.t = float.max;

foreach(obj; scene.objects)
{
    if( obj.hit(r, 0.1f, 1000, current) && current.t < closest.t ) // hit?
    {
        closest = current;
        closestObject = obj;
    }
}

And then:

Vector3 color = closestObject.shade(closest);

> Oh, I forgot. Why do the spheres look like ovals?

Aspect ratio. You are rendering your 1024x768 (or whatever) bitmap as if it were square, with (x, y) coordinates between -1 and 1, which stretches the spheres horizontally into ovals. To take the aspect ratio into account, you can scale x like so:

float aspectRatio = cast(float)SCREEN_WIDTH / SCREEN_HEIGHT; // [w/h]

Vector3 p = {aspectRatio * (x - SCREEN_WIDTH * 0.5f) / (SCREEN_WIDTH * 0.5f), (y - SCREEN_HEIGHT * 0.5f) / (SCREEN_HEIGHT * 0.5f), 0};

This can be simplified, but you get the idea. You could also contract the height instead of expanding the width; these are just conventions depending on your definition of "aspect ratio", and not something to worry about at this point.
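That pixel-to-plane mapping can be sketched in Python as follows (the resolution is a made-up example; aspect widens x so a sphere projects as a circle):

```python
SCREEN_WIDTH, SCREEN_HEIGHT = 1024, 768   # example resolution, not from the thread

def pixel_to_plane(x, y):
    """Map a pixel to image-plane coordinates, widening x by w/h so spheres stay round."""
    aspect = SCREEN_WIDTH / SCREEN_HEIGHT
    px = aspect * (x - SCREEN_WIDTH * 0.5) / (SCREEN_WIDTH * 0.5)
    py = (y - SCREEN_HEIGHT * 0.5) / (SCREEN_HEIGHT * 0.5)
    return (px, py, 0.0)

# centre of the screen maps to the origin; the left edge reaches -aspect, not -1
corner = pixel_to_plane(0, 0)
```

With the correction, x spans [-4/3, 4/3] while y spans [-1, 1], matching the 4:3 shape of the bitmap.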

Edited by Bacterius, 14 December 2012 - 07:23 AM.


### #10sheep19  Members   -  Reputation: 413


Posted 24 December 2012 - 11:50 AM

Hello again. I noticed that, when I have two lights, it's as if one of them overwrites the other's colour (for the specular highlights). Here's what I mean:

If I remove one of the two lights, it is shown correctly.

Here is my shade function. Is there anything wrong?

Vector3 shade(HitInfo hitInfo, ref Scene scene) const
{
    Vector3 finalColor = Vector3(0.0f, 0.0f, 0.0f); // the final color

    // normalize the surface normal
    hitInfo.surfaceNormal.normalize();

    foreach(light; scene.lights)
    {
        Vector3 lightVector = light.position - hitInfo.hitPoint; // vector from the hit point to the light

        lightVector.normalize();

        HitInfo hitInfo2;
        Ray ray = {hitInfo.hitPoint, light.position - hitInfo.hitPoint};
        ray.d.normalize();

        if( !scene.trace(ray, hitInfo2, 0.1f) ) // not in shadow?
        {
            if( hitInfo.surfaceNormal.dot(lightVector) > 0 )
            {
                finalColor = finalColor + light.I * color * hitInfo.surfaceNormal.dot(lightVector);

                hitInfo.ray = -hitInfo.ray;
                hitInfo.ray.normalize();

                Vector3 H = (lightVector + hitInfo.ray) * 0.5f; // find the half vector, H
                H.normalize();

                float specularDotProduct = dot(hitInfo.surfaceNormal, H);

                if( specularDotProduct > 0.0f )
                    finalColor = finalColor + light.I * std.math.pow(specularDotProduct, 10.0f);
            }
        }
    }

    return finalColor;
}
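For reference, the specular part of this shade function is the Blinn-Phong half-vector model: H is the normalized average of the light and view directions, and the highlight is (n · H) raised to a shininess exponent (10 above). A small Python sketch of just that term (names are mine):

```python
import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def blinn_phong_specular(normal, light_dir, view_dir, shininess=10.0):
    """(n . H)^shininess, with H the normalized half vector between light and view."""
    h = normalize(tuple(l + w for l, w in zip(light_dir, view_dir)))
    ndoth = sum(n * x for n, x in zip(normal, h))
    return max(0.0, ndoth) ** shininess

# light and viewer both along the normal: the highlight peaks at 1
peak = blinn_phong_specular((0.0, 1.0, 0.0), (0.0, 1.0, 0.0), (0.0, 1.0, 0.0))
# light at 90 degrees to the viewer: (n . H) = sqrt(1/2), so the term is 0.5^5 = 0.03125
off = blinn_phong_specular((0.0, 1.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```

Note the *0.5f averaging in the D code is redundant once H is normalized, since normalization discards the magnitude anyway.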

By the way, I have triangles now! I will write an .obj loader soon, and I will have 3D meshes!!!

Edited by sheep19, 24 December 2012 - 11:53 AM.

### #11Bacterius  Crossbones+   -  Reputation: 9886


Posted 24 December 2012 - 03:44 PM

You're modifying hitInfo at every light (it's declared outside the foreach loop), which trashes the calculations for the second light. Basically, it does the first light right; then, for the second light, it incorrectly flips the hitInfo.ray vector once again, messing up the specular term (but not the diffuse, as you observed, since the diffuse doesn't use that variable). The third light comes out right, since the vector is flipped back in the right direction, the fourth is wrong again, and so on.

The easiest fix is to declare a temporary HitInfo variable inside the loop's scope and set it equal to your original hitInfo at the start of each iteration. In this case you could also just hoist the vector flip out of the loop, but that is less flexible and might bite you later if you mutate more of this variable.
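That per-light copy idea, sketched in Python (a stand-in HitInfo, not the actual D struct):

```python
import copy
from dataclasses import dataclass

@dataclass
class HitInfo:                      # stand-in for the D struct, not the real thing
    ray: tuple

def shade_rays(hit, num_lights):
    """Flip the view ray per light, using a per-light copy so state never leaks between lights."""
    seen = []
    for _ in range(num_lights):
        local = copy.copy(hit)      # fresh copy each iteration; the original is untouched
        local.ray = tuple(-x for x in local.ray)
        seen.append(local.ray)
    return seen

# every light sees the same, correctly flipped ray; with a shared mutated hit record,
# the second light would see the ray flipped back the wrong way
rays = shade_rays(HitInfo(ray=(0.0, 0.0, 1.0)), num_lights=3)
```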

> By the way, I have triangles now! I will write an .obj loader soon, and I will have 3D meshes!!!

Nice! Though if you want to render really complex meshes you'll need some sort of spatial acceleration structure to speed up the ray-triangle intersections, like an octree or a bounding volume hierarchy; otherwise it's going to take forever. But that's for later.

Edited by Bacterius, 24 December 2012 - 03:46 PM.


### #12sheep19  Members   -  Reputation: 413


Posted 24 December 2012 - 04:43 PM

Thank you very much
