Ray Tracer
Posted by Tipotas688

Hello, I'm building a ray tracer and something really weird is going on. :P Global variables:
D3DXVECTOR3 camPos(0.0f, 0.0f, -10.0f);
float fovW = (45.0f)*3.14f/180;
float fovH = (float)HEIGHT / (float)WIDTH * fovW;

I create 5 balls at (0, 1, 50), (0, -3, 100), (2, 0, 90), (-1, 0, 75), (-4, 4, 75), and one light at (0, 0, 1000). To create the ray for each pixel, I use the following for its direction, with the camera position as its origin:
float xz = ( (2.0f * (float)x) - (float)WIDTH) / (float)WIDTH * tanFovW;
float yz = ( (2.0f * (float)y) - (float)HEIGHT) / (float)HEIGHT * tanFovH;

D3DXVECTOR3 m_direction = D3DXVECTOR3(xz- camPos.x, yz-camPos.y, -camPos.z);
NORMALIZE(m_direction);

Notice that the light is further ahead of every ball and the camera, yet this is the result I get: [image] Also, regardless of whether the objects are in shadow, I get the same result. :S Any ideas why I'm getting this result and not properly shaded spheres, even though I calculate the Lambert term?
// skip lights at 90 degrees or more from the normal (lambert <= 0)
if (lambert > 0)
{
	const float DIFFUSE_COEF = 0.8f;
	tempSphere->color = lambert * DIFFUSE_COEF;
}

---
You can't do this:

fovH = (float)HEIGHT / (float)WIDTH * fovW;

These are angles; you can't "scale" an angle like that! Use trigonometry.

You should post some relevant code too (the lighting calculations).
Shadows? Shadows won't be generated "by themselves". Do you handle them at all? Post the relevant code.

---
Quote:
These are angles, you can't "scale" an angle like that! Use trigonometry.
Yeah, I actually don't know how this field of view is created, or how I direct the rays to the pixels. I had a similar post but it wasn't clarified.

As for the code:


for each pixel of the screen
{
    final color = 0;
    ray = { starting point, direction };
    repeat
    {
        for each object in the scene
        {
            determine the closest ray/object intersection;
        }
        if an intersection exists
        {
            for each light in the scene
            {
                if the light is not in the shadow of another object
                {
                    add this light's contribution to the computed color;
                }
            }
        }
        final color = final color + computed color * previous reflection factor;
        reflection factor = reflection factor * surface reflection property;
        increment depth;
    } until reflection factor is 0 or maximum depth is reached;
}



[Edited by - Tipotas688 on March 30, 2010 4:26:14 AM]

---
[image: geometry diagram]

That should give you some hints about the geometry.

Your code is unclear to me.

If you normalize the light direction, you mustn't divide by its length again:

lambert = DOT(sufaceNormal, lightDirection); // don't divide by lightDistance,
                                             // since lightDirection is already normalized

I can only spot this at the moment.

---
I was a bit uncertain about this. Cheers.

As for my code, I'm mostly following this algorithm: http://www.codermind.com/articles/Raytracer-in-C++-Part-I-First-rays.html

---
I can only say one thing: ray casting/tracing is the most straightforward way of rendering. You take everything from life: you cast a ray and see what happens to it.

What I'm trying to say is: I think it's better to write the basic ray tracer all by yourself, without articles or tutorials (basic ray casting doesn't even need special algorithms; optimization does). It requires only the most basic linear algebra: ray intersections, dot/cross products, some trigonometry. You have to fully understand the method, or you will only copy code and struggle with debugging it.

By the way: did the normalization stuff do the job?

---
I had already tried removing the division in the first place, as I thought it didn't make sense, but I put it back since it was in the guide, as you said.

You're right that I don't need a guide for all that. To be honest, I took out everything that was implemented to find the intersections and turned it all into maths, which makes it much clearer.

What do you have to say about the camera/objects/light positioning? It really confuses me how they are placed and the result I'm getting. I should be getting shadowed spheres, since the light hits them from the other direction; instead they appear flat, as if I didn't do any lighting calculations.

---
Here:

NORMALIZE(pointOfIntersection);

you normalize a position. You can't normalize a position, only a direction. Then you use this normalized position to calculate the light direction: wrong. Use the un-normalized position. (But again: normalizing a position is wrong.)

This will be the problem; you don't compute the normals correctly. I don't know what this line does:

D3DXVECTOR3 sufaceNormal = tempSphere->GetNormal(pointOfIntersection);

I just don't know how to get the normals with the classes you have. (That's why I only program in C: this class stuff ties your hands if you're not an expert in C++.)

In my C ray tracer, I simply know the index of the intersected triangle (since it's not "hidden" in a class). Maybe you could access the index yourself too, so you can get the vertex normals of the triangle. Then you have to interpolate between the three normals; I guess there's a function for that in your math class.

To summarize:
- you cast a ray (that's okay, since you have the spheres displayed)
- you calculate the intersection point's coordinates, int_pos (that's okay)
- you calculate the light direction: light_pos - int_pos
- you normalize the light direction
- you get the normal at the intersection point somehow
- you calculate the dot product of the normalized light direction and the normal
- you check for shadows (that's okay)
- etc.


EDIT: maybe D3DXVECTOR3 sufaceNormal = tempSphere->GetNormal(pointOfIntersection); works like this: you have to calculate the intersection point's coordinates in the sphere's space. I don't know how to do that, because I don't know how the classes work. But this intersection point has to be a temporary variable! If you use it later, everything will be screwed up, if you get what I mean.

Or maybe the class does everything for you. So:

Don't normalize the intersection point!

---

D3DXVECTOR3 Sphere::GetNormal(D3DXVECTOR3 p)
{
D3DXVECTOR3 normal = p - position;
Normalize(normal);
return normal;
}




That's how I get the normal at a point.

To be honest, I am using algorithms and guides since I'm no expert in C++. Then again, I think C#/Java is not a good language for a ray tracer; C and C++ are better, so I'm trying to learn the language and build the ray tracer at the same time. I know it's risky to juggle multiple things like that, but I hope I'll manage.

---
I still don't understand what's going wrong with my ray tracer, though: the light's position is completely wrong and yet it works. Also, even after modifying all that, I still get the same picture as above. :S

---
if(intersected)
{
    std::list<Light*>::iterator j;
    for(j = lightList.begin(); j != lightList.end(); ++j)
    {
        D3DXVECTOR3 pointOfIntersection = ray.origin + ray.destination*minimumIntersectDistance;
        //NORMALIZE(pointOfIntersection);
        D3DXVECTOR3 sufaceNormal = tempSphere->GetNormal(pointOfIntersection);

        D3DXVECTOR3 lightDirection = (*j)->position - pointOfIntersection;
        //float lightDistance = sqrtf(lightDirection.x*lightDirection.x +
        //    lightDirection.y*lightDirection.y + lightDirection.z*lightDirection.z);
        NORMALIZE(lightDirection);

        if(ShadowRay(pointOfIntersection, (*j)->position))
        {
            float lambert = DOT(sufaceNormal, lightDirection); // / lightDistance;

            if (lambert > 0)
            {
                const float DIFFUSE_COEF = 0.8f;
                tempSphere->color = lambert * DIFFUSE_COEF;
            }
        }
    }
    if(GetRValue(tempSphere->color) > 255)
        tempSphere->color = RGB(255, GetGValue(tempSphere->color), GetBValue(tempSphere->color));
    if(GetGValue(tempSphere->color) > 255)
        tempSphere->color = RGB(GetRValue(tempSphere->color), 255, GetBValue(tempSphere->color));
    if(GetBValue(tempSphere->color) > 255)
        tempSphere->color = RGB(GetRValue(tempSphere->color), GetGValue(tempSphere->color), 255);
    return tempSphere->color;
}

This should be fine now.

---
Haha, yeah, I don't:

D3DXVECTOR3 pointOfIntersection = ray.origin + ray.destination*minimumIntersectDistance;
//NORMALIZE(pointOfIntersection);

Something really interesting happens now when I put the light in a better place: camera at (0, 0, -1000)

[image]

---
Okay, I have no idea what these lines are doing:

if(GetRValue(tempSphere->color) > 255)
    tempSphere->color = RGB(255, GetGValue(tempSphere->color), GetBValue(tempSphere->color));
if(GetGValue(tempSphere->color) > 255)
    tempSphere->color = RGB(GetRValue(tempSphere->color), 255, GetBValue(tempSphere->color));
if(GetBValue(tempSphere->color) > 255)
    tempSphere->color = RGB(GetRValue(tempSphere->color), GetGValue(tempSphere->color), 255);
return tempSphere->color;

Comment them out and see what happens.

---
Quote:
Original post by Tipotas688
Haha yeah I don't

*** Source Snippet Removed ***

Something really interesting happens now when I put the light in a better place: camera at (0, 0, -1000)

[image]

Hey! Those pictures are fine!
You don't map the colors properly; try dividing the output color by 256.0.

And post the code of the shadow-ray stuff.

---
If I divide by 256 then I get a black screen. :S

If I do:

if(!ShadowRay(pointOfIntersection, (*j)->position))

I get the first picture.

[Edited by - Tipotas688 on March 30, 2010 11:36:15 AM]

---
And do this division in the display code.

Calm down a bit: you return a DWORD, with components in 0-255. Your display stuff (DirectX) probably uses floats: 0.0-1.0.

Keep this in mind, and modify either the output of the ray tracer or the display color mode.

---
I changed the way I set the color, since it creates a hex value out of some floats and I don't think it should work that way, so:

BYTE r = GetRValue(tempSphere->color) + DIFFUSE_COEF * lambert;
BYTE g = GetGValue(tempSphere->color) + DIFFUSE_COEF * lambert;
BYTE b = GetBValue(tempSphere->color) + DIFFUSE_COEF * lambert;

tempSphere->color = RGB(r, g, b);

but now it's:

[image]

---
Well, if the rest seems fine to you, then I'll try to learn how to convert those float/byte/hex values into something colorful.

Thanks a lot for your time. Shall I post the current version again?
