Tipotas688

Ray Tracer


Hello, I'm building a ray tracer and something really weird is going on. :P Global variables:
D3DXVECTOR3 camPos(0.0f, 0.0f, -10.0f);
float fovW = (45.0f)*3.14f/180;
float fovH = (float)HEIGHT / (float)WIDTH * fovW;

I create 5 balls at (0, 1, 50.0f), (0, -3, 100.0f), (2, 0, 90.0f), (-1, 0, 75.0f), and (-4, 4, 75.0f), and one light at (0, 0, 1000). To create the ray for each pixel, I use the following for its direction, with the camera position as its origin:
float xz = ( (2.0f * (float)x) - (float)WIDTH) / (float)WIDTH * tanFovW;
float yz = ( (2.0f * (float)y) - (float)HEIGHT) / (float)HEIGHT * tanFovH;

D3DXVECTOR3 m_direction = D3DXVECTOR3(xz- camPos.x, yz-camPos.y, -camPos.z);
NORMALIZE(m_direction);

Notice that the light is further ahead of every ball and the camera, yet I get this result: [image: Photobucket]. Also, regardless of whether the objects are in shadow, I get the same result. :S Any ideas why I'm getting this result instead of properly shaded spheres, even though I calculate the Lambert term?
// lambert <= 0 means the light arrives at 90+ degrees, so no diffuse contribution
if (lambert > 0)
{
    const float DIFFUSE_COEF = 0.8f;
    tempSphere->color = lambert * DIFFUSE_COEF;
}

You can't do this:

fovH = (float)HEIGHT / (float)WIDTH * fovW;

These are angles; you can't "scale" an angle like that! Use trigonometry.
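For reference, here is what the trigonometric version looks like. This is a minimal sketch, assuming fovW is the full horizontal angle in radians; verticalFov is a hypothetical helper, not from the original code. The image-plane half-height is the half-angle's tangent scaled by the aspect ratio, so you scale the tangent, then convert back to an angle:

```cpp
#include <cmath>

// Hypothetical helper (an assumption, not the poster's code): derive the
// full vertical FOV in radians from the full horizontal FOV and the
// framebuffer dimensions. Scale the tangent of the half-angle by the
// aspect ratio, then convert back to an angle with atan.
float verticalFov(float fovW, float width, float height)
{
    return 2.0f * std::atan(std::tan(fovW * 0.5f) * height / width);
}
```

For a 640x480 buffer and a 45-degree horizontal FOV this gives about 0.6024 radians (roughly 34.5 degrees), while the linear scaling gives 33.75 degrees; the discrepancy grows with wider FOVs.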

You should post some relevant code too (the lighting calculations).
Shadows? Shadows won't be generated "by themselves". Do you handle them at all? Post the relevant code.

Quote:
These are angles, you can't "scale" an angle like that! Use trigonometry.
Yeah, I actually don't know how this field of view is created, or the way I direct the rays to the pixels. I had a similar post but it wasn't clarified.

as for the code:


for each pixel of the screen
{
    Final color = 0;
    Ray = { starting point, direction };
    Repeat
    {
        for each object in the scene
        {
            determine closest ray/object intersection;
        }
        if intersection exists
        {
            for each light in the scene
            {
                if the light is not in shadow of another object
                {
                    add this light's contribution to computed color;
                }
            }
        }
        Final color = Final color + computed color * previous reflection factor;
        reflection factor = reflection factor * surface reflection property;
        increment depth;
    } until reflection factor is 0 or maximum depth is reached;
}



[Edited by - Tipotas688 on March 30, 2010 4:26:14 AM]

[image: geometry diagram]

That should give you some hints about the geometry.

Your code is unclear to me.

If you normalize the light direction, you mustn't divide it by its length again:

lambert = DOT(sufaceNormal, lightDirection); // don't divide by lightDistance;
                                             // lightDirection is already normalized
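As a self-contained illustration of that point, here is a sketch with a plain Vec3 struct standing in for D3DXVECTOR3 (these names are illustrative, not the poster's actual code):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

static Vec3 normalize(Vec3 v)
{
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Lambert term: the dot product of the unit surface normal and the unit
// light direction. Normalizing lightDir already removes the light
// distance, so there is no further division.
float lambert(Vec3 surfaceNormal, Vec3 lightPos, Vec3 intersection)
{
    Vec3 lightDir = normalize({ lightPos.x - intersection.x,
                                lightPos.y - intersection.y,
                                lightPos.z - intersection.z });
    return dot(surfaceNormal, lightDir);
}
```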

I can only spot this at the moment.

I was a bit uncertain about this, cheers.

As for my code, I'm mostly following this algorithm: http://www.codermind.com/articles/Raytracer-in-C++-Part-I-First-rays.html

I can only say one thing: ray casting/tracing is the most straightforward way of rendering. You take everything from life: you cast a ray and see what happens to it.

What I'm trying to say is that it's better to build basic ray tracing all by yourself, without articles, tutorials, or ready-made algorithms (basic ray casting doesn't even need special algorithms; optimization does). It requires only the most basic linear algebra: line intersection, dot/cross products, some trigonometry. You have to fully understand the method, or you will only copy code and struggle with debugging it.

By the way: did the normalization stuff do the job?

I had already tried taking out that division, as I thought it didn't make any sense, but I put it back since it was in the guide, as you said.

You are right in saying that I don't need a guide to do all that. To be honest, I took out everything that was implemented to find the intersections and turned it all into maths, which makes it much clearer.

What do you have to say about the camera/objects/light positioning? How they are placed, given the result I'm getting, really confuses me. I should be getting shaded spheres, since the light lights them from the other direction; instead they appear flat, as if I didn't do any light calculations.

Here:
NORMALIZE(pointOfIntersection); you normalize a position. You can't normalize a position, only a direction. Then you use this normalized position to calculate the light direction: wrong. Use the un-normalized position. (But again: normalizing a position is wrong.)

This will be the problem: you don't compute the normals correctly. I don't know what this line does:
D3DXVECTOR3 sufaceNormal = tempSphere->GetNormal(pointOfIntersection);

I just don't know how to get the normals with the classes you have. (That's why I only program in C; this class stuff just ties your hands if you're not an expert in C++.)

In my C ray tracer, I simply know the index of the intersected triangle (since it's not "hidden" in a class). Maybe you could access the index yourself too, so you can get the vertex normals of the triangle. Then you have to interpolate between the 3 normals; I guess there's a function for that in your math class.
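That interpolation (often called smooth or Phong shading) can be sketched like this, with a minimal Vec3 standing in for whatever math class is available; u and v are assumed to be the barycentric coordinates of the hit point, and none of these names come from the thread's code:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Blend the three vertex normals with the barycentric coordinates (u, v)
// of the intersection point, then re-normalize, since the blend of unit
// vectors is generally shorter than unit length.
Vec3 interpolateNormal(Vec3 n0, Vec3 n1, Vec3 n2, float u, float v)
{
    float w = 1.0f - u - v;
    return normalize({ w * n0.x + u * n1.x + v * n2.x,
                       w * n0.y + u * n1.y + v * n2.y,
                       w * n0.z + u * n1.z + v * n2.z });
}
```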

I'll try to summarize:
-you cast a ray (that's okay, since you have the spheres displayed)
-you calculate the intersection point's coordinates (int_pos) (that's okay)
-you calculate the light direction: light_pos-int_pos
-you normalize the light direction
-you get the normal of the intersection point somehow
-you calculate the dot product of the normalized light direction and the normal
-you check for shadows (that's okay)
-etc
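The shading steps in that list could be sketched roughly as follows. All names here are hypothetical, and the shadow test is passed in as a flag purely to keep the sketch self-contained:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

static Vec3 normalize(Vec3 v)
{
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Diffuse shading for one intersection point, following the summary:
// light direction = light_pos - int_pos, normalized; then its dot
// product with the unit surface normal, clamped at zero, scaled by the
// diffuse coefficient from the original snippet.
float shadePoint(Vec3 intPos, Vec3 normal, Vec3 lightPos, bool inShadow)
{
    if (inShadow)
        return 0.0f;
    Vec3 lightDir = normalize(sub(lightPos, intPos));
    float lambert = dot(normal, lightDir);
    const float DIFFUSE_COEF = 0.8f;
    return lambert > 0.0f ? lambert * DIFFUSE_COEF : 0.0f;
}
```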


EDIT: maybe D3DXVECTOR3 sufaceNormal = tempSphere->GetNormal(pointOfIntersection); works like this:

You have to calculate the intersection point's coordinates in the sphere's space. I don't know how to do that, because I don't know how the classes work.
But this intersection point has to be a temp variable! If you use it later, everything will be screwed up, if you get what I mean.

Or maybe the class does everything for you. So:

Don't normalize the intersection point!
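To make the position/direction distinction concrete, a tiny numeric illustration with made-up values:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Normalizing a POSITION moves the point: (0, 3, 4) sits 5 units from
// the origin, but normalize() turns it into (0, 0.6, 0.8), a point only
// 1 unit away, i.e. somewhere else entirely.
//
// Normalizing a DIFFERENCE of positions is fine, because the result is
// a direction, and unit length is exactly what a direction should have.
static Vec3 directionTo(Vec3 from, Vec3 to)
{
    return normalize({ to.x - from.x, to.y - from.y, to.z - from.z });
}
```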


D3DXVECTOR3 Sphere::GetNormal(D3DXVECTOR3 p)
{
    D3DXVECTOR3 normal = p - position;
    Normalize(normal);
    return normal;
}




That's how I get the normal at a point.

To be honest, I am using algorithms and guides since I am no expert in C++. Then again, I think C#/Java is not a good language for a ray tracer; C and C++ are better, so I'm trying to learn the language and build the ray tracer at the same time. I know it's wrong to try multiple things like that, but I hope I'll manage.
