VERY basic Ray Tracer isn't working


Hi all, I'm putting together a basic ray tracer but I've run into problems. All I want this ray tracer to do right now is check whether a ray intersects a sphere and return the sphere's color if it does; otherwise it returns the background color. The problem I'm running into is that the screen gets filled entirely with either the background color or the sphere color. So either my ray intersection algorithm is wrong, or my rays aren't being set up properly. If you could take a look at my code I would very much appreciate it. All of the ray tracing is done in the main.cpp file; you can download a zip of the source at the following link: http://math.uaa.alaska.edu/~merritts/rayTracer.zip Thanks in advance.

[edited by - apatequil on November 28, 2003 10:12:03 PM]

I'm not entirely sure about your sphere intersection routine; I don't have the concentration at the moment to try and derive your math, but it looks to me like you're trying to get the right triangle formed by the ray, the sphere's radial segment connecting the center to the intersection point, and the camera-to-sphere-center segment. I've done this myself in the past; it has drawbacks (you can't specifically identify where the ray hits the sphere, only whether it does or not), but it should work. You may consider trying a more traditional quadratic approach (i.e. algebraically solve for the intersection instead of using the geometric method).
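
For reference, here is a minimal sketch of that algebraic version, kept unoptimized so the math is easy to follow. The Vec3 type and dot helper here are placeholders rather than your classes, and it assumes the ray direction is normalized:

#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3 &a, const Vec3 &b)
{
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// Solve |O + t*D - C|^2 = r^2 for t, where D is unit length (so a == 1).
// Returns true and fills t with the nearest non-negative hit distance.
bool intersectSphereAlgebraic(const Vec3 &O, const Vec3 &D,
                              const Vec3 &C, float radius, float &t)
{
    Vec3 oc = { O.x - C.x, O.y - C.y, O.z - C.z };
    float b = 2.0f * dot(oc, D);
    float c = dot(oc, oc) - radius * radius;
    float disc = b*b - 4.0f*c;
    if (disc < 0.0f)
        return false;                    // ray misses the sphere entirely
    float root = std::sqrt(disc);
    t = (-b - root) * 0.5f;              // try the nearer root first
    if (t < 0.0f)
        t = (-b + root) * 0.5f;          // origin is inside the sphere
    return t >= 0.0f;
}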

The only other suspicious area is your ray generation code; it may just be that I'm a bit brain-dead at the moment, but it seems like your rays will be going off in odd directions. Normally you generate rays by setting:

x = Pixel X - Half Screen Width
y = Half Screen Height - Pixel Y
z = Camera focal length

and then transforming your rays into camera space.
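
In code, that works out to something like this. This is only a rough sketch written in the style of your Vector/Ray calls (adjust the names to whatever your classes actually provide); focalLength is whatever distance you pick for the view plane:

// One primary ray per pixel, built in camera space: the eye sits at the
// origin and the view plane is focalLength units down the +z axis.
for (int py = 0; py < screenHeight; py++)
{
    for (int px = 0; px < screenWidth; px++)
    {
        Vector direction = Vector(px - screenWidth / 2.0f,
                                  screenHeight / 2.0f - py,
                                  focalLength);
        direction.normalize(direction);

        Ray primary;
        primary.setPosition(Vector(0, 0, 0));   // camera at the origin in camera space
        primary.setDirection(direction);

        // ...transform primary into world space if the camera has moved, then trace it
    }
}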

OK, here's the code that you pointed out was suspicious. I've made some modifications but still have no luck. I've also added some comments to help you follow my logic (which isn't guaranteed to be correct). Here are the code segments:

the intersectSphere method is as follows:


void intersectSphere(Ray ray, Sphere sphere)
{
    //rayToSphereCenter is the vector from the ray origin to the sphere center
    Vector rayToSphereCenter = (sphere.getPosition() - ray.getPosition());

    //v is the distance from the ray origin to the intersection of its
    //perpendicular to the center of the sphere
    float v = rayToSphereCenter.dotProduct(rayToSphereCenter, ray.getDirection());

    //r is just the radius of the sphere
    float r = sphere.getRadius();

    //c is the distance from the ray's origin to the center of the sphere
    float c = rayToSphereCenter.dotProduct(rayToSphereCenter, rayToSphereCenter);

    //disc is the distance between the center of the sphere and the
    //spot where the perpendicular line hits the ray
    float disc = ((r*r) - ((c*c)-(v*v)));

    /*
    if disc is less than zero, the ray missed the sphere
    else the ray intersected the sphere
    */
    if(disc < 0)
    {
        //return false to indicate no intersection is present
        traceResult = false;
        return;
    }
    else
    {
        //return true to indicate that the ray intersected the sphere
        traceResult = true;
        return;
    }
}


the RayTrace method which casts the rays:


void rayTrace()
{
    //the number of pixels per scanline
    int screenWidth = 500;
    //the number of scanlines
    int screenHeight = 500;

    /*
    Setup the sphere....currently there is only one sphere
    */
    Vector mySpherePos = Vector(250, 250, 50);
    Sphere mySphere = Sphere(mySpherePos, 5);

    //vectors which make up the ray (origin and direction)
    Vector mainRayPos;
    Vector mainRayDirection;
    //the ray
    Ray mainRay;

    //for every scanline
    for(int i = 0; i < screenWidth; i++)
    {
        //for every pixel on the scanline
        for(int j = 0; j < screenHeight; j++)
        {
            /*
            Setup the ray direction
            Starts in the lower-left corner and scans left-to-right and bottom-to-top
            */
            mainRayDirection = Vector(i, j, 100);
            mainRayDirection.normalize(mainRayDirection);
            mainRay.setDirection(mainRayDirection);

            /*
            Setup the ray position
            This is always going to be in the center of the screen -50 from
            the "view screen"
            */
            mainRayPos = Vector(screenWidth/2, screenHeight/2, -50);
            mainRay.setPosition(mainRayPos);

            //find intersections
            intersectSphere(mainRay, mySphere);

            /*
            There is only one sphere right now, so I'm not concerned with which
            sphere is the closest to the ray's origin...just whether it intersects
            or not
            */

            /*
            if the sphere intersects, traceResult is true.
            otherwise traceResult is false.
            */
            if(traceResult)
            {
                //set newColor to red indicating a miss
                colorVal newColor = {255, 0, 0};
                //set the pixel to newColor
                setPixel(i, j, newColor);
                //reset traceResult
                traceResult = false;
            }
            else
            {
                //set newColor to green indicating a hit
                colorVal newColor = {0, 255, 0};
                //set the pixel to newColor
                setPixel(i, j, newColor);
                //reset traceResult
                traceResult = false;
            }
        }
    }
}


Am I going in the right direction? Thanks for the reply, ApochPiQ. I'm not understanding why you set the ray's x and y values the way you mentioned. When I do that, it causes the rays to be cast in negative y directions, which miss the "viewing screen" completely and don't go through the pixels. Sorry if I'm confusing the problem, and I really appreciate the help.

Guest Anonymous Poster
Usually the center of your screen would be (0, 0), which is why he is setting up the ray direction like that.

I forgot - my camera system is kind of odd. It will result in the camera pointing in the +z direction, with +y being up and +x being to the right. This isn't a very "normal" setup but I've gotten used to it. By shuffling the axes (say, substitute y for z, etc.) you can point the camera on different axes. Also note that you have to apply a transformation to the camera rays to move the camera into your desired position.
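
To make the transformation part concrete, here is a minimal sketch, assuming a Vector type with public x/y/z members (swap in whatever your class actually provides). right/up/forward are the camera's orthonormal basis vectors and cameraPos is its world-space position:

// Rotate a camera-space ray into world space and move its origin to the camera.
void cameraToWorld(Vector &origin, Vector &direction,
                   const Vector &right, const Vector &up, const Vector &forward,
                   const Vector &cameraPos)
{
    Vector d;
    d.x = right.x * direction.x + up.x * direction.y + forward.x * direction.z;
    d.y = right.y * direction.x + up.y * direction.y + forward.y * direction.z;
    d.z = right.z * direction.x + up.z * direction.y + forward.z * direction.z;
    direction = d;

    // Primary rays start at the camera position in world space.
    origin = cameraPos;
}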

Your intersection logic seems correct, but I'd have to sit down and look at it harder to be sure. Maybe later today when I've got more time.

For now, try using an algebraic root solver for your sphere hit test:


void Ray_Sphere(Ray_S r, Sphere_S *s, float &t)
{
    float q;

    // Move the ray into the unit sphere's object space; scaleCorrection
    // converts the resulting hit distance back into world units.
    float scaleCorrection = TransformRay(r, s->transform);

    // Discriminant of the quadratic (a == 1 since the direction is unit length).
    t = (r.ODDT2sq - 4*r.OSDM1);

    if (t < 0.0f)
    {
        t = 100001;   // sentinel: farther than our clipping distance, i.e. a miss
        return;
    }

    t = sqrtf(t) * 0.5f;
    q = r.ODDT - t;   // the nearer of the two roots

    if (q < 0.001f)
    {
        // Nearer root is behind (or practically at) the origin; try the far root.
        t = r.ODDT + t;
        if (t < 0.0f)
            t = 100001; //farther than our clipping distance
    }
    else
        t = q;

    if (t < 100001)
        t *= scaleCorrection;
}



// To help understand the r.weird stuff
// This is just doing some precalcs to save time during intersection tests
void PrepareRay(Ray_S &r)
{
    // Calculate Direction^2 and Origin^2
    r.D2.x = r.Direction.x * r.Direction.x;
    r.D2.y = r.Direction.y * r.Direction.y;
    r.D2.z = r.Direction.z * r.Direction.z;
    r.O2.x = r.Origin.x * r.Origin.x;
    r.O2.y = r.Origin.y * r.Origin.y;
    r.O2.z = r.Origin.z * r.Origin.z;

    // Calculate Origin * Direction
    r.DO.x = r.Origin.x * r.Direction.x;
    r.DO.y = r.Origin.y * r.Direction.y;
    r.DO.z = r.Origin.z * r.Direction.z;

    // Calculate 1/Direction -- used mainly by planar objects
    r.OneOverD.x = 1.0f / r.Direction.x;
    r.OneOverD.y = 1.0f / r.Direction.y;
    r.OneOverD.z = 1.0f / r.Direction.z;

    // Calculate the enigmatic ODDT2 and OSDM1 coefficients.
    // ODDT2 stands for Origin Dot Direction Times Two, which
    // is a slight misnomer since it is actually multiplied by
    // negative two. OSDM1 is Origin Self Dotted Minus One. This
    // is generally used by quadric surfaces (spheres, cylinders,
    // etc) which have a radius of 1 and are scaled into the
    // correct dimensions. ODDT2sq is just ODDT2 * ODDT2, as
    // should be quite obvious from the code.
    r.ODDT = -DotMacro(r.Origin, r.Direction);
    r.OSDM1 = r.O2.x + r.O2.y + r.O2.z - 1;
    r.ODDT2sq = r.ODDT * r.ODDT * 4;
}





[edit] Added the precalculation code to help demystify the intersection logic. This is pretty heavily optimized so it may be a bit murky, but you should be able to adapt the general flow to your code. The primary advantage of this is that it gives you an exact distance to the sphere intersection, and thus the actual intersection point - when you get to lighting you will need that information. In general the geometric approach (which you are using) is only ever used for shadows, where all you need is a yes/no flag rather than an actual intersection point, and even then it isn't used often since it's just as easy to reuse the existing algebraic code.
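
For example, once the solver hands back a valid t, the shading inputs fall straight out of it. This fragment assumes a plain world-space sphere and a Vec3 placeholder type with x/y/z members:

#include <cmath>

struct Vec3 { float x, y, z; };

// O = ray origin, D = normalized ray direction, C = sphere center, t = hit distance.
void hitPointAndNormal(const Vec3 &O, const Vec3 &D, const Vec3 &C, float t,
                       Vec3 &point, Vec3 &normal)
{
    // The hit point is just the ray evaluated at t.
    point.x = O.x + t * D.x;
    point.y = O.y + t * D.y;
    point.z = O.z + t * D.z;

    // For a sphere the normal points from the center through the hit point.
    normal.x = point.x - C.x;
    normal.y = point.y - C.y;
    normal.z = point.z - C.z;
    float len = std::sqrt(normal.x*normal.x + normal.y*normal.y + normal.z*normal.z);
    normal.x /= len;
    normal.y /= len;
    normal.z /= len;
}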

[edited by - ApochPiQ on November 29, 2003 11:57:13 AM]
