Basic Ray Tracing


Hey people, I am trying to implement a basic raytracer application. I'm not using any graphics library, just making use of the fundamental concepts.
I'm having a problem displaying my image. It's a simple Utah teapot.

This is how I am implementing my ray tracing. I'm not considering shadows or secondary rays yet, until I get the teapot rendered.

1. Constructing the ray through the center of the pixel. Given: camera position, lookat, up vector, and camera FOV, all in world space.
- I'm using the ray equation r = origin + t*direction.
The origin is my camera position. I find the three camera axes u, v, w:
w = lookat - camposition; u = crossproduct(w, camera->up); v = crossproduct(u, w); then I normalize u, v, w.
I'm choosing the distance to the image plane as d = -5.
I'm using an image plane with left = -4, right = 4, top = 3, bottom = -3.
Using this, I calculate the pixel positions: x = left + (right-left) * (i+0.5) / screenX; y = bottom + (top-bottom) * (j+0.5) / screenY,
where (i, j) indexes the pixel.
So I get the ray direction = pixelScreenPosition - origin = xU + yV - d*W,
which I normalize. This completes the function.

2. Before performing the intersection, I do the transformation. I'm using triangle primitives; all the vertices and the vertex normals are in model space, so I convert them to world space.
3. Then I intersect the ray with each triangle and return the closest point.
4. Then I perform the shading calculations.

I'm not sure about the ray construction step (see the sketch below). I could include my code. I hope the theory is right.
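For concreteness, here is a minimal sketch of the perspective construction described in step 1. It is written against the GzCoord/GzRay conventions from the code posted later in this thread; the GzRender/camera fields and the Normalize/crossProduct helpers are assumptions modeled on that code. With w pointing toward the scene, using d = 5 and +d*W is equivalent to the d = -5 and -d*W convention above.

[CODE]
// Sketch only: assumes GzCoord is a float[3] with X/Y/Z index macros, and the
// Normalize/crossProduct helpers from the code posted further down this thread.
void ConstructPerspectiveRay(GzRender* render, int i, int j, GzRay* ray)
{
    GzCoord u, v, w; // camera axes

    for (int k = 0; k < 3; k++)
        w[k] = render->camera.lookat[k] - render->camera.position[k]; // forward
    Normalize(&w);
    crossProduct(w, render->camera.worldup, &u); // right
    Normalize(&u);
    crossProduct(u, w, &v); // true up
    Normalize(&v);

    // image plane: left = -4, right = 4, top = 3, bottom = -3
    float x = -4.0f + 8.0f * (i + 0.5f) / render->display->xres;
    float y = -3.0f + 6.0f * (j + 0.5f) / render->display->yres;
    float d = 5.0f; // distance to the image plane (w points toward the scene)

    // direction = x*u + y*v + d*w, then normalize
    for (int k = 0; k < 3; k++)
        ray->direction[k] = x * u[k] + y * v[k] + d * w[k];
    Normalize(&ray->direction);
    memcpy(&ray->origin, &render->camera.position, sizeof(GzCoord));
}
[/CODE]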

Hello,

What is the problem you are experiencing? Can you tell us more about what the output looks like, and show some code? In the meantime, here are some comments on your steps:

1. You should precalculate the inverse camera matrix (and actually, precalculate the focal plane's corners), but essentially this is correct. You can always generalize this later if you want to use more elaborate camera systems (like a spherical camera).

2. You should do that before you even start tracing anything, since it does not depend on any ray. So upon loading the teapot, you transform its vertices into world space once.

3. Correct, but if you want maximum speed you will need to use an acceleration structure (which can be difficult to implement), otherwise it will be quite slow.

4. Yes, and if you want shadow rays and further enhancements, this is where you do reflections/refractions and possibly go back to step 3 with the new ray.

The theory is essentially right and will give you a working raytracer, but that's only scratching the surface. Don't forget that one massive advantage of raytracers is the ability to display any analytic surface (as long as it's simple enough to be efficiently intersected with a ray), so don't restrict yourself to triangles later on. Spheres, cylinders, and planes come to mind, although of course most models are built purely from triangles.
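For example, a ray-sphere test is only a few lines. This is a hedged sketch (not from the post above) in the C style used elsewhere in the thread, solving |o + t*dir - c|^2 = r^2 for the nearest positive t; dotProduct and the GzCoord/GzRay types are borrowed from the code posted below:

[CODE]
// Sketch: standard ray-sphere intersection, assuming ray.direction is normalized.
// Returns the nearest positive t, or -1.0f if the ray misses the sphere.
float IntersectSphere(GzRay ray, GzCoord center, float radius)
{
    GzCoord oc; // vector from the sphere center to the ray origin
    for (int k = 0; k < 3; k++)
        oc[k] = ray.origin[k] - center[k];

    // |o + t*dir - c|^2 = r^2 expands to t^2 + 2(oc.dir)t + (oc.oc - r^2) = 0
    float b = dotProduct(oc, ray.direction);
    float c = dotProduct(oc, oc) - radius * radius;
    float disc = b * b - c; // quarter discriminant
    if (disc < 0.0f)
        return -1.0f; // no real roots: the ray misses

    float sq = sqrtf(disc);
    float t = -b - sq; // try the nearer root first
    if (t > 0.0f)
        return t;
    t = -b + sq; // the ray origin may be inside the sphere
    return (t > 0.0f) ? t : -1.0f;
}
[/CODE]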

[quote]I'm not sure about the ray construction step. I could include my code. I hope the theory is right.[/quote]
I can provide you with the camera code from my path tracer, if you want to take a look. It's in Free Pascal but should be easily understandable.

I would really appreciate it if you could include the code you mentioned. I even tried an orthographic projection, but it did not work: a bunch of random triangles get drawn instead of the teapot. I tried the code in world space as well as in camera space, but it gave me the same result. Here is my ray construction code (it is for an orthographic projection):

[CODE]
void ConstructRayThroughPixel(GzRender* render, int i, int j, GzRay* ray)
{
    GzCoord u, v, w; // camera axes

    for (int k = 0; k < 3; k++) // camera z-axis
        w[k] = render->camera.lookat[k] - render->camera.position[k];
    Normalize(&w);

    crossProduct(w, render->camera.worldup, &u);
    Normalize(&u);

    crossProduct(u, w, &v);
    Normalize(&v);

    GzCoord rayDirection;

    // for orthographic projection rayDir = -w
    // calculate the coordinates of the pixel in world space:
    // pixelScreenPosition = camera origin + A*u + B*v
    // image plane coordinates in world space
    float planeLeft   = -4.0;
    float planeRight  =  4.0;
    float planeTop    =  3.0;
    float planeBottom = -3.0;
    float imagePlaneWidth  = planeRight - planeLeft;
    float imagePlaneHeight = planeTop - planeBottom;

    float pixU, pixV;
    pixU = planeLeft + (imagePlaneWidth * (i + 0.5) / (render->display->xres));
    pixV = planeBottom + (imagePlaneHeight * (j + 0.5) / (render->display->yres));

    float d = 1; // distance to the image plane along the camera z-axis (any value, the direction is normalized)

    rayDirection[X] = d * w[X];
    rayDirection[Y] = d * w[Y];
    rayDirection[Z] = d * w[Z];

    // normalize the direction
    Normalize(&rayDirection);

    GzCoord rOrigin;
    // origin = e + A*u + B*v
    rOrigin[X] = render->camera.position[X] + (pixU * u[X]) + (pixV * v[X]);
    rOrigin[Y] = render->camera.position[Y] + (pixU * u[Y]) + (pixV * v[Y]);
    rOrigin[Z] = render->camera.position[Z] + (pixU * u[Z]) + (pixV * v[Z]);

    // ray equation is r = rOrigin + t * direction
    memcpy(&ray->origin, &rOrigin, sizeof(GzCoord));
    memcpy(&ray->direction, &rayDirection, sizeof(GzCoord));
}
[/CODE]

Ok, here is the code (this is for a perspective projection, not orthographic):

This part creates the view matrix. The matrix is 4x4; depending on how your memory is laid out, you might have to switch some entries around.
[CODE]// This function creates an orthogonal view matrix, using a position point and a target vector.
function LookAt(Position, Target: TVector): TMatrix;
Var
  Up, XAxis, YAxis, ZAxis: TVector;
begin
  // Create the Up vector.
  Up := Vector(0, 1, 0);

  // Create the rotation axis basis.
  zAxis := NormalizeVector(Target - Position);
  xAxis := NormalizeVector(CrossVector(Up, zAxis));
  yAxis := NormalizeVector(CrossVector(zAxis, xAxis));

  // Build the matrix from those vectors.
  Result[0][0] := xAxis.X;
  Result[0][1] := xAxis.Y;
  Result[0][2] := xAxis.Z;
  Result[0][3] := -DotVector(xAxis, Position);
  Result[1][0] := yAxis.X;
  Result[1][1] := yAxis.Y;
  Result[1][2] := yAxis.Z;
  Result[1][3] := -DotVector(yAxis, Position);
  Result[2][0] := zAxis.X;
  Result[2][1] := zAxis.Y;
  Result[2][2] := zAxis.Z;
  Result[2][3] := -DotVector(zAxis, Position);
  Result[3][0] := 0;
  Result[3][1] := 0;
  Result[3][2] := 0;
  Result[3][3] := 1;
end;[/CODE]

This code calculates the positions of the four corners of the "virtual screen" in world space, where HorizontalFOV is your field of view in radians (probably something between pi/4 and pi/2); the tangent function is a correcting factor:
[CODE]// Find the corner points (all the others can be interpolated with a bilinear interpolation).
FOV := Tan(HorizontalFOV * 0.5);
Corners[0] := MulMatrix(ViewMatrix, Vector(-(+FOV) * AspectRatio, (-FOV), 1));
Corners[1] := MulMatrix(ViewMatrix, Vector(-(-FOV) * AspectRatio, (-FOV), 1));
Corners[2] := MulMatrix(ViewMatrix, Vector(-(-FOV) * AspectRatio, (+FOV), 1));
Corners[3] := MulMatrix(ViewMatrix, Vector(-(+FOV) * AspectRatio, (+FOV), 1));[/CODE]
MulMatrix is your standard matrix multiplication (you may ignore the matrix's last row or column as it's just (0, 0, 0, 1))

And finally, this one actually traces the ray (the code above is preprocessing which is done only once when the camera moves):
[CODE]// Returns the camera ray going through the pixel at (U, V).
function TCamera.Trace(U, V: Double): TRay;
begin
  // Interpolate the focal plane point and trace the ray towards it.
  Result.Origin := CameraPosition;
  Result.Direction := NormalizeVector(LerpVector(LerpVector(Corners[0], Corners[1], U),
                                                 LerpVector(Corners[3], Corners[2], U), V));
end;[/CODE]
It's basically a bilinear interpolation if you look closely. Note that U and V are assumed to be between 0 and 1 here, so if yours are between -1 and 1, just add 1 to each and halve the result. LerpVector is a vector linear interpolation, i.e. lerp(a, b, t) = a + (b - a)t.

This code should give you a proper camera ray construction (in world space) which can then be intersected against your teapot.

Note that from your description ("a bunch of random triangles get drawn") I don't think it's actually the camera code that's failing. Can you show your triangle intersection code? I suspect this is where the problem lies, because camera bugs are generally all-or-nothing.

Yes, I think the problem is in the intersection code. I changed it, and now I can see a much clearer, but still vague, silhouette of the teapot. I'll post my code and the changes I made, and I'll also post the image.

One thing I did not understand in your corner-finding code: what does -(+FOV) mean? I know you are assigning the vector, but I did not understand that notation.

I have two versions of the Intersect function, and I'm posting them both. My aim is to find the closest intersection point; it's a triangle-ray intersection.

[CODE]
void FindIntersection(GzRay ray, vector<SceneMesh> pScene, GzCoord* intersectionPoint, int& primitiveIndex)
{
    GzCoord ClosestPoint;
    int ClosestPrimitiveIdx = -1;    // closest primitive not found
    float closestDistance = INT_MAX; // sentinel value
    ClosestPoint[X] = INT_MAX;
    ClosestPoint[Y] = INT_MAX;
    ClosestPoint[Z] = INT_MAX;
    // loop through all the primitives (for now they are all triangles that make up the teapot)
    for (int i = 0; i < pScene.size(); i++)
    {
        float tValue;
        bool intersects = Intersect(ray, pScene[i].vertexList, tValue);

        if (intersects == true)
        {
            // assign the closest point
            if (tValue < closestDistance)
            {
                closestDistance = tValue;
                ClosestPrimitiveIdx = i;
                // the point is
                ClosestPoint[X] = ray.origin[X] + (closestDistance * ray.direction[X]);
                ClosestPoint[Y] = ray.origin[Y] + (closestDistance * ray.direction[Y]);
                ClosestPoint[Z] = ray.origin[Z] + (closestDistance * ray.direction[Z]);
            }
        }
    }
    memcpy(intersectionPoint, ClosestPoint, sizeof(GzCoord));
    primitiveIndex = ClosestPrimitiveIdx;
}
[/CODE]

Here is the first version of the Intersect function:

[CODE]
bool Intersect(GzRay ray, GzCoord* pVertexList, float& tValue)
{
    // 1st, find the surface normal of the plane containing the triangle vertices
    GzCoord normal;
    GzCoord tempPoint;
    GetDirectionNormals(pVertexList[0], pVertexList[1], pVertexList[2], &normal);
    // equation of the plane containing the triangle: ax + by + cz = d
    float dValue = dotProduct(normal, pVertexList[0]); // technically it's -d, but just check both cases
    // 1st check: if ray.direction . N >= 0, then no intersection - early elimination
    float dotDirNorm = dotProduct(normal, ray.direction);

    if (!(dotDirNorm < 0.0))
    {
        // no intersection, so early elimination
        tValue = INT_MAX;
        return false;
    }

    // now find t = (d - rayOrigin.N) / direction.N
    tValue = (dValue - dotProduct(ray.origin, normal)); // don't divide yet
    if (!(tValue <= 0.0)) // we only accept intersections on the front side
    {
        // no frontal intersection
        tValue = INT_MAX;
        return false;
    }
    tValue /= dotDirNorm;
    // now that we have t, we can find the point from the ray equation
    tempPoint[X] = ray.origin[X] + (tValue * ray.direction[X]);
    tempPoint[Y] = ray.origin[Y] + (tValue * ray.direction[Y]);
    tempPoint[Z] = ray.origin[Z] + (tValue * ray.direction[Z]);
    // check if the point lies inside or outside the triangle using barycentric coordinates
    if (BarycentricInterpolationTest(tempPoint, pVertexList, normal) == true)
    {
        return true;
    }
    else // the point lies outside
    {
        tValue = INT_MAX;
        return false;
    }
}
[/CODE]

Here is the barycentric interpolation test function:
[CODE]
bool BarycentricInterpolationTest(GzCoord point, GzCoord* pVertexList, GzCoord normal)
{
    // we have to discard x, y or z, depending on which has the highest absolute value in the normal
    GzCoord u, v; // projected point and edge coordinates
    if (fabs(normal[X]) > fabs(normal[Y]))
    {
        if (fabs(normal[X]) > fabs(normal[Z]))
        {
            // discard x
            u[0] = point[Y] - pVertexList[0][Y];
            u[1] = pVertexList[1][Y] - pVertexList[0][Y];
            u[2] = pVertexList[2][Y] - pVertexList[0][Y];
            v[0] = point[Z] - pVertexList[0][Z];
            v[1] = pVertexList[1][Z] - pVertexList[0][Z];
            v[2] = pVertexList[2][Z] - pVertexList[0][Z];
        }
        else
        {
            // discard z
            u[0] = point[X] - pVertexList[0][X];
            u[1] = pVertexList[1][X] - pVertexList[0][X];
            u[2] = pVertexList[2][X] - pVertexList[0][X];
            v[0] = point[Y] - pVertexList[0][Y];
            v[1] = pVertexList[1][Y] - pVertexList[0][Y];
            v[2] = pVertexList[2][Y] - pVertexList[0][Y];
        }
    }
    else
    {
        if (fabs(normal[Y]) > fabs(normal[Z]))
        {
            // discard y
            u[0] = point[X] - pVertexList[0][X];
            u[1] = pVertexList[1][X] - pVertexList[0][X];
            u[2] = pVertexList[2][X] - pVertexList[0][X];
            v[0] = point[Z] - pVertexList[0][Z];
            v[1] = pVertexList[1][Z] - pVertexList[0][Z];
            v[2] = pVertexList[2][Z] - pVertexList[0][Z];
        }
        else
        {
            // discard z
            u[0] = point[X] - pVertexList[0][X];
            u[1] = pVertexList[1][X] - pVertexList[0][X];
            u[2] = pVertexList[2][X] - pVertexList[0][X];
            v[0] = point[Y] - pVertexList[0][Y];
            v[1] = pVertexList[1][Y] - pVertexList[0][Y];
            v[2] = pVertexList[2][Y] - pVertexList[0][Y];
        }
    }
    // before computing the coordinates, compute the denominator of the equation;
    // if it is zero, return false
    float denom = (u[1] * v[2]) - (v[1] * u[2]);

    if (denom == 0)
    {
        return false;
    }
    // else compute the barycentric coordinates
    denom = 1.0f / denom;
    float alpha = ((u[0] * v[2]) - (v[0] * u[2])) * denom;
    if (!(alpha >= 0.0f)) // we might also have to check <= 1.0f
        return false;

    float beta = ((u[1] * v[0]) - (v[1] * u[0])) * denom;
    if (!(beta >= 0.0f))
        return false;
    float gamma = 1.0f - alpha - beta;
    if (!(gamma >= 0.0f))
        return false;
    return true;
}
[/CODE]

Here is the second and more recent version of the Intersect function:
[CODE]
bool Intersect(GzRay ray, GzCoord* pVertexList, float& tValue)
{
    // 1st, find the surface normal of the plane containing the triangle vertices
    GzCoord normal;
    GetDirectionNormals(pVertexList[0], pVertexList[1], pVertexList[2], &normal);
    // equation of the plane containing the triangle: ax + by + cz = d
    float dValue = dotProduct(normal, pVertexList[0]);
    float dotDirNorm = dotProduct(normal, ray.direction);
    if (dotDirNorm != 0) // the ray is not parallel to the plane
    {
        // t = (d - rayOrigin.N) / direction.N, as in the first version
        tValue = (dValue - dotProduct(ray.origin, normal)) / dotDirNorm;

        if (tValue > 0)
            return true;
    }
    return false;
}
[/CODE]

I'm also posting the images.

Oh, the weird sign stuff was just me writing down each corner mechanically so I didn't mess up the signs while writing the code; I didn't bother changing it afterwards. Regular sign rules apply, i.e. -(-FOV) = FOV, +(-FOV) = -FOV, etc.

Ok. Firstly, you can simplify your main intersection loop by keeping only the closest distance. Calculate the final intersection point at the very end, once you have found the closest intersection over all triangles. The main intersection loop then looks like this, in pseudocode:
[CODE]
float closest = -1
int closest_index = -1

for all primitives
{
    float distance = intersection(ray, this_primitive)
    if (distance > 0) and ((closest < 0) or (distance < closest))
    {
        closest = distance;
        closest_index = this_primitive_index;
    }
}

if (closest < 0)
{
    // there is no intersection
}
else
{
    // the intersection is ray.origin + closest * ray.direction
    // and the intersected primitive is closest_index
}
[/CODE]
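A sketch of the same loop in the C conventions of the FindIntersection function above, assuming Intersect is changed to return the hit distance (or -1 on a miss, as in the triangle routine below):

[CODE]
// Sketch: closest-hit loop, deferring the hit-point computation to the end.
float closest = -1.0f;
int closestIndex = -1;
for (int i = 0; i < (int)pScene.size(); i++)
{
    float distance = Intersect(ray, pScene[i].vertexList); // -1 means no hit
    if (distance > 0.0f && (closest < 0.0f || distance < closest))
    {
        closest = distance;
        closestIndex = i;
    }
}
if (closest >= 0.0f)
{
    // hit point: ray.origin + closest * ray.direction; primitive: closestIndex
}
[/CODE]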

Also, your triangle-ray intersection code is pretty big, so there is a lot of potential for error there. Here is a version from my raytracer. It's not the fastest, but it does work; it still uses barycentric coordinates, but by using vectors the code is a lot shorter:
[CODE]
// point1, point2 and point3 are the three vertices of the triangle
// L1 is one side of the triangle (i.e. point2 - point1)
// L2 is another side of the triangle (i.e. point3 - point1)

vector distance = ray.origin - point1;

vector S1 = cross(ray.direction, L2);
float d = 1 / dot(S1, L1);
float u = dot(distance, S1) * d; // first barycentric coordinate

// if u < 0 or u > 1, the ray doesn't intersect the triangle by definition of
// barycentric coordinates, so return a distance of -1
if (u < 0) or (u > 1) return -1;

vector S2 = cross(distance, L1);
float v = dot(ray.direction, S2) * d; // second barycentric coordinate

// if (v < 0) or (u + v > 1), the ray doesn't intersect the triangle,
// by definition of barycentric coordinates
if (v < 0) or (u + v > 1) return -1;

// the distance of the ray to the triangle is then ...
return dot(L2, S2) * d;
[/CODE]
The above code is a bit obscure, I know, but it does work (you can also work it out geometrically). You will need to precompute a couple of vectors (L1 and L2) for each triangle upon loading the model, and store them along with the normal. Note that for now the way you calculate them does not matter, but it will later on, when triangle winding direction becomes important.
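If it helps, here is roughly the same routine as a sketch against the GzCoord/GzRay conventions from your posted code; the zero check on the denominator is an addition to guard against rays parallel to the triangle:

[CODE]
// Sketch: the routine above in C style. Returns the hit distance, or -1 on a miss.
// L1 = point2 - point1, L2 = point3 - point1 (precomputed per triangle).
float IntersectTriangle(GzRay ray, GzCoord point1, GzCoord L1, GzCoord L2)
{
    GzCoord dist, S1, S2;
    crossProduct(ray.direction, L2, &S1);
    float d = dotProduct(S1, L1);
    if (d == 0.0f)
        return -1.0f; // ray is parallel to the triangle's plane
    d = 1.0f / d;

    for (int k = 0; k < 3; k++)
        dist[k] = ray.origin[k] - point1[k];

    float u = dotProduct(dist, S1) * d;          // first barycentric coordinate
    if (u < 0.0f || u > 1.0f)
        return -1.0f;

    crossProduct(dist, L1, &S2);
    float v = dotProduct(ray.direction, S2) * d; // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f)
        return -1.0f;

    return dotProduct(L2, S2) * d;               // distance along the ray
}
[/CODE]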

Also, your second image strikes me as being very similar to a vertex index bug. Is your model using vertex indices? It could just be that you are loading your model wrong (because of how indices work, being off by even a single index will garble the model and give a "spider web"-like triangle distribution).
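To make that failure mode concrete, here is a hedged sketch of indexed triangle loading; the file layout and the 1-based indices are purely illustrative assumptions, not your actual format:

[CODE]
// Sketch: building triangles from an indexed vertex list (hypothetical format).
// Many mesh formats store 1-based indices; forgetting the -1 shifts every
// triangle onto the wrong vertices and produces the "spider web" look.
for (int t = 0; t < triangleCount; t++)
{
    int i0, i1, i2;
    fscanf(file, "%d %d %d", &i0, &i1, &i2);
    // 1-based file index -> 0-based array index
    memcpy(triangles[t].v[0], vertices[i0 - 1], sizeof(GzCoord));
    memcpy(triangles[t].v[1], vertices[i1 - 1], sizeof(GzCoord));
    memcpy(triangles[t].v[2], vertices[i2 - 1], sizeof(GzCoord));
}
[/CODE]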

Note that debugging a raytracer isn't easy, but when it works, it all fits together perfectly (in my experience).

Yes, it is using vertex indices. I'm loading the primitives from a .asc file that stores the vertices and vertex normals of the triangles in object space. I tried various intersection equations, and after applying your technique I was getting the teapot roughly right. This is what I'm getting. I think I may now have to alter the camera position?

[image: teapot render after applying the suggested intersection code]

This looks a lot better. There are still some bugs but you are getting something. You probably want to move the camera a bit farther away from the teapot so that you can see what's going on better. I assume your teapot is centered on the origin so you can just increase the camera's distance from the teapot while keeping the target as the origin.

It seems all the triangles are actually there now, simply shaded a bit oddly. What is your shading code?

I was doing the calculations in camera space, and now I switched to world space. This is what I'm getting; it is almost close to the correct image I have. I'm using Phong shading, and here is my code. I'm using directional lights, and they are given in camera space, so I had to convert them to world space.
[CODE]
void GetColor(GzRender* render, GzRay ray, GzCoord hitPoint, GzCoord* vertexList, GzCoord* pNormalList, GzColor* intensity) // normals of the hit object
{
    // 1st, interpolate the normals at the hit point
    GzCoord normal;
    GzColor resultIntensity;
    NormalInterpolation(hitPoint, vertexList, pNormalList, &normal);
    // now that we have the normal at the hit point, apply the shading equation
    PhongIllumination(render, ray, normal, &resultIntensity);
    resultIntensity[RED] *= 255;
    resultIntensity[GREEN] *= 255;
    resultIntensity[BLUE] *= 255;
    memcpy(intensity, &resultIntensity, sizeof(GzColor));
}

[/CODE]

Here is the phong illumination code:
[CODE]
void PhongIllumination(GzRender* render, GzRay ray, GzCoord normal, GzColor* intensity)
{
    float colorCoeff = 0.0f;
    GzColor specTerm = {0, 0, 0};
    GzColor diffTerm = {0, 0, 0};
    GzCoord Rvec;
    GzCoord Evec;
    GzCoord tempNorm = {normal[X], normal[Y], normal[Z]};
    memcpy(&Evec, &ray.direction, sizeof(GzCoord));
    float nl, ne, re; // dot products of N & L, N & E, and R & E
    bool flag;        // whether the lighting model should be computed
    for (int j = 0; j < render->numlights; j++)
    {
        flag = false;
        GzCoord LVec;
        memcpy(&LVec, &render->lights[j].direction, sizeof(GzCoord));
        // we have to do an inverse transform on LVec to convert it from image to world space
        NormalTransformation(LVec, render->camera.Xwi, &LVec);
        //ImageWorldXform(render, &LVec);
        nl = dotProduct(tempNorm, LVec);
        ne = dotProduct(tempNorm, Evec);
        // check if we should include the light in the lighting model, depending on its direction relative to the camera

        if (nl < 0 && ne < 0) // invert the normal and calculate the lighting model
        {
            tempNorm[X] *= -1;
            tempNorm[Y] *= -1;
            tempNorm[Z] *= -1;
            Normalize(&tempNorm);
            // renormalize N and recompute nl and ne
            nl = dotProduct(tempNorm, LVec);
            ne = dotProduct(tempNorm, Evec);
            flag = true;
        }
        else if (nl > 0 && ne > 0) // compute the lighting model
            flag = true;

        if (flag)
        {
            // R = 2(N.L)N - L
            Rvec[X] = (2 * nl * tempNorm[X]) - LVec[X];
            Rvec[Y] = (2 * nl * tempNorm[Y]) - LVec[Y];
            Rvec[Z] = (2 * nl * tempNorm[Z]) - LVec[Z];
            Normalize(&Rvec);

            re = dotProduct(Rvec, Evec);
            // clamp it to [0,1]
            // if (re > 1.0)
            //     re = 1.0f;
            // else if (re < 0.0)
            //     re = 0.0f;
            re = pow(re, render->spec); // specular power
            specTerm[RED] += render->lights[j].color[RED] * re; // clamp the colors individually as well
            specTerm[GREEN] += render->lights[j].color[GREEN] * re;
            specTerm[BLUE] += render->lights[j].color[BLUE] * re;

            diffTerm[RED] += render->lights[j].color[RED] * nl;
            diffTerm[GREEN] += render->lights[j].color[GREEN] * nl;
            diffTerm[BLUE] += render->lights[j].color[BLUE] * nl;
        }
        // now reset the normal for the next light
        memcpy(tempNorm, normal, sizeof(GzCoord));
    }
    GzColor specular, diffuse;
    memcpy(specular, &specTerm, sizeof(GzColor));
    memcpy(diffuse, &diffTerm, sizeof(GzColor));
    MultiplyColor(render, specular, diffuse, intensity);
}
[/CODE]
[CODE]
void MultiplyColor(GzRender* render, GzColor specular, GzColor diffuse, GzColor* resultant)
{
    GzColor result = {0, 0, 0};
    result[RED] = render->Ka[RED] * render->ambientlight.color[RED] + render->Ks[RED] * specular[RED] + render->Kd[RED] * diffuse[RED];
    result[GREEN] = render->Ka[GREEN] * render->ambientlight.color[GREEN] + render->Ks[GREEN] * specular[GREEN] + render->Kd[GREEN] * diffuse[GREEN];
    result[BLUE] = render->Ka[BLUE] * render->ambientlight.color[BLUE] + render->Ks[BLUE] * specular[BLUE] + render->Kd[BLUE] * diffuse[BLUE];

    if (result[RED] > 1.0f)
        result[RED] = 1.0f;
    if (result[RED] < 0.0f)
        result[RED] = 0.0f;

    if (result[GREEN] > 1.0f)
        result[GREEN] = 1.0f;
    if (result[GREEN] < 0.0f)
        result[GREEN] = 0.0f;

    if (result[BLUE] > 1.0f)
        result[BLUE] = 1.0f;
    if (result[BLUE] < 0.0f)
        result[BLUE] = 0.0f;

    memcpy(resultant, result, sizeof(GzColor));
}
[/CODE]

[image: output 3]
[image: output 4]
Output 4 is what I get when I shift the camera position; for the same case, if I invert the z value of the ray direction, I get output 3. I had originally called it 'd': for output 4, d = -5, and for output 3, d = 5. There is still some mistake. I wanted to ask you: in the camera code you gave me, if we are performing all the calculations in world space, then why do we multiply by the view matrix to get the corners of the screen? Wouldn't that take you to camera space?

[quote]In the camera code you gave me, if we are performing all the calculations in world space, then why do we multiply by the view matrix to get the corners of the screen? Wouldn't that take you to camera space?[/quote]
No, in fact if you look closely:
[CODE]Corners[0] := MulMatrix(ViewMatrix, Vector(-(+FOV) * AspectRatio, (-FOV), 1));[/CODE]
The vector multiplied with the matrix is in fact in camera space; multiplying it by the view matrix transforms it to world space (the virtual screen lives in world space). It is really a matter of terminology: the phrase "view matrix" could describe matrices for both directions (world->camera and camera->world), and since the inverse of the rotation part is its transpose, the line between the two gets blurred. You can think of the matrix as an inverse view matrix if that makes more sense to you, but it does transform camera-space vectors to world space.

All calculations should hopefully be done in world space; it doesn't make much sense to do them in any other space, because rays are inherently world-space objects (this is different from rasterization, which works backwards, converting world-space vertices to camera space and then clip space, so it can work in camera space). You *could* work in camera space if you really wanted to, by converting all your models to camera space, but I don't see the point beyond micro-optimization (and it forces you to re-transform all your vertices whenever the camera moves).

Now, what you want to do is simplify your shading code as much as possible to reduce the potential for error. Try to get diffuse lighting working in its simplest form (remember to clamp the dot product to zero so it doesn't go negative, and remember to normalize all the vectors you use) before full Phong shading. If it works, you should get a nicely shaded teapot (without the specular highlights, of course).
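Something like this minimal sketch (the names lightColor, lightDir and color are placeholders, not taken from your code; Kd is your existing diffuse coefficient):

[CODE]
// Sketch: simplest diffuse term for one directional light, all vectors
// normalized and in world space.
float nl = dotProduct(normal, lightDir);
if (nl < 0.0f)
    nl = 0.0f; // clamp: a light behind the surface contributes nothing
color[RED]   = render->Kd[RED]   * lightColor[RED]   * nl;
color[GREEN] = render->Kd[GREEN] * lightColor[GREEN] * nl;
color[BLUE]  = render->Kd[BLUE]  * lightColor[BLUE]  * nl;
[/CODE]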

By the way, what is this big green/red quad in the last two screenshots? It doesn't look like it should be there. Does it represent another model?

Oh god... I think I just realized I had been reading vertices from the wrong file. Yes, that quad was supposed to be a plane for the texture map I had used in my rasterization program. I changed the file, and it no longer contains the plane, but it still gives that white shade at the bottom. I think it might be due to the light direction or the shading equation; I shall check again.
