Basic Ray tracing

Started by
12 comments, last by RJSkywalker 12 years ago
I was doing the calculations in camera space and have now switched to world space. This is what I'm getting; it's almost close to the correct image. I'm using Phong shading, and here is my code. I'm using directional lights, which are given in camera space, so I had to convert them to world space.
void GetColor(GzRender* render, GzRay ray, GzCoord hitPoint, GzCoord* vertexList, GzCoord* pNormalList, GzColor* intensity) // normals of the hit-point object
{
    // 1st interpolate the normals at the hit point
    GzCoord normal;
    GzColor resultIntensity;
    NormalInterpolation(hitPoint, vertexList, pNormalList, &normal);
    // now that we have the normal at the hit point, apply the shading equation
    PhongIllumination(render, ray, normal, &resultIntensity);
    resultIntensity[RED]   *= 255;
    resultIntensity[GREEN] *= 255;
    resultIntensity[BLUE]  *= 255;
    memcpy(intensity, &resultIntensity, sizeof(GzColor));
}



Here is the Phong illumination code:

void PhongIllumination(GzRender* render, GzRay ray, GzCoord normal, GzColor* intensity)
{
    float colorCoeff = 0.0f;
    GzColor specTerm = {0, 0, 0};
    GzColor diffTerm = {0, 0, 0};
    GzCoord Rvec;
    GzCoord Evec;
    GzCoord tempNorm = {normal[X], normal[Y], normal[Z]};
    memcpy(&Evec, &ray.direction, sizeof(GzCoord));
    float nl, ne, re; // dot products of N & L, N & E, and R & E
    bool flag;        // whether the lighting model should be computed for this light

    for(int j = 0; j < render->numlights; j++)
    {
        flag = false;
        GzCoord LVec;
        memcpy(&LVec, &render->lights[j].direction, sizeof(GzCoord));
        // inverse-transform LVec to convert it from image space to world space
        NormalTransformation(LVec, render->camera.Xwi, &LVec);
        //ImageWorldXform(render, &LVec);
        nl = dotProduct(tempNorm, LVec);
        ne = dotProduct(tempNorm, Evec);

        // check whether this light should be included in the lighting model,
        // depending on its direction relative to the camera
        if(nl < 0 && ne < 0) // invert the normal, renormalize, and recompute nl and ne
        {
            tempNorm[X] *= -1;
            tempNorm[Y] *= -1;
            tempNorm[Z] *= -1;
            Normalize(&tempNorm);
            nl = dotProduct(tempNorm, LVec);
            ne = dotProduct(tempNorm, Evec);
            flag = true;
        }
        else if(nl > 0 && ne > 0) // compute the lighting model
            flag = true;

        if(flag)
        {
            // R = 2(N.L)N - L
            Rvec[X] = (2 * nl * tempNorm[X]) - LVec[X];
            Rvec[Y] = (2 * nl * tempNorm[Y]) - LVec[Y];
            Rvec[Z] = (2 * nl * tempNorm[Z]) - LVec[Z];
            Normalize(&Rvec);

            re = dotProduct(Rvec, Evec);
            // clamp it to [0,1]
            // if(re > 1.0)
            //     re = 1.0f;
            // else if(re < 0.0)
            //     re = 0.0f;
            re = pow(re, render->spec); // specular power

            specTerm[RED]   += render->lights[j].color[RED]   * re; // clamp the colors individually as well
            specTerm[GREEN] += render->lights[j].color[GREEN] * re;
            specTerm[BLUE]  += render->lights[j].color[BLUE]  * re;

            diffTerm[RED]   += render->lights[j].color[RED]   * nl;
            diffTerm[GREEN] += render->lights[j].color[GREEN] * nl;
            diffTerm[BLUE]  += render->lights[j].color[BLUE]  * nl;
        }
        // reset the normal for the next light
        memcpy(tempNorm, normal, sizeof(GzCoord));
    }

    GzColor specular, diffuse;
    memcpy(specular, &specTerm, sizeof(GzColor));
    memcpy(diffuse, &diffTerm, sizeof(GzColor));
    MultiplyColor(render, specular, diffuse, intensity);
}
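The reflection formula in the loop above, R = 2(N.L)N - L, can be sanity-checked on its own. Here is a tiny standalone sketch of just that formula; `V3`, `dot3`, and `reflectDir` are made-up names for illustration, not part of the Gz code, and N and L are assumed normalized with L pointing toward the light.

```cpp
#include <cassert>
#include <cmath>

// Illustrative minimal vector type, not the Gz one.
struct V3 { float x, y, z; };

static float dot3(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// R = 2(N.L)N - L: mirrors the light direction about the surface normal.
static V3 reflectDir(V3 N, V3 L) {
    float nl = dot3(N, L);
    return { 2 * nl * N.x - L.x, 2 * nl * N.y - L.y, 2 * nl * N.z - L.z };
}
```

With N = (0,0,1) and L coming in at 45 degrees, R should be L mirrored in x, which is an easy case to verify before trusting the full shading loop.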


void MultiplyColor(GzRender* render, GzColor specular, GzColor diffuse, GzColor* resultant)
{
    GzColor result = {0, 0, 0};
    result[RED]   = render->Ka[RED]   * render->ambientlight.color[RED]   + render->Ks[RED]   * specular[RED]   + render->Kd[RED]   * diffuse[RED];
    result[GREEN] = render->Ka[GREEN] * render->ambientlight.color[GREEN] + render->Ks[GREEN] * specular[GREEN] + render->Kd[GREEN] * diffuse[GREEN];
    result[BLUE]  = render->Ka[BLUE]  * render->ambientlight.color[BLUE]  + render->Ks[BLUE]  * specular[BLUE]  + render->Kd[BLUE]  * diffuse[BLUE];

    // clamp each channel to [0,1]
    if(result[RED] > 1.0f)   result[RED]   = 1.0f;
    if(result[RED] < 0.0f)   result[RED]   = 0.0f;
    if(result[GREEN] > 1.0f) result[GREEN] = 1.0f;
    if(result[GREEN] < 0.0f) result[GREEN] = 0.0f;
    if(result[BLUE] > 1.0f)  result[BLUE]  = 1.0f;
    if(result[BLUE] < 0.0f)  result[BLUE]  = 0.0f;

    memcpy(resultant, result, sizeof(GzColor));
}
[sharedmedia=gallery:images:2102]
[sharedmedia=gallery:images:2103]
Output 4 is what I get when I shifted the camera position. For the same case, if I invert the z value of the ray direction I get output 3. I originally called it 'd': for output 4, d = -5, and for output 3, d = 5. There is still some mistake. I wanted to ask you: in the camera code you gave me, if we are performing all the calculations in world space, then why do we multiply by the view matrix to get the corners of the screen? Wouldn't that take you to camera space?
[quote]I wanted to ask you: in the camera code you gave me, if we are performing all the calculations in world space, then why do we multiply by the view matrix to get the corners of the screen? Wouldn't that take you to camera space?[/quote]
No, in fact if you look closely:
Corners[0] := MulMatrix(ViewMatrix, Vector(-(+FOV) * AspectRatio, (-FOV), 1));
The vector multiplied with the matrix is in fact in camera space. Multiplying it by the view matrix transforms it to world space (the virtual screen is in world space). It is in fact a matter of terminology - the word "view matrix" could be used to describe matrices for both directions (world->camera and camera->world), but since their inverses are their transposes, this blurs the line between the two significantly. You can think of the matrix as an inverse view matrix if it makes more sense to you, but it does transform camera-space vectors to world space.
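To make the "inverse is just the transpose" point concrete, here is a tiny sketch with a pure rotation matrix. All names (`V3`, `mul3`, `transpose3`) are illustrative and not from the camera code being discussed; a real view matrix also carries a translation, which this ignores.

```cpp
#include <cassert>
#include <cmath>

// Illustrative minimal vector type, not from the Gz code.
struct V3 { float x, y, z; };

// out = M * v for a row-major 3x3 matrix.
static V3 mul3(const float M[3][3], V3 v) {
    return { M[0][0] * v.x + M[0][1] * v.y + M[0][2] * v.z,
             M[1][0] * v.x + M[1][1] * v.y + M[1][2] * v.z,
             M[2][0] * v.x + M[2][1] * v.y + M[2][2] * v.z };
}

// For a pure rotation the transpose is the inverse, so the same data can be
// read as either the camera->world or the world->camera transform.
static void transpose3(const float M[3][3], float T[3][3]) {
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            T[r][c] = M[c][r];
}
```

Multiplying a camera-space vector by the rotation takes it to world space, and multiplying the result by the transpose brings it straight back, which is why the same matrix can wear both "view matrix" labels.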

All calculations should be done in world space; it doesn't make much sense to do them in any other space, because rays are inherently in world space (this is different from rasterization, which works backwards by converting world-space vertices to camera space and then clip space, which lets you work in camera space). You *could* work in camera space if you really wanted, by converting all your models to camera space, but I don't really see the point beyond micro-optimizations (and it forces you to retransform all your vertices whenever you move the camera).
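As a concrete sketch of that world-space setup: once the four screen corners are in world space, a primary ray is just eye -> interpolated screen point. Everything here (`V3`, `primaryRayDir`, the corner ordering) is made up for illustration and is not the actual camera code from the thread.

```cpp
#include <cassert>
#include <cmath>

// Illustrative minimal vector type, not from the Gz code.
struct V3 { float x, y, z; };

static V3 lerp3(V3 a, V3 b, float t) {
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t };
}

static V3 normalize3(V3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// corners: top-left, top-right, bottom-left, bottom-right of the virtual
// screen, already transformed to world space; (u, v) in [0,1]^2 picks the pixel.
static V3 primaryRayDir(const V3 corners[4], V3 eye, float u, float v) {
    V3 top    = lerp3(corners[0], corners[1], u);
    V3 bottom = lerp3(corners[2], corners[3], u);
    V3 p      = lerp3(top, bottom, v);
    return normalize3({ p.x - eye.x, p.y - eye.y, p.z - eye.z });
}
```

Since the corners already live in world space, the resulting direction is a world-space ray and can be intersected against the models without any further transforms.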

Now what you want to do is simplify your shading code as much as possible to reduce the potential for error. Try to get diffuse lighting working in its simplest form (remember to clamp the dot product to zero so it doesn't go negative, and remember to normalize all the vectors you use) before moving on to Phong shading. If it works, you should get a nicely shaded teapot (without the specular highlights, of course).
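A minimal version of that clamped diffuse term might look like the sketch below; `V3` and `lambert` are made-up names rather than the Gz API, and N and L are assumed normalized, with L pointing toward the light.

```cpp
#include <algorithm>
#include <cassert>

// Illustrative minimal vector type, not from the Gz code.
struct V3 { float x, y, z; };

static float dot3(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Clamped Lambert diffuse: a back-facing light contributes nothing instead
// of subtracting color from the pixel.
static float lambert(V3 N, V3 L) {
    return std::max(0.0f, dot3(N, L));
}
```

Multiply this scalar by the light color and Kd per channel; once that shades the teapot correctly, the specular term can be layered back on top.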

What is this big green/red quad by the way in the last two screenshots? It doesn't look like it should be there. Does it represent another model?


Oh god... I think I just realized I had been reading vertices from the wrong file. Yes, that quad was supposed to be a plane for the texture map I had used in my rasterization program. I changed the file and it no longer contains the plane, but it still gives that white shade at the bottom. I think it might be due to the light direction or the shading equation; I shall check again.

