
# RJSkywalker

Member Since 14 Jul 2009
Offline Last Active Jul 14 2016 09:17 PM

### In Topic: line segments having common endpoint

05 January 2013 - 09:03 PM

@Alvaro:

Thanks for the d >= abs(L0 - L1) info; I had figured out the d <= L0 + L1 part. Together these tell me whether a solution exists, but I also want to know the configuration of the two line segments, i.e. the common endpoint itself. I want my function to return a list of all such points. Thinking of a brute-force way, I can find at least 3 points by moving a distance L0 parallel to each of the 3 axes; the other points can be found by breaking L0 into x, y, and z components, if I am right. So if I have to iterate through a loop, how do I find the range of the loop, given that it does not visit the points in any particular order?
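To sketch what I mean: in 3D the set of valid common endpoints is the circle where the sphere of radius L0 about one fixed endpoint meets the sphere of radius L1 about the other, so the points can be sampled directly instead of searched axis by axis. This is only a rough sketch of that idea; the `Vec3` type and `commonEndpoints` helper below are hypothetical, not code from the thread:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 scale(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
double length(Vec3 a) { return std::sqrt(dot(a, a)); }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// Sample n points P with |P - A| = L0 and |P - B| = L1.
// Returns an empty list when no such point exists.
std::vector<Vec3> commonEndpoints(Vec3 A, Vec3 B, double L0, double L1, int n)
{
    const double PI = 3.14159265358979323846;
    Vec3 ab = sub(B, A);
    double d = length(ab);
    // Existence test: the triangle inequality on (d, L0, L1)
    if (d == 0.0 || d > L0 + L1 || d < std::fabs(L0 - L1))
        return {};
    Vec3 u = scale(ab, 1.0 / d);                      // unit vector from A toward B
    double a = (L0 * L0 - L1 * L1 + d * d) / (2 * d); // distance from A to the circle's plane
    double r = std::sqrt(L0 * L0 - a * a);            // circle radius
    Vec3 C = add(A, scale(u, a));                     // circle centre
    // Two unit vectors spanning the plane perpendicular to u
    Vec3 tmp = (std::fabs(u.x) < 0.9) ? Vec3{1, 0, 0} : Vec3{0, 1, 0};
    Vec3 v = cross(u, tmp);
    v = scale(v, 1.0 / length(v));
    Vec3 w = cross(u, v);
    std::vector<Vec3> pts;
    for (int i = 0; i < n; ++i) {
        double t = 2.0 * PI * i / n;
        pts.push_back(add(C, add(scale(v, r * std::cos(t)),
                                 scale(w, r * std::sin(t)))));
    }
    return pts;
}
```

With this parametrization the "range of the loop" is just the angle around the circle, so the points come out in a well-defined order.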

### In Topic: Setting up Nvidia PhysX SDK 3.2

02 October 2012 - 01:08 AM

I found the error: I was not including WIN32 in the preprocessor definitions section of the project properties.

### In Topic: Basic Ray tracing

23 April 2012 - 03:13 AM

Oh, I think I just realized I had been reading vertices from the wrong file. Yes, that quad was supposed to be a plane for the texture map I had used in my rasterization program. I changed the file, and it no longer contains the plane, but it still gives that white shade at the bottom. I think it might be due to the light direction or the shading equation; I shall check again.

### In Topic: Basic Ray tracing

23 April 2012 - 02:00 AM

I get output 4 when I shift the camera position. For the same case, if I invert the z value of the ray direction, I get output 3. I had originally called it 'd': for output 4, d = -5, and for output 3, d = 5, so there is still some mistake. I also wanted to ask: in the camera code you gave me, if we are performing all the calculations in world space, why do we multiply by the view matrix to get the corners of the screen? Wouldn't that take you to camera space?
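For comparison, here is a minimal sketch of how a world-space primary ray can be set up: the screen corners are naturally defined in camera space, so it is the *inverse* view matrix (camera-to-world) that maps them into world space. The `Mat4`/`Vec3` types and the `primaryRay` helper are hypothetical stand-ins for the renderer's own types, assuming a right-handed camera looking down -z:

```cpp
#include <cmath>

// Minimal types for illustration; the renderer's GzMatrix/GzCoord play the same role.
struct Vec3 { double x, y, z; };
struct Mat4 { double m[4][4]; };

// Transform a point by a 4x4 matrix (w assumed to stay 1; no projection).
Vec3 transformPoint(const Mat4& M, Vec3 p)
{
    return {
        M.m[0][0] * p.x + M.m[0][1] * p.y + M.m[0][2] * p.z + M.m[0][3],
        M.m[1][0] * p.x + M.m[1][1] * p.y + M.m[1][2] * p.z + M.m[1][3],
        M.m[2][0] * p.x + M.m[2][1] * p.y + M.m[2][2] * p.z + M.m[2][3],
    };
}

// Build a world-space primary ray for pixel (px, py).
// camToWorld is the INVERSE of the view matrix: it carries the camera-space
// point on the image plane out to world space.
void primaryRay(const Mat4& camToWorld, Vec3 eyeWorld,
                int px, int py, int width, int height,
                double fovYDeg, Vec3* origin, Vec3* dir)
{
    const double PI = 3.14159265358979323846;
    double aspect = double(width) / double(height);
    double halfH = std::tan(fovYDeg * PI / 360.0);  // tan(fov/2)
    double halfW = halfH * aspect;
    // Pixel centre on the z = -1 image plane, in camera space
    double u = (2.0 * (px + 0.5) / width - 1.0) * halfW;
    double v = (1.0 - 2.0 * (py + 0.5) / height) * halfH;
    Vec3 pCam = {u, v, -1.0};  // camera looks down -z
    Vec3 pWorld = transformPoint(camToWorld, pCam);
    Vec3 d = {pWorld.x - eyeWorld.x, pWorld.y - eyeWorld.y, pWorld.z - eyeWorld.z};
    double len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    *origin = eyeWorld;
    *dir = {d.x / len, d.y / len, d.z / len};
}
```

Multiplying by the view matrix itself would indeed take world-space points into camera space, which is the opposite of what ray generation needs.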

### In Topic: Basic Ray tracing

23 April 2012 - 01:50 AM

I was doing the calculations in camera space, and now I have switched to world space. This is what I'm getting; it is almost close to the correct image I have. I'm using Phong shading, and here is my code. I'm using directional lights, which are given in camera space, so I had to convert them to world space.
```
void GetColor(GzRender* render, GzRay ray, GzCoord hitPoint, GzCoord* vertexList, GzCoord* pNormalList, GzColor* intensity)  // pNormalList: normals of the hit object
{
    // First interpolate the normals at the hit point
    GzCoord normal;
    GzColor resultIntensity;
    NormalInterpolation(hitPoint, vertexList, pNormalList, &normal);

    // Now that we have the normal at the hit point, apply the shading equation
    PhongIllumination(render, ray, normal, &resultIntensity);

    resultIntensity[RED]   *= 255;
    resultIntensity[GREEN] *= 255;
    resultIntensity[BLUE]  *= 255;
    memcpy(intensity, &resultIntensity, sizeof(GzColor));
}
```

Here is the Phong illumination code:
```
void PhongIllumination(GzRender* render, GzRay ray, GzCoord normal, GzColor* intensity)
{
    GzColor specTerm = {0, 0, 0};
    GzColor diffTerm = {0, 0, 0};
    GzCoord Rvec;
    GzCoord Evec;
    GzCoord tempNorm = {normal[X], normal[Y], normal[Z]};
    memcpy(&Evec, &ray.direction, sizeof(GzCoord));
    float nl, ne, re;  // dot products of N & L, N & E, and R & E
    bool flag;         // whether the lighting model should be computed for this light

    for (int j = 0; j < render->numlights; j++)
    {
        flag = false;
        GzCoord LVec;
        memcpy(&LVec, &render->lights[j].direction, sizeof(GzCoord));
        // Inverse transform on LVec to convert it from image (camera) space to world space
        NormalTransformation(LVec, render->camera.Xwi, &LVec);

        nl = dotProduct(tempNorm, LVec);
        ne = dotProduct(tempNorm, Evec);

        // Decide whether to include this light, based on its direction relative to the camera
        if (nl < 0 && ne < 0)
        {
            // Both negative: invert the normal, renormalize, and recompute nl and ne
            tempNorm[X] *= -1;
            tempNorm[Y] *= -1;
            tempNorm[Z] *= -1;
            Normalize(&tempNorm);
            nl = dotProduct(tempNorm, LVec);
            ne = dotProduct(tempNorm, Evec);
            flag = true;
        }
        else if (nl > 0 && ne > 0)  // both positive: compute the lighting model
            flag = true;

        if (flag)
        {
            // R = 2(N.L)N - L
            Rvec[X] = (2 * nl * tempNorm[X]) - LVec[X];
            Rvec[Y] = (2 * nl * tempNorm[Y]) - LVec[Y];
            Rvec[Z] = (2 * nl * tempNorm[Z]) - LVec[Z];
            Normalize(&Rvec);

            re = dotProduct(Rvec, Evec);
            // clamp it to [0,1]
            // if (re > 1.0)
            //     re = 1.0f;
            // else if (re < 0.0)
            //     re = 0.0f;
            re = pow(re, render->spec);  // specular power

            specTerm[RED]   += render->lights[j].color[RED]   * re;  // clamp the colors individually as well
            specTerm[GREEN] += render->lights[j].color[GREEN] * re;
            specTerm[BLUE]  += render->lights[j].color[BLUE]  * re;

            diffTerm[RED]   += render->lights[j].color[RED]   * nl;
            diffTerm[GREEN] += render->lights[j].color[GREEN] * nl;
            diffTerm[BLUE]  += render->lights[j].color[BLUE]  * nl;
        }
        // Reset the normal for the next light
        memcpy(tempNorm, normal, sizeof(GzCoord));
    }

    GzColor specular, diffuse;
    memcpy(specular, &specTerm, sizeof(GzColor));
    memcpy(diffuse, &diffTerm, sizeof(GzColor));
    MultiplyColor(render, specular, diffuse, intensity);
}
```
```
void MultiplyColor(GzRender* render, GzColor specular, GzColor diffuse, GzColor* resultant)
{
    GzColor result = {0, 0, 0};
    result[RED]   = render->Ka[RED]   * render->ambientlight.color[RED]   + render->Ks[RED]   * specular[RED]   + render->Kd[RED]   * diffuse[RED];
    result[GREEN] = render->Ka[GREEN] * render->ambientlight.color[GREEN] + render->Ks[GREEN] * specular[GREEN] + render->Kd[GREEN] * diffuse[GREEN];
    result[BLUE]  = render->Ka[BLUE]  * render->ambientlight.color[BLUE]  + render->Ks[BLUE]  * specular[BLUE]  + render->Kd[BLUE]  * diffuse[BLUE];

    // Clamp each channel to [0,1]
    if (result[RED] > 1.0f)
        result[RED] = 1.0f;
    if (result[RED] < 0.0f)
        result[RED] = 0.0f;
    if (result[GREEN] > 1.0f)
        result[GREEN] = 1.0f;
    if (result[GREEN] < 0.0f)
        result[GREEN] = 0.0f;
    if (result[BLUE] > 1.0f)
        result[BLUE] = 1.0f;
    if (result[BLUE] < 0.0f)
        result[BLUE] = 0.0f;

    memcpy(resultant, result, sizeof(GzColor));
}
```
