I'm beginning work on a raytracer and I've managed to get the basics up and running, but I've started running into some problems with ray generation.
When I set up my camera at (0, 0, -10) looking at (0, 0, 0) with up (0, 1, 0), the ray through the middle of the 512x512 viewport should be (0, 0, 1), right? (Pointing straight at the lookat point.)
Here's my code for creating the matrices:
inline void mat_perspective(matrix44* res, float fov, float aspect, float hither, float yonder)
{
    float invtanf = 1.0f / tanf(fov / 2.0f);
    mat_load(res,
        invtanf / aspect, 0, 0, 0,
        0, invtanf, 0, 0,
        0, 0, -(yonder + hither) / (hither - yonder), yonder * hither / (hither - yonder),
        0, 0, 1, 0);
}
inline void mat_lookat(matrix44* res, const vec3& Position, const vec3& View, const vec3& Up)
{
    vec3 dir, right, newUp;
    vec_sub(View, Position, &dir);
    vec_normalize(dir, &dir);
    vec_normalize(Up, &right);      // right temporarily holds the normalized up vector
    vec_cross(dir, right, &right);  // right = dir x up
    vec_cross(right, dir, &newUp);  // newUp = right x dir
    mat_load(res,
        right.x, newUp.x, dir.x, Position.x,
        right.y, newUp.y, dir.y, Position.y,
        right.z, newUp.z, dir.z, Position.z,
        0, 0, 0, 1.0f);
}
inline void mat_viewport(matrix44* res, float x, float y, float width, float height)
{
    mat_load(res,
        (width - x) / 2, 0, 0, (width + x) / 2,
        0, (height - y) / 2, 0, (height + y) / 2,
        0, 0, 1, 0,
        0, 0, 0, 1);
}
And for creating the camera and generating rays:
void cam_perspective(Camera* cam, const vec3& pos, const vec3& view, const vec3& up,
                     float fov, float width, float height, float hither, float yonder)
{
    mat_lookat(&cam->WorldToCamera, pos, view, up);
    mat_perspective(&cam->CameraToScreen, fov, width / height, hither, yonder);
    mat_viewport(&cam->ScreenToRaster, 0, 0, width, height);
    mat_inverse(cam->WorldToCamera, &cam->CameraToWorld);
    mat_inverse(cam->CameraToScreen, &cam->ScreenToCamera);
    mat_inverse(cam->ScreenToRaster, &cam->RasterToScreen);
    mat_mul(cam->RasterToScreen, cam->ScreenToCamera, &cam->RasterToCamera);
    mat_print(cam->RasterToScreen);
    cam->scr_width = width;
    cam->scr_height = height;
    cam->hither = hither;
    cam->yonder = yonder;
}
void cam_generate_ray(const Camera& cam, const Sample& sample, ray3* ray)
{
    vec_load(&ray->o, 0, 0, 0);
    vec_load(&ray->d, sample.imageX, sample.imageY, 0);
    // transform as a point (it's a raster-space position), not a direction
    mat_transf_point(cam.RasterToCamera, ray->d, &ray->d);
    vec_normalize(ray->d, &ray->d);
    ray->mint = 0.0f;
    ray->maxt = (cam.yonder - cam.hither) / ray->d.z;
    mat_transf_point(cam.WorldToCamera, ray->o, &ray->o);
    mat_transf_vec(cam.CameraToWorld, ray->d, &ray->d);
}
I know it's quite a bit of code to post, but I've looked in several places and the perspective matrix is given quite differently in each.
I'm using the one OpenGL uses, but with a 1 in m[3][2] (taken from "Physically Based Rendering" by Pharr and Humphreys).
Another question: I've seen some places use NDCs (normalized device coordinates) and others use canonical coordinates. Is there any important difference that affects the end result?