
Camera creation vector problems


Recommended Posts

I am currently working on coding a raytracer. I know how to do the ray intersection tests and such; my current problem is this. Given this information:

CamWidth - width of the image to generate
CamHeight - height of the image to generate
PLoc - location of the camera
PLookAt - point the camera is looking at
Theta - angle of the camera (in radians hehe)

At the moment I am assuming an 'up' vector for the camera of <0,1,0>.

What I need: I need to know how to generate the vector field starting at the camera location (PLoc), looking AT the point the camera looks at (PLookAt), with the outermost vectors of the camera making the largest angle (Theta). This array needs to be of size [CamHeight] by [CamWidth]. I will eventually modify the code to make the up vector changeable, but if anyone has any info on how to do this as-is I would not turn it down hehe.

Also, I would like to add that this is NOT a class project or any such nonsense; I'm simply trying to get my raytracer code working and this is my current stumbling block =/

Raytracing, my favourite. :)

First you need to think of the vectors from the camera as going through a camera plane. You need the point on this camera plane so that you can build the vector from the camera's position to that point; this will be your direction vector.

First, precalculate the rotation of the camera needed for it to be looking at the target. Store the rotation in a 3D vector, using the x, y, and z values for rotations around the x, y, and z axes respectively.

Then, for each pixel, calculate the point on the plane as if the plane lay parallel to the XY plane (assuming you are using a left-handed system where Y is up, which seems to be the case).

I notice you're using an angle for the camera. I haven't done it that way, but there is a workaround: my system uses an aspect length for the camera, which is the distance from the camera position, PLoc, to the camera plane.

These values will be:
x = pixelx/CamWidth - 0.5
y = 0.5 - pixely/CamHeight
z = aspect
where aspect = 0.5 / tan(theta/2), so that the outermost rays are separated by the full angle theta;
pixelx is the x-value of the current pixel's screen coordinate,
and pixely similarly is the y-value of the current pixel's screen coordinate.

Note that in the y-equation pixely is subtracted to allow for the convention that y goes downwards on computer screens, whereas in our system y goes upwards.
The 0.5's compensate for the fact that your camera's origin is at (0,0,aspect)+PLoc, while the origin of the surface you are rendering to is most likely at the top-left.

Now, to get the final direction vector, you should rotate this vector around the x-axis by the x-value of your rotation vector. Then do the same for y and z.

The rotation equation is as follows for rotating about the x-axis:
The original vector is (x1,y1,z1), and the rotated vector is (x2,y2,z2), and the angle is A.

I don't know exactly how much you know of maths, so I'll explain everything as much as I can:

The matrix is
[x2]   [1  0   0  0]   [x1]
[y2] = [0  c  -s  0] * [y1]
[z2]   [0  s   c  0]   [z1]
[1 ]   [0  0   0  1]   [1 ]
where c = cos(A) and s = sin(A),
resulting in

X-rotation: i.e. around the x axis
x2 = x1
y2 = y1*cos(A) - z1*sin(A)
z2 = y1*sin(A) + z1*cos(A)

Y-rotation: i.e. around the y axis
y2 = y1
x2 = x1*cos(A) - z1*sin(A)
z2 = x1*sin(A) + z1*cos(A)

Z-rotation: i.e. around the z axis
z2 = z1
x2 = x1*cos(A) - y1*sin(A)
y2 = x1*sin(A) + y1*cos(A)

To obtain the values you need to rotate by, subtract PLoc from PLookAt to get the view direction, which I'll call R, then find its rotation angles:
x angle = atan(R.y/R.z)
y angle = atan(R.x/R.z)
z angle = 0 (see later note)

Just plug these values into the equations above to rotate it and get the final vector...

Once you have rotated the direction vector around all three axes you have the result you want. I haven't *exactly* solved the up vector problem, but the z rotation value allows for such rotation of the camera around its axis (roll).

Well, that's the longest piece I've written in a long while. I hope it suits your needs. It helps if you know the maths first, which I hope you do. If you do, check the rotation formulae to make sure they're correct, although I have mentally checked them :)

Do contact me if it doesn't work!


OK, I THINK I got what you mean; check over this and see if I am doing it right.

int tempx, tempy;
double applyX, applyY, applyZ;
double rthetaX, rthetaY;
// the image plane sits at distance 0.5/tan(theta/2), so the outermost
// rays span the full angle theta
double aspect = 0.5 / tan(theta / 2.0);

// these depend only on the camera, so compute them once outside the loops
// (atan2 would be safer here if the camera can look along the Z plane)
rthetaX = atan((CameraLookAt.Y - CameraLoc.Y) / (CameraLookAt.Z - CameraLoc.Z));
rthetaY = atan((CameraLookAt.X - CameraLoc.X) / (CameraLookAt.Z - CameraLoc.Z));

for (tempx = 0; tempx < ImageWidth; tempx++) {
for (tempy = 0; tempy < ImageHeight; tempy++) {
// cast before dividing: integer division would always give 0 here
VectorField[tempx][tempy].X = (double)tempx / ImageWidth - 0.5;
VectorField[tempx][tempy].Y = 0.5 - (double)tempy / ImageHeight;
VectorField[tempx][tempy].Z = aspect;

// x rotation
applyY = VectorField[tempx][tempy].Y * cos(rthetaX) - VectorField[tempx][tempy].Z * sin(rthetaX);
applyZ = VectorField[tempx][tempy].Y * sin(rthetaX) + VectorField[tempx][tempy].Z * cos(rthetaX);
// now swap um
VectorField[tempx][tempy].Y = applyY;
VectorField[tempx][tempy].Z = applyZ;

// y rotation
applyX = VectorField[tempx][tempy].X * cos(rthetaY) - VectorField[tempx][tempy].Z * sin(rthetaY);
applyZ = VectorField[tempx][tempy].X * sin(rthetaY) + VectorField[tempx][tempy].Z * cos(rthetaY);
// now swap um
VectorField[tempx][tempy].X = applyX;
VectorField[tempx][tempy].Z = applyZ;

// z rotation (roll). still stumped; zero for now
}
}


Guest Anonymous Poster

There are three parts to a camera class:


Vectors define the direction, transforms do vector/matrix operations to rotate vectors, and rays are simply vectors with a location. The camera class then generates the rays for each pixel on the screen.

The code is a translation from


which is a feature-rich JavaScript raytracer. Since it works fine in JavaScript (as far as I can tell from the sample scenes), any problems are most likely in the usage (order of operations) and not in the math or the classes themselves. The Slimeland raytracer doesn't provide an intuitive interface for positioning and pointing the camera.

The basic cVector, cTransform, cRay, and cCamera classes should be enough to give you a very good idea of what to do and introduce you to the concepts so you know what you're looking for.

Give me a minute and I'll post my CPerspectiveCamera class. I'm still working on the CIsometricCamera.
I'll have to post rt_vec.cxx, rt_math.cxx, and the CVector3D class itself though. Could be a lot to read! But I definitely know it works.
//Edit: Here they are.

newrt.h - main declarations of base classes - only relevant classes included though

//Class Hierarchy, good thing I don't use multiple inheritance..
class CErrorHandler;
class CMsgBoxErrorHandler;
class CRasteriser;
class CGDIRasteriser;
class CRayTracer;
class CSimpleRayTracer;
class CDistributionRayTracer;
class CObject;
class CMappedObject;
class CPlane;
class CSphere;
class CMap;
class CColorMap;
class CTextureMap;
class CPlaneMap;
class CSphereMap;
class CGlobalIlluminationMap;
class CTexture;
class CSimpleTexture;
class CFilteredTexture;
class CGlobalIlluminationTexture;
class CLight;
class CAmbientLight;
class CAreaLight;
class CPointLight;
class CSpotLight;
class CLightArea;
class CPointArea;
class CSphereArea;
class CLightingEngine;
class CLightingEngine1;
class CScene;
class CScene1;
class CCamera;
class CPerspectiveCamera;
//Orthographic Camera is indeed a possibility
class CPixelProcessor;
class CRayCasterPixelProcessor;
class CPixelProcessor1;
class CPixelProcessor2;
class CLightingPixelProcessor;
class CGlobalIlluminationEngine;
class CGlobalIlluminationPixelProcessor;
class CVector3D;
class CRay;
class CGIRay;
struct SRGB;
struct SIntersectionData;
struct SMaterialData;
struct SIlluminationData;
struct SAttenuationData;

struct CVector3D{
float x,y,z;
CVector3D(float _x=0, float _y=0, float _z=0):
x(_x), y(_y), z(_z) {}
CVector3D(const CVector3D& v):
x(v.x), y(v.y), z(v.z) {}

float Modulus() const{ return sqrt(x*x+y*y+z*z); }
float SquareModulus() const{ return x*x+y*y+z*z; }

CVector3D operator+(const CVector3D& v)const{return CVector3D(x+v.x,y+v.y,z+v.z);}
CVector3D operator+=(const CVector3D& v){x+=v.x;y+=v.y;z+=v.z; return *this;}
CVector3D operator-(const CVector3D& v)const{return CVector3D(x-v.x,y-v.y,z-v.z);}
CVector3D operator-=(const CVector3D& v) {x-=v.x;y-=v.y;z-=v.z;return *this;}
CVector3D operator*(const CVector3D& v)const{return CVector3D(x*v.x,y*v.y,z*v.z);}
CVector3D operator*=(const CVector3D& v){x*=v.x;y*=v.y;z*=v.z;return *this;}
CVector3D operator/(const CVector3D& v)const{return CVector3D(x/v.x,y/v.y,z/v.z);}
CVector3D operator/=(const CVector3D& v){x/=v.x;y/=v.y;z/=v.z;return *this;}
CVector3D operator*(const float f)const{return CVector3D(x*f,y*f,z*f);}
CVector3D operator*=(const float f){x*=f;y*=f;z*=f; return *this;}
CVector3D operator/(const float f)const{return CVector3D(x/f,y/f,z/f);}
CVector3D operator /=(const float f){x/=f;y/=f;z/=f;return *this;}

CVector3D operator -(){ return CVector3D(-x, -y, -z); }

bool operator==(CVector3D& v){return(x==v.x && y==v.y && z==v.z);}
bool operator!=(CVector3D& v){return !operator==(v);}

/* Dot Product */
float operator()(const CVector3D& v) const{ return x*v.x+y*v.y+z*v.z; }

/* Cross Product */
CVector3D operator[](const CVector3D& v) const{ return CVector3D( y*v.z-z*v.y , z*v.x-x*v.z , x*v.y-y*v.x ); }

void Normalize(float fNewSize = 1.f){
float m = Modulus();
x *= fNewSize / m;
y *= fNewSize / m;
z *= fNewSize / m;
}
CVector3D Unit(float fSize = 1) const{
CVector3D answer = *this;
answer.Normalize(fSize);
return answer;
}

CVector3D BreedX() const{
if(x==0 && z==0) return CVector3D(1.f,0.f,0.f);
return CVector3D(0.f,1.f,0.f)[*this].Unit();
}
CVector3D BreedZ() const{
return BreedX()[*this].Unit();
}
bool IsParallel(const CVector3D& v) const{
if(v.x == 0 || v.y == 0 || v.z == 0) return false;
return (x/v.x == y/v.y && y/v.y == z/v.z);
}

CVector3D Refract(CVector3D n, float index){
float cosi, sini, cosr, sinr;
if(index == 0) return *this;
cosi = (*this)(n);
if( cosi < 0 ) {
cosi = -cosi;
n = -n;
index = 1 / index;
}
sini = sqrt(1 - cosi*cosi);
if(index < 1){
sinr = sini * index;
cosr = sqrt(1 - sinr*sinr);
return *this + n * (cosr-cosi);
}
sinr = sini * index;
if(sinr > 1) return Reflect(n); //total internal reflection
cosr = sqrt(1 - sinr * sinr);
return *this + n * (cosr-cosi);
}
CVector3D Reflect(CVector3D n){
return *this - n*2*(*this)(n);
}
};

class CRay{
public:
CVector3D Direction;
CVector3D Point;
CRay(const CVector3D& _Point=CVector3D(), const CVector3D& _Dir=CVector3D()):
Point(_Point), Direction(_Dir){}

CVector3D Param(float fDistance) const{
return Point + Direction * fDistance;
}
};

class CCamera{
public:
virtual CRay GetRay(float x, float y)=0;
virtual CVector3D Transform(const CVector3D& rWorldLocation)=0;
virtual ~CCamera(){}
};

rt_vec.cxx: some vector functions and definitions, particularly RectToPolar and PolarToRect

//Vector Functions:

ostream& operator<<(ostream& ostr, const CVector3D& v){
ostr << '{' << v.x << ' ' << v.y << ' ' << v.z << '}';
return ostr;
}

inline CVector3D XRotate(const CVector3D& v, float angle){
float s=sin(angle), c=cos(angle);
return CVector3D(v.x, v.y*c-v.z*s, v.y*s+v.z*c);
}
inline CVector3D YRotate(const CVector3D& v, float angle){
float s=sin(angle), c=cos(angle);
return CVector3D(v.x*c-v.z*s, v.y, v.x*s+v.z*c);
}
inline CVector3D ZRotate(const CVector3D& v, float angle){
float s=sin(angle), c=cos(angle);
return CVector3D(v.x*c-v.y*s, v.x*s+v.y*c, v.z);
}

CVector3D VectorRotate(const CVector3D& v, const CVector3D& angle){
CVector3D u = XRotate(v, angle.x);
u = YRotate(u, angle.y);
return ZRotate(u, angle.z);
}

CVector3D RectToPolar(const CVector3D& Rect){
//Undo the y rotation, then the z rotation, recording each angle
CVector3D r = Rect;
float yr = CoordAngle(r.x, r.z);
r = YRotate(r, -yr);
float zr = CoordAngle(r.x, r.y);
r = ZRotate(r, -zr);
return CVector3D(zr, yr, r.x);
}

/* Polar in (theta, lambda, r) notation
theta = angle in xy-plane (around z axis?)
lambda = angle in xz-plane (around y axis?)
r = magnitude */

CVector3D PolarToRect(const CVector3D& Polar){
return YRotate(ZRotate(CVector3D(Polar.z, 0, 0), Polar.x), Polar.y);
}

rt_math.cxx: contains unoptimized CoordAngle() function referenced in RectToPolar(), and more stuff (excluded here)

inline float CoordAngle(float x, float y){
if( x == 0 ) {
if(y < 0) return -M_PI/2;
if(y > 0) return M_PI/2;
return 0;
}
if(y == 0){
if(x < 0) return M_PI;
return 0;
}
float f = atan(y/x);
if(x < 0 && y < 0){
//Quadrant 3
f += M_PI;
}else if(x < 0 && y > 0){
//Quadrant 2
f = M_PI + f;
}else if(x > 0 && y < 0){
//Quadrant 4; Principal values also correct
}
//Quadrant 1; Principal values are correct
return f;
}

rt_cam.h: camera definition

class CPerspectiveCamera: public CCamera{
public:
CVector3D Position;
CVector3D Rotation;
float fAspect;

CPerspectiveCamera(const CVector3D& __P = CVector3D(0.f,0.f,0.f), const CVector3D& __R = CVector3D(0.f,0.f,0.f), float fCamAspect = 1.f):
Position(__P), Rotation(__R), fAspect(fCamAspect){}

virtual CRay GetRay(float x, float y);
virtual CVector3D Transform(const CVector3D& rWorldLocation);
virtual ~CPerspectiveCamera(){}
};

rt_cam/per.cxx: perspective camera class that returns rays
given a screen coord between (0,0) and (1,1)

CRay CPerspectiveCamera::GetRay(float x, float y){
CRay Ray;
CVector3D Dir = CVector3D(x-.5f, .5f-y, fAspect);
Ray.Point = this->Position;
Ray.Direction = VectorRotate(Dir, Rotation).Unit();
return Ray;
}

//Transform 3D vector to 2D screen coordinate, used for debug purposes only with global illumination
CVector3D CPerspectiveCamera::Transform(const CVector3D& rWorldSpace){
CVector3D r2d;
CVector3D r3d = rWorldSpace - this->Position;
r3d = VectorRotate(r3d, -(this->Rotation));
r2d.z = r3d.z;
r2d.x = this->fAspect * r3d.x / (r3d.z + this->fAspect) + 0.5f;
r2d.y = 0.5f - this->fAspect * r3d.y / (r3d.z + this->fAspect);
return r2d;
}

//More Edit:
The CVector3D has two useful functions, BreedX and BreedZ. Given an arbitrary y-axis (up vector), they will return a matching X-axis and Z-axis. I use these for texture mapping on planes, although someone else had a problem with camera strafing which could be solved by this.
Also, it has refraction and reflection functions which, funnily enough, I use for reflection and refraction, disrespectively.

NB: I'm quite happy with my project. It's not finished just yet, but I will be uploading the whole source under the GPL or something like that. I think this is quite enough for now.

[Edited by - MrEvil on September 6, 2004 11:14:01 AM]
