Suen

Member Since 19 Feb 2012
Offline Last Active Feb 09 2014 04:26 PM

Topics I've Started

FPS Camera

08 August 2013 - 01:55 PM

I recently went back to look at some old OpenGL (3.x+) programs I wrote and checked my camera class. In my camera class I have a function which calculates the view matrix. The calculations are based on OpenGL's gluLookAt function.

Here's what the class looks like (I've omitted several nearly identical and irrelevant functions to keep the code short):

#include "camera.h"

//Create view matrix
glm::mat4 Camera::createViewMatrix()
{
    glm::vec3 lookDirection = glm::normalize(cameraTargetPos); //Look direction of camera
    static glm::vec3 upDirection = glm::normalize(glm::vec3(0.0f, 1.0f, 0.0f)); //Up direction of camera in the world, aligned with world's y-axis.
    glm::vec3 rightDirection = glm::normalize(glm::cross(lookDirection, upDirection)); //Right direction of camera
    glm::vec3 perpUpDirection = glm::cross(rightDirection, lookDirection); //Re-calculate up direction, basis vectors may not be orthonormal

    static glm::mat4 viewMatrixAxes;

    //Create view matrix, [0] is first column, [1] is second etc.
    viewMatrixAxes[0] = glm::vec4(rightDirection, 0.0f);
    viewMatrixAxes[1] = glm::vec4(perpUpDirection, 0.0f);
    viewMatrixAxes[2] = glm::vec4(-lookDirection, 0.0f);

    viewMatrixAxes = glm::transpose(viewMatrixAxes); //Transpose for inverse

    static glm::mat4 camPosTranslation; //Translate to position of camera

    camPosTranslation[3] = glm::vec4(-cameraPosition, 1.0f);

    viewMatrix = viewMatrixAxes*camPosTranslation;

    return viewMatrix;
}

void Camera::yawRotation(GLfloat radian)
{
    glm::vec3 spherCameraTarget(cartesianToSpherical(cameraTargetPos)); //Convert camera target to spherical coordinates
    spherCameraTarget.y += radian; //Add radian units to camera target (in spherical coordinates)
    cameraTargetPos = sphericalToCartesian(spherCameraTarget); //Convert camera target to cartesian coordinates
}

void Camera::pitchRotation(GLfloat radian)
{
    glm::vec3 spherCameraTarget(cartesianToSpherical(cameraTargetPos)); //Convert camera target to spherical coordinates

    spherCameraTarget.z += radian; //Add radian units to camera target (in spherical coordinates)
    spherCameraTarget.z = glm::clamp(spherCameraTarget.z, 0.0001f, PI); //Clamp the pitch rotation between [0 PI] radians

    cameraTargetPos = sphericalToCartesian(spherCameraTarget); //Convert camera target to cartesian coordinates
}

void Camera::moveBackward(GLfloat moveSpeed)
{
    cameraPosition.x += (viewMatrix[0][2]*moveSpeed);
    cameraPosition.y += (viewMatrix[1][2]*moveSpeed);
    cameraPosition.z += (viewMatrix[2][2]*moveSpeed);
}

//Note: Deviates from the standard spherical-coordinate convention: the world's Y-axis takes the role of the usual Z-axis (up),
//and the Z-axis is negated. The formulas below account for this.
glm::vec3 Camera::cartesianToSpherical(glm::vec3 cartesianCoordinate)
{
    GLfloat r = (sqrt(pow(cartesianCoordinate.x, 2) + pow(cartesianCoordinate.y, 2) + pow(cartesianCoordinate.z, 2)));

    GLfloat theta = atan2(-cartesianCoordinate.z, cartesianCoordinate.x);
    GLfloat phi = acos(cartesianCoordinate.y/r);

    glm::vec3 sphericalCoordinate(r, theta, phi);

    return sphericalCoordinate;
}

//Note: See notes for cartesianToSpherical() function
glm::vec3 Camera::sphericalToCartesian(glm::vec3 sphericalCoordinate)
{
    GLfloat theta = sphericalCoordinate.y;
    GLfloat phi = sphericalCoordinate.z;

    glm::vec3 cartesianCoordinate(cos(theta)*sin(phi), cos(phi), -sin(theta)*sin(phi));

    return cartesianCoordinate * sphericalCoordinate.x;
}

This worked as expected: I could strafe left/right, up/down and forwards/backwards, rotate 360 degrees around my up vector, and rotate +-90 degrees around my right vector. However, I realized that according to the OpenGL documentation I'm calculating my view matrix incorrectly. If you look at the first line of code in the createViewMatrix() function, you can see that my look direction is just the normalized camera look-at point; according to the documentation it should be (cameraTargetPos-cameraPosition). When I corrected this, the entire camera system broke down, both the strafing and the rotation. I have fixed the strafing so it now works as it should; here's the new code (note that the view matrix is now calculated correctly and that all strafe methods update the camera look-at point):

//Create view matrix
glm::mat4 Camera::createViewMatrix()
{
    glm::vec3 lookDirection = glm::normalize(cameraTargetPos-cameraPosition); //Look direction of camera
    static glm::vec3 upDirection = glm::normalize(glm::vec3(0.0f, 1.0f, 0.0f)); //Up direction of camera in the world, aligned with world's y-axis.
    glm::vec3 rightDirection = glm::normalize(glm::cross(lookDirection, upDirection)); //Right direction of camera
    glm::vec3 perpUpDirection = glm::cross(rightDirection, lookDirection); //Re-calculate up direction, basis vectors may not be orthonormal

    static glm::mat4 viewMatrixAxes;

    //Create view matrix, [0] is first column, [1] is second etc.
    viewMatrixAxes[0] = glm::vec4(rightDirection, 0.0f);
    viewMatrixAxes[1] = glm::vec4(perpUpDirection, 0.0f);
    viewMatrixAxes[2] = glm::vec4(-lookDirection, 0.0f);

    viewMatrixAxes = glm::transpose(viewMatrixAxes); //Transpose for inverse

    static glm::mat4 camPosTranslation; //Translate to position of camera

    camPosTranslation[3] = glm::vec4(-cameraPosition, 1.0f);

    viewMatrix = viewMatrixAxes*camPosTranslation;

    return viewMatrix;
}

void Camera::moveBackward(GLfloat moveSpeed)
{
    cameraTargetPos.x += (viewMatrix[0][2]*moveSpeed);
    cameraTargetPos.y += (viewMatrix[1][2]*moveSpeed);
    cameraTargetPos.z += (viewMatrix[2][2]*moveSpeed);

    cameraPosition.x += (viewMatrix[0][2]*moveSpeed);
    cameraPosition.y += (viewMatrix[1][2]*moveSpeed);
    cameraPosition.z += (viewMatrix[2][2]*moveSpeed);
}

The rest of the functions in the old code (cartesianToSpherical(), yawRotation etc.) remain unchanged.

 

The rotation still remains broken. It's hard to describe exactly how. If I'm close enough to my object it works as it should. If, for example, I move my camera back far enough that the camera look-at point gets a positive value (it starts with a negative value at first), one of the spherical coordinates (theta) ends up being negative, so when I rotate around the up vector to the left I end up rotating right instead. Not only that, but I never complete a whole revolution; it's as if I rotate CW 45 degrees and then CCW 45 degrees, and it just keeps going like that. There's some other weird behaviour as well.

 

I'm quite certain it has to do with how I convert back and forth between Cartesian and spherical coordinates and the formulas I use, but I'm at a loss as to how to solve it. That I got it to work properly with the wrong code is a small miracle in itself.
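
To make it more concrete where I suspect the problem lies, here is a minimal standalone sketch (not my actual class) of the conversion round trip with the same y-up convention. atan2 returns an angle in (-PI, PI], so theta jumps sign the moment the look-at point's z-coordinate becomes positive; and since my yaw/pitch functions convert the absolute look-at point rather than the offset from the camera position, the target is effectively rotated around the world's y-axis instead of around the camera:

#include <glm/glm.hpp>
#include <cmath>
#include <cstdio>

//Sketch only: same y-up convention as the camera class above.
//r = radius, theta = azimuth around the y-axis, phi = polar angle from the y-axis.
glm::vec3 toSpherical(const glm::vec3& c)
{
    float r = glm::length(c);
    float theta = atan2(-c.z, c.x); //In (-PI, PI], flips sign when the point's z changes sign
    float phi = acos(c.y/r);
    return glm::vec3(r, theta, phi);
}

glm::vec3 toCartesian(const glm::vec3& s)
{
    float theta = s.y, phi = s.z;
    return s.x*glm::vec3(cos(theta)*sin(phi), cos(phi), -sin(theta)*sin(phi));
}

int main()
{
    //theta jumps from positive to negative as soon as the look-at point's z-coordinate
    //changes sign, which matches what happens when I move the camera far enough back
    printf("theta = %f\n", toSpherical(glm::vec3(0.1f, 0.0f, -1.0f)).y); //Positive, close to +PI/2
    printf("theta = %f\n", toSpherical(glm::vec3(0.1f, 0.0f,  1.0f)).y); //Negative, close to -PI/2

    //The round trip itself reproduces the original point
    glm::vec3 back = toCartesian(toSpherical(glm::vec3(0.1f, 0.0f, -1.0f)));
    printf("round trip: (%f, %f, %f)\n", back.x, back.y, back.z);

    return 0;
}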

If you want to try and compile the code and run it to see the effect yourself let me know and I'll attach the source code here.

Any help is appreciated, thanks!


SSE2 Integer operations on Vectors

11 March 2013 - 10:53 AM

I've been playing around with SSE to get a better understanding of it, using SSE2 intrinsics for integer operations. So far I've written a very common yet simple example:

 

Vector2* ar = CacheAlignedAlloc<Vector2>(SIZE); //Array aligned to 16 bytes (or a multiple of 16), SIZE is a large value

//Some values are set for ar here....

__m128i sse;
__m128i sse2 = _mm_set_epi32(0,5,0,5); //Adds 5 to each x-value and 0 to each y-value
__m128i result;

for(int i=0; i<SIZE; i=i+2)
{
	sse = _mm_load_si128((__m128i*)&ar[i]);
	result = _mm_add_epi32(sse, sse2);
	_mm_store_si128((__m128i*)&ar[i], result);
}

 

Vector2 is a very simple struct:

 

struct Vector2
{
	int x, y;
};

 

The way things work now, I can load at most two Vector2s into a 128-bit register. This is fine if I were to perform an operation on both values of each Vector2 at the same time. However, if you look at the code above, the y-value of each Vector2 only gets added with zero, so it remains unchanged; 64 bits of the 128-bit register are essentially doing nothing. Is there a way to instead load four x-values from four Vector2s, perform the operations, and then store the results back again?
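
To make the question more concrete, this is roughly the kind of thing I'm imagining (just an untested sketch that reuses ar and SIZE from above, so the choice of shuffles may well not be the best): gather the four x-values into one register with shuffles, do the add there, and interleave the results back before storing:

#include <emmintrin.h> //SSE2 intrinsics
#include <xmmintrin.h> //_mm_shuffle_ps / _MM_SHUFFLE

//Sketch: process four Vector2s (two 128-bit loads) per iteration.
//Assumes SIZE is a multiple of 4 and that ar is 16-byte aligned, as above.
__m128i addend = _mm_set1_epi32(5); //Value added to every x

for(int i=0; i<SIZE; i=i+4)
{
    __m128i a = _mm_load_si128((__m128i*)&ar[i]);   //x0 y0 x1 y1
    __m128i b = _mm_load_si128((__m128i*)&ar[i+2]); //x2 y2 x3 y3

    //Gather the x-values and y-values into separate registers (AoS -> SoA)
    __m128 af = _mm_castsi128_ps(a);
    __m128 bf = _mm_castsi128_ps(b);
    __m128i xs = _mm_castps_si128(_mm_shuffle_ps(af, bf, _MM_SHUFFLE(2,0,2,0))); //x0 x1 x2 x3
    __m128i ys = _mm_castps_si128(_mm_shuffle_ps(af, bf, _MM_SHUFFLE(3,1,3,1))); //y0 y1 y2 y3

    xs = _mm_add_epi32(xs, addend); //All four lanes now do useful work

    //Interleave back to x0 y0 x1 y1 / x2 y2 x3 y3 and store (SoA -> AoS)
    __m128 xf = _mm_castsi128_ps(xs);
    __m128 yf = _mm_castsi128_ps(ys);
    _mm_store_si128((__m128i*)&ar[i],   _mm_castps_si128(_mm_unpacklo_ps(xf, yf)));
    _mm_store_si128((__m128i*)&ar[i+2], _mm_castps_si128(_mm_unpackhi_ps(xf, yf)));
}

Or is the more sensible approach to restructure the data itself into separate x- and y-arrays (SoA) so that no shuffling is needed in the first place?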


Ray tracer - perspective distortion

15 December 2012 - 07:38 PM

I'm writing a ray tracer and while I have my simple scene up and running I've encountered what seems to be a common problem. My scene suffers from perspective distortion, and this is of course most noticeable on spheres. Basically, when I create a sphere whose center does not lie on the z-axis of my camera, the sphere gets elongated in a radial manner. I've attached an image of my scene to show what I mean; the green and blue spheres show the distortion quite well.

I do understand that this effect is bound to occur due to the nature of perspective projection. Having a rectangular projection plane will create this distortion, and after seeing a helpful image on the topic I at least have a basic grasp of why it occurs. I believe I even saw the same effect in a real photo when I was searching for more information about it. The thing is that I don't have the slightest clue how to lessen this distortion.
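
For what it's worth, my current (possibly wrong) understanding of the geometry is that a small sphere whose center sits at an angle theta off the camera's forward axis gets stretched radially by roughly a factor of 1/cos(theta) when projected onto a flat image plane. A quick back-of-the-envelope check of that factor (not part of my renderer):

#include <glm/glm.hpp>
#include <cmath>
#include <cstdio>

int main()
{
    //Approximate radial stretch factor of a small sphere at angle theta off the view axis
    for(float degrees = 0.0f; degrees <= 60.0f; degrees += 15.0f)
    {
        float theta = glm::radians(degrees);
        printf("theta = %2.0f degrees -> stretch ~ %.2f\n", degrees, 1.0f/cos(theta));
    }

    return 0;
}

If that factor is anywhere near right, then with my 60-degree half-angle the edges of the image already sit at angles where the stretching is around a factor of two, even without any bug.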

This is the code which renders my scene:

const int imageWidth = 600;
const int imageHeight = 600;

float aspectR = imageWidth/(float)imageHeight;
float fieldOfView = 60.0f; // Field of view is 120 degrees, need half of it.

Image testImage("image1.ppm", imageWidth, imageHeight);

int main()
{
	 /*
	   building scene here for now...
	 */

	 std::cout << "Rendering scene..." << std::endl;

	 renderScene(sceneObjects, 5);

	 std::cout << "Scene rendered" << std::endl;

	 return 0;
}

//Render the scene
void renderScene(Object* objects[], int nrOfObjects)
{
	 //Create light and set light properties
	 Light light1(glm::vec3(5.0f, 10.0f, 10.0f));
	 light1.setAmbient(glm::vec3(0.2f));
	 light1.setDiffuse(glm::vec3(1.0f));

	 //Create a ray with an origin and direction. Origin will act as the
	 //CoP (Center of Projection)
	 Ray r(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f));

	 //Will hold ray (x,y)-direction in world space
	 float dirX, dirY;

	 //Will hold the intensity of reflected ambient and diffuse light
	 glm::vec3 ambient, diffuse;

	 //Loop through each pixel...
	 for(int y=0; y<imageHeight; y++)
	 {
	 	 for(int x=0; x<imageWidth; x++)
	 	 {
	 	 	 //Normalized pixel coordinates, remapped to range between [-1, 1].
	 	 	 //Formula for dirY differs because we want to swap the y-axis so
	 	 	 //that positive y-values of coordinates are on the upper half of
	 	 	 //the image plane with respect to the center of the plane.
	 	 	 dirX = (2.0f*(x+0.5f)/imageWidth)-1.0f;
	 	 	 dirY = 1.0f-(2.0f*(y+0.5f)/imageHeight);

	 	 	 //Account for aspect ratio and field of view
	 	 	 dirX = dirX*aspectR*glm::tan(glm::radians(fieldOfView));
	 	 	 dirY = dirY*glm::tan(glm::radians(fieldOfView));

	 	 	 //Set the ray direction in world space. We can calculate the distance
	 	 	 //from the camera to the image plane by using the FoV and half height
	 	 	 //of the image plane, tan(fov/2) = a/d => d = a/tan(fov/2)

	 	 	 //r.setDirection(glm::vec3(dirX, dirY, -1.0f)-r.origin());
	 	 	 r.setDirection(glm::vec3(dirX, dirY, -1.0f/glm::tan(glm::radians(fieldOfView)))-r.origin());
  
	 	 	 //Will hold object with closest intersection
	 	 	 Object* currentObject = NULL;

	 	 	 //Will hold solution of ray-object intersection
	 	 	 float closestHit = std::numeric_limits<float>::infinity(), newHit = std::numeric_limits<float>::infinity();

	 	 	 //For each object...
	 	 	 for(int i=0; i<nrOfObjects; i++)
	 	 	 {
	 	 	 	 //If ray intersect object...
	 	 	 	 if(objects[i]->intersection(r, newHit))
	 	 	 	 {
	 	 	 	 	 //If intersection is closer then previous intersection
	 	 	 	 	 if(newHit<closestHit)
	 	 	 	 	 {
	 	 	 	 	 	 //Update closest intersection and corresponding object
	 	 	 	 	 	 closestHit = newHit;
	 	 	 	 	 	 currentObject = objects[i];
	 	 	 	 	 }
	 	 	 	 }
	 	 	 }

	 	 	 //If an object has been intersected...
	 	 	 if(currentObject != NULL)
	 	 	 {
	 	 	 	 //Get intersection point
	 	 	 	 glm::vec3 intersectionPoint = r.origin()+closestHit*r.direction();

	 	 	 	 //Get light direction and normal
	 	 	 	 glm::vec3 lightDirection = glm::normalize(light1.position()-intersectionPoint);
	 	 	 	 glm::vec3 normal = glm::normalize(currentObject->normal(intersectionPoint));
  
	 	 	 	 //Factor affecting reflected diffuse light
	 	 	 	 float LdotN = glm::clamp(glm::dot(lightDirection, normal), 0.0f, 1.0f);

	 	 	 	 //Get diffuse and ambient color of object
	 	 	 	 ambient = currentObject->diffuse()*light1.ambient();
	 	 	 	 diffuse =  currentObject->diffuse()*LdotN*light1.diffuse();
  
	 	 	 	 //Final color value of pixel
	 	 	 	 glm::vec3 RGB = ambient+diffuse;
  
	 	 	 	 //Make sure color values are clamped between 0-255 to avoid artifacts
	 	 	 	 RGB = glm::clamp(RGB*255.0f, 0.0f, 255.0f);

	 	 	 	 //Set color value to pixel
	 	 	 	 testImage.setPixel(x, y, RGB);
	 	 	 }
	 	 	 else
	 	 	 {
	 	 	 	 //No intersection, set black color to pixel.
	 	 	 	 testImage.setPixel(x, y, glm::vec3(0.0f));
	 	 	 }
	 	 }
	 }

	 //Write out the image
	 testImage.writeImage();
}

I do want to mention that I have been playing around a bit with the parameters involved in my code. I tried putting the position of the ray/camera (CoP) further away from the image plane. As expected, the scene zooms in, since the field of view effectively becomes smaller. This obviously did not solve anything on its own, so I increased the field-of-view parameter (to neutralize the zoomed-in effect). Doing so removed the distortion (see the second image), but this was just a temporary solution: when I moved the blue sphere further out from the center of the scene it was stretched again with the new settings (see the third image).

These are the settings used for the three images:

Image 1:
float fieldOfView = 60.0f
Ray r(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f));

Image 2:
float fieldOfView = 85.0f
Ray r(glm::vec3(0.0f, 0.0f, 20.0f), glm::vec3(0.0f));

Image 3:
float fieldOfView = 85.0f
Ray r(glm::vec3(0.0f, 0.0f, 20.0f), glm::vec3(0.0f));

Again, I am totally lost as to how to solve this problem. I don't even understand why the new settings I tried removed the distortion (temporarily). I realize some distortion will happen regardless, but this seems extreme; furthermore, the fact that I had to increase my fov to such a high value gives me a bad feeling and makes me suspect my code might be at fault somewhere. Any help and explanation will be appreciated.

Writing out image data

11 October 2012 - 01:45 PM

Hello. I'm not sure if this is really the right place to ask this, but here goes. I'm trying to write an image to my hard drive. Basically I allocate an array big enough to hold my image, fill it with my image data, and write it to the hard drive. Since I'm fairly new to writing code that outputs an image file, I've decided to go with a very simple approach: I just write out a raw binary file which contains my image data and then use Adobe Photoshop CS5 Extended (Ver 12.1) to open it and check that the result is correct. Using a proper image format would require me to provide information for the image header and more, which I want to avoid for the moment (although depending on my problem I might have to switch to another format).

Here's the relevant code for writing out the image to my hard drive:

typedef unsigned char byte;

int imageWidth = 400;
int imageHeight = 400;
int imageSize = imageWidth*imageHeight;

//imageData (filled elsewhere) is an array of imageSize pixel structs with r, g and b members

std::fstream imageStream;

imageStream.open("image.raw", std::ios::out|std::ios::binary);

if(!imageStream.is_open())
{
	 std::cout << "Could not open file" << std::endl;
	 exit(1);
}

for(int i=0; i<imageSize; i++)
{
	 imageStream.put((byte)imageData[i].r); //Red channel
	 imageStream.put((byte)imageData[i].g); //Green channel
	 imageStream.put((byte)imageData[i].b); //Blue channel
}

imageStream.close();

Now this works perfectly fine IF my image width and height are the same. In other words, I need the same number of pixels horizontally as vertically. So if I have a 400x400 image as above I get the correct image. Before opening the file, Photoshop asks me to describe the raw file options (dimensions of the image, number of color channels etc.). Normally the default options suggested by the application are already correct when both dimensions are equal.

Here's the thing though: if I make the dimensions differ from each other, say changing them to 600x150, then problems arise. The image still gets written to the hard drive, but when I open it the default raw options in Photoshop suggest that the image is 300x300 (presumably a square guess based on the file size). If I accept this, my image ends up heavily distorted. If I instead specify the original dimensions (600x150) the image shows up correctly, but only because 300*300 = 600*150 = 90000, so the pixel counts happen to match; it's a special case. If I go with something less convenient like 250x532, then the default Photoshop raw options consider my image to be 665x600. Changing this back to 250x532 in the Photoshop raw options only produces a result with rubbish in it, as 250*532 != 665*600 (the application warns me that I'm decreasing the size of the image, which it considers to be 665x600).

Now if I change my code to this instead (writing out a .ppm file instead of a .raw file):

imageStream.open("image.ppm", std::ios::out|std::ios::binary);

if(!imageStream.is_open())
{
	 std::cout << "Could not open file" << std::endl;
	 exit(1);
}

imageStream << "P6\n" << imageWidth << " " << imageHeight << "\n255\n";

for(int i=0; i<imageSize; i++)
{
	 imageStream.put((byte)imageData[i].r); //Red channel
	 imageStream.put((byte)imageData[i].g); //Green channel
	 imageStream.put((byte)imageData[i].b); //Blue channel
}

imageStream.close();

then everything works regardless of my dimensions. Photoshop won't ask anything but just open the file with the correct dimensions and correct image result.

Now I most likely think that the problem here is not the code itself but rather how Photoshop internally handles .raw files (although why it changes the dimensions of an image whose width and height differ is beyond me). Especially if you compare it with the .ppm format, where the dimensions of the image are written out before the data, whereas the .raw file provides no information at all.
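
To illustrate what I mean about the raw file carrying no information: given only the file size and an assumed channel count, any factorization of the pixel count is an equally valid interpretation, so an application can only guess. A quick throwaway example (not part of my program):

#include <cstdio>

int main()
{
    //My 600x150 RGB dump
    const long fileSize = 600*150*3;    //270000 bytes
    const long pixelCount = fileSize/3; //90000 pixels, assuming 3 channels

    //Every factor pair of the pixel count is a possible width x height
    for(long width = 1; width*width <= pixelCount; width++)
    {
        if(pixelCount%width == 0)
            printf("%ld x %ld\n", width, pixelCount/width);
    }

    return 0;
}

Among the pairs printed are both 150 x 600 and 300 x 300, which would at least be consistent with Photoshop defaulting to the square interpretation.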

If this is related to Photoshop then this topic probably doesn't even belong in this forum, but I just want to make sure that it's not my code that is at fault here. If it is my code, what exactly is wrong and how would I change it?

Best regards

Swapping addresses between pointers and cache locality

08 September 2012 - 02:11 PM

Assume I have a vector that consists of pointers. When the contents of this vector are read (one element at a time, in order) I should be reducing cache misses, since the pointers are adjacent to each other in memory. Now if we assume that what each pointer is POINTING at also lies in a contiguous block of memory, then from what I've understood this should also reduce cache misses (corrections here would be welcome). To clarify with some code:

Foo* foo = new Foo[size];
std::vector<Foo*> fooVector;

for(int i=0; i<size; i++)
{
	 fooVector.push_back(&foo[i]); //fill fooVector with &foo[i]
}

Now let's assume that during run-time an event occurs, and each time this event occurs a pointer in the vector is processed. Some operations are done on the value pointed at by this pointer and then a swap is done. The swap takes the address pointed at by the currently processed pointer and swaps it with the address pointed at by the last pointer in the vector. Note that 'last' here doesn't necessarily mean the last element of the vector; there could be some variable which keeps track of what is considered the "last element". That detail is not important though; the point is that the addresses the pointers point at are swapped between the pointers. Again, some code to clarify what I mean:

void processPointer(int pointerId)
{
	 //do some operations on foo object being pointed at with fooVector[pointerId]

	 //Swap the addresses between two pointers (lastElement is tracked elsewhere, as described above)
	 std::swap(fooVector[pointerId], fooVector[lastElement]);
}


Now as far as I understand it, the next time the vector is read the pointers are still adjacent in memory, so nothing changes there. What they point at becomes unordered over time though: the first element of the vector could be pointing at the third element of the heap-allocated array, the third element of the vector at the tenth element of the array, and so on. How does this affect cache locality when the pointed-at values are read in this unordered fashion? I realize that the values still belong to the same contiguous block, but a cache line doesn't hold unlimited memory.

To elaborate on the last sentence above, I see it this way: say we end up in a situation where fooVector[0] points at foo[3] and fooVector[1] points at foo[10]. Now when I go through my vector step by step and perform calculations on the values being pointed at, then when I process fooVector[0] I might be reading foo[3], foo[4] and foo[5] into the cache (perhaps something before foo[3] as well; I honestly don't know whether it looks "behind" or not), and that might be as much as fits in one cache line. When I then process fooVector[1], foo[10] will not be in the cache, so I would incur a cache miss. I realize this depends entirely on the actual size of my objects (a Foo object in this case), but this is how I thought of it.
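
To put some (completely made-up) numbers on that last paragraph, assuming a 64-byte cache line and a 16-byte Foo, something like this is what I have in mind:

#include <cstddef>
#include <cstdio>

//Hypothetical Foo purely for illustration; the real numbers depend on the
//target CPU's cache-line size and on the actual definition of Foo.
struct Foo { float a, b, c, d; }; //16 bytes in this made-up case

int main()
{
    const std::size_t cacheLineSize = 64; //Common on current x86 CPUs
    const std::size_t perLine = cacheLineSize/sizeof(Foo); //4 objects per line in this example

    //Assuming the array starts on a cache-line boundary, foo[3] and foo[10]
    //live in different cache lines, so jumping between them touches two lines.
    printf("Foo objects per cache line: %zu\n", perLine);
    printf("foo[3] is in cache line %zu, foo[10] is in cache line %zu\n",
        (3*sizeof(Foo))/cacheLineSize, (10*sizeof(Foo))/cacheLineSize);

    return 0;
}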

I could be wrong about many things here, I'm fairly new when it comes to considering the cache when coding so I could have assumed many things which are wrong, if so please let me know.
