I do understand that this effect is bound to occur due to the nature of perspective projection. Projecting onto a flat, rectangular image plane introduces this distortion, and after seeing a helpful image on the topic I have at least a basic grasp of why it occurs. I believe I even saw the same effect in a real photo while searching for more information about it. The problem is that I don't have the slightest clue how to lessen this distortion.

This is the code which renders my scene:

const int imageWidth = 600;
const int imageHeight = 600;
float aspectR = imageWidth/(float)imageHeight;
float fieldOfView = 60.0f; // Field of view is 120 degrees, need half of it.
Image testImage("image1.ppm", imageWidth, imageHeight);

int main()
{
    /* building scene here for now... */

    std::cout << "Rendering scene..." << std::endl;
    renderScene(sceneObjects, 5);
    std::cout << "Scene rendered" << std::endl;

    return 0;
}

//Render the scene
void renderScene(Object* objects[], int nrOfObjects)
{
    //Create light and set light properties
    Light light1(glm::vec3(5.0f, 10.0f, 10.0f));
    light1.setAmbient(glm::vec3(0.2f));
    light1.setDiffuse(glm::vec3(1.0f));

    //Create a ray with an origin and direction. Origin will act as the
    //CoP (Center of Projection)
    Ray r(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f));

    //Will hold ray (x,y)-direction in world space
    float dirX, dirY;

    //Will hold the intensity of reflected ambient and diffuse light
    glm::vec3 ambient, diffuse;

    //Loop through each pixel...
    for(int y=0; y<imageHeight; y++)
    {
        for(int x=0; x<imageWidth; x++)
        {
            //Normalized pixel coordinates, remapped to range between [-1, 1].
            //Formula for dirY differs because we want to swap the y-axis so
            //that positive y-values of coordinates are on the upper half of
            //the image plane with respect to the center of the plane.
            dirX = (2.0f*(x+0.5f)/imageWidth)-1.0f;
            dirY = 1.0f-(2.0f*(y+0.5f)/imageHeight);

            //Account for aspect ratio and field of view
            dirX = dirX*aspectR*glm::tan(glm::radians(fieldOfView));
            dirY = dirY*glm::tan(glm::radians(fieldOfView));

            //Set the ray direction in world space. We can calculate the distance
            //from the camera to the image plane by using the FoV and half height
            //of the image plane, tan(fov/2) = a/d => d = a/tan(fov/2)
            //r.setDirection(glm::vec3(dirX, dirY, -1.0f)-r.origin());
            r.setDirection(glm::vec3(dirX, dirY, -1.0f/glm::tan(glm::radians(fieldOfView)))-r.origin());

            //Will hold object with closest intersection
            Object* currentObject = NULL;

            //Will hold solution of ray-object intersection
            float closestHit = std::numeric_limits<float>::infinity(),
                  newHit = std::numeric_limits<float>::infinity();

            //For each object...
            for(int i=0; i<nrOfObjects; i++)
            {
                //If ray intersects object...
                if(objects[i]->intersection(r, newHit))
                {
                    //If intersection is closer than previous intersection
                    if(newHit<closestHit)
                    {
                        //Update closest intersection and corresponding object
                        closestHit = newHit;
                        currentObject = objects[i];
                    }
                }
            }

            //If an object has been intersected...
            if(currentObject != NULL)
            {
                //Get intersection point
                glm::vec3 intersectionPoint = r.origin()+closestHit*r.direction();

                //Get light direction and normal
                glm::vec3 lightDirection = glm::normalize(light1.position()-intersectionPoint);
                glm::vec3 normal = glm::normalize(currentObject->normal(intersectionPoint));

                //Factor affecting reflected diffuse light
                float LdotN = glm::clamp(glm::dot(lightDirection, normal), 0.0f, 1.0f);

                //Get diffuse and ambient color of object
                ambient = currentObject->diffuse()*light1.ambient();
                diffuse = currentObject->diffuse()*LdotN*light1.diffuse();

                //Final color value of pixel
                glm::vec3 RGB = ambient+diffuse;

                //Make sure color values are clamped between 0-255 to avoid artifacts
                RGB = glm::clamp(RGB*255.0f, 0.0f, 255.0f);

                //Set color value to pixel
                testImage.setPixel(x, y, RGB);
            }
            else
            {
                //No intersection, set black color to pixel.
                testImage.setPixel(x, y, glm::vec3(0.0f));
            }
        }
    }

    //Write out the image
    testImage.writeImage();
}

I do want to mention that I have been playing around a bit with the parameters involved in my code. I tried moving the position of the ray/camera (the CoP) further away from the image plane. As expected, the scene zooms in, since the field of view should become smaller. This obviously did not solve anything, so I increased the field-of-view parameter to neutralize the zoomed-in effect. Doing so removed the distortion (see the second image), but this was only a temporary fix: when I moved the blue sphere further out from the center of the scene, it was stretched again with the new settings (see the third image).

These are the settings used for the three images:

Image 1:
float fieldOfView = 60.0f
Ray r(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f));

Image 2:
float fieldOfView = 85.0f
Ray r(glm::vec3(0.0f, 0.0f, 20.0f), glm::vec3(0.0f));

Image 3:
float fieldOfView = 85.0f
Ray r(glm::vec3(0.0f, 0.0f, 20.0f), glm::vec3(0.0f));

Again, I am totally lost as to how to solve this problem. I don't even understand why the new settings I tried removed the distortion (temporarily). I realize some distortion will happen regardless, but this seems extreme. Furthermore, the fact that I had to increase my FoV to such a high value gives me a bad feeling and makes me suspect my code is at fault somewhere. Any help and explanation will be appreciated.