I have a vertex projection issue on a custom 3D soft-engine

This topic is 397 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

Recommended Posts

I have been writing a 3D soft-engine in Java for a little while and have gotten it to a reasonable state, but I recently noticed a bug that I can't seem to solve. When I render a few cubes as a test, if a vertex of a cube moves beyond a certain point outside the field of view, the coordinate value of that vertex is extended much further than it should be.
 
When working:
 
[attachment=33879:normalBehaviour.png]
 
When glitching:
 
[attachment=33880:abnormalbehaviour.png]
 
I printed the coordinates before the screen space conversion and got:
 point, x: -6.493128495525562 y: 3.074185544504503
so it was no surprise when a debug line after the screen-space conversion printed:
point a, x: -18294.57930136702 y: -3975.1260633549537
point b, x: 2637.13493945939 y: 745.4028905130891
point c, x: 1376.2540951732117 y: 643.1113243439205
point b and point c's coordinates are as expected but point a's coordinates are not.
 
Please note that those numbers are from different tests, so they may not marry up perfectly, but the principle is the same.
 
Hopefully that adds some context.
 
I have been stuck on this bug for a little while now and I can't figure it out; maybe someone else can.
 
I'll attach the methods that I think are the main issues; if there are any that I have excluded, please let me know.
 
The important methods from the rendering class:
public Vector3 transform(Vector3 coord, double[][] transMat) {
    // transforming the coordinates
    Vector3 point = Matrix.vectorMatrixMultiply(coord, transMat);

    // converting the coordinates from 0,0 at the centre to 0,0 at the top left
    double x =  point.x * width + width / 2;
    double y = -point.y * height + height / 2;
    return new Vector3(x, y, point.z);
}


// drawPoint places the point into a back buffer ready to be drawn and tests if it is visible
public void drawPoint(Vector3 point, Color color) {
    int index = (int) (point.x + point.y * width);
    // clipping to what's visible on screen
    if ((point.x >= 0) && (point.y >= 0) && (point.x < width) && (point.y < height)) {
        // drawing a pixel
        if (point.z <= depthBuffer[index]) {
            depthBuffer[index] = point.z;
            backBuffer[index] = color.getRGB();
        }
    }
}


public void render(Camera camera, Vector<Mesh> meshes) {
    // transMat = world * view * projection
    double[][] viewMatrix = Matrix.fpsViewRH(camera.position, camera.target.y, camera.target.x);
    double[][] projectionMatrix = Matrix.perspectiveProjection(90.35, 90, 0.001, 100);

    Arrays.fill(depthBuffer, Double.MAX_VALUE); // reset the depth buffer for the next frame
    Arrays.fill(backBuffer, 0);                 // reset the back buffer for the next frame

    meshes.parallelStream().forEach(mesh -> {
        // apply rotation before translation
        double[][] worldMatrix = Matrix.matrixMultiply(
                Matrix.rotationYawPitchRoll(mesh.rotation.x, mesh.rotation.y, mesh.rotation.z),
                Matrix.setTranslationMatrix(mesh.position.x, mesh.position.y, mesh.position.z));
        double[][] transformMatrix = Matrix.matrixMultiply(Matrix.matrixMultiply(worldMatrix, viewMatrix), projectionMatrix);
        mesh.faces.parallelStream().forEach(face -> {
            // get the vertices of each face
            Vector3 vertexA = mesh.vertices[face.a];
            Vector3 vertexB = mesh.vertices[face.b];
            Vector3 vertexC = mesh.vertices[face.c];

            // transform the coordinates to be projected properly
            Vector3 pixelA = transform(vertexA, transformMatrix);
            Vector3 pixelB = transform(vertexB, transformMatrix);
            Vector3 pixelC = transform(vertexC, transformMatrix);

            if (mesh.drawWireFrame) {
                // draw the lines of the wireframe
                drawLine(pixelA, pixelB, face.colour);
                drawLine(pixelB, pixelC, face.colour);
                drawLine(pixelC, pixelA, face.colour);
            } else {
                // draw a triangle based off the transformed coordinates
                drawRasterTriangle(pixelA, pixelB, pixelC, face.colour);
            }
            System.out.println("point a, x: " + pixelA.x + " y: " + pixelA.y);
            System.out.println("point b, x: " + pixelB.x + " y: " + pixelB.y);
            System.out.println("point c, x: " + pixelC.x + " y: " + pixelC.y);
        });
    });
    System.arraycopy(backBuffer, 0, ((DataBufferInt) bi.getRaster().getDataBuffer()).getData(), 0, backBuffer.length);
    paintImmediately(0, 0, width, height);
}
 
The relevant methods from the Matrix class:
 
  
public static double[][] perspectiveProjection(double fovx, double fovy, double near, double far) {
    double[][] m = new double[4][4];
    for (double[] row : m) {
        Arrays.fill(row, 0);
    }
    m[0][0] = 1 / Math.tan(fovx / 2);
    m[1][1] = 1 / Math.tan(fovy / 2);
    m[2][2] = -(far + near) / (far - near);
    m[3][2] = -(2 * near * far) / (far - near);
    m[2][3] = -1;
    return m;
}


// set the translation matrix
public static double[][] setTranslationMatrix(double tx, double ty, double tz) {
    double[][] m = new double[4][4];
    for (double[] row : m) {
        Arrays.fill(row, 0);
    }
    m[0][0] = 1;
    m[3][0] = tx;
    m[1][1] = 1;
    m[3][1] = ty;
    m[2][2] = 1;
    m[3][2] = tz;
    m[3][3] = 1;
    return m;
}


// set the x rotation matrix
public static double[][] setXRotationMatrix(double angle) {
    double[][] m = new double[4][4];
    for (double[] row : m) {
        Arrays.fill(row, 0);
    }
    m[0][0] = 1;
    m[1][1] = Math.cos(angle);
    m[1][2] = -Math.sin(angle);
    m[2][1] = Math.sin(angle);
    m[2][2] = Math.cos(angle);
    m[3][3] = 1;
    return m;
}


// set the y rotation matrix
public static double[][] setYRotationMatrix(double angle) {
    double[][] m = new double[4][4];
    for (double[] row : m) {
        Arrays.fill(row, 0);
    }
    m[0][0] = Math.cos(angle);
    m[0][2] = Math.sin(angle);
    m[1][1] = 1;
    m[2][0] = -Math.sin(angle);
    m[2][2] = Math.cos(angle);
    m[3][3] = 1;
    return m;
}


// set the z rotation matrix
public static double[][] setZRotationMatrix(double angle) {
    double[][] m = new double[4][4];
    for (double[] row : m) {
        Arrays.fill(row, 0);
    }
    m[0][0] = Math.cos(angle);
    m[0][1] = -Math.sin(angle);
    m[1][0] = Math.sin(angle);
    m[1][1] = Math.cos(angle);
    m[2][2] = 1;
    m[3][3] = 1;
    return m;
}


public static Vector3 vectorMatrixMultiply(Vector3 point, double[][] m) {
    // maths to project it to a perspective view
    Vector3 out = new Vector3();
    out.x = (point.x * m[0][0]) + (point.y * m[1][0]) + (point.z * m[2][0]) + m[3][0];
    out.y = (point.x * m[0][1]) + (point.y * m[1][1]) + (point.z * m[2][1]) + m[3][1];
    out.z = (point.x * m[0][2]) + (point.y * m[1][2]) + (point.z * m[2][2]) + m[3][2];
    double w = (point.x * m[0][3]) + (point.y * m[1][3]) + (point.z * m[2][3]) + m[3][3];
    if (w != 1) {
        out.x = out.x / w;
        out.y = out.y / w;
        out.z = out.z / w;
    }
    return out;
}


public static double[][] matrixMultiply(double[][] a, double[][] b) {
    double[][] temp = new double[4][4];
    for (int i = 0; i < 4; i++) {
        for (int j = 0; j < 4; j++) {
            for (int k = 0; k < 4; k++) {
                temp[i][j] += a[i][k] * b[k][j];
            }
        }
    }
    return temp;
}


public static double[][] rotationYawPitchRoll(double x, double y, double z) {
    double[][] m = matrixMultiply(setXRotationMatrix(x), setYRotationMatrix(y));
    return matrixMultiply(m, setZRotationMatrix(z));
}


// builds the view matrix for a right-handed system from the eye position, pitch and yaw
// pitch and yaw should be in the range [0 .. 360] degrees, already converted to radians
public static double[][] fpsViewRH(Vector3 eye, double pitch, double yaw) {
    double cosPitch = Math.cos(pitch);
    double sinPitch = Math.sin(pitch);
    double cosYaw = Math.cos(yaw);
    double sinYaw = Math.sin(yaw);

    Vector3 xaxis = new Vector3(cosYaw, 0, -sinYaw);
    Vector3 yaxis = new Vector3(sinYaw * sinPitch, cosPitch, cosYaw * sinPitch);
    Vector3 zaxis = new Vector3(sinYaw * cosPitch, -sinPitch, cosPitch * cosYaw);

    // create a 4x4 view matrix from the right, up, forward and eye position vectors
    double[][] viewMatrix = new double[4][4];
    viewMatrix[0][0] = xaxis.x; viewMatrix[0][1] = yaxis.x; viewMatrix[0][2] = zaxis.x; viewMatrix[0][3] = 0;
    viewMatrix[1][0] = xaxis.y; viewMatrix[1][1] = yaxis.y; viewMatrix[1][2] = zaxis.y; viewMatrix[1][3] = 0;
    viewMatrix[2][0] = xaxis.z; viewMatrix[2][1] = yaxis.z; viewMatrix[2][2] = zaxis.z; viewMatrix[2][3] = 0;
    viewMatrix[3][0] = -xaxis.dot(eye); viewMatrix[3][1] = -yaxis.dot(eye); viewMatrix[3][2] = -zaxis.dot(eye); viewMatrix[3][3] = 1;

    return viewMatrix;
}
 
The values entered into the view matrix are kept within 0 and 360 degrees and converted to radians, so I know that's not an issue either.
Please excuse the crudity of the code; it is not finished yet. Any help would be greatly appreciated.


point b and point c's coordinates are as expected but point a's coordinates are not.

 

I suspect if you'd print out their Z coordinates (after conversion into camera space) it would reveal that point a is behind the camera. If that's the case, add some frustum clipping, especially for the near plane. Here's an algorithm that should work: http://mikro.naprvyraz.sk/docs/Coding/2/FRUSTUM.TXT
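Since the engine already draws wireframes, the cheapest way to act on this is to clip each line segment against the near plane in camera space, before the projection divide. The sketch below is only an illustration of the idea, not the engine's actual code: it assumes a right-handed camera space looking down -z (matching the fpsViewRH convention in the post above) and uses a stand-in Vector3 class.

```java
// Hypothetical sketch: clip the segment A-B against the near plane z = -near
// in camera space (right-handed, camera looking down -z), BEFORE projecting.
public class NearClipSketch {
    static class Vector3 {
        double x, y, z;
        Vector3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    }

    // Returns the clipped segment, or null if both endpoints are behind the near plane.
    static Vector3[] clipNear(Vector3 a, Vector3 b, double near) {
        boolean aIn = a.z <= -near;    // in front of the near plane
        boolean bIn = b.z <= -near;
        if (aIn && bIn) return new Vector3[] { a, b };
        if (!aIn && !bIn) return null; // fully behind the camera: discard
        // Interpolate to the intersection with the plane z = -near
        double t = (-near - a.z) / (b.z - a.z);
        Vector3 hit = new Vector3(a.x + t * (b.x - a.x),
                                  a.y + t * (b.y - a.y),
                                  -near);
        return aIn ? new Vector3[] { a, hit } : new Vector3[] { hit, b };
    }

    public static void main(String[] args) {
        // One endpoint behind the camera gets replaced by the near-plane intersection.
        Vector3[] seg = clipNear(new Vector3(0, 0, -10), new Vector3(0, 0, 5), 0.001);
        System.out.println(seg[1].z); // -0.001
    }
}
```

With an endpoint behind the camera replaced like this, the projection divide never sees a point with flipped w, which is exactly the runaway-coordinate symptom described in the question.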


Actually, I've been hitting a similar problem (with projection and clipping) while writing my own simple software renderer. You can take a look at it here (this is the source code of the project; it is a bit outdated though):

 

https://github.com/Zgragselus/SoftwareRenderer

 

Search for the procedure _dev_cs_tri_clip; it should be inside src/graphics/device.c if I remember correctly. It is not that easy to read, as it allows for something similar to actual vertex and fragment shader programming... so if you have any questions, feel free to ask.

 

In general it does the following: you want to rasterize an already projected triangle, so you test it against the viewport bounds and clip it (basically you end up with an N-gon that can be triangulated into several triangles at most). Once this is done, all your triangles are inside the viewport boundaries and you can continue with rasterization.

 

This literally means that a single triangle can be clipped into multiple ones - at most a 7-vertex convex polygon, I think (if I did the math in my head right), which can be triangulated into 5 triangles at most.
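The triangulation step described above is simple because the clip result is always convex: fan out from the first vertex. A generic sketch (plain index triples into the clipped vertex list; nothing engine-specific assumed):

```java
// Sketch: fan-triangulate a convex polygon with n vertices into n - 2 triangles.
// Each triangle is a triple of indices into the polygon's vertex list.
import java.util.ArrayList;
import java.util.List;

public class FanTriangulate {
    static List<int[]> triangulate(int n) {
        List<int[]> tris = new ArrayList<>();
        // Anchor at vertex 0 and walk consecutive pairs around the polygon
        for (int i = 1; i + 1 < n; i++) {
            tris.add(new int[] { 0, i, i + 1 });
        }
        return tris;
    }

    public static void main(String[] args) {
        // The 7-gon worst case mentioned above yields 5 triangles.
        System.out.println(triangulate(7).size()); // 5
    }
}
```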


I suspect if you'd print out their Z coordinates (after conversion into camera space) it would reveal that point a is behind the camera. If that's the case, add some frustum clipping, especially for the near plane. Here's an algorithm that should work: http://mikro.naprvyraz.sk/docs/Coding/2/FRUSTUM.TXT

 

Thanks for the reply. I've had a look into frustum clipping, and it looks like I would need to restructure my Face class? What would be the best way to structure it if I were to do so?

package meshes;

import java.awt.Color;

public class Face {
    public int a, b, c;
    public Color colour;
    public int numpoints = 3;

    public Face(int aa, int bb, int cc, Color color) {
        a = aa;
        b = bb;
        c = cc;
        colour = color;
    }

    public Face() {}
}

Thanks in advance


I suspect the step you are missing is the Sutherland-Hodgman clipping algorithm. It clips triangles that are partway off the screen to the edge of the screen. As I recall, it comes after you transform to screen space but before you rasterize. But look it up; it's been years (one year before DX v1.0 came out) since I wrote a software renderer.

 

https://en.wikipedia.org/wiki/Sutherland%E2%80%93Hodgman_algorithm
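For reference, one pass of Sutherland-Hodgman against a single screen edge looks roughly like the following; a full clipper repeats this for each edge (and, per the earlier replies, the near plane should still be handled in 3D before projection). This is a hypothetical 2D sketch using double[] points, not tied to the engine's own classes:

```java
// Sketch of one Sutherland-Hodgman pass: clip a 2D polygon against the
// left screen edge x >= 0. A full clipper runs this for all four edges.
import java.util.ArrayList;
import java.util.List;

public class SHClipSketch {
    // Points are double[]{x, y}; the polygon is a list of points in order.
    static List<double[]> clipLeft(List<double[]> poly) {
        List<double[]> out = new ArrayList<>();
        for (int i = 0; i < poly.size(); i++) {
            double[] cur = poly.get(i);
            double[] prev = poly.get((i + poly.size() - 1) % poly.size());
            boolean curIn = cur[0] >= 0;
            boolean prevIn = prev[0] >= 0;
            if (curIn != prevIn) {
                // The edge prev-cur crosses x = 0: emit the intersection point
                double t = (0 - prev[0]) / (cur[0] - prev[0]);
                out.add(new double[] { 0, prev[1] + t * (cur[1] - prev[1]) });
            }
            if (curIn) out.add(cur);
        }
        return out;
    }

    public static void main(String[] args) {
        List<double[]> tri = new ArrayList<>();
        tri.add(new double[] { -10, 0 });
        tri.add(new double[] { 10, 10 });
        tri.add(new double[] { 10, -10 });
        // Clipping one vertex off a triangle yields a quad (4 vertices).
        System.out.println(clipLeft(tri).size()); // 4
    }
}
```

Note how clipping can grow the vertex count, which is why the Face class with a fixed three indices would need restructuring before this step.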

