
# Implement Perspective Correct Texture Mapping


12 replies to this topic

### #1 john13to  Members

Posted 15 June 2012 - 06:53 AM

I have implemented an icosahedron that can rotate around the x-, y- and z-axes. It is drawn by a software rasterizer I wrote in C++/OpenGL, which uses an active edge list and plots every pixel with GL_POINTS.

I have now managed to texture-map every face triangle of the icosahedron with a checkerboard, stored as a 2D array of RGB color values. The problem is that the texture is currently projected flat onto each face triangle, as seen in the attached image 'icotexnotcorrect.png'. I want the texture mapping to be perspective-correct, so that the 3D depth of the icosahedron is visible.

This has been described in the Wikipedia article Texture Mapping in the section
'Perspective Correctness' http://en.wikipedia....Texture_mapping and in the
paper 'Fundamentals of Texture Mapping and Image Warping' by Paul Heckbert
http://www.cs.cmu.ed...und/texfund.pdf

In particular, Heckbert's paper describes different kinds of mappings: affine, bilinear and projective. But I find it very confusing; he mentions many transformations from (u,v) space to (x,y) space that require solving for several unknown elements of a transformation matrix. I don't know how or where I should implement any of those mappings in my code.

I have a 2D array that is representing a red-white checkerboard:

/* Create checkerboard texture */
#define checkImageWidth 2048
#define checkImageHeight 2048
static GLubyte checkImage[checkImageHeight][checkImageWidth][4];

void makeCheckImage(void)
{
    int i, j;
    float c;

    for (i = 0; i < checkImageHeight; i++) {
        for (j = 0; j < checkImageWidth; j++) {
            c = (((i & 0x8) == 0) ^ ((j & 0x8) == 0)) * 1.0f;

            if (c == 1.0f)
            {
                checkImage[i][j][0] = (GLubyte) c;    // Red
                checkImage[i][j][1] = (GLubyte) 0.0f; // Green
                checkImage[i][j][2] = (GLubyte) 0.0f; // Blue
            }
            else
            {
                checkImage[i][j][0] = (GLubyte) 1.0f; // Red
                checkImage[i][j][1] = (GLubyte) 1.0f; // Green
                checkImage[i][j][2] = (GLubyte) 1.0f; // Blue
            }
            checkImage[i][j][3] = (GLubyte) 1.0f;     // Alpha
        }
    }
}

And here is the code where I draw the current face triangle using the active edge list. At the current scanline I draw the pixels along x from the first edge to the second edge of the face triangle:

for (int k = 0; k < size; k += 2)
{
    float currX = intersectVertices.at(k);

    float currDZDX = spanVerticesDepth.at(k);

    float dz = spanVerticesDepth.at(k+1) - spanVerticesDepth.at(k);
    float dx = intersectVertices.at(k+1) - intersectVertices.at(k);

    float dzdx = (dz/dx);

    while (currX < intersectVertices.at(k+1))
    {
        if (texture)
        {
            // The texture mapping
            texValueR = (float)checkImage[(int)(abs(currX - abs(xStart)) / 4.0f)]
                                         [(int)(abs(scanline - abs(scanlineStart)) / 4.0f)][0];
            texValueG = (float)checkImage[(int)(abs(currX - abs(xStart)) / 4.0f)]
                                         [(int)(abs(scanline - abs(scanlineStart)) / 4.0f)][1];
            texValueB = (float)checkImage[(int)(abs(currX - abs(xStart)) / 4.0f)]
                                         [(int)(abs(scanline - abs(scanlineStart)) / 4.0f)][2];

            // Sets current texture color
            glColor3f(texValueR, texValueG, texValueB);
        }

        // Draw at the current scanline
        if (currDZDX < zbufferdata[(int)currX + 1000][(int)scanline + 1000])
        {
            glVertex2f(currX / 1000.0f, scanline / 1000.0f);
            zbufferdata[(int)currX + 1000][(int)scanline + 1000] = currDZDX;
        }

        currX += 1.0f;
        currDZDX += dzdx;
    }
}

This partial code produces the image that has been attached to this topic.
But how and where should I implement the Perspective Correct Texture Mapping?

### #2 Olof Hedman  Members

Posted 15 June 2012 - 06:26 PM

> I have implemented an icosahedron that can rotate around the x-, y- and z-axis and where it is drawn by a software rasterizer which I have written using active edge list and drawing every pixel with GL_POINTS in C++/OpenGL.

That must be the weirdest use of OpenGL I've heard of yet.

I'm assuming you are doing this for some educational purpose?

I wouldn't use OpenGL at all for this, but if you really want to, why not render into a memory buffer and upload it as a texture?

The whole point of texture mapping is knowing which texture coordinate (u,v) to select for a given screen coordinate (x,y). That's the transformation.

I'd recommend you start with affine texture mapping, since it is a lot simpler to understand, and you will need all parts of it for your perspective-correct texture mapping.

Basically, you need a texture coordinate for each vertex; interpolate those coordinates along the edges of the triangle while rasterizing, and then just select the texel at that coordinate.

For perspective correctness, you need to add some divisions to compensate.
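A minimal sketch of that affine interpolation across one horizontal span (the struct and endpoint values are illustrative, not from the thread's code):

```cpp
#include <cassert>
#include <cmath>

// Affine texture mapping across one scanline span: linearly interpolate
// (u, v) from the left edge to the right edge and sample the texel there.
struct SpanEnd { float x, u, v; };

// Computes the interpolated (u, v) at pixel x inside [left.x, right.x].
void affineUV(const SpanEnd& left, const SpanEnd& right, float x,
              float& u, float& v)
{
    float t = (x - left.x) / (right.x - left.x);  // 0 at left, 1 at right
    u = left.u + t * (right.u - left.u);
    v = left.v + t * (right.v - left.v);
}
```

This is the affine version only: it ignores depth entirely, which is exactly why it distorts under perspective.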

I might also recommend actually using OpenGL properly; especially with a programmable pipeline, it is not that far from writing a software renderer, except that you don't have to bother with the messy parts of interpolating, perspective correction, rasterization, and shuffling vertices/indices, since the hardware takes care of those for you.

Edited by Olof Hedman, 15 June 2012 - 06:56 PM.

### #3 john13to  Members

Posted 16 June 2012 - 05:18 PM

> That must be the weirdest use of OpenGL I've heard of yet. I'm assuming you are doing this for some educational purpose?

Yes, it is for an educational purpose: it is to explain rasterization and to implement a software rasterizer that maps a texture correctly, as part of a master's thesis. ;-)

### #4 bronxbomber92  Members

Posted 16 June 2012 - 06:52 PM

This should be very helpful: http://www.altdevblogaday.com/2012/04/29/software-rasterizer-part-2/. Eric Lengyel's book Mathematics for 3D Game Programming & Computer Graphics, Third Edition has a great explanation, too.

Without getting into the mathematical explanation: to do perspective-correct attribute interpolation (such as texture-coordinate mapping), the easiest way is to realize that the reciprocal of the view-space z coordinate (which for brevity let's call 1/z-view) can be interpolated linearly in screen space and remain correct under perspective projection. So, divide each attribute by z-view, and linearly interpolate both the divided attribute values (attribute/z-view) and 1/z-view. Then, at each pixel, you get the perspective-correct attribute value by multiplying your interpolated attribute/z-view by the reciprocal of your interpolated 1/z-view. And as the above article states, conveniently the w coordinate of our homogeneous coordinates (our vertices after being transformed by the projection matrix) is equal to z-view.
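That recipe condenses to a few lines. This sketch interpolates one attribute (u) between two vertices using the 1/z-view method; the function name and sample values are made up for illustration:

```cpp
#include <cassert>
#include <cmath>

// Perspective-correct interpolation of one attribute (here u) between two
// vertices: interpolate u/z and 1/z linearly in screen space, then divide
// per pixel to recover the true u.
float perspectiveU(float u0, float z0, float u1, float z1, float t)
{
    float uOverZ   = (1 - t) * (u0 / z0) + t * (u1 / z1);
    float oneOverZ = (1 - t) * (1  / z0) + t * (1  / z1);
    return uOverZ / oneOverZ;
}
```

At the halfway point of a span whose far endpoint is three times as deep, the result is pulled toward the near vertex's u value, which is exactly the visual difference between affine and perspective-correct mapping.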

### #5 john13to  Members

Posted 15 August 2012 - 06:42 AM

I have now tried to implement the perspective correctness after studying the articles in the links you posted over the past two months. While some things have been cleared up about how and why you need to interpolate 1/z-view, which I read in Chris Hecker's article at http://chrishecker.c.../41/Gdmtex1.pdf, other things are still very unclear and confusing.

Not only is 1/z-view interpolated, but also u/z-view and v/z-view, and in the source code in that article there are also gradient variables for the texture coordinates, like dUOverdX, dUOverdY, dVOverdX and dVOverdY. I can see that the color values are picked out of the texture with the texture-coordinate variables U and V. But my texture is represented by a 2D array with 2048 positions along the height and 2048 positions along the width, and I usually access every color value in it with the current X and Y in the range 0...2048.

Am I supposed to represent the checkerboard texture in texture coordinates? Can't I use the regular index range of that 2D array to calculate the perspective correctness with the formula seen in http://en.wikipedia....ive_correctness ?
Like this, for instance:
Like this for instance:

u_a = ((1-a)*(u0 / z0) + a*(u1 / z1)) / ((1-a)*(1/z0) + a*(1/z1))
v_a = ((1-a)*(v0 / z0) + a*(v1 / z1)) / ((1-a)*(1/z0) + a*(1/z1))

where I would use

widthStart = 0
widthEnd = 2048
heightStart = 0
heightEnd = 2048

xTex_a = ((1-a)*(widthStart / z0) + a*(widthEnd / z1)) / ((1-a)*(1 / z0) + a*(1 / z1))
yTex_a = ((1-a)*(heightStart / z0) + a*(heightEnd / z1)) / ((1-a)*(1 / z0) + a*(1 / z1))

Edited by john13to, 15 August 2012 - 06:55 AM.

### #6 Olof Hedman  Members

Posted 16 August 2012 - 12:48 AM

It's good for numerical precision to do your calculations with numbers in the range 0.0-1.0. You can just multiply the texture coordinate by 2048 at the point where you access the texture, to convert to pixel coordinates within it.
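This works because the perspective-correct formula is linear in the attribute, so scaling normalized coordinates by 2048 at lookup time gives the same result as interpolating 0..2048 values directly. A quick sketch (the numbers are invented):

```cpp
#include <cassert>
#include <cmath>

// The Wikipedia-style perspective-correct interpolation formula for one
// attribute u at parameter a between two endpoints with depths z0, z1.
float perspCorrect(float u0, float z0, float u1, float z1, float a)
{
    return ((1 - a) * (u0 / z0) + a * (u1 / z1))
         / ((1 - a) * (1  / z0) + a * (1  / z1));
}
```

So either representation works mathematically; the 0.0-1.0 range is simply friendlier to float precision, and the scale factor can be applied once per lookup.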

Edit: Though I guess that would mean more calculations in the inner loop, so I see how it could make sense to move it in this case.

Edited by Olof Hedman, 16 August 2012 - 04:41 AM.

### #7 Catmull Dog  Members

Posted 16 August 2012 - 07:30 AM

Perspective-correct texture mapping is not an easy problem to solve. Back in the mid-1990s it had to be implemented in software, but since then it is done in hardware, so you don't have to worry much about it. I remember a colleague who did it in software circa 1994 for another game company, and he gave me a hint at how they did it: "Subdivide, subdivide, subdivide."
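The "subdivide" hint usually refers to span subdivision: perform the true perspective divide only every N pixels and interpolate affinely in between, trading a little accuracy for far fewer divisions. A rough sketch of the idea (the step size and function name are illustrative, not from any particular engine):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Span subdivision: evaluate the perspective-correct u only at every
// STEP-th pixel, and fill the pixels in between by cheap linear (affine)
// interpolation between those exact samples.
const int STEP = 16;

std::vector<float> subdividedSpanU(float uOverZ, float dUoverZ,
                                   float oneOverZ, float dOneOverZ,
                                   int width)
{
    std::vector<float> u(width);
    int x = 0;
    while (x < width) {
        int next = std::min(x + STEP, width - 1);
        // Exact perspective divide at the two ends of this sub-span.
        float uA = (uOverZ + x    * dUoverZ) / (oneOverZ + x    * dOneOverZ);
        float uB = (uOverZ + next * dUoverZ) / (oneOverZ + next * dOneOverZ);
        for (int i = x; i <= next; i++) {
            float t = (next == x) ? 0.0f : float(i - x) / float(next - x);
            u[i] = uA + t * (uB - uA);  // affine within the sub-span
        }
        x = next + 1;
    }
    return u;
}
```

With one division per 16 pixels instead of one per pixel, this was fast enough for mid-90s CPUs, and the residual curvature within a 16-pixel sub-span is usually invisible.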

### #8 DekuTree64  Members

Posted 17 August 2012 - 05:34 PM

> Without getting into the mathematical explanation: to do perspective-correct attribute interpolation (such as texture-coordinate mapping), the easiest way is to realize that the reciprocal of the view-space z coordinate (1/z-view) can be interpolated linearly in screen space and remain correct under perspective projection. So, divide each attribute by z-view, linearly interpolate both attribute/z-view and 1/z-view, and at each pixel multiply the interpolated attribute/z-view by the reciprocal of the interpolated 1/z-view. And conveniently, the w coordinate of our homogeneous coordinates (our vertices after being transformed by the projection matrix) is equal to z-view.

This is correct. Let me try to explain it in different words so you get a better picture of what to do...

First, just get a triangle rendering with affine texture mapping. That is done like this:
Sort the 3 vertices by Y coordinate. The triangle is drawn in two halves. The first half is from v1.y down to v2.y, the second half is from v2.y down to v3.y.

So, to draw the top half, calculate the X gradient, U gradient, and V gradient for vertex 1 to vertex 2, and for vertex 1 to vertex 3. One set of gradients will be the left edge of the triangle, and the other will be the right edge (depending on whether v2.x is less or greater than v3.x).

Gradients are calculated by dividing the change in x/u/v by the change in y along the edge in question:
dx = (v2.x - v1.x) / (v2.y - v1.y);
du = (v2.u - v1.u) / (v2.y - v1.y);
dv = (v2.v - v1.v) / (v2.y - v1.y);

Then do that again with v3 in place of v2 to get the other edge's gradients.

Then loop from v1.y to v2.y, adding the gradients to the current x/u/v values for the left and right edges.

For each scanline as you do that Y loop, you draw a span of the triangle. So, while looping from the current xLeft to xRight, you need another set of gradients, calculated like so:
du = (uRight - uLeft) / (xRight - xLeft);
dv = (vRight - vLeft) / (xRight - xLeft);

And the very inner loop looks like this:
for (x = xLeft, u = uLeft, v = vLeft; x < xRight; x += 1, u += du, v += dv)
DrawPixel(x, y, GetTexel(u, v));

And when y reaches v2.y, then you calculate the gradients for v2 to v3, put them in either the left or right variables (depending on whether v2.x is less or greater than v3.x), and continue on drawing the bottom half of the triangle.

So... that's triangle drawing with affine texture mapping. For perspective correct, you just switch from interpolating u and v directly, to interpolating u/z and v/z, plus you need to interpolate one more variable, which is 1/z. Then when looking up the texel, you do GetTexel(uOverZ / oneOverZ, vOverZ / oneOverZ). That's all there is to it.

The easiest way to implement that, is right at the start of the triangle draw function, modify all the vertex values to their z-divided forms, like so:
v1.u /= v1.z;
v1.v /= v1.z;
v1.z = 1.0 / v1.z;

v2.u /= v2.z;
v2.v /= v2.z;
v2.z = 1.0 / v2.z;

v3.u /= v3.z;
v3.v /= v3.z;
v3.z = 1.0 / v3.z;

Then you can do all the gradient calculations just the same as the affine version, just adding in the z interpolation too. And all that changes in the inner loop is adding the divide by reciprocal z.
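Put together, the pre-divide and the per-pixel correction might look like this (Vtx and the helper names are stand-ins, not the thread's actual code):

```cpp
#include <cassert>
#include <cmath>

// DekuTree64's steps for one edge/span: pre-divide the vertex attributes
// by z, interpolate linearly, and divide back per pixel.
struct Vtx { float x, z, u, v; };

// Prepare a vertex for perspective-correct interpolation.
void preDivide(Vtx& v)
{
    v.u /= v.z;
    v.v /= v.z;
    v.z  = 1.0f / v.z;   // z now holds 1/z
}

// Recover the true (u, v) at interpolation parameter t between a and b,
// where both vertices have already been passed through preDivide().
void correctUV(const Vtx& a, const Vtx& b, float t, float& u, float& v)
{
    float uoz = a.u + t * (b.u - a.u);  // interpolated u/z
    float voz = a.v + t * (b.v - a.v);  // interpolated v/z
    float ooz = a.z + t * (b.z - a.z);  // interpolated 1/z
    u = uoz / ooz;
    v = voz / ooz;
}
```

The gradient code stays identical to the affine version; only the pre-divide at the top of the triangle function and the one divide per pixel in the inner loop are new.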

### #9 Hodgman  Moderators

Posted 17 August 2012 - 11:30 PM

> Perspective Correct Texture Mapping is not an easy problem to solve.

It actually is pretty simple: you just divide your interpolated values by the projected position's homogeneous w coordinate after interpolating, e.g. uv /= pos.w.
It was only challenging in the 90s because division was slow, so people got creative trying to avoid doing it properly.
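Why dividing by the interpolated w works: with a standard gluPerspective-style projection matrix the fourth output component is w_clip = -z_eye, so interpolating 1/w is the same 1/z-view trick discussed above. A small check (the matrix terms follow the usual OpenGL convention; this is not code from the thread):

```cpp
#include <cassert>
#include <cmath>

struct Vec4 { float x, y, z, w; };

// Transform an eye-space point by a gluPerspective-style matrix.
// The bottom row is (0, 0, -1, 0), so the output w carries the
// eye-space depth that perspective-correct interpolation divides by.
Vec4 perspectiveTransform(float fovY, float aspect, float n, float f,
                          float x, float y, float z)
{
    float t = 1.0f / std::tan(fovY * 0.5f);
    Vec4 o;
    o.x = (t / aspect) * x;
    o.y = t * y;
    o.z = ((f + n) / (n - f)) * z + (2.0f * f * n / (n - f));
    o.w = -z;   // bottom row (0, 0, -1, 0): w_clip = -z_eye
    return o;
}
```

Since the camera looks down -z in eye space, points in front of the camera get a positive w equal to their distance along the view axis, which is why "divide by w" and "divide by z-view" are the same operation.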

### #10 john13to  Members

Posted 18 August 2012 - 11:41 AM

Thanks for all of the replies about the solution to this problem, but I still can't manage to implement this feature correctly. Since I have been using an active edge list to draw every triangle of the icosahedron, I have embedded the 1/z, u/z and v/z interpolation into the active-edge-list algorithm. Every vertex of a triangle has been assigned u and v coordinates between 0.0 and 1.0 to provide the start and end values for u/z and v/z during the interpolation. When I finally pick the color values out of the texture array with u and v, I calculate them as:

z = 1/(1/z_ip)
u = (u_ip/z_ip)*z
v = (v_ip/z_ip)*z

I have attached the image 'perspcorrtextri.png', which shows the visual result of the implementation for one triangle. It becomes distorted: it produces red and white stripes that are bent, and when I rasterize all of the triangles the program eventually crashes. I have also copied the whole source code in its current state into the code section below. It is very large, but the most important part is the function 'faceFillRot(Faces faceTriangle)'.

#include <iostream>
#include <cmath>
#include <vector>
#include <GL/glut.h>
using namespace std;
const int INTMIN = -2147483648;
const int INTMAX = 2147483647;
int counter = 0;
#define ZBUFFERW 2000
#define ZBUFFERH 2000
#define INF 10000.0f
static GLfloat zbufferdata[ZBUFFERW][ZBUFFERH];
/*  Create checkerboard texture  */
#define checkImageWidth 2048
#define checkImageHeight 2048
static GLubyte checkImage[checkImageHeight][checkImageWidth][4];
static GLuint texName;
static int N = 16;
float angle = 0.0f;
float phi = (1.0f + sqrt(5.0f))/2.0f;
float a = 0.2f;
float b = 0.4f;
bool drawVertexRot = true;
bool drawVertexRotFixed = false;
bool drawRot = false;
bool drawRotFixed = false;
struct Vertex {
float x, y, z, u, v;
};
struct Vector {
float x, y, z;
};
struct Faces {
Vertex v1;
Vertex v2;
Vertex v3;
};

Vertex vertices[12];
Faces FaceTriangles[20];
Faces FaceTriRot[20];
float zCentroids[20][2];
bool aroundX = true;
bool aroundY = false;
bool aroundZ = false;
bool texture = true;
bool multiColor = false;
Vector view = {0.0f, 0.0f, 1.0f};
struct Edge {
Vertex v1;		  // 1st vertex
Vertex v2;		  // 2nd vertex

float m;			// slope dy/dx
float currX;		// current x that intersects with the current scanline

float r;			// slope dy/dz
float currZ;		// current z that intersects with the current scanline

float dydOneoverZ;  // slope dy/(d(1/z)) is going to be inverted in UpdateEdge
float currOneoverZ; // current interpolated (1/z)

float dydUoverZ;	// slope dy/(d(u/z)) is going to be inverted in UpdateEdge
float currUoverZ;   // current interpolated (u/z)

float dydVoverZ;	// slope dy/(d(v/z)) is going to be inverted in UpdateEdge
float currVoverZ;   // current interpolated (v/z)
};
Edge CreateEdge(Vertex v1, Vertex v2)
{
Edge e;

e.v1 = v1;
e.v2 = v2;

e.m = (v2.y - v1.y) / (v2.x - v1.x);

e.r = (v2.y - v1.y) / (v2.z - v1.z);

e.dydOneoverZ = (v2.y - v1.y) / ((1.0f/v2.z) - (1.0f/v1.z));

e.dydUoverZ = (v2.y - v1.y) / ((v2.u/v2.z) - (v1.u/v1.z));

e.dydVoverZ = (v2.y - v1.y) / ((v2.v/v2.z) - (v1.v/v1.z));

return e;
}
void UpdateEdge(Edge &e, float scanline)
{
e.currX += 1.0f/e.m;
e.currZ += 1.0f/e.r;

e.currOneoverZ += 1.0f/e.dydOneoverZ;
e.currUoverZ += 1.0f/e.dydUoverZ;
e.currVoverZ += 1.0f/e.dydVoverZ;
}
void ActivateEdge(Edge &e, float scanline)
{
e.currX = e.v1.x;
e.currZ = e.v1.z;

e.currOneoverZ = 1.0f/e.v1.z;
e.currUoverZ = e.v1.u/e.v1.z;
e.currVoverZ = e.v1.v/e.v1.z;
}
void DeactivateEdge(Edge &e, float scanline)
{
e.currX = e.v2.x;
e.currZ = e.v2.z;

e.currOneoverZ = 1.0f/e.v2.z;
e.currUoverZ = e.v2.u/e.v2.z;
e.currVoverZ = e.v2.v/e.v2.z;
}
Edge* CreateEdges(Vertex polygon[], int nVertices)
{
Edge *e = new Edge[nVertices];

//Sort the edges based upon which vertex has the smallest y
for(int i = 0; i < nVertices - 1; i++)
{
if (polygon[i].y < polygon[i+1].y)
{
e[i] = CreateEdge(polygon[i], polygon[i+1]);
}
else
{
e[i] = CreateEdge(polygon[i+1], polygon[i]);
}

}

return e;
}
void initZBufferData()
{
for(int i = 0; i < ZBUFFERW; i++)
{
for(int j = 0; j < ZBUFFERH; j++)
{
zbufferdata[i][j] = INF;
}
}
}
Faces rotFaceX(Faces f, float angle)
{

f.v1.x = f.v1.x;
f.v1.y = f.v1.y * cos(angle) - f.v1.z * sin(angle);
f.v1.z = f.v1.y * sin(angle) + f.v1.z * cos(angle);

f.v2.x = f.v2.x;
f.v2.y = f.v2.y * cos(angle) - f.v2.z * sin(angle);
f.v2.z = f.v2.y * sin(angle) + f.v2.z * cos(angle);

f.v3.x = f.v3.x;
f.v3.y = f.v3.y * cos(angle) - f.v3.z * sin(angle);
f.v3.z = f.v3.y * sin(angle) + f.v3.z * cos(angle);

return f;

}

Faces rotFaceY(Faces f, float angle)
{

f.v1.x = f.v1.x * cos(angle) + f.v1.z * sin(angle);
f.v1.y = f.v1.y;
f.v1.z = -f.v1.x * sin(angle) + f.v1.z * cos(angle);

f.v2.x = f.v2.x * cos(angle) + f.v2.z * sin(angle);
f.v2.y = f.v2.y;
f.v2.z = -f.v2.x * sin(angle) + f.v2.z * cos(angle);

f.v3.x = f.v3.x * cos(angle) + f.v3.z * sin(angle);
f.v3.y = f.v3.y;
f.v3.z = -f.v3.x * sin(angle) + f.v3.z * cos(angle);

return f;

}
Faces rotFaceZ(Faces f, float angle)
{

f.v1.x = f.v1.x * cos(angle) - f.v1.y * sin(angle);
f.v1.y = f.v1.x * sin(angle) + f.v1.y * cos(angle);
f.v1.z = f.v1.z;

f.v2.x = f.v2.x * cos(angle) - f.v2.y * sin(angle);
f.v2.y = f.v2.x * sin(angle) + f.v2.y * cos(angle);
f.v2.z = f.v2.z;

f.v3.x = f.v3.x * cos(angle) - f.v3.y * sin(angle);
f.v3.y = f.v3.x * sin(angle) + f.v3.y * cos(angle);
f.v3.z = f.v3.z;

return f;

}
float signedAreaRot(Faces currF)
{

Faces f = currF;
Vector u1;
u1.x = f.v2.x - f.v1.x;
u1.y = f.v2.y - f.v1.y;

Vector u2;
u2.x = f.v3.x - f.v1.x;
u2.y = f.v3.y - f.v1.y;

return (1.0f/2.0f) * ((u1.x*u2.y) - (u2.x*u1.y));
}

void makeIcosahedron(void)
{
vertices[0].x = a;	vertices[0].y = 0.0f; vertices[0].z = b;
vertices[0].u = a+0.5f;	vertices[0].v = 0.0f+0.5f;

vertices[1].x = -a;   vertices[1].y = 0.0f; vertices[1].z = b;
vertices[1].u = -a+0.5f;   vertices[1].v = 0.0f+0.5f;

vertices[2].x = a;	vertices[2].y = 0.0f; vertices[2].z = -b;
vertices[2].u = a+0.5f;   vertices[2].v = 0.0f+0.5f;

vertices[3].x = -a;   vertices[3].y = 0.0f; vertices[3].z = -b;
vertices[3].u = -a+0.5f;   vertices[3].v = 0.0f+0.5f;

vertices[4].x = 0.0f; vertices[4].y = b;	vertices[4].z = a;
vertices[4].u = 0.0f+0.5f; vertices[4].v = b+0.5f;

vertices[5].x = 0.0f; vertices[5].y = -b;   vertices[5].z = a;
vertices[5].u = 0.0f+0.5f; vertices[5].v = -b+0.5f;

vertices[6].x = 0.0f; vertices[6].y = b;	vertices[6].z = -a;
vertices[6].u = 0.0f+0.5f; vertices[6].v = b+0.5f;

vertices[7].x = 0.0f; vertices[7].y = -b;   vertices[7].z = -a;
vertices[7].u = 0.0f+0.5f; vertices[7].v = -b+0.5f;

vertices[8].x = b;	vertices[8].y = a;	vertices[8].z = 0.0f;
vertices[8].u = b+0.5f;	vertices[8].v = a+0.5f;

vertices[9].x = -b;   vertices[9].y = a;	vertices[9].z = 0.0f;
vertices[9].u = -b+0.5f;   vertices[9].v = a+0.5f;

vertices[10].x = b;   vertices[10].y = -a;  vertices[10].z = 0.0f;
vertices[10].u = b+0.5f;   vertices[10].v = -a+0.5f;

vertices[11].x = -b;  vertices[11].y = -a;  vertices[11].z = 0.0f;
vertices[11].u = -b+0.5f;   vertices[11].v = -a+0.5f;

FaceTriangles[0].v1 = vertices[0];
FaceTriangles[0].v2 = vertices[4];
FaceTriangles[0].v3 = vertices[1];

FaceTriangles[1].v1 = vertices[0];
FaceTriangles[1].v2 = vertices[1];
FaceTriangles[1].v3 = vertices[5];

FaceTriangles[2].v1 = vertices[0];
FaceTriangles[2].v2 = vertices[5];
FaceTriangles[2].v3 = vertices[10];

FaceTriangles[3].v1 = vertices[0];
FaceTriangles[3].v2 = vertices[10];
FaceTriangles[3].v3 = vertices[8];

FaceTriangles[4].v1 = vertices[0];
FaceTriangles[4].v2 = vertices[8];
FaceTriangles[4].v3 = vertices[4];

FaceTriangles[5].v1 = vertices[4];
FaceTriangles[5].v2 = vertices[8];
FaceTriangles[5].v3 = vertices[6];

FaceTriangles[6].v1 = vertices[4];
FaceTriangles[6].v2 = vertices[6];
FaceTriangles[6].v3 = vertices[9];

FaceTriangles[7].v1 = vertices[4];
FaceTriangles[7].v2 = vertices[9];
FaceTriangles[7].v3 = vertices[1];

FaceTriangles[8].v1 = vertices[1];
FaceTriangles[8].v2 = vertices[9];
FaceTriangles[8].v3 = vertices[11];

FaceTriangles[9].v1 = vertices[1];
FaceTriangles[9].v2 = vertices[11];
FaceTriangles[9].v3 = vertices[5];

FaceTriangles[10].v1 = vertices[2];
FaceTriangles[10].v2 = vertices[7];
FaceTriangles[10].v3 = vertices[3];

FaceTriangles[11].v1 = vertices[2];
FaceTriangles[11].v2 = vertices[3];
FaceTriangles[11].v3 = vertices[6];

FaceTriangles[12].v1 = vertices[2];
FaceTriangles[12].v2 = vertices[6];
FaceTriangles[12].v3 = vertices[8];

FaceTriangles[13].v1 = vertices[2];
FaceTriangles[13].v2 = vertices[8];
FaceTriangles[13].v3 = vertices[10];

FaceTriangles[14].v1 = vertices[2];
FaceTriangles[14].v2 = vertices[10];
FaceTriangles[14].v3 = vertices[7];

FaceTriangles[15].v1 = vertices[7];
FaceTriangles[15].v2 = vertices[10];
FaceTriangles[15].v3 = vertices[5];

FaceTriangles[16].v1 = vertices[7];
FaceTriangles[16].v2 = vertices[5];
FaceTriangles[16].v3 = vertices[11];

FaceTriangles[17].v1 = vertices[7];
FaceTriangles[17].v2 = vertices[11];
FaceTriangles[17].v3 = vertices[3];

FaceTriangles[18].v1 = vertices[3];
FaceTriangles[18].v2 = vertices[11];
FaceTriangles[18].v3 = vertices[9];

FaceTriangles[19].v1 = vertices[3];
FaceTriangles[19].v2 = vertices[9];
FaceTriangles[19].v3 = vertices[6];

}
void makeCheckImage(void)
{
int i, j;
float c;

for (i = 0; i < checkImageHeight; i++) {
for (j = 0; j < checkImageWidth; j++) {
c = ((((i&0x8)==0)^((j&0x8))==0))*1.0f;

if(c == 1.0f)
{
checkImage[i][j][0] = (GLubyte) c;		 // Red
checkImage[i][j][1] = (GLubyte) 0.0f;	  // Green
checkImage[i][j][2] = (GLubyte) 0.0f;	  // Blue
}
else
{
checkImage[i][j][0] = (GLubyte) 1.0f;	   // Red
checkImage[i][j][1] = (GLubyte) 1.0f;	   // Green
checkImage[i][j][2] = (GLubyte) 1.0f;	   // Blue
}
checkImage[i][j][3] = (GLubyte) 1.0f;		   // Alpha
}
}
}

void faceFillRot(Faces FaceTriangle)
{

int nVertices = 4;
Vertex polygon[4];

polygon[0].x = floor(1000.0f*FaceTriangle.v1.x);
polygon[0].y = floor(1000.0f*FaceTriangle.v1.y);
polygon[0].z = floor(1000.0f*FaceTriangle.v1.z);

polygon[1].x = floor(1000.0f*FaceTriangle.v2.x);
polygon[1].y = floor(1000.0f*FaceTriangle.v2.y);
polygon[1].z = floor(1000.0f*FaceTriangle.v2.z);

polygon[2].x = floor(1000.0f*FaceTriangle.v3.x);
polygon[2].y = floor(1000.0f*FaceTriangle.v3.y);
polygon[2].z = floor(1000.0f*FaceTriangle.v3.z);

polygon[3].x = floor(1000.0f*FaceTriangle.v1.x);
polygon[3].y = floor(1000.0f*FaceTriangle.v1.y);
polygon[3].z = floor(1000.0f*FaceTriangle.v1.z);

Edge *edgeList = CreateEdges(polygon, nVertices);

Edge temp;

for(int i = nVertices - 1; i > 0 ; i--)
{
for(int j = 0; j < i; j++)
{

if (edgeList[j].v1.y > edgeList[j+1].v1.y)
{
temp = edgeList[j];
edgeList[j] = edgeList[j+1];
edgeList[j+1] = temp;
}
}
}

float scanlineEnd = 0.0f;
float xStart = 0.0f;
float xEnd = 0.0f;

// Find out which vertex has the largest y and smallest x
for (int i = 0; i < nVertices; i++)
{
if (scanlineEnd < edgeList[i].v2.y)
{
scanlineEnd = edgeList[i].v2.y;
//z1 = edgeList[i].v1.z;
}

if (xStart > edgeList[i].v1.x)
{
xStart = edgeList[i].v1.x;
}

if (xEnd < edgeList[i].v2.x)
{
xEnd = edgeList[i].v2.x;
}
}

float scanlineStart = edgeList[0].v1.y;
//float z0 = edgeList[0].v1.z;

if(xStart > scanlineStart)
{
xStart = scanlineStart;
}
if (scanlineStart < xStart)
{
scanlineStart = xStart;
}

vector<float> intersectVertices;
vector<float> spanVerticesDepth;
vector<float> OneoverZDepth;
vector<float> UoverZDepth;
vector<float> VoverZDepth;
for (float scanline = scanlineStart;
scanline < scanlineEnd; scanline+=1.0f)
{

//Clear the list
intersectVertices.clear();
spanVerticesDepth.clear();

OneoverZDepth.clear();
UoverZDepth.clear();
VoverZDepth.clear();

for (int i = 0; i < nVertices; i++)
{

// The smaller vertex is intersected by the scanline
if (scanline == edgeList[i].v1.y)
{
// A horizontal edge
if(scanline == edgeList[i].v2.y)
{
DeactivateEdge(edgeList[i], scanline);
intersectVertices.push_back(edgeList[i].currX);
spanVerticesDepth.push_back(edgeList[i].currZ);

OneoverZDepth.push_back(edgeList[i].currOneoverZ);
UoverZDepth.push_back(edgeList[i].currUoverZ);
VoverZDepth.push_back(edgeList[i].currVoverZ);

}
else
{
ActivateEdge(edgeList[i], scanline);
}
}

// The larger vertex is intersected by the scanline
if (scanline == edgeList[i].v2.y)
{
DeactivateEdge(edgeList[i], scanline);
intersectVertices.push_back(edgeList[i].currX);
spanVerticesDepth.push_back(edgeList[i].currZ);

OneoverZDepth.push_back(edgeList[i].currOneoverZ);
UoverZDepth.push_back(edgeList[i].currUoverZ);
VoverZDepth.push_back(edgeList[i].currVoverZ);
}

// The edge is intersected by the scanline, calculate the intersection point
if (scanline > edgeList[i].v1.y && scanline < edgeList[i].v2.y)
{
UpdateEdge(edgeList[i], scanline);
intersectVertices.push_back(edgeList[i].currX);
spanVerticesDepth.push_back(edgeList[i].currZ);

OneoverZDepth.push_back(edgeList[i].currOneoverZ);
UoverZDepth.push_back(edgeList[i].currUoverZ);
VoverZDepth.push_back(edgeList[i].currVoverZ);
}

}

//Sort both x and z based on the x value in rising order
float swaptemp;
float swaptempDepth;
float swaptempOneoverZDepth;
float swaptempUoverZDepth;
float swaptempVoverZDepth;

for (int m = intersectVertices.size() - 1; m > 0; m--)
{
for (int n = 0; n < m; n++)
{
if( intersectVertices.at(n) > intersectVertices.at(n+1))
{
swaptemp = intersectVertices.at(n);
intersectVertices.at(n) = intersectVertices.at(n+1);
intersectVertices.at(n+1) = swaptemp;

swaptempDepth = spanVerticesDepth.at(n);
spanVerticesDepth.at(n) = spanVerticesDepth.at(n+1);
spanVerticesDepth.at(n+1) = swaptempDepth;

swaptempOneoverZDepth = OneoverZDepth.at(n);
OneoverZDepth.at(n) = OneoverZDepth.at(n+1);
OneoverZDepth.at(n+1) = swaptempOneoverZDepth;

swaptempUoverZDepth = UoverZDepth.at(n);
UoverZDepth.at(n) = UoverZDepth.at(n+1);
UoverZDepth.at(n+1) = swaptempUoverZDepth;

swaptempVoverZDepth = VoverZDepth.at(n);
VoverZDepth.at(n) = VoverZDepth.at(n+1);
VoverZDepth.at(n+1) = swaptempVoverZDepth;

}
}
}

float texValueR = 0.0f;
float texValueG = 0.0f;
float texValueB = 0.0f;

int size = 0;
if (intersectVertices.size() > 2 && intersectVertices.size() % 2 != 0)
{
size = intersectVertices.size() - 1;
}
else if (intersectVertices.size() < 2 && intersectVertices.size() % 2 != 0)
{
size = 0;
}
else
{
size = intersectVertices.size();
}
for (int k = 0; k < size; k+=2)
{
float currX = intersectVertices.at(k);

float currDZDX = spanVerticesDepth.at(k);

float dz = spanVerticesDepth.at(k+1) - spanVerticesDepth.at(k);
float dx = intersectVertices.at(k+1) - intersectVertices.at(k);

float dzdx = (dz/dx);

float currOneoverZ = OneoverZDepth.at(k);
float dOneoverZ = OneoverZDepth.at(k+1) - OneoverZDepth.at(k);
float dOneoverZdx = dOneoverZ/dx;

float currUoverZ = UoverZDepth.at(k);
float dUoverZ = UoverZDepth.at(k+1) - UoverZDepth.at(k);
float dUoverZdx = dUoverZ/dx;

float currVoverZ = VoverZDepth.at(k);
float dVoverZ = VoverZDepth.at(k+1) - VoverZDepth.at(k);
float dVoverZdx = dVoverZ/dx;

float zOrig = 0.0f;
float uOrig = 0.0f;
float vOrig = 0.0f;

while(currX < intersectVertices.at(k+1))
{

if (texture)
{

zOrig = 1.0f/currOneoverZ;
uOrig = currUoverZ*zOrig;
vOrig = currVoverZ*zOrig;

texValueR = (float)checkImage[(int)(abs(uOrig*2048.0f/8.0f))]
[(int)(abs(vOrig*2048.0f/8.0f))][0];
texValueG = (float)checkImage[(int)(abs(uOrig*2048.0f/8.0f))]
[(int)(abs(vOrig*2048.0f/8.0f))][1];
texValueB = (float)checkImage[(int)(abs(uOrig*2048.0f/8.0f))]
[(int)(abs(vOrig*2048.0f/8.0f))][2];

//Sets current texture color
glColor3f(texValueR, texValueG, texValueB);
}

// Draw at the current scanline
if(currDZDX < zbufferdata[(int)currX+1000][(int)scanline+1000])
{
glVertex2f((currX)/1000.0f,
(scanline)/1000.0f);
zbufferdata[(int)currX+1000][(int)scanline+1000] = currDZDX;
}

currX += 1.0f;
currDZDX += dzdx;

currOneoverZ += dOneoverZdx;
currUoverZ += dUoverZdx;
currVoverZ += dVoverZdx;

}

}

}
}
void renderScene(void)
{
glClearColor(0.0f, 0.0f, 0.5f, 0.0f);
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

if(drawVertexRot)
{
initZBufferData();
glCullFace(GL_BACK);
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
glBegin(GL_POINTS);
if(aroundX)
{
for(int i = 0; i < 20; i++)
{
FaceTriRot[i] = rotFaceX(FaceTriangles[i], angle);
}
}

if(aroundY)
{
for(int i = 0; i < 20; i++)
{
FaceTriRot[i] = rotFaceY(FaceTriangles[i], angle);
}
}

if(aroundZ)
{
for(int i = 0; i < 20; i++)
{
FaceTriRot[i] = rotFaceZ(FaceTriangles[i], angle);
}
}

if (texture)
{
for (int i = 0; i < 1; i++) {

/* color information here */
if(signedAreaRot(FaceTriRot[i]) > 0 )
{
faceFillRot(FaceTriRot[i]);
}
}
}

if(multiColor)
{
// One fixed color per face; same RGB values as before, applied in a loop.
static const GLfloat faceColors[20][3] = {
{0.0f, 1.0f, 1.0f}, {1.0f, 1.0f, 0.0f}, {1.0f, 0.0f, 1.0f},
{1.0f, 0.0f, 0.0f}, {0.0f, 1.0f, 0.0f}, {0.0f, 1.0f, 0.5f},
{0.7f, 0.7f, 0.4f}, {0.3f, 0.3f, 0.0f}, {0.6f, 0.3f, 0.4f},
{1.0f, 0.4f, 0.2f}, {0.6f, 0.4f, 0.4f}, {0.3f, 0.2f, 0.5f},
{0.5f, 0.6f, 0.1f}, {0.9f, 0.5f, 0.7f}, {0.2f, 0.3f, 0.8f},
{0.4f, 0.1f, 0.2f}, {0.4f, 0.4f, 0.4f}, {0.5f, 0.1f, 0.6f},
{0.5f, 0.9f, 0.4f}, {1.0f, 1.0f, 1.0f}
};

for (int i = 0; i < 20; i++)
{
// Only fill front-facing triangles (back-face culling via signed area).
if (signedAreaRot(FaceTriRot[i]) > 0)
{
glColor3f(faceColors[i][0], faceColors[i][1], faceColors[i][2]);
faceFillRot(FaceTriRot[i]);
}
}
}

glEnd();
}

angle += 0.0f;
glutSwapBuffers();
}
void changeSize(int w, int h)
{
// Prevent a divide by zero, when the window is too short
// (you can make a window with zero width).
if (h == 0)
{
h = 1;
}

float ratio = 1.0f * w / h;
// Switch to the projection matrix and reset it
glMatrixMode( GL_PROJECTION );
glLoadIdentity();
// Set the viewport to be the entire window
glViewport( 0, 0, w, h );

// Set the correct perspective
//gluPerspective(45, ratio, 1, 100);
gluOrtho2D(-1.0f, 1.0f, -1.0f, 1.0f);
// Get back to the modelview matrix
glMatrixMode(GL_MODELVIEW);
}
void processNormalKeys(unsigned char key, int x, int y)
{
if (key == 49)
{
aroundX = true;
aroundY = false;
aroundZ = false;
}

if (key == 50)
{
aroundX = false;
aroundY = true;
aroundZ = false;
}

if (key == 51)
{
aroundX = false;
aroundY = false;
aroundZ = true;
}

if (key == 52)
{
texture = false;
multiColor = true;
}

if (key == 53)
{
texture = true;
multiColor = false;
}

/*if (key == 54)
{
angle += 0.2f;
glutPostRedisplay();
}*/

}
void processSpecialKeys(int key, int x, int y)
{
/*switch(key) {
case GLUT_KEY_F1 :
drawVertexRot = true;
drawVertexRotFixed = false;
drawRot = false;
drawRotFixed = false;
break;
case GLUT_KEY_F2 :
drawVertexRot = false;
drawVertexRotFixed = true;
drawRot = false;
drawRotFixed = false;
break;
case GLUT_KEY_F3 :
drawVertexRot = false;
drawVertexRotFixed = false;
drawRot = true;
drawRotFixed = false;
break;
case GLUT_KEY_F4 :
drawVertexRot = false;
drawVertexRotFixed = false;
drawRot = false;
drawRotFixed = true;
break;
}*/
}
int main(int argc, char **argv)
{

// init GLUT and create window
glutInit( &argc, argv );
glutInitDisplayMode( GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA );
glutInitWindowPosition( 100, 100 );
glutInitWindowSize( 500, 500 );
glutCreateWindow("Icosahedron");
makeCheckImage();
makeIcosahedron();
initZBufferData();
// register callbacks
glutDisplayFunc( renderScene );
glutReshapeFunc( changeSize );
glutIdleFunc( renderScene );

glutKeyboardFunc(processNormalKeys);
glutSpecialFunc(processSpecialKeys);

// enter GLUT event processing cycle
glutMainLoop();
return 0;
}


### #11Olof Hedman  Members

Posted 19 August 2012 - 04:36 PM

I didn't look at your code, but the pic looks like you might be using the same texture coordinates for the right and the bottom edge... maybe you mix them up somewhere.

### #12john13to  Members

Posted 20 August 2012 - 10:23 AM

I didn't look at your code, but the pic looks like you might be using the same texture coordinates for the right and the bottom edge... maybe you mix them up somewhere.

I checked it out, but even though I changed the values of the texture coordinates it produced the same result.
On the other hand, I tried a different implementation that calculates u and v directly with the
perspective texture formula, seen in http://en.wikipedia.org/wiki/Texture_mapping#Perspective_correctness,
instead of interpolating 1/z, u/z and v/z along the edges and then recovering u and v from them.
I embedded it in the active edge list algorithm, and it did render perspective-corrected textures, but
only on some of the triangles; the other triangles became completely white instead.

I noticed that while the textured triangles had varying values of u and v, the blank triangles had the
same constant value of u and v, pinned at the maximum width and height of the texture (2048 in this case)
all the time. No matter how I changed the perspective formula it always returned the same maximum value.
Here is how I implemented it, in a short and simplified version, using the active edge list algorithm:
[source lang="cpp"]
vAlpha = 0.0f;
vAlphaGrad = 1.0f / (scanlineEnd - scanlineStart);
vStart = 0.0f;
vEnd = 0.0f;
float Zinv0 = 0.0f;
float Zinv1 = 0.0f;

// Guard against division by zero when a vertex has z == 0
if (z0 == 0.0f && z1 != 0.0f)
{
    Zinv0 = 2147483647.0f;
    Zinv1 = 1.0f / z1;
}
else if (z0 != 0.0f && z1 == 0.0f)
{
    Zinv0 = 1.0f / z0;
    Zinv1 = 2147483647.0f;
}
else if (z0 == 0.0f && z1 == 0.0f)
{
    Zinv0 = 2147483647.0f;
    Zinv1 = 2147483647.0f;
}
else
{
    Zinv0 = 1.0f / z0;
    Zinv1 = 1.0f / z1;
}

for each scanline:
{
    // Perspective formula for v
    float v = (((1.0f - vAlpha) * (vStart * Zinv0)) + (vAlpha * (vEnd * Zinv1)))
            / (((1.0f - vAlpha) * Zinv0) + (vAlpha * Zinv1));

    // Initialize before the inner loop so the increment actually accumulates
    float uAlpha = 0.0f;
    float uAlphaGrad = 1.0f / (xEnd - xStart);

    for each x along scanline
    {
        // Perspective formula for u
        float u = ((1.0f - uAlpha) * (0.0f * Zinv0) + uAlpha * (2048.0f * Zinv1))
                / ((1.0f - uAlpha) * (1.0f * Zinv0) + uAlpha * (1.0f * Zinv1));

        texValueR = (float)checkImage[(int)(u / 16.0f)][(int)(v / 16.0f)][0];
        texValueG = (float)checkImage[(int)(u / 16.0f)][(int)(v / 16.0f)][1];
        texValueB = (float)checkImage[(int)(u / 16.0f)][(int)(v / 16.0f)][2];
        glColor3f(texValueR, texValueG, texValueB);
        glVertex2f(currX, scanline);
        currX += 1.0f;
        uAlpha += uAlphaGrad;
    }
    vAlpha += vAlphaGrad;
}
[/source]
Also, when I rotate the object around the x-axis the texture disappears on some of the triangles and
appears on others. It acts very strangely, and it is very hard to pinpoint what is causing all of these
problems.

### #13john13to  Members

Posted 22 August 2012 - 06:22 AM

I managed to fix it at last. The thing was that I used affine mapping, instead of the perspective-correct mapping I used before, to give the texture depth when it is projected on the icosahedron.
The difference is that affine mapping does not have any of the 1/z terms that perspective-correct mapping has. Here is a comparison of the two formulas:

Affine Mapping:

u = (1 - ualpha)*u0 + ualpha*u1
v = (1 - valpha)*v0 + valpha*v1

Perspective Correctness Mapping:

u = ( (1 - ualpha)*(u0/z0) + ualpha*(u1/z1) ) / ( (1 - ualpha)*(1 / z0) + ualpha*(1/z1) )
v = ( (1 - valpha)*(v0/z0) + valpha*(v1/z1) ) / ( (1 - valpha)*(1 / z0) + valpha*(1/z1) )

When I implemented affine mapping I just removed the 1/z0 and 1/z1 terms from the perspective formula completely, which gave the result in the attached image below. But then I realized that instead of interpolating ualpha and valpha I could interpolate u and v directly, where the interpolation increments would be:

dudx = (u1 - u0) / (xEnd - xStart)
dvdy = (v1 - v0) / (scanlineEnd - scanlineStart)

For every pixel along the scanline u would be incremented with:
u = u + dudx

And for every scanline v would be incremented with:
v = v + dvdy

It was ironic how simple this turned out to be to implement. I got fooled really well by that simplicity; it never crossed my mind until now. Not only that, but I had used exactly the same kind of method several months earlier, when I implemented Gouraud shading to apply shading to 2D flat polygons. So much time and energy I had spent on this simple thing =S

I also got the idea to implement perspective-correct mapping as well, even though I have not implemented it yet (no time! =P). On the other hand, this only seems to come in handy if I am going to apply a texture to, for instance, a square that is built up from two triangles attached to each other.

According to Wikipedia, if the square is textured with affine mapping it will cause a discontinuity along the shared edge between the triangles, where perspective mapping eliminates that problem (http://en.wikipedia....ive_correctness). But perspective mapping does require more calculation, since it involves 1/z divisions, and division is a time-consuming operation.

I finally got what I wanted, and I am very satisfied with the attached figure, where I have also applied checkerboard textures of different colors! =)

#### Attached Thumbnails
