Software triangle rasterizer

I am working on writing a software rasterizer to learn what is going on behind the scenes. I have figured out drawing lines and clipping in 2D. Next I figured out 3D transformation with clipping in 3D, both in model space and in homogeneous coordinates. Getting the inner workings of the perspective transform right was hard. Now I want to draw filled triangles, and I have two questions.

1. What is the preferred way to do this in software? I have written one, but it seems slow. That brings me to my second question.

2. What kind of frame rates are acceptable for a software rasterizer? I am not sure whether I already have a good algorithm or not.

Regarding 1, this is how I did it a while ago on a 486:

1. Sort the vertices of your triangle according to their y coordinate; let's call them v0, v1, and v2.

2. Starting with v0, it will almost be as if you were going to draw the two line segments [v0 v1] and [v0 v2], and then [v1 v2] and [v0 v2]:

Starting with y = v0.y, go down to y = v1.y.
At each step calculate x1 and x2, the x coordinates of [v0 v1] and [v0 v2] at height y.
Draw the horizontal segment [(x1, y) (x2, y)] (this should be fast).

Once y reaches v1.y, segment [v0 v1] is replaced by [v1 v2], and you continue the process until v2 is reached.

Of course there are special cases you need to take care of.
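
In code the basic idea looks something like this (a rough sketch, assuming a putPixel(x, y, color) routine; it recomputes the edge x's on every scanline instead of stepping them incrementally, and it glosses over the special cases):

#include <algorithm>
#include <cstdint>

void putPixel(int x, int y, uint32_t color); // assumed framebuffer routine

void fillTriangle(float x0, float y0, float x1, float y1,
                  float x2, float y2, uint32_t color)
{
    // 1. Sort the vertices by y so v0 is topmost and v2 is bottommost.
    if (y1 < y0) { std::swap(x0, x1); std::swap(y0, y1); }
    if (y2 < y0) { std::swap(x0, x2); std::swap(y0, y2); }
    if (y2 < y1) { std::swap(x1, x2); std::swap(y1, y2); }

    // 2. Walk down the scanlines; xa follows the long edge [v0 v2],
    //    xb follows [v0 v1] first and then [v1 v2].
    for (int y = (int)y0; y <= (int)y2; ++y) {
        float t  = (y2 > y0) ? (y - y0) / (y2 - y0) : 0.0f;
        float xa = x0 + t * (x2 - x0);           // on edge [v0 v2]

        float xb;
        if (y < y1) {                            // upper half: edge [v0 v1]
            float s = (y1 > y0) ? (y - y0) / (y1 - y0) : 0.0f;
            xb = x0 + s * (x1 - x0);
        } else {                                 // lower half: edge [v1 v2]
            float s = (y2 > y1) ? (y - y1) / (y2 - y1) : 0.0f;
            xb = x1 + s * (x2 - x1);
        }

        // 3. Fill the horizontal span between the two edges.
        int left  = (int)std::min(xa, xb);
        int right = (int)std::max(xa, xb);
        for (int x = left; x <= right; ++x)
            putPixel(x, y, color);
    }
}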

Thanks bulgurmayo.

I have been doing a lot of reading and searching about this topic. Here is what I have so far:

1. Flood Fill - This algorithm works by selecting a seed point and then recursively checking all the pixels around it to see if they have already been filled. This pseudocode is from Wikipedia:

Flood-fill (node, target-color, replacement-color):
1. If the color of node is not equal to target-color, return.
2. Set the color of node to replacement-color.
3. Perform Flood-fill (one step to the west of node, target-color, replacement-color).
   Perform Flood-fill (one step to the east of node, target-color, replacement-color).
   Perform Flood-fill (one step to the north of node, target-color, replacement-color).
   Perform Flood-fill (one step to the south of node, target-color, replacement-color).
4. Return.

This algorithm, while simple, will not help with shaded triangles, and does not make interpolation possible for lighting and z-buffer depth testing. Unless I am wrong about that, this algorithm is out.
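
For completeness, here is what that pseudocode looks like as real C++ (a sketch; getPixel, putPixel and inBounds are assumed framebuffer helpers, with an explicit stack replacing the recursion, which overflows on large areas):

#include <cstdint>
#include <stack>
#include <utility>

uint32_t getPixel(int x, int y);               // assumed framebuffer helpers
void     putPixel(int x, int y, uint32_t color);
bool     inBounds(int x, int y);

void floodFill(int x, int y, uint32_t target, uint32_t replacement)
{
    if (target == replacement) return;         // would loop forever otherwise
    std::stack<std::pair<int, int>> pending;
    pending.push({x, y});
    while (!pending.empty()) {
        auto [px, py] = pending.top();
        pending.pop();
        if (!inBounds(px, py) || getPixel(px, py) != target) continue;
        putPixel(px, py, replacement);
        pending.push({px - 1, py});            // west
        pending.push({px + 1, py});            // east
        pending.push({px, py - 1});            // north
        pending.push({px, py + 1});            // south
    }
}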

2. Scanline with edge walking - This algorithm works on each scanline. The pseudocode is as follows:

1. Sort edges in y direction
2. Pick the first two edges
3. Compute left and right intersection pixels using line equations
4. Fill scanline between intersection points
5. If finished with second line, recompute slope for third

This algorithm can be used to interpolate colors, but I do not understand what to do about the z component, which is gone after the perspective divide.
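
For example, interpolating a color across one span would look something like this (a rough sketch, not optimized; putPixel is an assumed helper):

struct Color { float r, g, b; };

void putPixel(int x, int y, Color c);          // assumed framebuffer routine

// xl/xr and cl/cr come from walking the two active edges down to this
// scanline; everything between them is a straight linear blend.
void fillSpan(int y, float xl, float xr, Color cl, Color cr)
{
    if (xr <= xl) return;
    float inv = 1.0f / (xr - xl);
    for (int x = (int)xl; x < (int)xr; ++x) {
        float t = (x - xl) * inv;              // 0 at the left edge, 1 at the right
        Color c = { cl.r + t * (cr.r - cl.r),
                    cl.g + t * (cr.g - cl.g),
                    cl.b + t * (cr.b - cl.b) };
        putPixel(x, y, c);
    }
}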

3. Active Edge Table - At first glance, this appears to be a method for speeding up the edge walking algorithm, but I am not sure about that.

4. Edge Equations (screen space) - This algorithm works by examining the edge equations for each line, and testing every pixel in the bounding box containing the triangle. If all three functions are > 0, then draw the pixel. I have implemented this one, but it is slow in software, and I do not see how to interpolate color values for shading or z values for depth testing (though the sketch below is my best guess).

5. Edge Equations (homogeneous space) - It looks like values can be interpolated using this, but I have only scratched the surface because I do not yet have a grasp of all the math.
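
Regarding 4, from my reading the three edge values at a pixel are proportional to its barycentric coordinates, so the same numbers that pass the inside test should be able to drive interpolation. Here is a sketch of what I mean (unverified; it assumes one fixed winding and a putPixel helper):

#include <algorithm>
#include <cmath>

struct Vertex { float x, y, z, r, g, b; };

void putPixel(int x, int y, float r, float g, float b); // assumed helper

// Edge function: cross((b - a), (p - a)); zero on the line through a
// and b, one sign on each side.
static float edge(const Vertex& a, const Vertex& b, float px, float py)
{
    return (b.x - a.x) * (py - a.y) - (b.y - a.y) * (px - a.x);
}

void rasterize(const Vertex& v0, const Vertex& v1, const Vertex& v2)
{
    float area = edge(v0, v1, v2.x, v2.y);    // twice the signed triangle area
    if (area <= 0.0f) return;                 // assumes one consistent winding

    int minX = (int)std::floor(std::min({v0.x, v1.x, v2.x}));
    int maxX = (int)std::ceil (std::max({v0.x, v1.x, v2.x}));
    int minY = (int)std::floor(std::min({v0.y, v1.y, v2.y}));
    int maxY = (int)std::ceil (std::max({v0.y, v1.y, v2.y}));

    for (int y = minY; y <= maxY; ++y)
        for (int x = minX; x <= maxX; ++x) {
            float px = x + 0.5f, py = y + 0.5f;   // test the pixel center
            float w0 = edge(v1, v2, px, py);      // weight toward v0
            float w1 = edge(v2, v0, px, py);      // weight toward v1
            float w2 = edge(v0, v1, px, py);      // weight toward v2
            if (w0 >= 0 && w1 >= 0 && w2 >= 0) {
                w0 /= area; w1 /= area; w2 /= area;  // now w0 + w1 + w2 == 1
                float z = w0 * v0.z + w1 * v1.z + w2 * v2.z; // depth-test with this
                float r = w0 * v0.r + w1 * v1.r + w2 * v2.r;
                float g = w0 * v0.g + w1 * v1.g + w2 * v2.g;
                float b = w0 * v0.b + w1 * v1.b + w2 * v2.b;
                (void)z;
                putPixel(x, y, r, g, b);
            }
        }
}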

If anyone has implemented any of these, please let me know if my understanding is wrong. If one of these methods is the preferred method for reasons I have not thought of, please let me know. And if there is another algorithm out there that I have not found, that would be great too.


Thanks again,

[Edited by - Glass_Knife on March 8, 2008 6:47:29 AM]

If you like, you can take a look at my software renderer on my homepage. It has been heavily optimized and only uses fixed-point math internally. This way I can target the GP2X console, which does not support floats in hardware.

If you did the 3D transformation stuff correctly (a good knowledge of the rendering pipeline is advisable; the formulas can be taken from the OpenGL specification) you will have a 4D vector (x, y, z, w) after transformation. For the perspective divide you do x' = x/w, y' = y/w, z' = z/w. (x', y') are your coordinates in NDC space (you still have to transform them to window space), and z' is in [-1, 1] with an OpenGL-style projection (the depth-range transform remaps it to [0, 1]) and can be used for depth buffering. For perspective-correct texture mapping you also need to interpolate 1/w (in some resources referred to as 1/z) across the triangle.
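
In code that stage looks roughly like this (a float sketch for clarity; my renderer does the same thing in fixed point, and it assumes the vertex already survived clipping, so w > 0):

struct Vec4 { float x, y, z, w; };

// Clip space -> window space for one vertex.
void projectToWindow(const Vec4& v, int width, int height,
                     float& sx, float& sy, float& depth, float& invW)
{
    float oneOverW = 1.0f / v.w;
    float ndcX = v.x * oneOverW;                     // [-1, 1]
    float ndcY = v.y * oneOverW;                     // [-1, 1]
    float ndcZ = v.z * oneOverW;                     // [-1, 1], OpenGL style

    sx    = (ndcX * 0.5f + 0.5f) * width;            // viewport transform
    sy    = (1.0f - (ndcY * 0.5f + 0.5f)) * height;  // y flipped for raster order
    depth = ndcZ * 0.5f + 0.5f;                      // [0, 1] for the z-buffer
    invW  = oneOverW;                                // keep for perspective correction
}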

My software renderer can handle vertex and pixel shaders written in C++ and can interpolate an arbitrary number of interpolants across the triangle. I used some C++ template trickery together with inline functions to still make it fast.

EDIT: You may also want to check out Chris Hecker's articles. They also have some info on how to get subpixel precision right.

Quote:
Original post by Trenki
If you like, you can take a look at my software renderer on my homepage. It has been heavily optimized and only uses fixed-point math internally. This way I can target the GP2X console, which does not support floats in hardware.


Thanks for the links to the articles and the software renderer source code. This can only help with my understanding...

Like many graphics enthusiasts, I've tried out a number of methods for triangle rendering, and I strongly recommend using edge tables. Once you get the basic idea, it is very easy to implement and doesn't require a lot of the complex branching logic that various sorting schemes use. It is also very fast, and can be made quite efficient on low-end hardware. If memory is an issue, you can use the output from the algorithm directly with an s-buffer, which offers big savings over a z-buffer. Another advantage is that edge tables automatically handle arbitrary convex polygons, which makes it simpler to integrate with homogeneous clip coordinates.

To get edge tables working, you render the edges of the triangle into a pair of 1-D buffers using standard linear interpolation. All varying values for the polygon renderer should be stored in these edge tables. The left buffer stores the values at the start of each scanline, while the right buffer stores the final values. Once you have filled in both buffers, just interpolate from the left scan position to the right in order to fill in the polygon.
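
A skeleton of the idea (a sketch carrying a single varying, z; a real version would clear only the polygon's y range and clip edges to the screen):

#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

// One entry per scanline: x plus whatever varyings are carried (just z here).
struct EdgeSample { float x, z; };

const int HEIGHT = 480;
std::vector<EdgeSample> leftT(HEIGHT), rightT(HEIGHT);

void clearTables()   // a real version clears only the polygon's y range
{
    for (int y = 0; y < HEIGHT; ++y) {
        leftT[y].x  =  std::numeric_limits<float>::max();
        rightT[y].x = -std::numeric_limits<float>::max();
    }
}

// Interpolate one edge into the tables. Works for any convex polygon,
// since every scanline ends up with exactly one left and one right sample.
void scanEdge(float x0, float y0, float z0, float x1, float y1, float z1)
{
    if (y1 < y0) { std::swap(x0, x1); std::swap(y0, y1); std::swap(z0, z1); }
    for (int y = (int)std::ceil(y0); y < (int)std::ceil(y1); ++y) {
        float t = (y - y0) / (y1 - y0);
        EdgeSample s = { x0 + t * (x1 - x0), z0 + t * (z1 - z0) };
        if (s.x < leftT[y].x)  leftT[y]  = s;
        if (s.x > rightT[y].x) rightT[y] = s;
    }
}

// After every edge of the polygon has been scanned, fill between the tables.
void fillSpans(int yMin, int yMax)
{
    for (int y = yMin; y < yMax; ++y) {
        const EdgeSample& l = leftT[y];
        const EdgeSample& r = rightT[y];
        if (l.x >= r.x) continue;
        float dzdx = (r.z - l.z) / (r.x - l.x);
        float z = l.z;
        for (int x = (int)l.x; x < (int)r.x; ++x) {
            // depth-test / shade / plot (x, y, z) here
            z += dzdx;
        }
    }
}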

Fast affine texture mapping (suitable for billboards, HUDs, etc.):
http://www.multi.fi/~mbc/sources/fatmap.txt
http://www.exaflop.org/docs/fatmap/

Perspective correct texture mapping:
http://chrishecker.com/Miscellaneous_Technical_Articles


Quote:
Original post by Glass_Knife
2. Scanline with edge walking - This algorithm works on each scanline. The pseudocode is as follows:

1. Sort edges in y direction
2. Pick the first two edges
3. Compute left and right intersection pixels using line equations
4. Fill scanline between intersection points
5. If finished with second line, recompute slope for third

This algorithm can be used to interpolate colors, but I do not understand what to do about the z component, which is gone after the perspective divide.


As Trenki has pointed out, the 3d transformations need to be correct.

First the vertices are transformed to camera (view) space using the model view matrix (v' = TMcamInverse * TMmodel * v).

Then the projection matrix is used to transform from camera space to clip space: v" = TMproj * v'.
v' is of the form [x, y, z, 1] and v" is of the form [x, y, z, w].
Clip space is a cube defined by the planes x = +/-w, y = +/-w and z = +/-w (considering an OpenGL projection matrix; with DirectX it's slightly different).
Dividing by w (i.e. [x/w, y/w, z/w, 1]) results in normalized device coordinates (x = +/-1, y = +/-1 and z = +/-1).
Now the three components (x/w, y/w, z/w) each vary from -1 to 1.
The z/w component needs to be retained for depth testing, and 1/w needs to be retained for perspective-correct mapping.

U, V, color and other vertex attributes vary linearly along the triangle edges in the unprojected space. However, in screen space this is not true - imagine a train viewed in perspective: even though the windows are evenly spaced in the real world, in the projected image they appear more densely packed at the far end of the train.
The affine texture mapper completely ignores this fact and linearly interpolates the vertex attributes anyway.

Perspective-correct mapping is based on the fact that 1/w, u/w and v/w vary linearly across the screen-space triangle (Hecker's articles write these as 1/z, u/z and v/z, since w equals camera-space z for the usual projection matrix; see the articles for proof).
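
Per pixel that works out to something like this sketch, where uoz = u/w, voz = v/w and ooz = 1/w at the span ends (sampleTexture and putPixel are placeholders; real rasterizers often amortize the divide over runs of 8 or 16 pixels):

struct Color { float r, g, b; };

Color sampleTexture(float u, float v);    // assumed helpers
void  putPixel(int x, int y, Color c);

void texturedSpan(int y, float xl, float xr,
                  float uozL, float vozL, float oozL,
                  float uozR, float vozR, float oozR)
{
    if (xr <= xl) return;
    float inv  = 1.0f / (xr - xl);
    float duoz = (uozR - uozL) * inv;     // all three step linearly in x
    float dvoz = (vozR - vozL) * inv;
    float dooz = (oozR - oozL) * inv;
    float uoz = uozL, voz = vozL, ooz = oozL;
    for (int x = (int)xl; x < (int)xr; ++x) {
        float w = 1.0f / ooz;             // the per-pixel divide
        putPixel(x, y, sampleTexture(uoz * w, voz * w));
        uoz += duoz; voz += dvoz; ooz += dooz;
    }
}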

Hope this helps.

Quote:
Original post by robotichrist
Like many graphics enthusiasts, I've tried out a number of methods for triangle rendering, and I strongly recommend using edge tables.


Thanks for the advice. I guess I have learned two things...
1. I am a graphics "enthusiast."
2. I should try out all the different approaches just to learn them.

Thanks again!

Quote:
Original post by chand81
Fast Affine texture mapping (suitable for billboards, HUD etc):
http://www.multi.fi/~mbc/sources/fatmap.txt
http://www.exaflop.org/docs/fatmap/

Perspective correct texture mapping:
http://chrishecker.com/Miscellaneous_Technical_Articles

Thanks for the articles. This really helps!
Quote:
Original post by chand81
As Trenki has pointed out, the 3d transformations need to be correct. [...] Hope this helps.

This really helps. What I am learning as I write graphics code from scratch is this: if you do not really understand what is going on, you cannot be sure you are doing it right, and when a problem does come up, you will never be able to fix it.

Just knowing that the z values after the perspective transform should be normalized helps a great deal. I did have a lot of trouble understanding that matrix code, mostly because of problems with right-handed vs. left-handed systems, and then figuring out whether the near and far values should be negative or positive. I think I have it working now...

Thanks again,

OK, I have been working hard on understanding how to do the edge walking and edge tables. After studying these algorithms, it seems that no matter which one I choose, I need to be able to calculate the intersection of the polygon lines with each scanline, which is another way of saying I need to find the intersection of two lines. I started down this path and figured out how this works.

Then I realized that in order to understand what the line drawing algorithms are doing, I needed to study them in greater detail, and so I have done that. I looked at Bresenham's, Po-Han Lin's EFLA (Extremely Fast Line Algorithm), and Gatopeich's line algorithm. Then I took everything I learned from that and created my own from scratch, which is not optimized (so it is easier to understand).

So I thought I was ready... Now I am stuck. Someone please tell me if I am on the right path here. Just calculating the intersection of each scanline with the lines of the polygon does not really work. The problem is that lines with very little distance between the y coordinates but a large distance between the x's, for example from (2,4) to (12,6), have many pixels on each scanline. Depending on which side of the line is inside the polygon, the correct pixel to use for filling changes.

Currently, the intersections where y is the longer side do not look the same as when x is the longer side, if all I am doing is calculating the intersection.

Also, after all of my research, it looks as though I need to adopt a filling convention, such as leaving off the bottom and right pixels, so that other polygons sharing an edge are not overdrawn.
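
Here is the convention as I currently understand it (a sketch; xl and xr are the exact, fractional edge intersections for one scanline, and please tell me if this is the wrong way to round):

#include <cmath>

// A pixel is drawn if its center lies inside the span, with spans
// half-open on the right: fill x in [xStart, xEnd).
void spanBounds(float xl, float xr, int& xStart, int& xEnd)
{
    xStart = (int)std::ceil(xl - 0.5f);  // first pixel whose center >= xl
    xEnd   = (int)std::ceil(xr - 0.5f);  // one past the last pixel drawn
    // This way an edge shared by two triangles is drawn by exactly one
    // of them - no gaps and no overdraw.
}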

If I am correct about the edges, or if I am not, please let me know.

Thanks,

You want to be doing scanline rendering. The Active Edge Table is a method of doing scanline rendering that also incorporates visible surface determination and eliminates the need to pre-sort polygons.

Try getting hold of the Quake 1 source code. It does the edge stepping in fixed point.

An acceptable frame rate is perhaps 30fps which I manage to do just fine at 640x480x32, and that's only using BitBlt to put it on screen.
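
The core of the fixed-point edge stepping is just one add per scanline once the slope is precomputed; something like this 16.16 sketch (not Quake's actual code):

#include <cstdint>

typedef int32_t fixed;                   // 16.16 fixed point
const int FP_SHIFT = 16;

inline fixed toFixed(float f) { return (fixed)(f * (1 << FP_SHIFT)); }
inline int   toInt(fixed f)   { return f >> FP_SHIFT; }

// Walk one edge down the screen: one divide up front for the slope,
// then a single integer add per scanline.
void stepEdge(float xTop, float yTop, float xBot, float yBot)
{
    int yStart = (int)yTop, yEnd = (int)yBot;
    if (yEnd <= yStart) return;
    fixed x  = toFixed(xTop);
    fixed dx = toFixed((xBot - xTop) / (yBot - yTop));  // slope in 16.16
    for (int y = yStart; y < yEnd; ++y) {
        int xi = toInt(x);               // span bound for scanline y
        (void)xi;                        // record xi in the edge/span table here
        x += dx;
    }
}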

Quote:
Original post by iMalc
You want to be doing scanline rendering. The Active Edge Table is a method of doing scanline rendering that also incorporates visible surface determination and eliminates the need to pre-sort polygons.


I will check out the source code. Thanks.
