
Entries in this blog

 

depth testing woes

The next step was to get depth testing up and running. I had briefly toyed with the idea of doing some sort of triangle sorting, but decided against it in the end for several reasons. First, it wouldn't be practical for my pipeline, which processes one triangle at a time and doesn't rely on having all triangles in memory at once (though this might change in the future if required). Second, there are quality issues when it comes to depth sorting (e.g. intersecting triangles). With that in mind, I decided that using a z-buffer would be the best way to go. I know that z-buffer depth testing is relatively slow in software, but it seems to have fewer quality issues than the other algorithms I've read about. I'll worry about optimizations later on down the road :D.
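The test itself is tiny; here's a minimal sketch, assuming one float depth value per pixel cleared to the far value each frame (the names and layout are mine, not necessarily what my code will end up using):

// Minimal z-buffer test sketch: one float depth per pixel, cleared to the
// far value (1.0f) at the start of each frame.  Buffer layout and names are
// hypothetical.
bool DepthTestAndWrite(float* apZBuffer, int aWidth, int aX, int aY, float aZ)
{
    float& stored = apZBuffer[aY * aWidth + aX];
    if (aZ < stored)       // new fragment is closer than what's already there
    {
        stored = aZ;       // keep the new depth...
        return true;       // ...and let the colour write go through
    }
    return false;          // fragment is hidden, discard it
}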

Before incorporating a z-buffer into my rasterization routine, I needed a way to interpolate z-coordinate values. I read some articles online (Chris Hecker's perspective-correct texture mapping series was a good one). I ended up writing a general function that lets you calculate interpolants using gradients. Gradients in screenspace represent the change in an interpolant (such as a z-value) that occurs when you step along the x or y axis. These remain constant across the triangle, so you only have to calculate them once. What's more, the method of generating a gradient can be generalized to work with other interpolants (e.g. colour, texture coordinates).
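As a rough sketch, here's the gradient calculation for a single interpolant (1/z in this case) using the standard screenspace gradient formulas; the types and names are placeholders of mine:

// Screenspace gradients of an interpolant (here 1/z) are constant over the
// triangle, so they only need to be computed once per triangle.
struct ScreenVertex { float x, y, invZ; };   // hypothetical per-vertex data

struct Gradients
{
    float dInvZdX;   // change in 1/z per one-pixel step in x
    float dInvZdY;   // change in 1/z per one-pixel step in y
};

Gradients ComputeGradients(const ScreenVertex& v0,
                           const ScreenVertex& v1,
                           const ScreenVertex& v2)
{
    // Twice the signed screenspace area of the triangle.
    float denom = (v1.x - v2.x) * (v0.y - v2.y) -
                  (v0.x - v2.x) * (v1.y - v2.y);
    float invDenom = 1.0f / denom;

    Gradients g;
    g.dInvZdX = ((v1.invZ - v2.invZ) * (v0.y - v2.y) -
                 (v0.invZ - v2.invZ) * (v1.y - v2.y)) * invDenom;
    g.dInvZdY = ((v0.invZ - v2.invZ) * (v1.x - v2.x) -
                 (v1.invZ - v2.invZ) * (v0.x - v2.x)) * invDenom;
    return g;
}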

This is a screenshot before depth testing...



After adding z-interpolation and the z-buffer, I get the following...



When seen in motion, there are some artifacts (sometimes very tiny cracks are visible between triangles and there is some minor flickering).

Clapfoot

 

...the long road to rasterization

I've finally found time to implement the first few steps of rasterization.

The first thing I implemented was a triangle fill function. I pretty much just started with the "brute-force" approach, which was basically my own best guess at an implementation. I started by sorting the points of the triangle by their Y screenspace coordinates, then divided the triangle in half (horizontally at the mid-point) so that I could scan convert each half separately. I won't go into the details of the algorithm (you can find many articles on basic triangle filling online), but essentially, I'd iterate through each scanline of the triangle half and fill in pixels within the bounds of the triangle edges. On my very first try, triangles were generally looking right, except that on occasion they would warp out of shape. I realized that my inverse slope calculations were off because I was using floating point coordinates instead of converting to discrete integer values beforehand.
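For what it's worth, the overall shape of the routine is something like the sketch below (heavily simplified, with hypothetical helper names, and without the fill convention fix described next):

#include <algorithm>

typedef unsigned int uint32;                                     // stand-in for the project's typedef
void DrawSpan(int aXStart, int aXEnd, int aY, uint32 aColour);   // hypothetical span fill

struct Point2 { float x, y; };

// Brute-force fill: sort by y, split at the middle vertex, and scan convert
// each half by stepping the edge x values one scanline at a time.
void FillTriangle(Point2 p0, Point2 p1, Point2 p2, uint32 aColour)
{
    // Sort so that p0.y <= p1.y <= p2.y.
    if (p1.y < p0.y) std::swap(p0, p1);
    if (p2.y < p0.y) std::swap(p0, p2);
    if (p2.y < p1.y) std::swap(p1, p2);
    if (p2.y == p0.y) return;                        // degenerate (zero-height) triangle

    // Inverse slopes: change in x for a one-scanline step in y.
    float dx02 = (p2.x - p0.x) / (p2.y - p0.y);      // long edge p0->p2
    float dx01 = (p1.y > p0.y) ? (p1.x - p0.x) / (p1.y - p0.y) : 0.0f;
    float dx12 = (p2.y > p1.y) ? (p2.x - p1.x) / (p2.y - p1.y) : 0.0f;

    // Top half: from p0 down to the mid-point p1.
    for (int y = (int)p0.y; y < (int)p1.y; ++y)
    {
        float xa = p0.x + dx02 * (y - p0.y);
        float xb = p0.x + dx01 * (y - p0.y);
        DrawSpan((int)std::min(xa, xb), (int)std::max(xa, xb), y, aColour);
    }

    // Bottom half: from the mid-point p1 down to p2.
    for (int y = (int)p1.y; y < (int)p2.y; ++y)
    {
        float xa = p0.x + dx02 * (y - p0.y);
        float xb = p1.x + dx12 * (y - p1.y);
        DrawSpan((int)std::min(xa, xb), (int)std::max(xa, xb), y, aColour);
    }
}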

Things were looking pretty good at this point, but there seemed to be some noticeable overdraw in some places. After reading up on a few rasterization articles online, I discovered that I wasn't properly using a "fill convention". For those who don't know, it's just a standard way of filling pixels such that adjacent primitives will not overlap or leave gaps between each other. I used what's known as a "top-left" fill convention. This basically means that pixels lying on the top or left edges of a primitive belong to that primitive and get filled, while pixels lying on its right or bottom edges do not.
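One common way to realize this convention (a sketch of the idea, not necessarily how my code does it) is to take the ceiling of the real-valued span endpoints and treat the right end as exclusive:

#include <math.h>

typedef unsigned int uint32;                    // stand-in for the project's typedef
void PutPixel(int aX, int aY, uint32 aColour);  // hypothetical framebuffer write

// Ceiling the edge crossings and treating the right end as exclusive means a
// pixel centre sitting exactly on a shared edge is filled by exactly one of
// the two triangles, so neighbours neither overlap nor leave gaps.
void FillSpan(float aXLeft, float aXRight, int aY, uint32 aColour)
{
    int xStart = (int)ceilf(aXLeft);    // first pixel centre at or right of the left edge
    int xEnd   = (int)ceilf(aXRight);   // exclusive: pixels on the right edge are skipped

    for (int x = xStart; x < xEnd; ++x)
        PutPixel(x, aY, aColour);
}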

The next big thing to add is some depth testing.

Here are some progress pics:




Clapfoot

 

.. clipping!!!

I haven't had a whole lot of time to work on this lately, but the project is still alive and well!

I finally got around to implementing frustum clipping. Specifically, I added clipping in NDC space for its efficiency and code simplicity.

There were several issues I had to overcome for this...

1) I had to totally re-organize the way vertex data flowed through my rendering pipeline. Previously, I sent all the vertices into the vertex stage at once, processing them all before proceeding to the next stage. This would be a problem, as the extra triangles/vertices that could potentially be generated by clipping would require extra memory to store, and that space would have to be either a pre-allocated buffer as big as the maximum number of possible clipped vertices, or a growing buffer (both bad news). Thus, I modified my pipeline so that only a single triangle is sent into the pipeline at a time, and a small pre-allocated buffer (2^(# clip planes) triangles) is used to store any extra vertices. The data for this triangle (and the extra triangles resulting from its clipping) are then propagated through the rendering pipeline until fragments are drawn to the framebuffer.

2) For the actual clipping process, I just send each triangle through a clipping "pipeline", where the output of clipping against one plane feeds into the clipping against the next plane. I should also mention that I used the Sutherland-Hodgman algorithm for this (you basically traverse each edge of the primitive, perform an intersection test against the plane, and possibly output new vertices depending on the outcome); there's a rough sketch of the single-plane step after this list. I clipped against all 6 frustum planes.

3) Adding the clipping process also revealed several earlier bugs in my code. For example, when mapping to screenspace, I was mapping NDC x,y coordinates to the range 0 to width/height instead of 0 to width/height minus 1, resulting in out-of-bounds screen coordinates.
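Here's roughly what clipping against a single plane looks like (shown for the x = -1 NDC plane; the other five planes work the same way). The types are placeholders of mine, and std::vector is only used to keep the sketch short; the real code writes into small pre-allocated buffers instead:

#include <cstddef>
#include <vector>

// Sutherland-Hodgman clip of a convex polygon against one plane.
struct Vtx { float x, y, z; };

static float Lerp(float a, float b, float t) { return a + (b - a) * t; }

std::vector<Vtx> ClipAgainstNegXPlane(const std::vector<Vtx>& aIn)
{
    std::vector<Vtx> out;
    for (std::size_t i = 0; i < aIn.size(); ++i)
    {
        const Vtx& cur  = aIn[i];
        const Vtx& next = aIn[(i + 1) % aIn.size()];

        float dCur  = cur.x  + 1.0f;    // signed distance to the plane x = -1
        float dNext = next.x + 1.0f;

        if (dCur >= 0.0f)
            out.push_back(cur);                   // current vertex is inside: keep it

        if ((dCur >= 0.0f) != (dNext >= 0.0f))    // edge crosses the plane
        {
            float t = dCur / (dCur - dNext);      // where along the edge it crosses
            Vtx v;
            v.x = -1.0f;                          // lands exactly on the plane
            v.y = Lerp(cur.y, next.y, t);
            v.z = Lerp(cur.z, next.z, t);
            out.push_back(v);
        }
    }
    return out;
}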

Below is a picture of a cube being clipped to the negative X frustum plane...



.. and another picture of the same cube (camera slightly above cube this time) being clipped to the far plane..



I decided to hold off on any sort of culling until a later point.

.. at last.. I can start tackling scan conversion!!!

Clapfoot

 

... projection, perspective, viewport

The next step was to add a perspective projection transformation.

I made the following changes for this:

1) Added a projection matrix in the Renderer class and the following function for generating the matrix:

int SetPerspective(float aFov, float aAspectRatio, float aNear, float aFar);

This function creates a projection matrix, which later gets passed down to the VertexEngine class before drawing.
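For reference, the matrix such a function builds is more or less the standard one; the sketch below assumes an OpenGL-style setup (right-handed view space, NDC z in [-1, 1], row-major storage), which may not match my final conventions exactly:

#include <math.h>

// Builds an OpenGL-style perspective projection (vertical FOV in radians).
// The row-major layout and the [-1, 1] NDC depth range are assumptions.
void BuildPerspective(float aFov, float aAspectRatio, float aNear, float aFar,
                      float m[4][4])
{
    float f = 1.0f / tanf(aFov * 0.5f);   // cotangent of half the vertical FOV

    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            m[r][c] = 0.0f;

    m[0][0] = f / aAspectRatio;
    m[1][1] = f;
    m[2][2] = (aFar + aNear) / (aNear - aFar);
    m[2][3] = (2.0f * aFar * aNear) / (aNear - aFar);
    m[3][2] = -1.0f;   // copies -z(view) into w(clip), which feeds the perspective divide
}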

2) Added a function for setting the viewport in the Renderer class:

int SetViewPort(int aWidth, int aHeight, int aOffsetX, int aOffsetY);

This function passes down the viewport data to the VertexEngine class, which uses it to form a matrix that will later be used to transform from normalized device coordinates to screenspace.
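The mapping that matrix encodes boils down to a scale and offset from the [-1, 1] NDC range into pixel coordinates; here's a scalar sketch (the y flip and the names are my assumptions):

// NDC [-1, 1] -> pixel coordinates.  Scaling by (width - 1)/(height - 1)
// keeps the results inside the framebuffer.
void NdcToScreen(float aNdcX, float aNdcY,
                 int aWidth, int aHeight, int aOffsetX, int aOffsetY,
                 float& aScreenX, float& aScreenY)
{
    aScreenX = (aNdcX * 0.5f + 0.5f) * (aWidth - 1) + aOffsetX;
    aScreenY = (1.0f - (aNdcY * 0.5f + 0.5f)) * (aHeight - 1) + aOffsetY;   // screen y grows downward
}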

3) In the VertexEngine class, I updated the 'ProcessVertex' function such that three extra steps are performed at the end:
- multiply the world-view transformed vertex by the projection matrix to perform the linear portion of the perspective projection
- divide the result by the homogeneous W coordinate (which holds the view-space Z after the projection multiply), resulting in normalized device coordinates
- transform the NDCs to screenspace by multiplying by the viewport matrix created in step 2 above
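Condensed into code, the tail end of ProcessVertex does something like this (Vec4 and the multiply helper are stand-ins of mine; the real code uses the project's Matrix4x4 class):

// Stand-in types: the project's math library has its own Matrix4x4, so treat
// Vec4 and MulMatVec as illustrative only.
struct Vec4 { float x, y, z, w; };

static Vec4 MulMatVec(const float m[4][4], const Vec4& v)
{
    Vec4 r;
    r.x = m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w;
    r.y = m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w;
    r.z = m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w;
    r.w = m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w;
    return r;
}

Vec4 ProjectVertex(const float aProj[4][4], const float aViewport[4][4],
                   const Vec4& aViewPos)
{
    Vec4 clip = MulMatVec(aProj, aViewPos);     // 1) projection multiply
    float invW = 1.0f / clip.w;                 // 2) perspective divide
    Vec4 ndc;
    ndc.x = clip.x * invW;
    ndc.y = clip.y * invW;
    ndc.z = clip.z * invW;
    ndc.w = 1.0f;
    return MulMatVec(aViewport, ndc);           // 3) NDC -> screenspace
}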

After adding the appropriate calls to my test application, I get the following (there is a slight x-axis rotation going on, so it's from a front-top view)...



The remaining tasks for the vertex stage are culling and frustum clipping.

Clapfoot

 

... view transform

The next step was to add a view transformation to the vertex stage. I decided to go with a right-handed view frame to avoid the extra reflection. I added a "LookAt" function to the renderer interface to generate the view matrix and used this to transform each vertex from world space to view space.

The following is the prototype for this function:

int SetLookAt(Vector3* apEye, Vector3* apLookAt, Vector3* apUp);
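The construction inside is the usual look-at recipe; here's a sketch with my own stand-in types (the real code uses the project's Vector3/Matrix4x4 classes):

#include <math.h>

struct Vec3
{
    float x, y, z;
    Vec3(float ax = 0, float ay = 0, float az = 0) : x(ax), y(ay), z(az) {}
};

static Vec3  Sub(const Vec3& a, const Vec3& b)   { return Vec3(a.x-b.x, a.y-b.y, a.z-b.z); }
static float Dot(const Vec3& a, const Vec3& b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  Cross(const Vec3& a, const Vec3& b)
{
    return Vec3(a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x);
}
static Vec3  Normalize(const Vec3& v)
{
    float len = sqrtf(Dot(v, v));
    return Vec3(v.x / len, v.y / len, v.z / len);
}

void BuildLookAt(const Vec3& aEye, const Vec3& aLookAt, const Vec3& aUp, float m[4][4])
{
    // Right-handed: the camera looks down -Z, so the view frame's Z axis
    // points from the look-at target back toward the eye.
    Vec3 zAxis = Normalize(Sub(aEye, aLookAt));
    Vec3 xAxis = Normalize(Cross(aUp, zAxis));
    Vec3 yAxis = Cross(zAxis, xAxis);

    // Rotation (rows are the view frame axes) combined with the translation
    // that moves the eye to the origin.
    float rows[4][4] = {
        { xAxis.x, xAxis.y, xAxis.z, -Dot(xAxis, aEye) },
        { yAxis.x, yAxis.y, yAxis.z, -Dot(yAxis, aEye) },
        { zAxis.x, zAxis.y, zAxis.z, -Dot(zAxis, aEye) },
        { 0.0f,    0.0f,    0.0f,     1.0f             },
    };
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            m[r][c] = rows[r][c];
}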

The following screenshot shows a diagonal view of the scene. I created a cube to get a better visual sense of the view transformation.


Clapfoot

 

.. world transforms & framebuffer mgmt

The next step was to add world transformations. I added a world matrix to my Renderer class, along with rotate, translate, and scale functions similar to those of OpenGL. When the draw function is called, the Renderer's world matrix is copied down into the VertexEngine, and the VertexEngine multiplies every incoming vertex position by it.
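As an illustration, the rotate case boils down to building an axis-angle rotation matrix and concatenating it onto the current world matrix; here's a sketch of just the matrix-building part (row-major layout and radians are my assumptions):

#include <math.h>

// Axis-angle rotation matrix (the kind of thing an OpenGL-style WorldRotate
// would concatenate onto the current world matrix).  Angle in radians; the
// axis does not need to be pre-normalized.
void BuildRotation(float aAngle, float aX, float aY, float aZ, float m[4][4])
{
    float len = sqrtf(aX * aX + aY * aY + aZ * aZ);
    float x = aX / len, y = aY / len, z = aZ / len;

    float c = cosf(aAngle);
    float s = sinf(aAngle);
    float t = 1.0f - c;

    float rows[4][4] = {
        { t*x*x + c,   t*x*y - s*z, t*x*z + s*y, 0.0f },
        { t*x*y + s*z, t*y*y + c,   t*y*z - s*x, 0.0f },
        { t*x*z - s*y, t*y*z + s*x, t*z*z + c,   0.0f },
        { 0.0f,        0.0f,        0.0f,        1.0f },
    };
    for (int r = 0; r < 4; ++r)
        for (int col = 0; col < 4; ++col)
            m[r][col] = rows[r][col];
}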

I also added state for the clear colour and functionality for clearing the draw buffer (only a single draw buffer is maintained). The draw buffers are currently maintained in the Platform module, so they translate to SDL surfaces.
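Since the draw buffer is just an SDL surface at the moment, the clear can lean on SDL directly; a minimal sketch (the parameters stand in for what is really member state):

#include <SDL.h>

// Clears the whole draw surface to the stored clear colour.
int ClearScreen(SDL_Surface* apDrawSurface, Uint32 aClearColour)
{
    // Passing NULL for the rect fills the entire surface.
    return SDL_FillRect(apDrawSurface, NULL, aClearColour);
}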

Here is a snippet of my Renderer interface currently:

int Init();
int Update();
int SetVertexBuffer(Vector3* mpPosition, uint32 numVertices);
int DrawPrimitives();

/* World transform functions */
int WorldIdentity();
int WorldRotate(float32 aAngle, float32 aX, float32 aY, float32 aZ);
int WorldTranslate(float32 aX, float32 aY, float32 aZ);
int WorldScale();

/* Framebuffer operations */
int SetClearColour(uint32 aClearColour);
int ClearScreen();

With this in place, I was able to start doing some visual testing. At this point, no view or projection matrices are in place, so all I get is an orthographic projection that maps the x, y positions directly to x, y screen coordinates. I added the necessary Renderer calls to my test application to put up a rotating triangle. The result is...




Clapfoot

 

Initial class/file organization

...

The next step was to plan out the organization of my project. I wanted to separate the project into an application and a graphics library, and I wanted the interface to the graphics library to be similar to a simplified version of OpenGL. The idea (again, not to plan too far ahead) is to be able to write different test applications to exercise different capabilities of the renderer later on.

The next step was to come up with some initial classes. The most logical way to do this to me was to create classes loosely representing a rasterization rendering pipeline. As such, I initially created the following classes:

1) Renderer - interfaces to the application, top level management of data flow through the pipeline, basic framebuffer management, basic state management
2) VertexEngine - vertex stage processing
3) PixelEngine - pixel stage processing

I didn't want to go overboard with the object oriented side of things (not really my strength anyways), so I just shoved a lot of stuff into Renderer for now. I plan on breaking down the classes further later on (e.g. create a Framebuffer class).

I wanted the management of data flow between the app, the vertex engine, and the pixel engine to resemble the passing of data between rendering stages when using modern-day programmable shaders. I literally created structs in my VertexEngine/PixelEngine representing the IN/OUT data structs passed between shader stages. I added a "process stage" function in each stage that receives a stream of these data structs, and broke it down into a sub-function for processing an individual data struct (i.e. processing a single vertex or a single fragment).
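To give a flavour of what I mean, the structs and stage entry points look something like this (the field lists and names here are illustrative, not the exact ones in my code):

typedef unsigned int uint32;               // stand-in for the project's typedef
struct Vector3 { float x, y, z; };         // stand-in for the math library class

// Illustrative IN/OUT structs mirroring what gets passed between shader stages.
struct VertexIn
{
    Vector3 mPosition;        // object-space position straight from the vertex buffer
};

struct VertexOut              // doubles as the pixel stage's input
{
    Vector3 mScreenPosition;  // position after transformation to screenspace
};

class VertexEngine
{
public:
    // "Process stage" entry point: walks a stream of input structs and hands
    // each one to the per-element sub-function.
    int ProcessStage(const VertexIn* apIn, VertexOut* apOut, uint32 aCount)
    {
        for (uint32 i = 0; i < aCount; ++i)
            ProcessVertex(apIn[i], apOut[i]);
        return 0;
    }

private:
    int ProcessVertex(const VertexIn& aIn, VertexOut& aOut)
    {
        // ...transform one vertex (world/view/projection/viewport)...
        aOut.mScreenPosition = aIn.mPosition;   // placeholder
        return 0;
    }
};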

The last thing I wanted to touch upon was abstracting away the OS/platform. I created a platform module that handles all platform-specific functionality. For the most part, it's responsible for creating a window, handling events, and filling in pixel colours in that window. I've opted to use SDL to implement this module, but it could easily be replaced with Win32, X, GTK, etc. As a side note, I've temporarily been using a library called SDL_draw (to draw points and lines) as a means to visualize vertex transformations early on in the project until I get scan conversion going.

Clapfoot

 

Starting point...

To put the timeline into perspective, I'll probably be putting in 1-2 hours per day on this project on average, maybe more if my daily schedule permits.

OK, so I actually started writing this renderer several days ago, and I think I've got a fairly good starting point.

A few general points on the platform:
* Cross platform C++ code
* SDL for windowing
* Visual Studio C++ 2005 IDE/compiler

Progress thus far...

I began with a fairly good idea of the kind of framework this renderer would sit in. I didn't want to deal with anything lower level than filling in the colour of pixels in a window, so I opted to use SDL and its windowing/surface filling functionality. It also has the added benefit of being cross platform (not to get ahead of myself or anything).

The next thing I needed was a good math library. I didn't want to write a new math library from scratch (boring), but I also didn't want to take the easy way out. The compromise was to rip out the math code/pseudo-library I had written in a previous failed attempt at an OpenGL game. The code was by no means a complete math library (it only had classes for Vector2, Vector3, and Matrix4x4), but it was a decent starting point and allowed me to bypass the boring early stages of writing a math library.

To be continued...

Clapfoot

 

The plan...

So for the longest time I've been trying to find time to write a software 3D renderer "for fun", and I finally have the time to start such a project. The main goal of this project will be to solidify my knowledge of fundamental rasterization rendering techniques. Off the top of my head, the basic subjects I want to cover are:

1) Transformation
- world
- view
- clipping
- culling
- projection
- viewport

2) Rasterization
- scan conversion
- depth-buffer
- diffuse lighting
- texture mapping

Clapfoot
