Look up coordinate transformations, perspective projection, and rasterization. Basically, you'll be reimplementing parts of the GPU pipeline, in the form of a software renderer. Knowledge of linear algebra helps a lot if you want to understand half of the code you're writing.

It typically goes roughly like this:

- transform each vertex to world coordinates, if it's not already done. This is useful if you want multiple instances of the same mesh without duplicating vertices: suppose you want two bunnies side by side. You could duplicate every vertex to make two separate bunnies, or you could reuse the same mesh and simply move each vertex of the second bunny off to the side (translation)

- transform each vertex to camera coordinates (rotating the world around the camera, so that you're facing what you want to face)

- transform each of those vertices to perspective coordinates (so that vertices further away from the camera tend to the point at infinity, giving the illusion of depth)

- from this, work out where each vertex would appear on the screen, in normalized screen coordinates (with coordinates ranging from -1 to +1 in the X and Y dimensions); this is the step where you project your 3D vertices onto the screen

- upscale these vertex locations to your desired resolution (e.g. 1024 × 768)

- for each pixel on the screen, work out which triangle it belongs to (depending on how you defined your triangles, e.g. as a triangle list, a triangle strip, or with indices); this is the rasterization step

- shade the pixel (often by interpolating important data, like normals, from the three vertices of the triangle)

As you can see, it can be a lot of work, and that's a very high-level overview (I ignored the depth and stencil buffers, as well as a lot of other stuff), but it's a good learning exercise. Make sure to take it step by step so that you don't get overwhelmed!
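To make the pipeline above concrete, here's a minimal sketch in Python. All the function names are made up for illustration, and it cuts every corner mentioned above (no rotation in the camera transform, no clipping, no depth buffer, flat shading into a text "framebuffer"), but it walks through the same steps: camera transform, perspective divide, viewport upscale, then per-pixel rasterization with edge functions.

```python
import math

WIDTH, HEIGHT = 80, 40  # tiny "screen" so we can print it as text

def to_camera(v, cam_pos):
    # camera transform: move the world so the camera sits at the origin
    # (a real camera would also rotate; identity rotation assumed here)
    return (v[0] - cam_pos[0], v[1] - cam_pos[1], v[2] - cam_pos[2])

def project(v, fov_deg=90.0):
    # perspective projection: divide by depth, so distant points
    # converge toward the center, giving the illusion of depth
    f = 1.0 / math.tan(math.radians(fov_deg) / 2)
    x, y, z = v
    return (f * x / z, f * y / z)  # normalized coords, roughly -1..+1

def to_screen(p):
    # upscale normalized coordinates to pixel coordinates
    sx = int((p[0] + 1) * 0.5 * (WIDTH - 1))
    sy = int((1 - (p[1] + 1) * 0.5) * (HEIGHT - 1))  # flip Y: screen Y grows downward
    return sx, sy

def edge(a, b, p):
    # signed area test: which side of the edge a->b is point p on?
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize(tri, framebuffer, shade='#'):
    # for each pixel, work out whether it lies inside the triangle
    a, b, c = tri
    for y in range(HEIGHT):
        for x in range(WIDTH):
            w0 = edge(b, c, (x, y))
            w1 = edge(c, a, (x, y))
            w2 = edge(a, b, (x, y))
            # inside if all edge tests agree in sign (handles both windings)
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                framebuffer[y][x] = shade

# one triangle in world space, camera 3 units behind it
triangle = [(-1.0, -1.0, 0.0), (1.0, -1.0, 0.0), (0.0, 1.0, 0.0)]
camera = (0.0, 0.0, -3.0)

fb = [[' '] * WIDTH for _ in range(HEIGHT)]
screen_tri = [to_screen(project(to_camera(v, camera))) for v in triangle]
rasterize(screen_tri, fb)
print('\n'.join(''.join(row) for row in fb))
```

Running it prints a filled triangle made of `#` characters. Swapping the text framebuffer for a pixel buffer, adding a rotation to the camera transform, and keeping a depth value per pixel are natural next steps.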
