3D algorithm

Take your mind off of GDI for now and completely forget it exists, as you won't be needing it if you want to build your renderer from the ground up (unless you want to use it to present your final image to a window).

Let's imagine this situation:
You have a 3D model you want to render; let's make it a cube to keep things simple. Let's assume your model is made up entirely of vertices (we're leaving indices behind for the time being). To make things even simpler we're going to assume that your vertices are just plain old positions (being 3D vectors); we're not going to worry about things such as normals, UV coords, colors, etc., just positions.

Now let's say you want to render this model at a certain position with a certain rotation and a certain scale. You also want all the pixels occupied by this model on-screen to have a certain color; let's take red, for example.

Let's have a look at the requirements to realize all of this. You'll need the following (I'll give minimal code sketches for each of these right after the list):

  1. A data structure which can contain your model's info. In our simple setting this can be a plain old array containing vertices, and as mentioned our vertices are just positions for now. We're going to assume each consecutive group of 3 vertices makes up a triangle.
  2. A way of representing where you want your model to end up relative to the world center, and which rotation and scale it should have. This is your world transformation, as you have probably already figured out by yourself. It's a single matrix containing all 3 of these aspects at once.
  3. A way of representing where you are in the world and how you're looking at it. This is mostly abstracted away behind the concept of a camera: a data structure holding its own world position, a look-at vector and a vector defining which way is up. These 3 vectors will be used to generate our second important transformation matrix, the view transformation.
  4. A way of projecting what we 'see' through our camera onto our final image. Projections can be done in a lot of ways to achieve different results, but what you probably want is a perspective projection. Such a perspective projection is defined mostly by a field of view (FoV) and an aspect ratio. This information will be stored in our final important transformation matrix, the projection transformation. After a division by the resulting w component (the perspective divide), this transformation maps a 3D position to a 2D position where each component ranges between [-1, 1].
  5. A way of storing our final image in full color. This can be done by creating a texture data structure, which is basically just a 2D array of values with a resolution of your choosing (this will be the resolution of your final image). How wide these values are, and how many of them you need to define a single color element, is determined by the color format you're going to use. For simple applications the R8G8B8 format, which defines 3 color channels per color element (red, green and blue) at 8 bits (1 byte) each, will do just fine. You'll be creating such a texture which will act as a screen buffer for you to render to.
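
Here's a minimal sketch of what points 1 and 5 could look like in C++. All the names here are just ones I made up for illustration, not some established API:

[source lang="cpp"]#include <cstdint>
#include <vector>

// A plain old position (point 1).
struct Vec3 { float x, y, z; };

// Our model is just an array of positions; every 3 consecutive
// entries make up one triangle.
struct Model {
    std::vector<Vec3> vertices;
};

// One R8G8B8 color element: 3 channels of 1 byte each (point 5).
struct Color { std::uint8_t r, g, b; };

// The screen buffer: a 2D array of color elements, stored row by row.
struct ScreenBuffer {
    int width, height;
    std::vector<Color> pixels; // width * height entries

    ScreenBuffer(int w, int h) : width(w), height(h), pixels(w * h, Color{0, 0, 0}) {}

    void setPixel(int x, int y, Color c) {
        if (x >= 0 && x < width && y >= 0 && y < height)
            pixels[y * width + x] = c;
    }
};
[/source]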
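
For point 2, the world transformation, a sketch could look like this. I'm assuming column vectors here (so the combined matrix is translation * rotation * scale), and to keep it short I'm only rotating around the Y axis:

[source lang="cpp"]#include <cmath>

// A 4x4 matrix, row-major, used with column vectors: result = M * v.
struct Mat4 {
    float m[4][4] = {};
};

Mat4 operator*(const Mat4& a, const Mat4& b) {
    Mat4 r;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

Mat4 identity() {
    Mat4 r;
    for (int i = 0; i < 4; ++i) r.m[i][i] = 1.0f;
    return r;
}

// World transformation: position, rotation and scale in one matrix.
Mat4 makeWorld(Vec3 position, float angleY, float scale) {
    Mat4 t = identity();                  // translation part
    t.m[0][3] = position.x;
    t.m[1][3] = position.y;
    t.m[2][3] = position.z;

    Mat4 r = identity();                  // rotation around the Y axis
    float c = std::cos(angleY), s = std::sin(angleY);
    r.m[0][0] = c;  r.m[0][2] = s;
    r.m[2][0] = -s; r.m[2][2] = c;

    Mat4 sc = identity();                 // uniform scale
    sc.m[0][0] = sc.m[1][1] = sc.m[2][2] = scale;

    return t * r * sc; // scale first, then rotate, then translate
}
[/source]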
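
Point 3, the camera, holds the 3 vectors mentioned above and turns them into the view transformation. This is the classic 'look-at' construction:

[source lang="cpp"]Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

struct Camera {
    Vec3 position; // where we are in the world
    Vec3 lookAt;   // the point we're looking at
    Vec3 up;       // which way is up, usually (0, 1, 0)
};

// Build the view transformation from the camera's 3 vectors
// (right-handed convention: the camera looks down its negative Z axis).
Mat4 makeView(const Camera& cam) {
    Vec3 forward = normalize(sub(cam.lookAt, cam.position));
    Vec3 right   = normalize(cross(forward, cam.up));
    Vec3 up      = cross(right, forward);

    Mat4 v = identity();
    v.m[0][0] = right.x;    v.m[0][1] = right.y;    v.m[0][2] = right.z;
    v.m[1][0] = up.x;       v.m[1][1] = up.y;       v.m[1][2] = up.z;
    v.m[2][0] = -forward.x; v.m[2][1] = -forward.y; v.m[2][2] = -forward.z;
    v.m[0][3] = -dot(right, cam.position);
    v.m[1][3] = -dot(up, cam.position);
    v.m[2][3] = dot(forward, cam.position);
    return v;
}
[/source]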
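
And point 4, the perspective projection, built from the FoV and aspect ratio. I'm adding near and far clipping distances, which I didn't mention above but which every perspective projection needs; this matches the OpenGL-style [-1, 1] convention:

[source lang="cpp"]// Perspective projection from a vertical FoV (in radians), an aspect
// ratio (width / height) and near/far clipping plane distances.
Mat4 makeProjection(float fovY, float aspect, float zNear, float zFar) {
    float f = 1.0f / std::tan(fovY * 0.5f);
    Mat4 p; // starts as all zeros
    p.m[0][0] = f / aspect;
    p.m[1][1] = f;
    p.m[2][2] = (zFar + zNear) / (zNear - zFar);
    p.m[2][3] = (2.0f * zFar * zNear) / (zNear - zFar);
    p.m[3][2] = -1.0f; // copies -z into w, which enables the perspective divide
    return p;
}
[/source]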


OK, now that we've defined our requirements, the only thing we need is an overview of how we're going to use them to get our final image from our model.
I'll provide you with a simplified overview of what you should do:

  • Tell our renderer that we want to render to our screen buffer (see #5 in our previous overview). The renderer could've created this screen buffer itself, or you could create it yourself and pass it on (e.g. renderer->setRenderTarget(some_texture)).
  • We now want to get our screen coordinates for each vertex in our model, which happens in a few steps. I'm going to give you some pseudo-code to explain the process:

[source lang="plain"]for each vertex in our model do
    position = transform(vertex_position, world_transformation) // local space -> world space
    position = transform(position, view_transformation)         // world space -> view space (camera or eye space)
    position = transform(position, projection_transformation)   // view space -> clip space
    position = position / position.w                            // perspective divide: clip space -> [-1, 1] screen space
end
[/source]
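
In C++, using the types sketched earlier, that loop could look like this (Vec4 and transform are once again my own made-up helpers):

[source lang="cpp"]struct Vec4 { float x, y, z, w; };

// Multiply a 4x4 matrix with a column vector.
Vec4 transform(const Mat4& m, Vec4 v) {
    return {
        m.m[0][0] * v.x + m.m[0][1] * v.y + m.m[0][2] * v.z + m.m[0][3] * v.w,
        m.m[1][0] * v.x + m.m[1][1] * v.y + m.m[1][2] * v.z + m.m[1][3] * v.w,
        m.m[2][0] * v.x + m.m[2][1] * v.y + m.m[2][2] * v.z + m.m[2][3] * v.w,
        m.m[3][0] * v.x + m.m[3][1] * v.y + m.m[3][2] * v.z + m.m[3][3] * v.w
    };
}

std::vector<Vec4> transformVertices(const Model& model, const Mat4& world,
                                    const Mat4& view, const Mat4& projection) {
    std::vector<Vec4> out;
    for (const Vec3& v : model.vertices) {
        Vec4 p = {v.x, v.y, v.z, 1.0f};             // w = 1 for positions
        p = transform(world, p);                    // local -> world space
        p = transform(view, p);                     // world -> view space
        p = transform(projection, p);               // view -> clip space
        p = {p.x / p.w, p.y / p.w, p.z / p.w, p.w}; // perspective divide -> [-1, 1]
        out.push_back(p);
    }
    return out;
}
[/source]

In practice you'd multiply the 3 matrices together once per model and transform every vertex by the combined matrix, but the separate steps make the stages easier to see.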

  • This gives us a bunch of positions of which we only want the first 2 components (X and Y) for our simple setup. As mentioned, X and Y will both be in the range [-1, 1], but this won't do if we want to determine which pixels to write to. To fix this we're going to apply 2 simple transformations (see the first sketch after this overview). The first one transforms our range from [-1, 1] to [0, 1] by applying the formula n = (n + 1) * 0.5. The second one scales the [0, 1] range up to our chosen screen buffer resolution, so it's just a multiplication of the X and Y components by the screen buffer width and height respectively.
  • We now have a bunch of screen coordinates which directly map to pixels in our screen buffer, which means we can now set colors for the pixels we want to write to. We assumed that our vertices are ordered so they make up triangles, so for each group of 3 positions we'll first have to create a triangle. Once we have this triangle we only need to go over its surface to determine which pixels the triangle covers (see the second sketch after this overview). Remember that we just wanted to color everything red, so for all pixels making up our triangle we'll set the red channel to 255 (we're working in R8G8B8!) in our screen buffer.
  • Once you've done this for every group of 3 transformed vertices, your screen buffer will contain the projected image of your cube model. The only thing left to do is to present it to the screen, which is where a library like GDI or D2D can come into play.
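
First sketch, the mapping from the [-1, 1] range to pixel coordinates:

[source lang="cpp"]struct Vec2 { float x, y; };

// Map a position from the [-1, 1] range to pixel coordinates, using
// n = (n + 1) * 0.5 and then scaling by the resolution. Note that
// [-1, 1] Y points up while screen Y usually points down, so depending
// on your conventions you may also want to flip Y here.
Vec2 toPixel(Vec4 p, int width, int height) {
    return { (p.x + 1.0f) * 0.5f * width,
             (p.y + 1.0f) * 0.5f * height };
}
[/source]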
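
Second sketch, filling a triangle with red. This is one common approach (a bounding box walk with edge functions), not the only one:

[source lang="cpp"]#include <algorithm>

// Signed area of the parallelogram spanned by (b - a) and (c - a);
// its sign tells us on which side of the edge a->b the point c lies.
float edge(Vec2 a, Vec2 b, Vec2 c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Walk the pixels in the triangle's bounding box and color the ones
// that lie inside all 3 edges.
void fillTriangleRed(ScreenBuffer& buf, Vec2 a, Vec2 b, Vec2 c) {
    int minX = std::max(0, (int)std::floor(std::min({a.x, b.x, c.x})));
    int maxX = std::min(buf.width - 1, (int)std::ceil(std::max({a.x, b.x, c.x})));
    int minY = std::max(0, (int)std::floor(std::min({a.y, b.y, c.y})));
    int maxY = std::min(buf.height - 1, (int)std::ceil(std::max({a.y, b.y, c.y})));

    float area = edge(a, b, c);
    if (area == 0.0f) return; // degenerate triangle, nothing to fill

    for (int y = minY; y <= maxY; ++y) {
        for (int x = minX; x <= maxX; ++x) {
            Vec2 p = {x + 0.5f, y + 0.5f}; // sample at the pixel center
            float w0 = edge(a, b, p), w1 = edge(b, c, p), w2 = edge(c, a, p);
            // Inside if all edge values share the sign of the triangle's area.
            bool inside = (area > 0.0f) ? (w0 >= 0 && w1 >= 0 && w2 >= 0)
                                        : (w0 <= 0 && w1 <= 0 && w2 <= 0);
            if (inside)
                buf.setPixel(x, y, Color{255, 0, 0}); // red in R8G8B8
        }
    }
}
[/source]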


That about covers it, I think. It could be that I left out a few details or made some errors, but please don't shoot me for that.

EDIT:

I want to make note of some things I left out which were not needed for such an extremely simple example, but which will play a major part once you get further in your renderer. To name a few:
- Usage of a Z-buffer for depth testing (really important for ordering and avoiding overdraw when rendering multiple objects)
- Indices (all kinds of uses, from determining triangle winding order to reducing vertex buffer footprints)
- Materials, lighting, texturing and all that stuff
- Culling and clipping
- Probably a million more things which I can't immediately think of right now
