
### #Khatharr

Posted 22 December 2012 - 11:54 PM

If you understand linear algebra then the basic idea is that there is no actual 3D space. There are just vertices that represent primitives based on their order and the primitive type (the most common primitive is the triangle: 3 vertices). The vertices are stored in "model space", which is what you're probably working with in Blender when you make a model. The pipeline applies 3 matrix transforms to your model's vertices to get them into position for rasterization (conversion to pixel colors). The first changes the model coordinates into "world" coordinates, and also applies scaling and rotation. The second moves and rotates the world so that the "camera" sits at the origin facing along the z axis. (Note that it's moving the world rather than the camera.) The final transform corrects for perspective: the viewing volume is a sort of rectangular cone (a frustum) whose near plane is smaller than its far plane, and this transform stretches the space so that the near and far planes become the same size. That's what makes nearer objects appear larger than distant ones. Once this is done the rasterizer maps pixel positions onto the near plane and then goes through all the primitives, checking whether each pixel intersects the primitive being rendered. If it does, the point of intersection determines the color to place in that pixel.
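
A minimal sketch of those transforms in plain Python (the helper names `mat_vec`, `translation`, and `perspective` are made up for illustration; scaling, rotation, and a real projection matrix are omitted for brevity):

```python
def mat_vec(m, v):
    # Multiply a 4x4 matrix (row-major, list of rows) by a 4-component vector.
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def translation(tx, ty, tz):
    # A 4x4 matrix that moves points by (tx, ty, tz).
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

# World transform: place the model at x = 2.
world = translation(2, 0, 0)
# View transform: the camera sits at z = -5, so the *world* moves +5 along z.
view = translation(0, 0, 5)

v_model = [1, 1, 0, 1]             # a vertex in model space (w = 1)
v_world = mat_vec(world, v_model)  # -> [3, 1, 0, 1]
v_view = mat_vec(view, v_world)    # -> [3, 1, 5, 1]

def perspective(v, d=1.0):
    # Crude perspective: divide x and y by depth, so distant points shrink.
    x, y, z, _ = v
    return [x * d / z, y * d / z, z, 1]

print(perspective(v_view))  # x and y scaled by 1/z -> [0.6, 0.2, 5, 1]
```

The division by z is the whole trick: a vertex twice as far away lands half as far from the screen's center, which is exactly the "nearer objects appear larger" effect described above.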

Matrices are used to describe these three transforms because, once they are set, they can be multiplied together into a single transform that is applied to every vertex of the model, greatly reducing the number of calculations the GPU must perform.
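
You can see the payoff in a few lines. This sketch (helper names are again hypothetical) combines two transforms into one matrix up front, so each vertex needs one matrix-vector multiply instead of one per transform:

```python
def mat_mul(a, b):
    # 4x4 row-major matrix product: result = a * b.
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

world, view = translation(2, 0, 0), translation(0, 0, 5)

# Built once per model, not once per vertex:
combined = mat_mul(view, world)

v = [1, 1, 0, 1]
step_by_step = mat_vec(view, mat_vec(world, v))  # two multiplies per vertex
all_at_once = mat_vec(combined, v)               # one multiply per vertex
assert step_by_step == all_at_once               # identical result
```

With thousands of vertices per model, folding three matrices into one before the per-vertex loop is a big win.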

There are a lot of optimizations in most renderers. The most common are probably viewport clipping, depth buffering and face culling. Viewport clipping simply means that anything outside the viewport gets ignored, which is sensible because you wouldn't be able to see it anyway. Face culling is slightly more complex, but modelers should definitely know about this one:

When you create a model you need to know whether to use clockwise or widdershins (counter-clockwise) "vertex winding". What that means is that the vertices of every primitive should run either clockwise or widdershins when viewed from the outside of the model. Once a model is positioned for rendering, the culling test gets applied: if your program uses clockwise culling, any primitive whose verts appear in widdershins order from the camera position gets dropped, since it's assumed to be facing away from the camera. For instance, imagine a single triangle with verts A, B and C, defined in clockwise order starting at the top. Once the triangle is positioned relative to the camera, if the vertices still appear in clockwise order from that perspective then the triangle is facing the camera. If they've become widdershins then the triangle is facing away and shouldn't be rendered. This saves a huge amount of work for the GPU. If you're making models for someone else, ask them which winding order to use. If you're making them just for yourself, pick an order and stick with it for every model in the project (which one you choose doesn't matter).
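
The culling test itself is just a cross product on the screen-space positions. A sketch (function names are made up; real GPUs do this in hardware, and the sign convention depends on whether y points up or down):

```python
def signed_area(a, b, c):
    # Twice the signed area of triangle abc in 2D screen space.
    # With y pointing up: positive = widdershins order, negative = clockwise.
    return (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])

def is_front_facing(a, b, c, front_winding="cw"):
    # Keep the triangle only if its winding matches the chosen front face.
    area = signed_area(a, b, c)
    return area < 0 if front_winding == "cw" else area > 0

# A triangle with verts A, B, C defined clockwise starting at the top (y up):
A, B, C = (0, 1), (1, -1), (-1, -1)
print(is_front_facing(A, B, C))  # True: still clockwise, faces the camera
print(is_front_facing(A, C, B))  # False: winding reversed, gets culled
```

Reversing any two vertices flips the sign, which is why a consistent winding across the whole model matters: one flipped triangle shows up as a "hole" when culling is on.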

Finally, the depth buffer doesn't really have to do with modelling, but it's a good thing to know about if you're involved with rendering. If you have a 640x480 viewport that you're rendering to, you'll probably have a 640x480 depth buffer lurking in the background. When the rasterizer finds a collision between a pixel and a primitive, it stores the color value in the color buffer and the z depth of the collision in the depth buffer. Before it does this, though, it checks that z value against the value already stored for that pixel in the depth buffer. If the comparison indicates that the new color would be 'underneath' the existing one, the pixel doesn't get rendered. Because of this it's best to render 3D scenes from front to back, to avoid as many color operations as possible (nearer objects will 'hide' rather than 'overwrite' the farther ones). The exception is transparency: a transparent object needs to be rendered after the objects behind it so that it can blend with their colors. So keep in mind that transparency in a model means a bit more work for the renderer. It's not a huge hit, but you don't want to get carried away, and let your developer know if you use transparency in a model.
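
The per-pixel test is simple enough to sketch. Everything here is hypothetical (the GPU does this in dedicated hardware); smaller z means nearer to the camera, and the depth buffer starts out filled with "infinitely far":

```python
W, H = 4, 3
FAR = float("inf")
color_buffer = [[None] * W for _ in range(H)]
depth_buffer = [[FAR] * W for _ in range(H)]

def write_pixel(x, y, z, color):
    # Only write if the new fragment is nearer than what's already there.
    if z < depth_buffer[y][x]:
        depth_buffer[y][x] = z
        color_buffer[y][x] = color
        return True
    return False  # fragment is 'underneath' the existing one: skipped

write_pixel(1, 1, 5.0, "far_object")   # first write always lands
write_pixel(1, 1, 2.0, "near_object")  # nearer: replaces the far object
write_pixel(1, 1, 9.0, "hidden")       # farther: rejected, no color work done
print(color_buffer[1][1])              # -> near_object
```

Notice that the "hidden" write is rejected before any color work happens, which is exactly why drawing front-to-back is cheaper: later, farther fragments fail the test early.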
