Project vertices to raster space on the CPU



I am doing CPU rasterization and I would like to manually transform triangle vertices all the way from model space to raster space (where they directly correspond to pixels in the image). Currently I just multiply them by an MVP matrix. Which coordinate space exactly am I in at that point? The vertices should then be equal to the vertices that usually leave the vertex shader. But the rendering pipeline performs some steps after that, which I think I now have to do manually. What are they exactly? Division by the w coordinate, the viewport transform, or something else? I would appreciate an answer with specific steps. Thanks.

for (Uint32 i = 0; i < mesh.vertexCount; ++i)
{
    mesh.vertices[i].position = modelViewProjection * mesh.vertices[i].position;  // I am using glm
    // What else?
}
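For reference, after the MVP multiply the vertices are in clip space; the two steps the fixed-function pipeline does next are the perspective divide (clip space to NDC) and the viewport transform (NDC to window/raster coordinates). A minimal sketch of those two steps, using hypothetical plain structs instead of glm and assuming OpenGL's [-1, 1] depth convention:

```cpp
// Hypothetical minimal clip-space vertex; in the code above this would be
// the glm::vec4 produced by modelViewProjection * position.
struct Vec4 { float x, y, z, w; };

struct RasterPos { float x, y, z; };

// Steps the pipeline performs after the vertex shader:
// 1) perspective divide by w (clip space -> NDC, visible points end up in [-1, 1])
// 2) viewport transform (NDC -> raster/window coordinates)
RasterPos clipToRaster(Vec4 clip, float viewportW, float viewportH)
{
    // 1) Perspective divide.
    float xNdc = clip.x / clip.w;
    float yNdc = clip.y / clip.w;
    float zNdc = clip.z / clip.w;

    // 2) Viewport transform: map x from [-1, 1] to [0, width],
    //    y from [-1, 1] to [0, height] (flipped so +y points down in the image),
    //    z from [-1, 1] to [0, 1] for the depth buffer (OpenGL convention).
    RasterPos r;
    r.x = (xNdc + 1.0f) * 0.5f * viewportW;
    r.y = (1.0f - yNdc) * 0.5f * viewportH;
    r.z = (zNdc + 1.0f) * 0.5f;
    return r;
}
```

Note that interpolation across the triangle still needs the original 1/w per vertex, so in practice you keep w around rather than discard it after the divide.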



Thanks a lot for the explanation. Your SW renderer is especially helpful.

I have a question about triangle clipping. Why do we have to do it even when using the half-space triangle rasterization algorithm? Is the problem with vertices that are behind the near plane, or is it always just a matter of performance?

Edited by pseudomarvin


Regarding clipping, imagine the following situations:

1. Triangle is rasterized

Triangle with vertices - (-0.5, -0.5, 0.0) (0.5, -0.5, 0.0), (0.0, 0.5, 0.0)

For this scenario, nothing is clipped. Assuming the screen resolution is 800x600, you are rasterizing the triangle between (200, 150) and (600, 450) on screen. Each pixel in that area is tested, and you do pixel processing for it.

2. Triangle is completely clipped

Triangle with vertices - (-10.0, 0.0, 0.0) (-2.0, 0.0, 0.0) (-4.0, 1.0, 0.0)

For this scenario, the triangle doesn't intersect our screen at all, so we have to clip it: with our 800x600 screen it would be rasterized between (-3600, 300) and (-400, 600). Every pixel in this area is out of bounds of our screen, so we can't rasterize it (either we would waste cycles looping through pixels outside the viewport, or we would attempt to write out of bounds, resulting in invalid memory access and a crash of the application).

3. Triangle is partially clipped

Triangle with vertices - (-2.0, 0.0, 0.0) (0.0, 0.0, 0.0) (0.0, 1.0, 0.0)

For this scenario there is a problem similar to the previous one: the projected triangle is between (-400, 300) and (400, 600), so either we waste cycles or end up with invalid memory accesses. There are actually two solutions:

1. We could simply clamp the loop to run between (0, 300) and (400, 600). This is NOT implemented in my source. It is a correct solution, but I wanted to force myself to understand how clipping algorithms work. (In some scenarios clamping can even be the more viable option: a huge triangle covering just a few pixels on screen means we avoid computing a huge empty area, e.g. (-100, 0, 0) (0.1, 0, 0) (-50, 2, 0).)

2. Doing the clipping. In the described scenario, the clipping actually doesn't create one triangle but two (and we need to rasterize both); the triangles are:
(-1.0, 0.0, 0.0) (0.0, 0.0, 0.0) (0.0, 1.0, 0.0)
(-1.0, 0.0, 0.0) (0.0, 1.0, 0.0) (-1.0, 0.5, 0.0)

We already know that both of the clipped triangles are within the screen boundaries; we just need to rasterize both of them. Compared to solution 1 you do extra clipping calculations, but on the other hand you only loop over the actual area of the triangles. In the worst case (clipping against all frustum planes), I think a single triangle can be split into up to 7 triangles.
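Solution 1 (clamping the raster loop to the screen) can be sketched as follows; the bound variables are hypothetical names for the projected triangle's bounding box in pixels:

```cpp
#include <algorithm>

// Clamp the triangle's screen-space bounding box to the viewport, so the
// raster loop only visits valid pixels (solution 1: no geometric clipping).
void clampBounds(int& minX, int& minY, int& maxX, int& maxY,
                 int width, int height)
{
    minX = std::max(minX, 0);
    minY = std::max(minY, 0);
    maxX = std::min(maxX, width - 1);
    maxY = std::min(maxY, height - 1);
}
```

For the partially clipped triangle above, this turns the loop over (-400, 300) to (400, 600) into a loop over (0, 300) to (400, 599) on an 800x600 screen.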

Vertices behind the near plane should actually be discarded at the pixel level by the depth test (at line 833 in device.c you can see my solution). I basically check whether a depth buffer is attached to the framebuffer, and if so, do a lequal comparison.

Also, in any good rendering engine, most of the triangles that would be fully clipped are already removed by a frustum-culling algorithm, so in general the most problematic case is a triangle that intersects the clip-space boundary.


Thanks again for a very detailed answer. I have been trying to implement the first solution you proposed (without a dedicated clipping algorithm), with mixed results. Looping only over valid screen-space coordinates works fine, but it fails when triangles are behind the camera (partially or completely). I tried to solve this by interpolating the vertex NDC for every pixel in the triangle and doing a clipping test in screen space: I only let pixels whose interpolated NDC x, y, z coordinates lie in [-1, 1] be rasterized. However, I wasn't sure how to do the interpolation. According to the spec (https://www.opengl.org/registry/doc/glspec44.core.pdf, page 427), formula 14.9 is used for vertex attribute interpolation, but depth is interpolated using formula 14.10. So I tried both versions.

//w0, w1, w2 are the barycentric weights of the vertices.

// Version 1 (formula 14.9 for x,y)
float x_ndc = (w0*v0_NDC.x/v0_NDC.w + w1*v1_NDC.x/v1_NDC.w + w2*v2_NDC.x/v2_NDC.w) /
              (w0/v0_NDC.w + w1/v1_NDC.w + w2/v2_NDC.w);
float y_ndc = (w0*v0_NDC.y/v0_NDC.w + w1*v1_NDC.y/v1_NDC.w + w2*v2_NDC.y/v2_NDC.w) /
              (w0/v0_NDC.w + w1/v1_NDC.w + w2/v2_NDC.w);
float z_ndc = w0 * v0_NDC.z + w1 * v1_NDC.z + w2 * v2_NDC.z;

// Version 2 (formula 14.10 for x,y)
float x_ndc = w0 * v0_NDC.x + w1 * v1_NDC.x + w2 * v2_NDC.x;
float y_ndc = w0 * v0_NDC.y + w1 * v1_NDC.y + w2 * v2_NDC.y;
float z_ndc = w0 * v0_NDC.z + w1 * v1_NDC.z + w2 * v2_NDC.z;

//The clipping + depth test always looks like this:

if (-1.0f < z_ndc && z_ndc < 1.0f && z_ndc < currentDepth &&
    -1.0f < y_ndc && y_ndc < 1.0f &&
    -1.0f < x_ndc && x_ndc < 1.0f)


Results in gifs: https://imgur.com/a/4N01p

1) Strange things happen when the second cube is behind the camera or when I move inside a cube.

2) Strange artifacts are not visible, but as the camera approaches vertices, they start disappearing. And since this is perspective-correct interpolation of attributes, vertices nearer to the camera have greater weight, so as soon as a vertex gets clipped, that information is interpolated with strong weight into the triangle's pixels.

Is all of this expected or have I done something wrong?

I am starting to think it might have been easier to implement Sutherland-Hodgman in the first place :D.

Edited by pseudomarvin


Well, it seems that you can't really do rasterization correctly without clipping against the z = 0 plane in camera or clip space, unless you do the rasterization itself in homogeneous coordinates.


NDC -> Viewport:

* Transform 'x' from (-1, 1) to (0, viewportWidth)

* Transform 'y' from (-1, 1) to (0, viewportHeight)

This fails if your viewport doesn't start at (0, 0), as I found out: I had two half-screens next to each other, and the right one should map x from (-1, 1) to (viewportWidth/2, viewportWidth).


Thanks, I haven't dealt with dual screens much up to this point (I'm still using a single display; I feel more focused that way). Could it also depend on how your displays are set up in the system, and of course on how your driver treats the screens (e.g. whether they are set up as two separate surfaces vs. one "continuous" surface)?

Anyway, to be perfectly correct you should actually set it up as:

• For X (-1, 1) to (viewportStart.x, viewportStart.x + viewportSize.x)
• For Y (-1, 1) to (viewportStart.y, viewportStart.y + viewportSize.y)

Which is actually what OpenGL does with glViewport, and what D3D12 does in its D3D12_VIEWPORT structure.
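A sketch of that generalized mapping, with assumed names for the viewport fields (ignoring the depth-range part of the viewport):

```cpp
struct Viewport { float x, y, w, h; };  // origin and size, as in glViewport / D3D12_VIEWPORT

// Map NDC in [-1, 1] to window coordinates for a viewport that need not
// start at (0, 0).
void ndcToWindow(float xNdc, float yNdc, const Viewport& vp,
                 float& xOut, float& yOut)
{
    xOut = vp.x + (xNdc + 1.0f) * 0.5f * vp.w;
    yOut = vp.y + (yNdc + 1.0f) * 0.5f * vp.h;
}
```

For the right half-screen case described above, a viewport of {400, 0, 400, 600} maps x from (-1, 1) to (400, 800) as desired.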
