# What's the frustum-to-pixels process?


## Recommended Posts

Hiya,

As the subject line asks, what's the general process by which an engine takes the frustum coordinates (this assumes the data has already been transformed into the (-1,-1,-1) to (1,1,1) normalized view volume) and gets all the way to pixels on the screen?

Thanks!

After that you apply the viewport transform. For D3D it looks like this:
```
X = (X + 1) * Viewport.Width  * 0.5 + Viewport.TopLeftX
Y = (1 - Y) * Viewport.Height * 0.5 + Viewport.TopLeftY
Z = Viewport.MinDepth + Z * (Viewport.MaxDepth - Viewport.MinDepth)
```
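For what it's worth, here's the same transform as a small Python sketch (the function and parameter names are my own, not an actual API), assuming the input is in normalized device coordinates (-1..1 for X/Y, and 0..1 for Z, as in D3D):

```python
def viewport_transform(x, y, z, top_left_x, top_left_y,
                       width, height, min_depth, max_depth):
    """Map NDC (x, y, z) to pixel coordinates, D3D-style.

    Note the Y flip: NDC +1 is the top of the view volume, but pixel
    row 0 is the top of the screen, so Y gets mirrored.
    """
    sx = (x + 1.0) * width * 0.5 + top_left_x
    sy = (1.0 - y) * height * 0.5 + top_left_y
    sz = min_depth + z * (max_depth - min_depth)
    return (sx, sy, sz)
```

For example, the NDC origin lands in the middle of an 800x600 viewport: `viewport_transform(0, 0, 0.5, 0, 0, 800, 600, 0.0, 1.0)` gives `(400.0, 300.0, 0.5)`.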

After this you have a screen space coordinate in pixels.

Thanks MJP! Very much obliged. I guess my next Q is...

Once the screen-space coordinates are achieved, what is the general description of the pixel-painting process? I know a lot of things like texture mapping, lighting, and shading (obviously) enter here; I'm simply looking for an overall summary, with or without the math, so I can understand the conceptual logic of the "painting part".

It's a process called scanline rasterization.

EDIT: This generally happens for you automagically if you're talking to a 3D API like Direct3D and there are some additional quirks/implementation details when doing things in hardware, but I'll omit that for brevity. They're likely not useful until you already have an understanding anyways-- it's just some clever cheating that could be a little confusing.
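To make the scanline idea concrete, here's a toy Python sketch for a single 2D triangle: sort the vertices by Y, then for each pixel row intersect that row with the triangle's edges and fill between the two crossings. This is purely illustrative, not how any real GPU implements it:

```python
def scanline_spans(v0, v1, v2):
    """Yield (y, x_start, x_end) pixel spans covering a 2D triangle."""
    # Sort vertices by y so we can walk edges top to bottom.
    a, b, c = sorted([v0, v1, v2], key=lambda v: v[1])

    def edge_x(p, q, y):
        # x where the edge p -> q crosses scanline y (linear interpolation).
        if q[1] == p[1]:
            return p[0]
        t = (y - p[1]) / (q[1] - p[1])
        return p[0] + t * (q[0] - p[0])

    for y in range(int(a[1]), int(c[1])):
        # The "long" edge a -> c bounds one side of every scanline; the
        # other side is a -> b above b's y, and b -> c below it.
        x_long = edge_x(a, c, y + 0.5)
        if y + 0.5 < b[1]:
            x_other = edge_x(a, b, y + 0.5)
        else:
            x_other = edge_x(b, c, y + 0.5)
        x_left, x_right = sorted((x_long, x_other))
        yield (y, int(x_left), int(x_right))
```

For a right triangle with corners at (0,0), (4,0), and (0,4), this yields spans that shrink one pixel per row, tracing out the sloped edge.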

In very over-simplified terms, the hardware will essentially map a triangle's 3 vertices to screen space and then figure out which pixels are "covered" by that triangle. Usually this is done by testing whether the triangle overlaps with a single point at the center of the pixel.

[attachment=7697:IC520311.png]
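That pixel-center coverage test is often expressed with "edge functions": for each of the triangle's three edges, a signed-area expression tells you which side of the edge a point falls on, and a point inside the triangle is on the interior side of all three. A rough Python sketch (it assumes counter-clockwise winding in a Y-up coordinate system; real rasterizers also apply a tie-breaking "top-left" fill rule so a pixel on an edge shared by two triangles is drawn exactly once):

```python
def edge(a, b, p):
    # Twice the signed area of triangle (a, b, p): positive when p is
    # to the left of the directed edge a -> b.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def covers(v0, v1, v2, px, py):
    """True if the pixel center (px + 0.5, py + 0.5) is inside the triangle."""
    p = (px + 0.5, py + 0.5)
    return (edge(v0, v1, p) >= 0 and
            edge(v1, v2, p) >= 0 and
            edge(v2, v0, p) >= 0)
```

Hardware tends to evaluate edge functions like these for many pixels in parallel (in 2x2 quads or larger tiles) rather than literally walking scanlines one at a time.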

For each of those covered pixels, a single instance of the current pixel shader is executed. A pixel shader is a program that runs on the GPU, and its primary job is to figure out what value should be written to the pixel it's assigned to. So a really simple pixel shader might just return a value of (1, 0, 0), which would show up as the color red if put on the screen. However, a more complex pixel shader can sample textures, calculate lighting, apply normal mapping, etc. This is all driven by interpolating values from the triangle's 3 vertices. So for instance, it might interpolate the world-space position across the triangle and use that for lighting calculations. Or the vertices might have texture coordinates, which are used for sampling textures.
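The interpolation described above is typically done with barycentric coordinates: each covered pixel gets three weights saying how close it is to each vertex, and any per-vertex attribute (color, UVs, world position) is blended with those weights. A sketch under those assumptions (helper names are my own; real GPUs additionally make this perspective-correct by dividing attributes by w before interpolating):

```python
def _edge(a, b, p):
    # Twice the signed area of triangle (a, b, p).
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def interpolate(v0, v1, v2, attrs, p):
    """Blend per-vertex attribute tuples attrs = [a0, a1, a2] at point p.

    The barycentric weight for a vertex is the area of the sub-triangle
    opposite it, divided by the whole triangle's area.
    """
    area = _edge(v0, v1, v2)
    w0 = _edge(v1, v2, p) / area
    w1 = _edge(v2, v0, p) / area
    w2 = _edge(v0, v1, p) / area
    return tuple(w0 * a0 + w1 * a1 + w2 * a2
                 for a0, a1, a2 in zip(*attrs))
```

For example, interpolating red/green/blue corner colors returns pure red exactly at the red vertex and blends smoothly toward the other two corners across the triangle.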

In reality it's a little more complex than this, since there are more rules and hardware-specific optimizations to consider. But that's the general idea.

It's important to note here that we're getting deep into hardware-specific implementation details that individual GPU vendors are extremely unlikely to publicly divulge (gotta maintain that competitive edge!). While there is a lot of common theory behind it, and while searching for a pipeline diagram (here's one for D3D9) will give you a conceptual overview, nobody outside of the hardware vendors really knows the full details of exactly what is being done on any given GPU.
