Screen-aligned quad - Why?

Hi all, I am reading "Shaders for Game Programmers and Artists" (I'm the former), and I've come to the chapter on filtering (blur effects, etc.). It says that one should render to a temporary texture, map the texture to a screen-aligned quad, and then run a pixel shader on it to produce various effects. What I don't understand is: couldn't you just render to the normal render target, and still use the pixel shader afterwards to produce the same effect? Thanks for any explanation.
--== discman1028 ==--
Rendering to a screen-aligned quad gives you a 1:1 mapping from screen pixels to texture texels. This prevents the sampling artifacts (and general sadness) you would get from doing it your proposed way.
You can't generally use the backbuffer as a texture in a subsequent rendering pass. Thus, you have to have the render data in a texture to be able to use it in post-process. However, some games (Half-Life 2 comes to mind) render to the backbuffer, then copy the results to a texture that's the same size. That way, you still get any FSAA that may be turned on (since you cannot, at this time, create a render target texture with anti-aliasing).
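For reference, here's roughly what the render-to-texture setup looks like in D3D9 (an untested sketch; g_pDevice, the sizes, and the format are placeholders, and error checking / Release() calls are omitted):

// Create a render target texture the same size as the backbuffer.
IDirect3DTexture9* pSceneTex = NULL;
g_pDevice->CreateTexture(1024, 768, 1, D3DUSAGE_RENDERTARGET,
                         D3DFMT_X8R8G8B8, D3DPOOL_DEFAULT,
                         &pSceneTex, NULL);

// Point the device at the texture's top-level surface and draw the scene.
IDirect3DSurface9* pSceneSurf = NULL;
pSceneTex->GetSurfaceLevel(0, &pSceneSurf);
g_pDevice->SetRenderTarget(0, pSceneSurf);
// ... render the scene as usual ...

// Afterwards, restore the original render target and bind the texture
// for the post-process pass:
// g_pDevice->SetRenderTarget(0, pBackBufferSurf);
// g_pDevice->SetTexture(0, pSceneTex);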

Once you have this data as a texture, you can use it in a pixel shader. The full-screen quad is used as a method of getting the texels of the render texture to the correct spot on-screen. A pixel shader has to be run on geometry (you can't just arbitrarily run it on the screen), which is why you need the quad. And pixel shaders use textures as input, which is why you need a texture.
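To make that concrete, the quad itself can be drawn with pre-transformed (XYZRHW) vertices, so no vertex shader or transform setup is needed. An untested sketch (the -0.5 offsets line pixel centers up with texel centers, per the D3D9 convention; pSceneTex and pBlurShader are placeholders):

struct TLVertex
{
    float x, y, z, rhw;  // pre-transformed screen-space position
    float u, v;          // texture coordinates
};
#define FVF_TLVERTEX (D3DFVF_XYZRHW | D3DFVF_TEX1)

const float W = 1024.0f, H = 768.0f;  // render target size (example)

// Quad covering the whole target; the -0.5 shifts pixel centers onto
// texel centers so the texture is sampled exactly 1:1.
TLVertex quad[4] =
{
    { -0.5f,    -0.5f,    0.0f, 1.0f, 0.0f, 0.0f },
    { W - 0.5f, -0.5f,    0.0f, 1.0f, 1.0f, 0.0f },
    { -0.5f,    H - 0.5f, 0.0f, 1.0f, 0.0f, 1.0f },
    { W - 0.5f, H - 0.5f, 0.0f, 1.0f, 1.0f, 1.0f },
};

g_pDevice->SetFVF(FVF_TLVERTEX);
g_pDevice->SetTexture(0, pSceneTex);
g_pDevice->SetPixelShader(pBlurShader);  // whatever effect you like
g_pDevice->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, quad, sizeof(TLVertex));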

Hope that helps,

Josh
Since your blur/bloom texture or whatever can be smaller in size, I find that using the StretchRect method (D3D) works well in this case. Just stretch your backbuffer to a smaller target, then do your blur passes using the smaller texture; it's faster than blurring a big full-size texture.
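Something like this (an untested sketch; pSmallSurf would come from a smaller D3DUSAGE_RENDERTARGET texture created up front):

// Grab the current backbuffer.
IDirect3DSurface9* pBackBuffer = NULL;
g_pDevice->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &pBackBuffer);

// Filtered stretch into the smaller render target surface.
g_pDevice->StretchRect(pBackBuffer, NULL, pSmallSurf, NULL, D3DTEXF_LINEAR);
pBackBuffer->Release();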
Use a screen-filling triangle rather than a real quadrilateral (which would be split into two triangles). Modern hardware executes the pixel shader on 2x2 pixel quads, so if you have two triangles filling the screen, the pixel shader is executed twice on the pixel quads along the edge shared by the two triangles.
A single triangle large enough to cover a (rectangular) screen will obviously be larger than the screen. Thus, it will be clipped. When clipped, it will be reduced to multiple triangles (probably two). Since clipping occurs prior to rasterization (and thus, prior to the execution of the pixel shader) this is basically pointless.

I don't really know what you're getting at with the pixel-quad thing. The shader runs per-fragment. Are you trying to suggest that a fragment is a four-pixel block and every four-pixel block gets the shader run once (and thus, gets a single color)? I find that hard to believe, as it would effectively halve the resolution of the render target since it's basically pixel-doubling.
The AP is correct, mostly. Rendering a big triangle that covers the screen will be slightly faster, but only because you're transforming one less vertex. Nvidia hardware, and recent ATI hardware, render quads natively so you don't get the 2x2 fragment overlap along the diagonal. On older hardware it can definitely happen, though.

One other thing to note is that Nvidia hardware does not clip triangles, but only scissors them. Thus, no extra vertices are created if you use a screen-filling triangle. I'm not sure what ATI hardware does with this nowadays.
In any case this seems academic, because the bottleneck here is fragments, not vertices; you could render a giant quad made of 400 polys and you won't see any real difference (unless of course you are rendering a large number of quads).

That's why I suggested using StretchRect to shrink down your backbuffer texture before applying your shaders, because that's the way to actually speed up your effects.

Plus, using a giant triangle to render a screen-aligned texture is annoying because you have to calculate special texture coords.
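For reference, the coords you'd need look something like this (a sketch, assuming clip-space positions and a pass-through vertex shader): one triangle twice the screen size, with UVs chosen so the visible region maps to [0,1].

struct Vtx { float x, y, z, w; float u, v; };  // clip-space position + UV

Vtx tri[3] =
{
    { -1.0f,  1.0f, 0.0f, 1.0f,  0.0f, 0.0f },  // top-left corner
    {  3.0f,  1.0f, 0.0f, 1.0f,  2.0f, 0.0f },  // off-screen right
    { -1.0f, -3.0f, 0.0f, 1.0f,  0.0f, 2.0f },  // off-screen bottom
};
// The rasterizer only produces fragments for the on-screen part, where
// the interpolated UVs run exactly from (0,0) to (1,1).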
Quote: Original post by Drilian
A pixel shader has to be run on geometry (you can't just arbitrarily run it on the screen), which is why you need the quad.


Thanks, that was what I was missing. :) I thought it ran on all pixels... which doesn't make sense.
--== discman1028 ==--
Actually... couldn't you potentially just combine all meshes into one (your "scene" mesh) and then use the pixel shader on that?
--== discman1028 ==--
