discman1028

Screen-aligned quad - Why?

This topic is 4538 days old, which is more than the 365-day threshold we allow for new replies. Please post a new topic.



Hi all, I am reading "Shaders for Game Programmers and Artists" (I'm the former), and I've reached the chapter on filtering (blur effects, etc.). It says that one should render to a temporary texture, map that texture to a screen-aligned quad, and then use a pixel shader on it to produce various effects. What I don't understand is: couldn't you just render to the normal render target and still use the pixel shader afterwards to produce the same effect? Thanks for any explanation.

Rendering to a screen-aligned quad gives you a 1:1 mapping from screen pixels to texture texels. This prevents the sampling artifacts (blurring, shimmering) you would get from doing it your proposed way.
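To make that 1:1 mapping concrete, here is a small CPU-side sketch of building such a quad, assuming the D3D9 convention that pixel centers sit at integer screen coordinates while texel centers sit at (i + 0.5) / size. The `fullscreenQuad` helper is a hypothetical illustration, not an API call: shifting the quad by half a pixel lines each pixel up with exactly one texel.

```cpp
#include <array>

struct Vertex { float x, y, u, v; };

// Build a screen-aligned quad in pixel coordinates for a D3D9-style
// rasterizer. Pixel centers sit at integer positions, but texel centers
// sit at (i + 0.5) / size in UV space; shifting the quad by -0.5 pixels
// makes pixel (i, j) sample exactly the center of texel (i, j).
std::array<Vertex, 4> fullscreenQuad(int width, int height) {
    const float x0 = -0.5f, y0 = -0.5f;
    const float x1 = width - 0.5f, y1 = height - 0.5f;
    return {{
        { x0, y0, 0.0f, 0.0f },
        { x1, y0, 1.0f, 0.0f },
        { x0, y1, 0.0f, 1.0f },
        { x1, y1, 1.0f, 1.0f },
    }};
}
```

Without the half-pixel shift, every pixel would sample between two texels, and bilinear filtering would subtly blur the image.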

You can't generally use the backbuffer as a texture in a subsequent rendering pass. Thus, you have to have the render data in a texture to be able to use it in post-process. However, some games (Half-Life 2 comes to mind) render to the backbuffer, then copy the results to a texture that's the same size. That way, you still get any FSAA that may be turned on (since you cannot, at this time, create a render target texture with anti-aliasing).

Once you have this data as a texture, you can use it in a pixel shader. The full-screen quad is used as a method of getting the texels of the render texture to the correct spot on-screen. A pixel shader has to be run on geometry (you can't just arbitrarily run it on the screen), which is why you need the quad. And pixel shaders use textures as input, which is why you need a texture.

Hope that helps,

Josh
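To see what the full-screen pass conceptually does once the scene is available as a texture, here is an editor's CPU sketch: plain C++ standing in for shader code, with a 3x3 box blur as a stand-in for whatever effect the real pixel shader implements. The inner body runs once per output pixel, exactly as the pixel shader would run once per pixel covered by the quad.

```cpp
#include <vector>
#include <algorithm>

// CPU sketch of a post-process pass: the loop body plays the role of the
// pixel shader, reading the scene-as-texture and writing a filtered
// result. A 3x3 box blur stands in for the real effect.
std::vector<float> boxBlur(const std::vector<float>& src, int w, int h) {
    std::vector<float> dst(src.size());
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    // Clamp addressing at the edges, like a sampler set
                    // to clamp mode.
                    int sx = std::clamp(x + dx, 0, w - 1);
                    int sy = std::clamp(y + dy, 0, h - 1);
                    sum += src[sy * w + sx];
                }
            dst[y * w + x] = sum / 9.0f;
        }
    return dst;
}
```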

Since your blur/bloom texture (or whatever) can be smaller in size, I find that using the StretchRect method (D3D) works well in this case. Just stretch your backbuffer to a smaller target, then do your blur passes on the smaller texture; it's faster than blurring a big full-size texture.
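A rough CPU analogue of that downsample step (hypothetical helper, averaging 2x2 blocks the way a linear-filtered StretchRect to a half-size target would):

```cpp
#include <vector>

// CPU analogue of StretchRect-ing the backbuffer to a half-size target
// with linear filtering: each destination pixel averages a 2x2 source
// block. Blurring the quarter-area result is correspondingly cheaper.
std::vector<float> downsample2x(const std::vector<float>& src, int w, int h) {
    std::vector<float> dst((w / 2) * (h / 2));
    for (int y = 0; y < h / 2; ++y)
        for (int x = 0; x < w / 2; ++x) {
            float sum = src[(2 * y) * w + 2 * x]
                      + src[(2 * y) * w + 2 * x + 1]
                      + src[(2 * y + 1) * w + 2 * x]
                      + src[(2 * y + 1) * w + 2 * x + 1];
            dst[y * (w / 2) + x] = sum / 4.0f;
        }
    return dst;
}
```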

Guest Anonymous Poster
Use a screen-filling triangle rather than a real quadrilateral (which would be split into two triangles). Modern hardware executes the pixel shader on 2x2 pixel quads, so if you have two triangles filling the screen, the pixel shader is executed twice on the pixel quads along the edge shared by the two triangles.

A single triangle large enough to cover a (rectangular) screen will obviously be larger than the screen. Thus, it will be clipped. When clipped, it will be reduced to multiple triangles (probably two). Since clipping occurs prior to rasterization (and thus, prior to the execution of the pixel shader) this is basically pointless.

I don't really know what you're getting at with the pixel-quad thing. The shader runs per-fragment. Are you suggesting that a fragment is a four-pixel block and every four-pixel block gets the shader run once (and thus gets a single color)? I find that hard to believe, as it would effectively halve the resolution of the render target since it's basically pixel-doubling.

The AP is correct, mostly. Rendering a big triangle that covers the screen will be slightly faster, but only because you're transforming one less vertex. Nvidia hardware, and recent ATI hardware, render quads natively so you don't get the 2x2 fragment overlap along the diagonal. On older hardware it can definitely happen, though.

One other thing to note is that Nvidia hardware does not clip triangles, but only scissors them. Thus, no extra vertices are created if you use a screen-filling triangle. I'm not sure what ATI hardware does with this nowadays.
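The overlap being discussed can be illustrated with a toy lane counter (an editor's deliberately simplified model of 2x2 quad shading; real fill rules and hardware behavior differ). Any 2x2 block that a triangle touches costs four shader lanes, so blocks straddling the diagonal shared by two fullscreen triangles are launched twice. A single scissored fullscreen triangle would touch every block exactly once, for exactly w*h lanes.

```cpp
// Coarse model of 2x2 quad shading for a w x h screen split into two
// fullscreen triangles along the diagonal from (w, 0) to (0, h). Each
// 2x2 block touched by a triangle costs 4 lanes; blocks straddling the
// shared diagonal are launched once per triangle.
long quadLanesTwoTriangles(int w, int h) {
    long lanes = 0;
    for (int qy = 0; qy < h; qy += 2)
        for (int qx = 0; qx < w; qx += 2) {
            bool t1 = false, t2 = false; // lower-left / upper-right triangle
            for (int dy = 0; dy < 2; ++dy)
                for (int dx = 0; dx < 2; ++dx) {
                    double cx = qx + dx + 0.5, cy = qy + dy + 0.5;
                    // Pixel center below or on the diagonal cx/w + cy/h = 1
                    // belongs to the first triangle, otherwise the second.
                    if (cx * h + cy * w <= (double)w * h) t1 = true;
                    else t2 = true;
                }
            lanes += (t1 ? 4 : 0) + (t2 ? 4 : 0);
        }
    return lanes;
}
```

In this model a 64x64 target costs 4096 lanes with one triangle but 4224 with two, a few percent of pure overhead along the seam.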

In any case this seems academic, because the bottleneck here is fragments, not vertices. You could render a giant quad made of 400 polys and you wouldn't see any real difference (unless, of course, you are rendering a large number of quads).

That's why I suggested using StretchRect to shrink down your backbuffer texture before applying your shaders, because that's the way to actually speed up your passes.

Plus, using a giant triangle to render a screen-aligned texture is annoying because you have to calculate special texture coords.

Quote:
Original post by Drilian
A pixel shader has to be run on geometry (you can't just arbitrarily run it on the screen), which is why you need the quad.


Thanks, that was what I was missing. :) I thought it ran on all pixels... which doesn't make sense.
