Cypher19

Idea: Partially deferred shading


Okay, the following assumes you know about deferred shading: its benefits and its flaws, including the lack of antialiasing. Well, I came up with an idea today that might be able to fix that problem.

First, render the normal/diffuse/what-have-you textures normally. But when shading the scene, instead of drawing a fullscreen quad whose pixel shader grabs texture values like crazy as it goes, use the actual scene geometry. In the vertex shader, give the geometry one texture coordinate calculated by taking the post-perspective (clip-space) position and transforming it into texture space. Then, in the pixel shader, look up the appropriate texel with it.

The benefit of this, in theory, is that since the hardware multisamples geometry edges rather than shading individual pixels, it will perform antialiasing normally. I have a demo at home that can do this[1], but I can't connect to it, as it is 300 km away. Anyways, until I can get some screenies, what does everyone think of the idea?

[1] A lighting and shadowing demo that, for some reason, I've never tried antialiasing with. The shadowing is stored as a separate texture and added on using the algorithm I described above.
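For reference, the texture-coordinate calculation described above amounts to projective texturing against the G-buffer. Here is a minimal CPU-side sketch of the arithmetic in Python; the function name, and the D3D9-style v-flip and half-texel offset, are my assumptions rather than anything from the post. In the actual shader you would output the clip-space position from the vertex shader and do the divide per pixel to stay perspective-correct.

```python
def clip_to_texcoord(clip_pos, tex_w, tex_h):
    """Map a clip-space position to a [0,1] texture coordinate
    into a screen-sized G-buffer (D3D9 conventions assumed)."""
    x, y, z, w = clip_pos
    # Perspective divide: clip space -> NDC in [-1, 1]
    ndc_x, ndc_y = x / w, y / w
    # Remap to [0, 1]; D3D flips v because texture v grows downward
    u = ndc_x * 0.5 + 0.5
    v = -ndc_y * 0.5 + 0.5
    # D3D9 half-texel offset so texel centers line up with pixel centers
    u += 0.5 / tex_w
    v += 0.5 / tex_h
    return u, v

# The center of the screen lands in the middle of the G-buffer
# (plus the half-texel offset):
print(clip_to_texcoord((0.0, 0.0, 0.0, 1.0), 256, 256))
```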

One of the major benefits of deferred shading is the reduction in vertex processing load (i.e. process your vertices once to generate the g-buffer, and then do as many lights as you want without having to process loads of polys for each one). You'd lose that benefit with this.

I really think your only option for antialiasing with deferred shading is to use supersampling. Hopefully we should start seeing hardware support for it soon (e.g. rendering to a framebuffer larger than the screen resolution and downsampling on the fly as the data is fed to the display, saving you from having to keep a screen-resolution framebuffer around as well).

It is? I thought the biggest benefits were better batching, less shuffling of textures, fewer state changes, no recalculation of things like normals due to normal mapping, and so on. It's not as if vertex processing is at a premium on high-end graphics cards nowadays anyways.

Edit: Besides, I know I'm going to need it from a functionality point of view when I start using a shadow map atlas. I'm going to need to use a lot of the input registers for that, and I fear I won't have enough left over for effects like bump mapping.

One reasonable solution for antialiasing is, after rendering the whole scene using traditional deferred shading, to perform some postprocessing like this:
- detect edges based on discontinuities in the positions and normals stored in your render targets
- apply a blur filter to the pixels found to be edges (pick a filter radius appropriate to your screen resolution)
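The two steps above can be sketched roughly like this. This is a toy CPU version in Python operating on a depth grid; the threshold value and the 3x3 box blur are my assumptions (on the GPU this would be a fullscreen pixel-shader pass reading the G-buffer, and you would typically test normals as well as depth):

```python
def edge_mask(depth, threshold=0.1):
    """Mark pixels whose depth differs sharply from any 4-neighbour."""
    h, w = len(depth), len(depth[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    if abs(depth[y][x] - depth[ny][nx]) > threshold:
                        mask[y][x] = True
    return mask

def blur_edges(color, mask):
    """3x3 box blur, applied only where the edge mask is set."""
    h, w = len(color), len(color[0])
    out = [row[:] for row in color]
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue  # interior pixels pass through untouched
            total, n = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += color[ny][nx]
                        n += 1
            out[y][x] = total / n
    return out
```

Only pixels straddling a depth discontinuity get averaged, which is what keeps this from turning into a full-screen blur.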

I was just about to mention edge-detect blur, but MickeyMouse was quicker :) Since we already have normals and depth, why not use them?

Other random thoughts on the topic: Cypher's original idea does have its uses. For example, an often-overlooked fact is that per-pixel lighting with an arbitrary number of local light sources (hint: RTS) on a reasonably batched landscape with terrain splatting is a fill-rate/vertex-rate nightmare. I spent quite some time figuring out how to do it in the fallback path (i.e. no deferred shading), and came to the same conclusion as Cypher: lay down the base color with splatting in a buffer, and draw the "light passes" as he described. I've tried the "naive" method of re-drawing the layers for each light; as you can imagine, it wasn't too fast even on high-end cards. (I have a batch resolution of 17x17.)

There is one issue with this method, namely the z-buffer. You have two choices:
1. Draw to the backbuffer (antialiased) and StretchRect to the texture. (This will cause a "double-AA" effect, because the StretchRect resolves and the light passes will too.)
2. Do a z-only pass into the backbuffer, and render to the texture directly (with an associated non-AA z-buffer).

I'm using the first, as we're pretty VRAM/CPU limited.

Quote:
Original post by MickeyMouse
One reasonable solution for antialiasing is, after rendering the whole scene using traditional deferred shading, to perform some postprocessing like this:
- detect edges based on discontinuities in the positions and normals stored in your render targets
- apply a blur filter to the pixels found to be edges (pick a filter radius appropriate to your screen resolution)


I actually did try something like this at one point, using a rotated-grid AA pattern and doing some sampling, and it just turns into a blurry, chunky mess. If you want some idea of what it looked like, open up Photoshop, draw something, shrink it to half its size, and then stretch it back up to full size. Looks pretty crappy, don't it?

Also, rept, can you elaborate on what issue you're referring to?


Oh, and I tried to get antialiasing working in my program last night, but for some stupid reason it wouldn't work. I'll have to look into it more, but seeing as how it takes ~10x as long to perform any task due to the remote desktop connection I'll just wait until I'm back home again.

Quote:
Original post by Cypher19
Quote:
Original post by MickeyMouse
One reasonable solution for antialiasing is, after rendering the whole scene using traditional deferred shading, to perform some postprocessing like this:
- detect edges based on discontinuities in the positions and normals stored in your render targets
- apply a blur filter to the pixels found to be edges (pick a filter radius appropriate to your screen resolution)


I actually did try something like this at one point, using a rotated-grid AA pattern and doing some sampling, and it just turns into a blurry, chunky mess. If you want some idea of what it looked like, open up Photoshop, draw something, shrink it to half its size, and then stretch it back up to full size. Looks pretty crappy, don't it?


Indeed.
But we're not talking about blurring the whole screen's contents. Have you tried blurring only the edges?

I read (in GPU Gems 2) about this solution being successfully used in STALKER, the first game to use deferred shading. I have no experience with it myself, but I'll surely try it soon.

Well, the edges are what I was focusing on when I did it, actually.

And Stalker uses it? Now I'll REALLY have to get GPU Gems 2...

Quote:
Original post by Cypher19
Also, rept, can you elaborate on what issue you're referring to?



Sure. Imagine a situation where you have, for example, 3 passes of terrain splats (4 textures each) to render. Now add 4 lights influencing this batch, and with the naive method you're at 12 passes. (Calculating lighting for each pass individually; I'm not considering vertex lighting now.)

This is just slow. Ideally, lighting should happen after splatting has finished laying down the colors. With deferred shading that's a given, but without it, it's not that simple anymore.
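To make the comparison concrete, here is the pass arithmetic as a back-of-the-envelope sketch; the second count assumes one additive light pass after the base colors are laid down, per the scheme described earlier in the thread:

```python
# Toy pass-count comparison for the terrain example above.
splat_passes = 3   # splat layers, 4 textures each
lights = 4         # local lights influencing the batch

# Naive method: re-render every splat pass once per light.
naive_passes = splat_passes * lights

# Base color first, then one additive pass per light.
layered_passes = splat_passes + lights

print(naive_passes, layered_passes)  # the gap widens with more lights
```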

Another solution I'm considering is this: in my implementation, splatting only happens on the closest LOD of the terrain. On the rest, I'm using a big texture that has all the splats on it (far LODs don't need the high resolution splatting provides). Because of these conditions, I could do the splatting in texture space and use the resulting texture when rendering the actual batch.

Huh? No no no, I meant in regard to this:
Quote:
There is one issue with this method, namely the z-buffer. You have two choices:
1. Draw to the backbuffer (antialiased) and StretchRect to the texture. (This will cause a "double-AA" effect, because the StretchRect resolves and the light passes will too.)
2. Do a z-only pass into the backbuffer, and render to the texture directly (with an associated non-AA z-buffer).


What issue with the z-buffer?
