Some thoughts about renderers and deferred shading

94 comments, last by AndyTX 17 years, 10 months ago
I already mentioned jittered grids, quincunx, etc., all of which work great with deferred shading, raytracing, or whatever (as they make no assumptions about the nature of the aliasing)... they also don't require lots of extra memory (as rendering at a large size and downsampling would). They do require re-rendering the scene with subtly different projection matrices, but that will probably be plenty fast in the future.
Caosstec: this does not work, because the screenspace blur will be the same size throughout the image ... so objects that are far away would have the same blur as objects that are very near.

AndyTX: I am curious: what do you mean by subtly different projection matrices?
Quote:Original post by wolf
AndyTX: I am curious: what do you mean by subtly different projection matrices?

Just displacing the framebuffer pixels (jittering)... so really more of a different *post-projection* matrix, but usually it's easiest just to tack it on to projection.
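
A minimal sketch of what tacking the jitter onto the projection can look like, assuming a row-major, row-vector (D3D-style) matrix convention; the Mat4 type and JitterProjection name are illustrative, not anything AndyTX specified:

```cpp
// Minimal sketch (illustrative names, not the poster's actual code).
// Convention assumed: row-major storage, row-vector math (clip = v * P),
// as in D3D. A jitter of (jx, jy) pixels is a translation of
// (2*jx/width, 2*jy/height) in NDC, since NDC spans [-1, 1].
struct Mat4 { float m[4][4]; };

Mat4 JitterProjection(Mat4 p, float jx, float jy, int width, int height)
{
    float ox = 2.0f * jx / float(width);   // NDC x offset
    float oy = 2.0f * jy / float(height);  // NDC y offset

    // Post-multiply by an NDC translation: each row's x/y output
    // picks up the offset scaled by that row's w contribution.
    for (int i = 0; i < 4; ++i) {
        p.m[i][0] += ox * p.m[i][3];
        p.m[i][1] += oy * p.m[i][3];
    }
    return p;
}
```

Feeding fractional offsets like +/-0.25 into jx/jy gives the sub-pixel displacements that the supersampling schemes discussed below rely on.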
Quote:Original post by CaossTec
Antialiasing is indeed one of the greatest drawbacks of DS, but it is not as hard to work around as you suggest. I have found that using a simple edge detection filter and then just blurring those few edges with a 3x3 kernel produces very convincing results without sacrificing performance.


That's exactly the first thing I mentioned in my post, and I'll be shocked if your solution looks as good as what hardware AA provides. I honestly think it is the #1 worst solution to AA+DS in existence, and I find it ludicrous that ANYONE in the graphics industry actually takes it seriously, considering the results that a 3x3 blur gives compared to what hardware AA does.

Quote:AndyTX: I am curious: what do you mean by subtly different projection matrices?


I think he's referring to having two sets of G-buffers that are each screen-resolution size, where the projection matrices each have sub-pixel offsets (I don't know the math behind it, but it'd be worth checking out functions in D3DX like D3DXMatrixPerspectiveOffCenter). It'd be interesting to see how something like quincunx AA would work as a postprocess. I never entertained that idea before, and it might work. By the same token though, I don't exactly recall having a superb experience with quincunx AA back on my old GeForce4. Also, I'm curious as to what you mean, AndyTX, by a jittered grid in this context and how you'd implement it.
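
A rough sketch of that off-center approach, assuming D3D9's D3DXMatrixPerspectiveOffCenterLH and illustrative parameter names (none of this is from the posts above): shift the frustum window by a fraction of a pixel on the near plane.

```cpp
#include <cmath>
#include <d3dx9.h>

// Illustrative sketch only: an off-center projection whose frustum
// window is shifted by (jx, jy) pixels on the near plane. fovY is
// the vertical field of view in radians; fractional jitters like
// jx = +/-0.25f give sub-pixel offsets.
D3DXMATRIX JitteredPerspective(float fovY, float aspect,
                               float zn, float zf,
                               float jx, float jy,    // jitter in pixels
                               int width, int height)
{
    // Extents of the centered frustum window on the near plane
    float top   = zn * std::tan(fovY * 0.5f);
    float right = top * aspect;

    // Size of one pixel on the near plane
    float px = (2.0f * right) / float(width);
    float py = (2.0f * top)   / float(height);

    D3DXMATRIX proj;
    D3DXMatrixPerspectiveOffCenterLH(&proj,
        -right + jx * px, right + jx * px,   // left, right
        -top   + jy * py, top   + jy * py,   // bottom, top
        zn, zf);
    return proj;
}
```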

Quote:Just displacing the framebuffer pixels (jittering)... so really more of a different *post-projection* matrix, but usually it's easiest just to tack it on to projection.


But where are you doing that jittering? In between frames? During a post-process?
Thinking about the problems we've got with anti-aliasing:

Here is an image of a sample scene; the bottom shows the edges. The polygons with brighter surfaces are nearer to the camera. Can't you find the edges through another pass and build a map with high values where pixel-shader-driven anti-aliasing should run and low values to skip anti-aliasing there, like some sort of alpha test? The gray rectangle underneath represents such an edge map (see the sketch below).

You don't have to use extra-large render targets this way, and you can easily skip this process at high resolutions where aliasing artifacts are hardly noticeable.
http://www.8ung.at/basiror/theironcross.html
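
A rough CPU-side sketch of the edge-map idea above; the depth-buffer input, threshold, and 0/255 mask encoding are illustrative assumptions, not anything Basiror specified:

```cpp
#include <cmath>
#include <vector>

// Sketch of the edge map: mark pixels whose depth differs sharply
// from a neighbor, so a later pass can blur only there.
std::vector<unsigned char> BuildEdgeMask(const std::vector<float>& depth,
                                         int w, int h, float threshold)
{
    std::vector<unsigned char> mask(w * h, 0);
    for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            float d = depth[y * w + x];
            // Compare against the 4-neighborhood; a large jump in
            // depth usually means a geometric silhouette edge.
            float dx = std::fabs(depth[y * w + x + 1] - d)
                     + std::fabs(depth[y * w + x - 1] - d);
            float dy = std::fabs(depth[(y + 1) * w + x] - d)
                     + std::fabs(depth[(y - 1) * w + x] - d);
            if (dx + dy > threshold)
                mask[y * w + x] = 255;   // high value: anti-alias here
        }
    }
    return mask;
}
```

In a real renderer this comparison would live in a pixel shader reading the depth G-buffer, with the mask gating a 3x3 blur; the CPU version just shows the logic.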
Quote:Original post by Cypher19
But where are you doing that jittering? In between frames? During a post-process?

It's just brute force super-sampling.

Render totally separate frames with a pixel-offset post-projection (ordered, jittered, whatever) and blend (weights can be fixed, Gaussian, bilinear, whatever).

Of course this is rather expensive for complex scenes, but it doesn't require extra memory and effectively anti-aliases everything (depth discontinuities, high-frequency textures, shaders, etc.). The results should be better than rendering a large image and downsampling (a box filter), and it requires significantly less memory. However, each sample requires a shaded rendering pass, which could hurt a vertex- or CPU-limited application more than just increasing the framebuffer resolution.
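
A purely illustrative sketch of that loop, reusing the JitterProjection helper sketched earlier in the thread; Framebuffer, RenderScene, and Accumulate are hypothetical engine hooks, and the four ordered-grid offsets with box weights are just one choice (jittered offsets and Gaussian weights drop in the same way):

```cpp
struct Framebuffer { /* color data, etc. */ };             // hypothetical
Framebuffer RenderScene(const Mat4& proj);                 // hypothetical
void Accumulate(Framebuffer& dst, const Framebuffer& src,
                float weight);                             // hypothetical: dst += src * weight

Framebuffer SupersampleFrame(const Mat4& baseProj, int width, int height)
{
    // Sub-pixel offsets in pixels, with equal (box) weights
    const float offset[4][2] = {
        { -0.25f, -0.25f }, {  0.25f, -0.25f },
        { -0.25f,  0.25f }, {  0.25f,  0.25f }
    };

    Framebuffer result = {};  // assume zero-initialized color
    for (int i = 0; i < 4; ++i) {
        // Re-render the whole scene with each jittered projection
        Mat4 jittered = JitterProjection(baseProj, offset[i][0],
                                         offset[i][1], width, height);
        Accumulate(result, RenderScene(jittered), 0.25f);
    }
    return result;
}
```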
Quote:Original post by Basiror
Can't you find the edges through another pass and build a map with high values where pixel-shader-driven anti-aliasing should run and low values to skip anti-aliasing there.

The problem is that eliminating rasterization aliasing requires re-rasterizing, which one can't do in the pixel shader. Ultimately, image-space post-process anti-aliasing isn't going to produce adequate results.
Quote:Original post by wolf
Caosstec: this does not work, because the screenspace blur will be the same size throughout the image ... so objects that are far away would have the same blur as objects that are very near.


What do you mean by the same blur? Because he is using an edge detection filter, he's only blurring the edges of objects, so objects far away will have a slight blur around them, while objects up close will have more blurring because their edges are larger and wider.

I just did some googling on deferred shading (not too experienced with it myself), but this paper describes that anti-aliasing technique in section 3.5.1. I have no practical experience with it, but I don't see why it wouldn't work.

Edit: ah, a lot of people responded before I submitted this... my comment is kind of obsolete now. ignore it.
Quote:Original post by AndyTX
Quote:Original post by Cypher19
But where are you doing that jittering? In between frames? During a post-process?

It's just brute force super-sampling.

Render totally separate frames with a pixel-offset post-projection (ordered, jittered, whatever) and blend (weights can be fixed, Gaussian, bilinear, whatever).

Of course this is rather expensive for complex scenes, but it doesn't require extra memory and effectively anti-aliases everything (depth discontinuities, high-frequency textures, shaders, etc.). The results should be better than rendering a large image and downsampling (a box filter), and it requires significantly less memory. However, each sample requires a shaded rendering pass, which could hurt a vertex- or CPU-limited application more than just increasing the framebuffer resolution.


Am I correct in assuming that that idea is a fair bit like RTHDRIBL's motion blur/AA feature, minus the position updates between re-rendering?
Quote:Original post by Cypher19
Am I correct in assuming that that idea is a fair bit like RTHDRIBL's motion blur/AA feature, minus the position updates between re-rendering?

It could be... I'm not certain how it is done in that demo (any explanation other than looking through the source?). In any case it's just super-sampling: each fragment will be the composite result of several rasterization passes. All I'm noting is that it can be done without requiring extra memory, and non-box filters can also be implemented to some extent (although probably only linear filters will work due to using alpha blending to composite).
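
A sketch of how that alpha-blend compositing can work out, with hypothetical hook names: blending pass i over the target with alpha = 1/(i+1) leaves an equal-weight running average in the framebuffer, and other linear weightings drop in the same way.

```cpp
// Hypothetical engine hooks, named for illustration only:
void SetAlphaBlendFactor(float a);   // configures dst = src*a + dst*(1-a)
void RenderJitteredScenePass(int i); // draws the scene with jitter #i

// Composite N jittered passes with alpha blending alone, no extra
// accumulation buffer: blending pass i with alpha = 1/(i+1) keeps a
// running mean, so after the last pass the framebuffer holds the
// equal-weight average of all passes.
void CompositePasses(int numPasses)
{
    for (int i = 0; i < numPasses; ++i) {
        SetAlphaBlendFactor(1.0f / float(i + 1));
        RenderJitteredScenePass(i);
    }
}
```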
