
Is SuperSampling really a bad choice when going deferred?

#1 TiPiou   Members   


Posted 31 March 2014 - 02:24 AM

Hi everyone,


I'm currently reviewing the techniques I could use to achieve the overall look I target for my current game project. I have a weakness for deferred shading, not because I'm planning on using so many lights, but because I find the decoupling of rasterization and lighting much cleaner that way.


So, I was planning on using deferred shading. And I'm aware of some issues with it, i.e. the more complex application of antialiasing (which I'm not even sure I fully grasp even with forward renderers), as well as being unable to use a unified technique for semi-transparent areas.


Here comes supersampling. This is an old technique already, but it is often dismissed as being too expensive. Yet, in my mind, it comes with several advantages:

- it handles edge-antialiasing quite "out of the box"

- it does somewhat help with the high-frequency specular aliasing issue as described there by MJP.

- I'm thinking it could also be used to fake transparency in some stochastic manner (75% alpha would mean that 3 out of 4 supersampled fragments get covered, etc.). If done well, maybe you wouldn't even need to sort your transparent objects back to front!
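The stochastic-coverage idea above could be sketched like this, assuming a fixed set of per-sample alpha thresholds (the threshold values and the per-pixel permutation scheme are illustrative assumptions, not an established recipe):

```python
# Stochastic coverage sketch: with 4 sub-samples per pixel, a fragment
# with alpha = 0.75 should survive the alpha test in 3 of the 4 samples,
# so coverage is proportional to alpha. In practice you would permute
# the thresholds per pixel to hide the pattern.

SAMPLE_THRESHOLDS = [0.125, 0.375, 0.625, 0.875]

def coverage_mask(alpha, thresholds=SAMPLE_THRESHOLDS):
    """One bool per sub-sample: True = the fragment covers that sample."""
    return [alpha > t for t in thresholds]

def covered_count(alpha):
    return sum(coverage_mask(alpha))
```

For example, `covered_count(0.75)` yields 3 covered samples and `covered_count(0.5)` yields 2, so the resolved pixel averages toward the right opacity without any explicit blending.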


So, all in all, given that it solves so many issues in one go, is it *really* that expensive on current hardware, compared to the alternatives?


PS:

I guess this is quite a general question that anybody implementing a deferred shader could ask.

In my case, I'm planning for a cel-shaded final look. I don't really know yet whether I'll go with some way to detect almost-perpendicular normals and mark them as my outlines, or whether I'll use a Sobel filter in a per-pixel post-process. Sobel might help with edge-AA by itself, and might prevent the transparency trick I'm thinking of, if done directly on the supersampled buffers. What do you think?


Thank you for your insights :)

Edited by TiPiou, 31 March 2014 - 02:30 AM.

Follow NeREIDS development on my blog : fa-nacht.rmyzen.net/

#2 Hodgman   Moderators   


Posted 31 March 2014 - 02:36 AM


Well, it has the same cost as simply increasing your resolution. Assuming that pixel-shading is your main cost, then 16x more samples = 16x more cost.

Say your lighting shader takes 8ms normally; with 16x SSAA, it takes 128ms!


The 'standard' solution is to enable MSAA when generating your g-buffer, which is basically super-sampling, but only on geometric edges. Then, before doing the lighting, you look at the sub-samples in each pixel and classify them as either needing to be lit per-sample, or lit per-pixel (just use one sample's value for the whole pixel). Hopefully 90%+ of your pixels fall into the latter category, and you perform lighting on them as usual with no extra cost. For the rest, you run the lighting shader once for each sample in each pixel, averaging those results together per pixel.

In other words, it's basically selective SSAA - only performing super-sampling some of the time.
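The classify-then-resolve step described above could be sketched like this, assuming per-sample depths from the g-buffer, a `light(depth)` callable standing in for the lighting shader, and a hand-picked tolerance (all hypothetical simplifications of what a real implementation would compare, e.g. normals as well as depth):

```python
DEPTH_EPS = 0.001  # assumed tolerance for "all sub-samples agree"

def is_edge_pixel(depths, eps=DEPTH_EPS):
    """Classify: a pixel whose sub-sample depths diverge sits on an edge."""
    return max(depths) - min(depths) > eps

def resolve_pixel(depths, light, eps=DEPTH_EPS):
    """Light once per pixel when samples agree, else once per sample."""
    if not is_edge_pixel(depths, eps):
        return light(depths[0])           # one lighting evaluation
    lit = [light(d) for d in depths]      # per-sample lighting
    return sum(lit) / len(lit)            # average the sub-samples
```

The win is entirely in the first branch: interior pixels pay for a single `light()` call, and only the (hopefully rare) edge pixels pay the per-sample cost.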


Regarding sobel/etc for AA, these kinds of post-process AA filters are fairly common (whether you use MSAA or not!) - FXAA, MLAA, SMAA, etc...



As for stochastic translucency -- there are some stochastic techniques out there, but there are also "depth peeling" type techniques, where you use MSAA samples to hold a number of material layers. E.g. with 8x MSAA, you could hold 8 layers on top of each other. In opaque areas, you could just shade/light one sample and ignore the rest; in translucent areas you could shade each layer and blend them together. Check out the "Stencil Routed K-Buffer" paper.

I have a similar idea here, which uses a stippling pattern (e.g. 4 layers in a block of 2x2 pixels): http://www.gamedev.net/topic/647731-stippled-deferred-translucency/
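The 2x2 stippling idea could be sketched like this; the layer assignment within the block and the standard "over" blend are illustrative assumptions, not the exact scheme from the linked topic:

```python
def layer_index(x, y):
    """Which of the 4 layers a pixel at (x, y) holds in its 2x2 block."""
    return (y % 2) * 2 + (x % 2)

def blend_over(dst, src_color, src_alpha):
    """Standard 'over' blend of one layer onto the accumulated color."""
    return src_color * src_alpha + dst * (1.0 - src_alpha)

def resolve_block(layers, background):
    """Re-blend a block's layers back-to-front over the background.

    layers: list of (color, alpha) tuples, sorted back-to-front.
    Colors are single floats here for simplicity (one channel).
    """
    color = background
    for c, a in layers:
        color = blend_over(color, c, a)
    return color
```

So each screen pixel stores one translucent layer in the g-buffer, and the resolve pass gathers the four pixels of its 2x2 block and composites them, trading a quarter of the translucent resolution for deferred-friendly layering.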

#3 TiPiou   Members   


Posted 31 March 2014 - 03:11 AM

I guess my reluctance to go with MSAA is that I have no clue at the moment how to read the sub-samples for a given pixel.


Isn't the 2x2 block approach what is referred to as "inferred rendering" or something? I found some explanation on implementing a simpler version of it (1 layer only) within this paper from the GDC Vault too. I guess it does work... except, again, it's a lot of tricks using filters...

and multiple passes...

and maybe freaking stencil...


And all this stuff scares me, to be honest. Thus I found the brute-force approach of plain old SuperSampling attractive in that regard. But I'll definitely dig into the matter some more. Thank you ;)


I also find it interesting that you envision Sobel as an AA filter by itself.


#4 pcmaster   Members   


Posted 31 March 2014 - 09:09 AM

You could do a custom adaptive SSAO as HW MSAA does. That is, render into a 2x2, 3x3 (ha!) or 4x4 bigger target (and perhaps a 1x1 target too, for faster look-up? or just a custom "down-sampled" version?) and, before doing any lighting, just identify the blocks that need detailed lighting/shading... Performance won't be as good as with HW MSAA, of course :(

#5 TiPiou   Members   


Posted 01 April 2014 - 09:06 AM

Hi there again.


@pcmaster: When you say SSAO, are you referring to the global illumination trick? If so, I'm not planning on using it at the moment. If not, well, I don't know of any AA technique with that name ^^. Assuming you meant SuperSampling, yeah, I guess this test could be done (in fact I've come across a paper explaining just that, IIRC), but I was under the impression that even if *I* didn't know how to access the HW MSAA info, some of the graphics gurus out there did.

Anyway, that seems quite expensive too?

See, when Hodgman replied with a 4x4 SSAA example for performance, I found it quite overkill, but the fact is, from my tests, 2x2 ain't enough to be really pleasing to the eye. That means it needs more samples, and "more" grows into a really huge number really fast (not to mention the colossal G-buffer). Also, HW MSAA benefits from a slightly tilted sampling kernel to overcome pixel-grid alignment, and I don't know yet how to apply that same technique with supersampling.
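For the tilted kernel, one common approach is a rotated-grid pattern: rotating an ordered 2x2 grid of offsets so the samples land on distinct rows and columns, which helps near-horizontal and near-vertical edges. A minimal sketch, assuming the classic arctan(1/2) rotation angle (a widespread convention, not something from this thread), with offsets in pixel units relative to the pixel centre:

```python
import math

def rotated_grid_offsets(angle=math.atan2(1.0, 2.0)):
    """Rotate an ordered 2x2 sub-pixel grid by ~26.6 degrees."""
    base = [(-0.25, -0.25), (0.25, -0.25), (-0.25, 0.25), (0.25, 0.25)]
    c, s = math.cos(angle), math.sin(angle)
    return [(x * c - y * s, x * s + y * c) for x, y in base]
```

With a supersampled render you would apply these offsets when jittering the sample positions inside each output pixel; the rotation puts every sample on its own row and column, which is exactly what an axis-aligned 2x2 grid fails to do.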

You say 3x3 ^^; well, I had even considered going for 2x3 or 3x4, but that still wasn't so convincing.


I'd say that for the time being I'll stick with no hardware AA, and no supersampling, as a per-pixel post-process like FXAA seems quite powerful on its own against different kinds of aliasing.


@Hodgman: I'm wondering if FXAA by itself can help with blurring out the apparent "grid" of that transparency hack within a 2x2 block? In the slides presenting the one-layer solution I've linked above, they seem to imply that their blurring comes out-of-the-box from their edge filter, or maybe I've misunderstood something. Would that be such a filter as FXAA?


Also, if I may digress from my OP, I've started to play around with Sobel filters for edge detection in order to get black outlines around my objects (for a toon aspect). I'm currently filtering only against the depth buffer for that. Although those first results are promising, they also come with artifacts where entire planes pass the Sobel threshold when they're almost parallel to the view axis (having large depth deltas between neighbouring pixels). I don't think I can really get rid of this issue using Sobel only.
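A minimal sketch of such a depth-Sobel outline pass (the threshold value is a hypothetical tuning knob; a real version would also compensate for depth slope to suppress the parallel-plane artifact described above, which this naive version exhibits):

```python
# 3x3 Sobel kernels for the horizontal and vertical depth gradients.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(depth, x, y):
    """Gradient magnitude of the depth buffer at interior pixel (x, y)."""
    gx = gy = 0.0
    for j in range(3):
        for i in range(3):
            d = depth[y + j - 1][x + i - 1]
            gx += SOBEL_X[j][i] * d
            gy += SOBEL_Y[j][i] * d
    return (gx * gx + gy * gy) ** 0.5

def is_outline(depth, x, y, threshold=0.1):
    """Mark the pixel as part of a toon outline if the gradient is large."""
    return sobel_magnitude(depth, x, y) > threshold
```

A flat depth neighbourhood yields zero gradient, while a depth discontinuity yields a large one; the artifact arises because a steeply sloped but perfectly flat surface also yields a large, uniform gradient, so thresholding the raw magnitude marks the whole plane.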

Moreover, Sobel doesn't help with AA at all, as the 2 pixels detecting a given edge along a given axis get almost the same Sobel output.

It is also quite saddening that such edge detection against the depth buffer would be independent of FXAA's edge detection (which works on a luma estimate of the final scene, if I'm not mistaken).

=> Here comes a new question: does anybody know of a unified edge-detection technique that would mark most of the pixels in need of edge AA as the same pixels where a toon black outline should be applied? (And which could hopefully output such an outline antialiased in the process.)

Edited by TiPiou, 02 April 2014 - 01:14 AM.

