Regarding MSAA in Killzone

Started by Zipster · 16 comments, last by Zipster 16 years, 8 months ago
Deferred Rendering in Killzone (8.26 MB)

I'm guessing that a lot of you have seen the above PDF (or maybe the original presentation). It basically goes through their deferred renderer implementation, nothing too new or fancy. But what caught my attention was slide 39, where they claim to have implemented MSAA. Naturally this piqued my interest, since Yann L posted a thread a few months ago asking about combining deferred rendering and MSAA, and the consensus was that it was either impossible to do without bad artifacts or less efficient than super-sampling.

On slide 21 they claim that a pro of their G-Buffer layout is that it allows MSAA, but my understanding is that you can't multisample data such as normals or depth, because the lighting would be full of artifacts. I don't see how their structure or layout offers any improvement. Secondly, on slide 39, they say that they read from the G-Buffer and run the lighting shader for "both samples"... both samples of what? I might just not be familiar with how graphics are done on PS3 hardware, but how are they actually accessing individual samples?

I'm at a slight disadvantage because I haven't looked at PS3 hardware in any real depth beyond its general architecture, and I wasn't at the original presentation where a lot of this might have been explained in more detail, so hopefully someone can help me out and clarify exactly what it is Guerrilla is doing with MSAA.
Quote:Original post by Zipster
... Yann L posted a thread a few months ago asking about combining deferred rendering and MSAA, and the consensus was that it was either impossible to do without bad artifacts or less efficient than super-sampling


I haven't seen this discussion, but in my experience this indeed holds true on the PC platform. The problem stems from limitations in the API/hardware, though, and isn't inherent to the technique, so it might be possible to take advantage of the hardware's MSAA implementation on the PS3 in this scenario.

On slide 18, however, they mention something called "Quincunx MSAA", which might hint that they're using a software (shader-based) solution. I'm not familiar with this technique, but a quick search on Google seems to indicate it's an older technique using 1x2 samples which NVIDIA patented for the GeForce3 (more here and here). I may be completely off-track, but I'd guess they implemented this 'simple' technique for the G-Buffer reads in the lighting shader, which would explain the "both samples" on slide 39.
Rim van Wersch [ MDXInfo ] [ XNAInfo ] [ YouTube ] - Do yourself a favor and bookmark this excellent free online D3D/shader book!
The way it works is:
1. Render with MSAA into the G-Buffer ... this buffer is 2560x720.
2. Then render with MSAA into the light or accumulation buffer, which is also 2560x720 ... because the data in the G-Buffer is multisampled, you fetch two of those samples at once and either just average them or use Quincunx (described in Real-Time Rendering).

The reason MSAA works between the light buffer and the G-Buffer is that you render your lights as geometry. So what you get in the light buffer is "low" quality color data from the G-Buffer but high quality light data.
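To make step 2 concrete, here is a tiny CPU sketch in Python of the fetch-two-samples-and-average idea (this is an illustration of the technique as described above, not Guerrilla's actual shader; the plain-average resolve and the row layout are my assumptions):

```python
def resolve_2x_horizontal(gbuffer_row):
    """Collapse a 2x horizontally multisampled row (e.g. 2560 samples)
    into one output row (e.g. 1280 pixels) by averaging each sample pair."""
    assert len(gbuffer_row) % 2 == 0
    return [(gbuffer_row[2 * i] + gbuffer_row[2 * i + 1]) / 2.0
            for i in range(len(gbuffer_row) // 2)]

# Tiny example row; a real 2560x720 buffer would hold 2560 samples per row.
row = [0.0, 1.0, 0.5, 0.5, 0.2, 0.8]
print(resolve_2x_horizontal(row))  # [0.5, 0.5, 0.5]
```

In a real renderer this averaging happens per G-Buffer channel (or on the lit result) inside the lighting shader, but the arithmetic is the same.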
If you think about a deferred renderer on the PS3, do not forget that the RSX is actually a 7600 GT in PC graphics terms. It has a 128-bit memory bus and is very bandwidth limited. Additionally you do not have much memory to spare for a G-Buffer and a light buffer ... so overall it is not the solution you want on hardware that was built for a Z pre-pass renderer. Here are the drawbacks:
- you lose memory
- you lose the ability to set up a really good material / meta-material system
- you lose performance, because the hardware's sweet spot is a Z pre-pass renderer
- you can't use many lights anyway, because the memory bandwidth limitation keeps you from using many shadows, and what are many lights without shadows?

Just my 2 cents :-)

Zipster: you helped me quite often in the past, thanks for that. Ask me more via PM :-)
Nope, please keep it public. I think this is quite an interesting topic. Maybe something new comes up that has not been covered in the threads before. And thanks for the link, I didn't know of it before.
----------
Gonna try that "Indie" stuff I keep hearing about. Let's start with Splatter.
Interesting presentation, thanks for the link.

My guess is that they're actually doing some type of SSAA like wolf describes (i.e. rendering a double/quad-sized framebuffer), since people have just taken to using the term "MSAA" for all of these techniques nowadays. It's also possible that they're just not doing it correctly, but don't notice due to their particular lighting setup.

Quincunx generally refers to an AA method that shares samples between adjacent pixels (arguably conceptually similar to R600's "wide tent" filter). It's an efficient way to get some extra AA, but it has been criticized for blurring out high-frequency texture detail, which is true to some extent and largely responsible for why it was removed from NVIDIA's newer drivers.
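For concreteness, the classic Quincunx weighting (as publicly documented for GeForce3-era hardware: each pixel's own sample at weight 1/2, plus the four corner samples it shares with its neighbours at weight 1/8 each) can be sketched in Python. The grid layout here is an illustrative reconstruction, not driver code:

```python
def quincunx(center, corner):
    """center: H x W grid of per-pixel samples.
    corner: (H+1) x (W+1) grid of samples, each shared by 4 adjacent pixels.
    Output pixel = 1/2 * own sample + 1/8 * each of its 4 corner samples."""
    h, w = len(center), len(center[0])
    return [[0.5 * center[y][x]
             + 0.125 * (corner[y][x] + corner[y][x + 1]
                        + corner[y + 1][x] + corner[y + 1][x + 1])
             for x in range(w)] for y in range(h)]

# Single-pixel example: bright center sample, dark shared corners.
print(quincunx([[1.0]], [[0.0, 0.0], [0.0, 0.0]]))  # [[0.5]]
```

The sharing is what makes it cheap (only ~2 samples stored per pixel) and also what blurs texture detail: corner samples bleed into all four surrounding pixels.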

And wolf, I agree that deferred rendering seems a bit odd for the RSX, but it's perhaps still a win if they have a ton of local lights. I disagree that it limits your material system at all, though (I've actually found it easier to work with, if anything). The points about the rather weak memory subsystem of the RSX are certainly relevant though...

As an aside, "proper" MSAA is completely possible and efficient in DX10, since you can now read the individual MSAA samples before the resolve. This may also be possible on the 360, but I don't know for sure.
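This per-sample access matters because lighting is non-linear in the G-Buffer data, which is exactly why naively averaging normals or depth produces artifacts. A toy Python illustration (the `light` function is a made-up non-linear stand-in, not any actual D3D call):

```python
def light(n_dot_l):
    """Toy non-linear 'lighting' term: clamp to zero, then square."""
    return max(n_dot_l, 0.0) ** 2

# Two G-Buffer samples straddling a geometry edge (e.g. opposing normals).
samples = [-1.0, 1.0]

# Resolving (averaging) the G-Buffer first destroys the edge information:
resolve_then_light = light(sum(samples) / len(samples))             # -> 0.0
# Lighting each sample, then averaging, gives the correct MSAA result:
light_then_resolve = sum(light(s) for s in samples) / len(samples)  # -> 0.5
print(resolve_then_light, light_then_resolve)
```

Being able to fetch each sample in the lighting shader lets you do the second (correct) order of operations instead of the first.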
Quote:My guess is that they're actually doing some type of SSAA like wolf describes (i.e. rendering double/quad-sized framebuffer) since people have just taken to using the term "MSAA" for all of these techniques nowadays. It's also possible that they're just not doing it correctly, but don't notice due to their particular lighting setup.
wolf knows that they do it :-) ... I'm currently working on a different PS3 and 360 title that uses deferred rendering ... guess which one :-)
Oh, and regarding deferred rendering on the 360: there are different arguments against it, but I would not advise using it there either :-) ...

[Edited by - wolf on August 14, 2007 8:05:40 AM]
High-end DX10 cards are the only place where you can really make deferred rendering look very good, and I assume that with a wider distribution of this target platform, deferred rendering will also become more common there.
Quote:Original post by wolf
High-end DX10 cards are the only place where you can really make deferred rendering look very good, and I assume that with a wider distribution of this target platform, deferred rendering will also become more common there.

DX10 hardware and the API do make deferred rendering even more attractive... I'd argue that STALKER did a pretty decent job on DX9, and I believe "Tabula Rasa" also supports a DX9 path. Both actually have good chapters, in GPU Gems 2 and 3 respectively, describing the trade-offs in their approaches. My experience meshes pretty well with the latter chapter (indeed there are parts of the chapter that I could well have written myself, the arguments and structuring were so similar :)), and I certainly suspect that moving forward deferred rendering will become pretty popular, particularly in DX10+ hardware.
Quote:I believe "Tabula Rasa" also supports a DX9 path
No, it does not. Check out the GPU Gems 3 article; it says that they support deferred rendering only for DX10 ... I assume you have access to it.

Quote:moving forward deferred rendering will become pretty popular, particularly in DX10+ hardware
For people who need to make money with games it won't be very popular for the next two years. By then the install base might be big enough to pay for it :-)
In general most renderers are hybrids anyway ... most deferred renderers need to render transparent stuff with a forward approach (Killzone 2, GTA IV, even Tabula Rasa ... yep, I read the article).
Most forward renderers use deferred elements like the Z pre-pass or deferred shadows ... so the lines are already blurry, and they will become more blurry.

