Regarding MSAA in Killzone

Quote:Original post by wolf
no it does not. Check out the GPU Gems 3 article; it says that they support deferred rendering only for DX10 ... I assume you have access to it.

Ah yes, fair enough. Still, there's not much in the DX10 API (other than the MSAA stuff) that makes the API strictly necessary, although the hardware certainly handles it more efficiently.

Quote:for people who need to make money with games it won't be very popular for the next two years. By then the install base might be big enough to pay for it :-)

Oh sure, but since I'm in research I can happily ignore all but the newest and upcoming hardware :) Two years may be a full game cycle to dev studios, but it's really not that far off overall.

And yes, many renderers are becoming hybrids and I agree with that design whole-heartedly: do whatever is most efficient! However I also firmly believe that the light volume rendering/compositing aspect of deferred rendering is one of the most compelling reasons to use it, so once consumer hardware catches up, I suspect many games will begin to use that design as well (particularly those that try to do realistic, dynamic lighting/GI).
Ah, Ok, so they're basically doing super-sampling but just calling it multi-sampling and getting away with it because that's the general term people use to describe these kinds of methods :) They had me going for a second there!

Quote:Original post by wolf
The way it works is:
1. render with MS into the G-Buffer ... this buffer is 2560x720
2. then render with MS into the light or accumulation buffer that is also 2560x720 ... because the data in the G-Buffer is MS'ed you fetch two of those pixels at once by just averaging them or using Quincunx (described in Real-Time Rendering).
The reason why MS works in-between the light buffer and the G-Buffer is that you render your lights as geometry.
So what you get in the light buffer is "low" quality color data from the G-Buffer but high quality light data.
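
Just to make sure I'm picturing it correctly, here's a minimal C++ sketch of that fetch-and-average step as I read it; the buffer layout and the names are my own guesses for illustration, not anything from the actual engine:

#include <cstddef>
#include <vector>

// One texel of the 2560x720 multisampled G-Buffer; a real G-Buffer would
// also carry normals, specular power and so on.
struct GBufferTexel { float albedo; };

const std::size_t kOutWidth = 1280;          // lighting resolution
const std::size_t kMsWidth  = kOutWidth * 2; // 2560: two sub-samples per output pixel

// Fetch both sub-samples belonging to output pixel (x, y) and average them,
// giving the "low" quality colour data described above.
float FetchAveragedAlbedo(const std::vector<GBufferTexel>& gbuffer,
                          std::size_t x, std::size_t y)
{
    const GBufferTexel& s0 = gbuffer[y * kMsWidth + 2 * x + 0];
    const GBufferTexel& s1 = gbuffer[y * kMsWidth + 2 * x + 1];
    return 0.5f * (s0.albedo + s1.albedo);
}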

I was always under the impression that you didn't want to perform any kind of filtering on data from the G-Buffer, because stuff like normals and depth wouldn't interpolate right on edges and produce artifacts. So how are they getting away with averaging?

Also, does 2560x720 (1280x720 without MS) mean Killzone only supports widescreen? Or is that just one resolution they use?

Quote:Original post by AndyTX
As an aside, "proper" MSAA is completely possible and efficient in DX10 since you can now grab the pre-resolved MSAA samples. This may also be possible on the 360, but I don't know for sure.

I remember you mentioned in Yahn's thread that even though it was possible to fetch the values, there wasn't a flag to indicate which ones were the same, so it would really boil down to super-sampling on edge pixels at least?

Thanks everyone for the replies so far :-)
Quote:Original post by Zipster
I was always under the impression that you didn't want to perform any kind of filtering on data from the G-Buffer, because stuff like normals and depth wouldn't interpolate right on edges and produce artifacts. So how are they getting away with averaging?

The only way that they could be doing this "correctly" is to actually accumulate lighting on the "wide" framebuffer (super-sampled) and then average the resulting colours (MSAA "resolve"). They may, however, be doing it incorrectly and just not notice, but in my experience the artifacts from that approach are pretty obvious...
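
To make that order of operations concrete, here's a minimal C++ sketch; the types and the stand-in shading function are made up purely for illustration:

#include <cstddef>
#include <vector>

// Illustrative per-sub-sample G-Buffer data; real data would include normals,
// albedo, depth and so on.
struct SubSample { float albedo; };

// Stand-in for the real per-sample lighting evaluation.
float ShadeSample(const SubSample& s) { return s.albedo; }

// Step 1: accumulate lighting for every sub-sample of the wide (super-sampled) buffer.
std::vector<float> LightWideBuffer(const std::vector<SubSample>& samples)
{
    std::vector<float> lit(samples.size());
    for (std::size_t i = 0; i < samples.size(); ++i)
        lit[i] = ShadeSample(samples[i]);
    return lit;
}

// Step 2: the MSAA "resolve": average the already-lit colours of one pixel.
float ResolvePixel(const float* litSubSamples, int samplesPerPixel)
{
    float sum = 0.0f;
    for (int i = 0; i < samplesPerPixel; ++i)
        sum += litSubSamples[i];
    return sum / static_cast<float>(samplesPerPixel);
}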

Quote:Original post by Zipster
I remember you mentioned in Yahn's thread that even though it was possible to fetch the values, there wasn't a flag to indicate which ones were the same, so it would really boil down to super-sampling on edge pixels at least?

Yes, it would be nice to get access to the hardware's "compressed" MSAA flags, but upon further reflection, comparing the sub-sample depths for equality should be sufficient (and quite cheap). And yes, you're effectively going to be super-sampling the edges, but that's actually the same cost as with standard MSAA (since you light and shade each of the sub-samples in those edge pixels separately). Indeed it's exactly what you want to do and should be quite efficient :)
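
Something along these lines, as a minimal C++ sketch; the per-pixel structure and the Shade() stand-in are illustrative assumptions rather than any particular engine's code:

#include <cstddef>
#include <vector>

// Per-pixel sub-sample data as it might be fetched from a multisampled
// G-Buffer; names and layout are purely illustrative.
struct PixelSamples
{
    std::vector<float> depth;   // one entry per sub-sample
    std::vector<float> albedo;  // one entry per sub-sample
};

// Stand-in for the real lighting evaluation.
float Shade(float albedo) { return albedo; }

// If every sub-sample depth matches, the pixel lies inside a triangle and a
// single lighting evaluation covers it; otherwise it straddles an edge and we
// shade each sub-sample, i.e. we super-sample only the edge pixels.
float LightPixel(const PixelSamples& p)
{
    bool isEdge = false;
    for (std::size_t i = 1; i < p.depth.size(); ++i)
        if (p.depth[i] != p.depth[0]) { isEdge = true; break; }

    if (!isEdge)
        return Shade(p.albedo[0]);

    float sum = 0.0f;
    for (std::size_t i = 0; i < p.albedo.size(); ++i)
        sum += Shade(p.albedo[i]);
    return sum / static_cast<float>(p.albedo.size());
}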
Quote:comparing the sub-sample depths for equality should be sufficient (and quite cheap).

You don't have the guarantee of equality for non-edge samples AFAIK; the hardware will always interpolate depth at full resolution to ensure AA along the boundary of intersecting/abutting triangles.
Quote:Ah, Ok, so they're basically doing super-sampling but just calling it multi-sampling and getting away with it because that's the general term people use to describe these kinds of methods :) They had me going for a second there!
The idea is to run the pixel shader only 1280x720 times while you write into a 2560x720 light buffer ... so it is a kind of MS ... maybe only possible on a console platform where you are not limited by OpenGL or D3D and have direct access to a very thin software layer (libgcm) above the hardware.
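
Purely as an illustration of that idea (the layout and names below are made up, and triangle edge coverage is ignored entirely): the shading runs once per 1280-wide pixel, but the result lands in both halves of the 2560-wide light buffer, which is roughly what MSAA rasterisation does for fully covered pixels.

#include <cstddef>
#include <vector>

const std::size_t kOutWidth = 1280;          // one lighting evaluation per pixel
const std::size_t kMsWidth  = kOutWidth * 2; // 2560-wide light buffer

// Stand-in for the per-pixel lighting evaluation.
float ShadePixel(std::size_t x, std::size_t y) { return static_cast<float>(x + y); }

// lightBuffer is assumed to hold kMsWidth * 720 floats.
void LightRow(std::vector<float>& lightBuffer, std::size_t y)
{
    for (std::size_t x = 0; x < kOutWidth; ++x)
    {
        const float lit = ShadePixel(x, y);
        lightBuffer[y * kMsWidth + 2 * x + 0] = lit;
        lightBuffer[y * kMsWidth + 2 * x + 1] = lit;
    }
}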

I would be interested to know whether this works on DX10 as well ... if anyone gives it a try.
Quote:Original post by SnowKrash
You don't have the guarantee of equality for non-edge samples AFAIK; the hardware will always interpolate depth at full resolution to ensure AA along the boundary of intersecting/abutting triangles.

Ah yes, that's a good point. So it would work if you're writing out depth yourself, but probably not if you're reading it from the MSAA depth buffer... but then again, I'm not totally sure how reading from an MSAA depth buffer would work then, so maybe it's unsupported anyways.

For deferred shading, it'd probably be best to just write out view-space depth yourself and use that to compare.

[Edit] Humus comes through with all of the answers :) Indeed you currently can't sample MSAA depth buffers, and he also suggests an API for checking the equality of sub-samples within a specific pixel.

[Edited by - AndyTX on August 15, 2007 10:21:31 AM]
Quote:Original post by Zipster
Ah, Ok, so they're basically doing super-sampling but just calling it multi-sampling and getting away with it because that's the general term people use to describe these kinds of methods :) They had me going for a second there!

You're unlikely to get clarification about what they're doing on a public forum, but trust me on this one: they have some very smart guys over at Guerilla and they very much know the difference between multisampling and supersampling!
Quote:Original post by Christer Ericson
Quote:Original post by Zipster
Ah, Ok, so they're basically doing super-sampling but just calling it multi-sampling and getting away with it because that's the general term people use to describe these kinds of methods :) They had me going for a second there!

You're unlikely to get clarification about what they're doing on a public forum, but trust me on this one: they have some very smart guys over at Guerilla and they very much know the difference between multisampling and supersampling!

Oh I don't doubt it - you have to be either insane or a genius to develop for the PS3, and they don't sound crazy to me! ;)

I just thought they might have been pulling a fast one by calling it multi-sampling when they really were doing super-sampling, because as AndyTX said the term "multi-sampling" is sometimes used to refer to the whole class of methods as opposed to any particular one, and I wasn't sure if they were being sneaky in that regard. But you're right, it's unlikely I'd get any response on a public forum...

