SSAO without deferred rendering

26 comments, last by Hurp 15 years, 10 months ago
Hello, I have been reading about Screen Space Ambient Occlusion for a while now and have been trying to apply it. However, after reading some example source code, it seems as if everyone does it with a deferred renderer. I was wondering if anyone has done this without using deferred rendering or post-processing. Does anyone know if that is possible?
It does not depend on any renderer architecture. You just need a depth buffer ...
You just need to have the results of the ambient-occlusion pass available when you render the ambient term, it doesn't matter at all how you're actually rendering it. So what happens usually is you'll do your SSAO pass, and the result of that will be in a buffer that has the same dimensions as the screen. Then as you render your ambient term, you'll sample the occlusion factor from this buffer by converting the pixel's screen-space position into a 2D texture coordinate. Usually you can just multiply the occlusion factor with the ambient term and you're good to go, but I suppose you could do something fancier if you wanted.
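As a rough sketch of that last step, in Direct3D 9-style HLSL (all names here are illustrative, not from any particular engine), the ambient pass might look something like:

```hlsl
// Occlusion buffer produced by the SSAO pass (same dimensions as the screen).
sampler2D g_OcclusionBuffer;
// Reciprocal of the render-target size, set from the application.
float2    g_InvScreenSize;

float4 AmbientPS(float2 screenPos : VPOS,
                 float3 ambient   : TEXCOORD0) : COLOR
{
    // Convert the pixel's screen-space position into a [0,1] texture coordinate.
    float2 uv = screenPos * g_InvScreenSize;

    // Fetch the occlusion factor and attenuate the ambient term with it.
    float occlusion = tex2D(g_OcclusionBuffer, uv).r;
    return float4(ambient * occlusion, 1.0f);
}
```

The same idea works in any API: the only requirement is that the occlusion buffer is bound as a texture while the ambient term is shaded.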
While a deferred system makes it a lot easier, Crysis, the game SSAO was first developed for, wasn't a deferred renderer. However, they did a depth-only pass first before other rendering, so that's what you will need to do. If done right, a depth pre-pass can also improve frame rates when you are using complex shaders.
Quote:Original post by wolf
It does not depend on any renderer architecture. You just need a depth buffer ...

So it does depend on using z-buffer rasterization ;)
Quote:Original post by AndyTX
So it does depend on using z-buffer rasterization ;)

Or ray-tracing as long as you fill in a depth buffer ;P
Quote:Original post by Hodgman
Or ray-tracing as long as you fill in a depth buffer ;P

Hehe, true :D Of course if you can trace arbitrary rays SSAO might not be the best choice of algorithm.
Well, how would I go about filling in the information for the depth buffer? What I was thinking was a multiple-pass technique where pass p0 would write out the depth information and pass p1 would use it. However, I have never done multiple passes before, and I do not see how you could get a color from one pixel shader to another. I tried making a global variable, but that did not work.
The simplest way to do it is a pass where you render depth to a floating-point texture. Then when you're doing your SSAO pass, you simply sample that depth texture in the shader like any normal texture. The details will vary a little depending on which API you're using, but the core concept is the same.
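The depth pass might look roughly like this in Direct3D 9-style HLSL (a sketch only; it assumes a floating-point render target such as R32F, and all names are made up for illustration):

```hlsl
// Pass p0: write linear view-space depth to a floating-point render target.
float4x4 g_WorldView;       // object -> view space
float4x4 g_WorldViewProj;   // object -> clip space
float    g_FarPlane;        // distance to the far clip plane

struct VS_OUT
{
    float4 pos   : POSITION;
    float  depth : TEXCOORD0;
};

VS_OUT DepthVS(float4 pos : POSITION)
{
    VS_OUT o;
    o.pos = mul(pos, g_WorldViewProj);
    // Linear view-space depth, normalized to [0,1] by the far plane.
    o.depth = mul(pos, g_WorldView).z / g_FarPlane;
    return o;
}

float4 DepthPS(float depth : TEXCOORD0) : COLOR
{
    return float4(depth, 0.0f, 0.0f, 1.0f);
}
```

In pass p1 (the SSAO pass) you then bind that render target as an ordinary texture and tex2D it. That's how data gets from one pixel shader to another: through a render target, not a global variable.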
Quote:Original post by AndyTX
Quote:Original post by Hodgman
Or ray-tracing as long as you fill in a depth buffer ;P

Hehe, true :D Of course if you can trace arbitrary rays SSAO might not be the best choice of algorithm.


Why not? Sampling an image ~30 times will be cheaper than, say, shooting 30 rays in random directions [grin]

