Hurp

SSAO without deferred rendering


Hello, I have been reading about Screen Space Ambient Occlusion for a while now and have been trying to apply it. However, after reading some example source code, it seems as if everyone does it with deferred rendering. I was wondering if anyone has done this without using deferred rendering or post-processing. Does anyone know if that is possible?

It does not depend on any renderer architecture. You just need a depth buffer ...

You just need to have the results of the ambient-occlusion pass available when you render the ambient term; it doesn't matter at all how you're actually rendering it. So what usually happens is you'll do your SSAO pass, and the result of that will be in a buffer that has the same dimensions as the screen. Then as you render your ambient term, you'll sample the occlusion factor from this buffer by converting the pixel's screen-space position into a 2D texture coordinate. Usually you can just multiply the occlusion factor with the ambient term and you're good to go, but I suppose you could do something fancier if you wanted.
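For illustration, here is a minimal HLSL sketch of that lookup. It assumes D3D9-style shaders where the clip-space position is passed to the pixel shader through a TEXCOORD; the names (ssaoSampler, ambientLight, diffuseColour, halfPixel) are placeholders, and halfPixel is the usual D3D9 half-texel offset (0.5 / screen size).

// Vertex shader output: pass the clip-space position along so the
// pixel shader can turn it into a screen-space texture coordinate.
struct VS_OUT
{
    float4 pos     : POSITION;
    float4 clipPos : TEXCOORD0;
    // ... plus whatever else your lighting needs
};

sampler ssaoSampler;    // screen-sized SSAO result (occlusion in the red channel)
float3  ambientLight;   // scene ambient colour
float2  halfPixel;      // 0.5 / screen dimensions (D3D9 texel alignment)

float3 AmbientWithSSAO(VS_OUT IN, float3 diffuseColour)
{
    // Perspective divide gives NDC in [-1, 1]; remap to [0, 1] texture space
    // (the Y axis flips because texture space runs top-down).
    float2 ndc = IN.clipPos.xy / IN.clipPos.w;
    float2 uv  = float2(0.5f * ndc.x + 0.5f, -0.5f * ndc.y + 0.5f) + halfPixel;

    // Modulate the ambient term by the occlusion factor (1 = fully open).
    float occlusion = tex2D(ssaoSampler, uv).r;
    return ambientLight * diffuseColour * occlusion;
}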

While a deferred system makes it a lot easier, Crysis, the game it was first developed for, wasn't a deferred renderer. However, they did do a depth-only pass before the rest of the rendering, so that's what you will need to do. If done right, it can even improve frame rates if you are using complex shaders.

Quote:
Original post by wolf
It does not depend on any renderer architecture. You just need a depth buffer ...

So it does depend on using z-buffer rasterization ;)

Quote:
Original post by Hodgman
Or ray-tracing as long as you fill in a depth buffer ;P

Hehe, true :D Of course if you can trace arbitrary rays SSAO might not be the best choice of algorithm.

Well, how would I go about filling out the information for the depth buffer? What I was thinking of was a multiple-pass technique where pass p0 would work out the depth information and pass p1 would use it. However, I have never done multiple passes before, and I do not see how you could get a color from one pixel shader to another. I tried making a global variable, but that did not work.

The simplest way to do it is a pass where you render depth to a floating-point texture. Then when you're doing your SSAO pass, you simply sample that depth texture in the shader like any normal texture. The details of doing this will vary a little bit depending on which API you're using, but the core concept is the same.

Quote:
Original post by AndyTX
Quote:
Original post by Hodgman
Or ray-tracing as long as you fill in a depth buffer ;P

Hehe, true :D Of course if you can trace arbitrary rays SSAO might not be the best choice of algorithm.


Why not? Sampling an image ~30 times will be cheaper than, say, shooting 30 rays in random directions [grin]

Quote:
Original post by MJP
The simplest way to it is to do a pass where you render depth to a floating-point texture. Then when you're doing your SSAO pass, you simply sample that depth texture in the shader like any normal texture. The details of doing this will vary a little bit depending on which API you're using, but the core concept will be the same.


Well, I read the "Reconstructing the 3D position from depth" thread and other material on it, and I always notice that everyone seems to already have the texture they need. Or is it that they pass everything into a "floating-point texture"? If so, how is this possible?

Quote:
Original post by Cypher19
Why not? Sampling an image ~30 times will be cheaper than, say, shooting 30 rays in random directions [grin]

Certainly faster, but to be fair SSAO is a neat little trick for now but I doubt it'll be of much use once we start to get into fancier GI algorithms. It just approaches the problem in a totally wrong way (sampling objects/rays that are "near" to an object and visible in screen space? ouch!)... while I agree that it does provide some neat effects and I certainly use it on occasion, it's not the final solution, or probably not even a part of the final solution. It's almost more of an interesting effect for NPR stuff than realistic rendering though.

Quote:
Original post by Hurp
Well, I read the "Reconstructing the 3D position from depth" thread and other material on it, and I always notice that everyone seems to already have the texture they need. Or is it that they pass everything into a "floating-point texture"? If so, how is this possible?


They have it either because they explicitly rendered it (which was the case for my renderer) or because they accessed the device's z-buffer. Like I said, just rendering depth to a floating-point render target is probably the simplest way to do it given API restrictions; you just render all your geometry using a simple pixel shader that outputs depth instead of a color. Then once you have the results of that pass in a texture, you bind it as an input and sample it in your SSAO pass.
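As a rough sketch of what such a depth-only pass might look like in HLSL (assuming a worldViewMatrix, worldViewProjectionMatrix and farClip uniform; the depth is passed to the pixel shader through a TEXCOORD, since the POSITION register itself can't be read there):

float4x4 worldViewMatrix;
float4x4 worldViewProjectionMatrix;
float    farClip;    // distance to the far plane

struct DEPTH_VS_OUT
{
    float4 pos   : POSITION;
    float  depth : TEXCOORD0;    // linear view-space z
};

DEPTH_VS_OUT DepthVS(float3 position : POSITION)
{
    DEPTH_VS_OUT OUT;
    OUT.pos   = mul(float4(position, 1.0f), worldViewProjectionMatrix);
    OUT.depth = mul(float4(position, 1.0f), worldViewMatrix).z;
    return OUT;
}

float4 DepthPS(DEPTH_VS_OUT IN) : COLOR
{
    // Normalize by the far plane; with D3DFMT_R32F only the red channel is stored.
    return float4(IN.depth / farClip, 0.0f, 0.0f, 1.0f);
}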

Quote:
Original post by AndyTX
..SSAO is a neat little trick for now but I doubt it'll be of much use once we start to get into fancier GI algorithms.


I sure hope so! I like it in that there's a pretty decent quality/performance tradeoff, but I almost can't stand it due to the utterly wrong results it produces sometimes. Plus for decent results you need to constantly tweak the parameters for your scene, which for me always feels more like guesswork than anything else. But unfortunately I'm stuck with it (as well as screen-space DOF, my other pet-peeve) because better techniques are just too expensive and because I'm not clever enough to come up with something else. [depressed]

Quote:
Original post by MJP
But unfortunately I'm stuck with it (as well as screen-space DOF, my other pet-peeve) because better techniques are just too expensive and because I'm not clever enough to come up with something else. [depressed]

Indeed, isn't that always the case in graphics? :) Still, I have faith that someone will come up with better stuff and hardware and software will continue to improve. I just have more future interest in stuff like instant radiosity as a basis for that than SSAO. That said, screen-space effects are pretty cheap and often do a "good enough" job until we can afford to do something a bit more "correct".

How would I render to a floating-point render target? I think I may just be missing a step: I am rendering to a render target, but I do not know if it is a floating-point one. Right now my main render loop looks like this:


m_pD3D->Clear(0,0,128);
m_pD3D->DeviceBegin();

// change to the actual view.
m_pD3D->GetDirect3DDevice()->SetTransform(D3DTS_VIEW, &m_pCam->GetViewMatrix());
m_pD3D->GetDirect3DDevice()->SetTransform(D3DTS_PROJECTION, &m_pCam->GetProjectionMatrix());


// AO
m_pD3D->GetDirect3DDevice()->GetRenderTarget(0, &m_pBackbuffer);
LPDIRECT3DSURFACE9 pSurface = NULL;
m_pRenderTarget->GetSurfaceLevel(0, &pSurface);

m_pD3D->GetDirect3DDevice()->SetRenderTarget(0, pSurface);
pSurface->Release();

// Render Objects
m_pOM->RenderAll();

m_pD3D->DeviceEnd();

// AO
m_pD3D->GetDirect3DDevice()->SetRenderTarget(0, m_pBackbuffer);
m_pBackbuffer->Release();
m_pOM->m_tDepthTexture = m_pRenderTarget;


// Second Pass (After AO)
m_pD3D->Clear(0,0,128);
m_pD3D->DeviceBegin();

// change to the actual view.
m_pD3D->GetDirect3DDevice()->SetTransform(D3DTS_VIEW, &m_pCam->GetViewMatrix());
m_pD3D->GetDirect3DDevice()->SetTransform(D3DTS_PROJECTION, &m_pCam->GetProjectionMatrix());


// Render everything.
m_pOM->RenderAll(false);

m_pD3D->DeviceEnd();
m_pD3D->Present();


My RenderAll section just renders a mesh, sets up the depth texture, etc. However, as mentioned before, my main concern (because this all works) is saving to a floating-point render target. How can I do this? Here are a few variable definitions that I use.

LPDIRECT3DTEXTURE9 m_pRenderTarget;
LPDIRECT3DSURFACE9 m_pBackbuffer;

device->CreateTexture(nWidth, nHeight, 1, D3DUSAGE_RENDERTARGET, D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &m_pRenderTarget, NULL);

You just need to specify a floating-point format in place of D3DFMT_A8R8G8B8 when you create the render-target texture. Something like D3DFMT_R32F should do the trick. You can see all the floating-point formats in the documentation for D3DFORMAT, under "Floating-Point Formats" and "IEEE Formats".

Ah ok, thanks a lot for the help. Just out of curiosity, does it matter if I use D3DFMT_R32F or D3DFMT_A32B32G32R32F? Does the latter just use more memory?

Quote:
Original post by Hurp
Ah ok, thanks a lot for the help. Just out of curiosity, does it matter if I use D3DFMT_R32F or D3DFMT_A32B32G32R32F? Does the latter just use more memory?


It will use 4x the memory and bandwidth for reading and writing. In light of this, you should only use that format when you need to store 4 values per pixel and you really need a lot of precision for them. In this case you only need to store one high-precision value (the depth), so you should stick with R32F.

I am still reading a lot about this technique, and while I do understand a lot more than before, I am just a bit fuzzy on one (big) issue. I can tell why everyone is returning the color in (depth, 0, 0, 1) fashion, however, what happens after that? All I have now is a blue outline of something; how could I actually get the "shadowing" effect that AO is supposed to reproduce?

Once you have depth stored into a texture, you feed it into a full-screen shader that determines an ambient occlusion factor for each pixel. The basic gist of this shader is that it figures out the positions of various pixels at random points around the pixel being currently rendered. To get these positions, you have to do a calculation that takes the depth you stored in the depth buffer and gives you world-space (or view-space) position. There's a whole lot of discussion of that particular topic in this thread. Later on there's also some HLSL code for the SSAO pass.
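To give a sense of what that pass can look like, here is a heavily simplified HLSL sketch. It assumes the linear depth (divided by the far plane) from the earlier pass is in depthSampler, that the full-screen quad's vertex shader outputs the pixel's UV plus a view ray pointing at the matching corner of the far plane (so the view-space position is just viewRay * depth), and that sampleRadius and depthBias are scene-dependent tuning values. A real implementation would use more samples, randomized per pixel, plus a blur pass.

sampler  depthSampler;          // R32F: linear view-space depth / farClip
float4x4 projectionMatrix;      // same projection used to render the scene
float    farClip;
float    sampleRadius = 0.5f;   // world-space radius to search for occluders
float    depthBias    = 0.02f;  // avoids surfaces occluding themselves

// A tiny fixed kernel of view-space offsets; real implementations use more,
// randomized per pixel (e.g. reflected by a noise texture).
static const float3 kernel[4] =
{
    float3( 0.5f,  0.2f,  0.4f), float3(-0.4f,  0.5f,  0.3f),
    float3( 0.3f, -0.5f,  0.5f), float3(-0.2f, -0.3f,  0.6f)
};

struct SSAO_VS_OUT
{
    float4 pos     : POSITION;
    float2 uv      : TEXCOORD0;  // screen-space UV of this pixel
    float3 viewRay : TEXCOORD1;  // view-space position of the far-plane corner
};

float4 SSAO_PS(SSAO_VS_OUT IN) : COLOR
{
    // Rebuild this pixel's view-space position from the stored depth.
    float  depth   = tex2D(depthSampler, IN.uv).r;   // 0..1, linear
    float3 viewPos = IN.viewRay * depth;

    float occlusion = 0.0f;
    [unroll]
    for (int i = 0; i < 4; ++i)
    {
        // Take a point near the pixel and project it back into screen space.
        float3 samplePos = viewPos + kernel[i] * sampleRadius;
        float4 clip      = mul(float4(samplePos, 1.0f), projectionMatrix);
        float2 sampleUV  = float2( 0.5f * clip.x / clip.w + 0.5f,
                                  -0.5f * clip.y / clip.w + 0.5f);

        // Depth of whatever geometry is actually visible at that location.
        float storedDepth = tex2D(depthSampler, sampleUV).r * farClip;

        // If that geometry sits in front of our sample point (and isn't far
        // away), count it as an occluder.
        float inRange = abs(viewPos.z - storedDepth) < sampleRadius ? 1.0f : 0.0f;
        occlusion += (storedDepth < samplePos.z - depthBias) ? inRange : 0.0f;
    }

    // 1 = unoccluded, 0 = fully occluded; multiply into the ambient term later.
    float ao = 1.0f - occlusion / 4.0f;
    return float4(ao, ao, ao, 1.0f);
}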

I have continued to read about this technique and I still do not understand how one goes from an (ao, 0, 0, 1) value to actually showing the "shadowed" area. Would anyone be able to explain this or maybe post an example project?

Quote:
Original post by Hurp
I have continued to read about this technique and I still do not understand how one goes from an (ao, 0, 0, 1) value to actually showing the "shadowed" area. Would anyone be able to explain this or maybe post an example project?


When you render the ambient light term, you'd multiply it by your SSAO.

For example:

// if SSAO was stored in the red component of a texture
float3 ambientTerm = ambientLight * diffuseColour * tex2D(ssaoTexture, uv).r;

I have been working on this, and it seems that for some reason my camera position changes the effect I get. I have not written much of the actual AO yet, so I believe the problem is in how I write to the depth buffer. I read the thread on reconstructing position from depth and kept my pixel shader really simple. Can anyone tell me if this is all I need?


VS_OUTPUT VS(VS_INPUT IN, out float4 outPos : POSITION)
{
    VS_OUTPUT OUT;

    // Transform to clip space; pass a copy along so the pixel shader can read it
    // (the copy in VS_OUTPUT needs to use a TEXCOORD semantic, since POSITION
    // itself is not readable in a D3D9 pixel shader).
    outPos = mul(float4(IN.position, 1.0f), worldViewProjectionMatrix);
    OUT.position = outPos;
    OUT.texCoord = IN.texCoord;

    return OUT;
}

float4 DepthPass(VS_OUTPUT IN) : COLOR
{
    // For a standard projection matrix, clip-space w equals view-space z,
    // so dividing by the far plane gives a linear depth in [0, 1].
    float fFar = 500.0f;
    float fDC = IN.position.w / fFar;
    return float4(fDC, 0.0f, 0.0f, 1.0f);
}

