Another SSAO thread.

39 comments, last by Cineska 15 years, 10 months ago
Quote:Original post by Hurp
I have also been having a problem with this and I cannot seem to figure out why. I noticed that in most examples, rendering the texture from the depth-only pass shows red (since only the red component is written), but I get blue instead and cannot figure out why. It also seems to depend on the camera position for some reason. Does anyone happen to know why?

Shader Code:
*** Source Snippet Removed ***

Render Code (general algorithm)
*** Source Snippet Removed ***

This is also how I create my render target:

pD3Dev->CreateTexture(nWidth, nHeight, 1, D3DUSAGE_RENDERTARGET, D3DFMT_R32F, D3DPOOL_DEFAULT, &m_pRenderTarget, NULL);


How are you viewing your depth texture? In PIX, or are you rendering to the screen? When you render to an R32F surface, the green, blue, and alpha components are interpreted as having a value of 1.0. Therefore you'll want to only view the red channel if viewing in PIX, or set the green and blue components to the value of the red component if rendering to the screen.
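For the render-to-screen case, a minimal sketch of such a display shader might look like this (the texture, sampler, and entry-point names here are illustrative assumptions, not Hurp's actual code):

texture tDepthTexture;
sampler2D sDepth = sampler_state { Texture = <tDepthTexture>; };

// Copy the red channel of the R32F depth texture into green and blue for display.
// On an R32F surface only red holds data; green/blue/alpha read back as 1.0.
float4 ShowDepth(float2 texCoord : TEXCOORD0) : COLOR
{
	float depth = tex2D(sDepth, texCoord).r;
	return float4(depth, depth, depth, 1.0f);
}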

Quote:Original post by Hurp
I have also been having a problem with this and I cannot seem to figure out why. I noticed that in most examples, rendering the texture from the depth-only pass shows red (since only the red component is written), but I get blue instead and cannot figure out why. It also seems to depend on the camera position for some reason. Does anyone happen to know why?

Shader Code:
*** Source Snippet Removed ***

Render Code (general algorithm)
*** Source Snippet Removed ***

This is also how I create my render target:

pD3Dev->CreateTexture(nWidth, nHeight, 1, D3DUSAGE_RENDERTARGET, D3DFMT_R32F, D3DPOOL_DEFAULT, &m_pRenderTarget, NULL);


There are several things wrong:
Quote:
OUT.depthPosition = mul(float4(IN.position, 1.0f), worldMatrix);


This puts the position in world space (and thus it's not consistent when you move the camera); you want it in view space (worldViewMatrix) for linear depth.

Quote:
float fDC = IN.depthPosition.z / IN.depthPosition.w;


This is correct if you're using post-projection depth (i.e. multiplied by worldViewProjection). Otherwise you want the length of IN.depthPosition.xyz (or just the .z component, if you use the alternate method of constructing the view-space XY position).
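Putting those two points together, a minimal sketch of the view-space variant might look like the following (worldViewMatrix and the farClip constant used to normalise depth are assumed names for illustration, not taken from Hurp's shader):

float4x4 worldViewProjectionMatrix;
float4x4 worldViewMatrix;	// world * view, no projection
float    farClip;			// far-plane distance, used to bring depth into [0, 1]

struct DEPTH_VS_OUT
{
	float4 clipPos : POSITION;
	float4 viewPos : TEXCOORD0;
};

DEPTH_VS_OUT DepthVS(float3 position : POSITION)
{
	DEPTH_VS_OUT OUT;
	OUT.clipPos = mul(float4(position, 1.0f), worldViewProjectionMatrix);
	OUT.viewPos = mul(float4(position, 1.0f), worldViewMatrix);	// view space, not world space
	return OUT;
}

float4 DepthPS(DEPTH_VS_OUT IN) : COLOR
{
	// Linear view-space distance from the camera, normalised by the far plane.
	float d = length(IN.viewPos.xyz) / farClip;
	return float4(d, d, d, 1.0f);
}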
Ok, I have the blurring done: http://azazeldev.org/ssao/3.jpg

Agi, there is still something I don't understand, however: the bias you mentioned, for example. Also, if you look at the above screenshot: if I move that man into the centre of the box-room, there is still occlusion on the walls from him. I'm guessing I need to check the depth, but I'm not sure how to go about doing that. Any ideas?
-...-Squeeged third eye.
MJP and agi_shi, thank you for the help. I made the suggested changes; however, I still have an issue. It no longer depends on the camera (it seems), but for the most part everything still renders white in a room that has a ship in it. I also looked at the texture on screen. Are you suggesting that the code should look like this? I thought that maybe there is a problem with the way I created my render target, or with the order in which I do things.

VS_OUTPUT VS(VS_INPUT IN, out float4 outPos : POSITION)
{
	VS_OUTPUT OUT;
	outPos = mul(float4(IN.position, 1.0f), worldViewProjectionMatrix);
	OUT.position = outPos;
	OUT.texCoord = IN.texCoord;
	return OUT;
}

float4 DepthPass(VS_OUTPUT IN) : COLOR
{
	float fDC = length(IN.position.xyz);
	return float4(fDC, fDC, fDC, 1.0f);
}

float4 RegularPass(VS_OUTPUT IN) : COLOR
{
	float4 ambientColor = tex2D(sDepth, IN.texCoord);
	float4 returnColor = float4(ambientColor.r, ambientColor.r, ambientColor.r, 1.0f);
	return returnColor;
}
Quote:Original post by AriusMyst
Ok, I have the blurring done: http://azazeldev.org/ssao/3.jpg

Agi, there is still something I don't understand, however: the bias you mentioned, for example.

The point of the bias towards the normal is to minimize cases where half of your samples in the sphere are self-occlusion artifacts. Meaning that a flat plane or a curved object, for example, would incorrectly be occluded by itself.
Quote:Also, if you look at the above screenshot: if I move that man into the centre of the box-room, there is still occlusion on the walls from him. I'm guessing I need to check the depth, but I'm not sure how to go about doing that. Any ideas?

Well, the idea is to have an occlusion falloff that looks right for your scene, so that when an occluder's depth is far enough away, it no longer contributes occlusion.
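As a rough illustration (just a sketch, not code from either poster; the constants are arbitrary tuning values), a depth-based falloff can look like this:

// Sketch of a depth-based occlusion falloff: the further the potential occluder is in
// front of the shaded pixel, the less occlusion it contributes.
float OcclusionFalloff(float pixelDepth, float occluderDepth)
{
	float diff = pixelDepth - occluderDepth;	// > 0 when the occluder is in front of the pixel
	if (diff <= 0.0f)
		return 0.0f;							// occluder is behind this pixel: no occlusion

	const float FALLOFF_START = 0.01f;
	const float FALLOFF_END   = 0.2f;

	// Full occlusion near the surface, fading to none as the depth gap grows.
	return 1.0f - smoothstep(FALLOFF_START, FALLOFF_END, diff);
}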

@ Hurp: It seems you combined both of my (separate) suggestions. Either do it as you are now and use the pos.z / pos.w approach, or transform the position into view space (not the final projection space) and then take the length() as you have it.

EDIT: Also, that screenshot looks very nice! What method did you use for blurring? (Also, I assume you check for the edges; how? [wink]) One more thing: it seems like you're applying the SSAO to all light, right? (not just ambient)

[Edited by - agi_shi on June 11, 2008 3:42:05 PM]
agi_shi: I tried sticking to just one way (pos.z / pos.w, as I see this is used often) and I am still having a problem where it is camera-dependent. Do you believe it could be a problem in my rendering code rather than in my shader? Here is the shader I am using.

float4x4 worldViewProjectionMatrix;
float4x4 worldMatrix;
float4x4 viewMatrix;

texture tDepthTexture;
sampler2D sDepth = sampler_state
{
	Texture = <tDepthTexture>;
	MagFilter = NONE;
	MinFilter = NONE;
	MipFilter = NONE;
};

struct VS_INPUT
{
	float3 position : POSITION;
	float2 texCoord : TEXCOORD0;
	float3 normal : NORMAL;
};

struct VS_OUTPUT
{
	float4 position : TEXCOORD0;
	float2 texCoord : TEXCOORD1;
};

VS_OUTPUT VS(VS_INPUT IN, out float4 outPos : POSITION)
{
	VS_OUTPUT OUT;
	outPos = mul(float4(IN.position, 1.0f), worldViewProjectionMatrix);
	OUT.position = outPos;
	OUT.texCoord = IN.texCoord;
	return OUT;
}

float4 DepthPass(VS_OUTPUT IN) : COLOR
{
	float fDC = IN.position.z / IN.position.w;
	return float4(fDC, fDC, fDC, 1.0f);
}

float4 RegularPass(VS_OUTPUT IN) : COLOR
{
	float4 ambientColor = tex2D(sDepth, IN.texCoord);
	float4 returnColor = float4(ambientColor.r, ambientColor.r, ambientColor.r, 1.0f);
	return returnColor;
}

technique DepthPass
{
	pass
	{
		VertexShader = compile vs_2_0 VS();
		PixelShader = compile ps_2_0 DepthPass();
	}
}

technique AfterAmbient
{
	pass
	{
		VertexShader = compile vs_2_0 VS();
		PixelShader = compile ps_2_0 RegularPass();
	}
}
The method I'm using for the blur is pretty simple. At first I just used my HDRR blurring, but that looked awful. Here is what I'm doing.

float2 Direction = float2(1.0f/1024.0f, 0);

texture Deepa;
texture SSAO;

sampler2D DeepSample = sampler_state
{
	Texture = (Deepa);
};

sampler2D SSAOSample = sampler_state
{
	Texture = (SSAO);
};

struct a2v
{
	float4 Position : POSITION0;
	float2 UV       : TEXCOORD0;
};

struct v2f
{
	float4 Position : POSITION0;
	float2 UV       : TEXCOORD0;
};

void vp(in a2v IN, out v2f OUT)
{
	OUT.Position = IN.Position;
	OUT.UV       = IN.UV;
}

float4 fp(in v2f IN) : COLOR0
{
	float  Deep = tex2D(DeepSample, IN.UV).r;
	float3 Norm = tex2D(DeepSample, IN.UV).rgb;
	float  AO   = tex2D(SSAOSample, IN.UV).r;

	float Num = 1;
	int   Sam = 32;

	for (int i = -Sam/2; i <= Sam/2; i += 1)
	{
		float2 nUV = float2(IN.UV + i * Direction.xy);

		float  Sample = tex2D(SSAOSample, nUV).r;
		float3 sNorm  = tex2D(DeepSample, nUV).rgb;

		if (dot(Norm, sNorm) > 0.99)
		{
			Num += (Sam/2 - abs(i));
			AO  += Sample * (Sam/2 - abs(i));
		}
	}

	return AO / Num;
}

technique Blur
{
	pass p0
	{
		VertexShader = compile vs_3_0 vp();
		PixelShader  = compile ps_3_0 fp();
	}
}


Obviously I run it twice, once for each direction. I also have a version with fewer samples for SM2 (I'm writing this for the TV3D community and some of them need the lower-quality version).
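If you'd rather keep both directions inside the effect file instead of setting the Direction constant from the application between passes, one option (just a sketch, not AriusMyst's actual setup; the texture, sampler, trivial 5-tap body, and the 1024x768 target size are all assumptions) is to specialise the pixel shader per pass with a uniform parameter:

texture SSAO;
sampler2D SSAOSample = sampler_state { Texture = (SSAO); };

void vpBlur(in float4 pos : POSITION0, in float2 uv : TEXCOORD0,
            out float4 oPos : POSITION0, out float2 oUV : TEXCOORD0)
{
	oPos = pos;
	oUV  = uv;
}

float4 fpDir(float2 uv : TEXCOORD0, uniform float2 Direction) : COLOR0
{
	// Trivial 5-tap average along Direction, just to show the per-pass specialisation.
	float ao = 0.0f;
	for (int i = -2; i <= 2; ++i)
		ao += tex2D(SSAOSample, uv + i * Direction).r;
	return ao / 5.0f;
}

technique BlurTwoPass
{
	pass Horizontal
	{
		VertexShader = compile vs_3_0 vpBlur();
		PixelShader  = compile ps_3_0 fpDir(float2(1.0f / 1024.0f, 0.0f));
	}
	pass Vertical
	{
		VertexShader = compile vs_3_0 vpBlur();
		PixelShader  = compile ps_3_0 fpDir(float2(0.0f, 1.0f / 768.0f));
	}
}

The application still has to swap the intermediate render target and the bound SSAO texture between the two passes, of course.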

I included a simple depth check that seems to work well: http://azazeldev.org/ssao/depth.avi

I'm still not sure about the self-occlusion you're mentioning. The walls are annoying, but they don't seem so bad now that this depth check is in play.
-...-Squeeged third eye.
Actually, scrap that. The higher the resolution, the worse the walls get: http://azazeldev.org/ssao/planes.jpg

I'm not sure what you mean by the bias; I've tried doing this:

float  Deep = tex2D(DeepSample, nUV).a;
float3 Bias = tex2D(DeepSample, nUV).rgb;
float3 Norm = tex2D(RandSample, IN.UV * 200).rgb * Bias;


I figured that was wrong, but thought I'd give it a shot anyway. I tried reading your thread here (http://www.gamedev.net/community/forums/topic.asp?topic_id=497072), but I'm not sure if that is the thread you meant.
-...-Squeeged third eye.
Quote:Original post by AriusMyst
Actually, scrap that. The higher the resolution, the worse the walls get: http://azazeldev.org/ssao/planes.jpg

I'm not sure what you mean by the bias; I've tried doing this:

*** Source Snippet Removed ***

I figured that was wrong, but thought I'd give it a shot anyway. I tried reading your thread here (http://www.gamedev.net/community/forums/topic.asp?topic_id=497072), but I'm not sure if that is the thread you meant.


Here is the code I use to generate a (pseudo-)normal from the view-space position:
float3 computeViewSpaceNormal(const float4 VP)
{
	float3 b = normalize(ddx(VP.xyz));
	float3 t = normalize(ddy(VP.xyz));
	return cross(t, b);
}


Now that you have the normal, you want to bias each random direction (not the random offset from a normal map):
// bias the random direction away from the normal
// this tends to minimize self occlusion
const float DIR_BIAS_AMOUNT = 1.5;
randomDirection += viewSpaceNormal * DIR_BIAS_AMOUNT;


From looking at your code, we're talking about your "Ray" variable in your loop.
Ahh yes, I understand now. I was being dense. Thanks again Agi, you've been very helpful :).
-...-Squeeged third eye.

