SSAO woes

Started by
14 comments, last by MJP 16 years ago
Harry, if you figure out what the problem is please let us know what you did to solve it. I have a similar problem with my implementation of SSAO.

Thanks,
Drak
I don't know if this will help you guys out at all, but in that thread I linked I posted a "cleaned-up" version of Inigo's algorithm in HLSL. I used some more verbose variable names and named a few of the parameters that you can tweak, which might make things easier to understand. I also mentioned in that thread what parameters I used to get the results for the screenshots I posted. Only catch is I didn't implement the "reflect using a random texture" part, but that's not too hard.

#define NUM_SAMPLE_POINTS 32

float2 depthBufferDimensions;
float4 sampleOffsets[NUM_SAMPLE_POINTS];
float  sampleRadius;
float  distanceScale;
float4x4 projectionMatrix;
sampler2D depthSampler : register( s2 );

struct VS_INPUT
{
    float4 position : POSITION0;        // pre-transformed vertex positions
    float2 texPos : TEXCOORD0;          // texture coordinates
    float3 viewDirection : TEXCOORD1;   // location of the corresponding frustum corner (vertex location in eye-space)
};

struct VS_OUTPUT
{
    float4 position : POSITION0;
    float2 texPos : TEXCOORD0;
    float3 viewDirection : TEXCOORD1;
};

struct PS_OUTPUT
{
    float4 color : COLOR0;
};

void vs (in VS_INPUT IN, out VS_OUTPUT OUT)
{
    // just pass through the data from the vertex
    OUT.position = IN.position;
    OUT.texPos = IN.texPos;
    OUT.viewDirection = IN.viewDirection;
}

void ps (in VS_OUTPUT IN, out PS_OUTPUT OUT)
{
    // reconstruct eye-space position from the depth buffer
    float pixelDepth = tex2D(depthSampler, IN.texPos).r;
    float3 pixelPosEyeSpace = pixelDepth * IN.viewDirection;

    // loop through our sample locations, accumulating the occlusion
    float result = 0.0f;
    for (int i = 0; i < NUM_SAMPLE_POINTS; i++)
    {
        // determine the eye-space and clip-space locations of our current sample point
        float4 samplePointEyeSpace = float4(pixelPosEyeSpace + (sampleOffsets[i].xyz * sampleRadius), 1.0f);
        float4 samplePointClipSpace = mul(samplePointEyeSpace, projectionMatrix);

        // determine the texture coordinate of our current sample point
        float2 sampleTexCoord = 0.5f * samplePointClipSpace.xy / samplePointClipSpace.w + float2(0.5f, 0.5f);

        // flip the y-coordinate and offset by half a pixel
        // This is for Direct3D9 only!  Not necessary in OpenGL
        sampleTexCoord.y = 1.0f - sampleTexCoord.y;
        float2 offset = 0.5f / depthBufferDimensions;
        sampleTexCoord -= offset;

        // read the depth of our sample point from the depth buffer
        float sampleDepth = tex2D(depthSampler, sampleTexCoord).r;

        // compute our occlusion factor
        float occlusionFactor = distanceScale * max(pixelDepth - sampleDepth, 0.0f);
        result += 1.0f / (1.0f + occlusionFactor * occlusionFactor);
    }

    OUT.color = float4(result / NUM_SAMPLE_POINTS, 1.0f, 1.0f, 1.0f);
}
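Since the "random texture" part is skipped, the `sampleOffsets` array has to be filled on the CPU. Here's one way you could do it (a hedged sketch in Python, not from the original project): rejection-sample points inside the unit sphere, so that scaling by `sampleRadius` in the shader keeps every sample within the radius.

```python
import random

def make_sample_offsets(num_points=32, seed=0):
    """Generate random offsets inside the unit sphere, one per sample.

    Each offset is later scaled by sampleRadius in the shader, so unit-sphere
    vectors (length <= 1) are all we need here. Returned as 4-tuples to match
    the float4 array in the shader; w is unused.
    """
    rng = random.Random(seed)
    offsets = []
    while len(offsets) < num_points:
        # rejection-sample a point in the unit sphere
        v = (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))
        if v[0]**2 + v[1]**2 + v[2]**2 <= 1.0:
            offsets.append((v[0], v[1], v[2], 0.0))
    return offsets
```

The function and its parameters are hypothetical names for illustration; any distribution of points inside the sphere would work, though more uniform distributions tend to give less noisy results.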
Ok, based on the other thread that MJP posted a link to (thanks MJP), I have attempted my own version of SSAO. I am getting very strange artifacts and I am sure it has to do with my far frustum corner calculation. Any help would be appreciated.

Here is the rendermonkey project:
http://www.2shared.com/file/3086783/5b84fc90/ssao.html

Thanks,
Drak


I don't have rendermonkey on this computer so I can't look at your project, but I can tell you how I reconstruct position from depth.

Like it says in the comments, viewDirection is the location of the corresponding corner of the view frustum, in view space. This value is calculated from the parameters I use to create my perspective projection matrix, which are the vertical fov, aspect ratio, distance to the near clip plane, and distance to the far clip plane. The code looks something like this, with index 0 of the array being the top-left corner and then going clockwise:

float farY = tan(fov / 2) * farZ;
float farX = farY * aspectRatio;
frustumCorners[0] = {-farX,  farY, farZ};
frustumCorners[1] = { farX,  farY, farZ};
frustumCorners[2] = { farX, -farY, farZ};
frustumCorners[3] = {-farX, -farY, farZ};
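The same corner math translated to Python, if you want to sanity-check it on the CPU (function name and layout assumptions are mine, matching the clockwise-from-top-left order described above):

```python
import math

def far_frustum_corners(fov_y, aspect, far_z):
    """Four far-plane corners in view space, clockwise from top-left."""
    far_y = math.tan(fov_y / 2.0) * far_z
    far_x = far_y * aspect
    return [(-far_x,  far_y, far_z),
            ( far_x,  far_y, far_z),
            ( far_x, -far_y, far_z),
            (-far_x, -far_y, far_z)]
```

For a 90-degree vertical fov and square aspect ratio, the half-extents at the far plane come out equal to farZ itself (tan(45°) = 1), which is an easy case to verify by hand.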


My depth buffer is created by storing linearized view-space Z, which is calculated like this:

// Vertex shader
OUT.vPositionVS = mul( IN.vPositionOS, matWorldView );

// Pixel shader
OUT.depth = IN.vPositionVS.z / fFrustumFarZ;
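Put together, the round trip is easy to check on the CPU: a point on the ray toward a frustum corner stores depth = z / farZ, and multiplying that depth by the interpolated corner (viewDirection) recovers the original view-space position. A small Python sketch with hypothetical numbers:

```python
def reconstruct(depth, view_direction):
    # pixelPosEyeSpace = pixelDepth * viewDirection, as in the shader above
    return tuple(depth * c for c in view_direction)

far_z = 100.0
corner = (-50.0, 50.0, far_z)        # a far-plane corner (the viewDirection)
p = tuple(0.25 * c for c in corner)  # a view-space point on the ray to that corner
depth = p[2] / far_z                 # what the depth buffer stores (linear z / farZ)
assert reconstruct(depth, corner) == p
```

The key detail is that viewDirection's z component is exactly farZ, so depth * viewDirection.z = z, and the x and y components scale proportionally along the ray.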
My tip is to avoid the transformation into viewspace (eyespace).
At first, I tried to use Inigo's method, but after understanding what the nature of the calculation is, I decided to just do it in projected space. My best explanation for this choice is: you won't gain any additional information in view space, so there is no point in the additional transformation. By avoiding the transformation to view space, the amount of code is reduced (which is always a good thing).
You can see my implementation in the image of the day section and decide for yourself if the implementation in projectedspace is good enough or not.
Quote:Original post by doronf
My tip is to avoid the transformation into viewspace (eyespace).
At first, I tried to use Inigo's method, but after understanding what the nature of the calculation is, I decided to just do it in projected space. My best explanation for this choice is: you won't gain any additional information in view space, so there is no point in the additional transformation. By avoiding the transformation to view space, the amount of code is reduced (which is always a good thing).
You can see my implementation in the image of the day section and decide for yourself if the implementation in projected space is good enough or not.


Not performing your calculations in linear space means that the parameters you use will produce different results depending on the size of the projection volume and where you're sampling within your volume. IMO it's not worth saving the shader math, especially for complex scenes.
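To see the point concretely: post-projection depth is non-linear in view-space z, so the same world-space gap produces very different depth differences near and far from the camera. A quick Python check, using the standard D3D-style perspective depth formula (near/far values are arbitrary examples):

```python
def post_projection_depth(z_view, near, far):
    """Standard D3D-style perspective depth: far*(z - near) / (z*(far - near))."""
    return far * (z_view - near) / (z_view * (far - near))

near, far = 1.0, 100.0
# Two pairs of points, each 1 unit apart in view space:
d_near = post_projection_depth(3.0, near, far) - post_projection_depth(2.0, near, far)
d_far  = post_projection_depth(99.0, near, far) - post_projection_depth(98.0, near, far)
assert d_near > 10 * d_far  # the same 1-unit gap shrinks drastically far from the camera
```

So an occlusion threshold tuned for projected-space depths near the camera behaves very differently at the back of the scene, which is exactly the parameter-drift problem described above.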


This topic is closed to new replies.
