DCOneFourSeven

DX9.0c HLSL SSAO


I'm trying to implement SSAO in my DirectX application for my final-year university project, but I'm completely stuck. So far I've found a few tutorials and examples, but none are really complete, and they only offer implementation details on the SSAO shader itself.

This one in particular: http://archive.gamedev.net/reference/articles/article2753.asp

I understand that two render passes are required prior to this shader being used. One to generate depth information and one to generate normal information. I also understand that the occlusion factor that results from this shader program has to be merged with my scene afterwards. (I'll worry about that when I come to it.)

The problem I'm having is the part where the author assumes I already have normal and depth textures. I don't. The article states that "you can store per-pixel position directly in a floating point buffer". To me that says you're storing the XYZW values of any vertex in world-view space in the RGBA values (respectively) of a texture.

First question: is that correct?

Yes.

You need to store the VIEW-SPACE position (xyz) and VIEW-SPACE normalized normals (xyz). You don't need to fill the alpha channels of the render targets here; RGB is enough.

Remember: for normal vectors, perform the normalization in the pixel shader.

> I understand that two render passes are required prior to this shader being used. One to generate depth information and one to generate normal information.
Actually, you can fill both render targets in the same rendering pass. For example, set RT0 for position and RT1 for normals; then in your pixel shader you write position to COLOR0 and normals to COLOR1:

struct MyOutput
{
float4 posBuffer : COLOR0;
float4 norBuffer : COLOR1;
};

MyOutput myPixelShader (MYINPUT psIn)
{
MyOutput output = (MyOutput) 0;

// ... compute the view-space position (viewSpacePos) here ...

float3 viewSpaceNormalizedNormals = 0.5 * normalize(psIn.VSNormal) + 0.5;

output.posBuffer = float4 (viewSpacePos.xyz, 1);
output.norBuffer = float4 (viewSpaceNormalizedNormals.xyz, 1);

return output;
}
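For reference, here is a minimal sketch (not from the thread) of how the SSAO shader on the other side would undo that 0.5 * n + 0.5 packing when it samples the normal buffer. The sampler and function names are my own assumptions:

```hlsl
// Hypothetical sampler bound to the normal render target (RT1 above)
sampler2D normalSampler;

// Undo the [0, 1] packing done in the prepass and re-normalize,
// since filtering/quantization can denormalize the stored vector
float3 FetchViewSpaceNormal(float2 uv)
{
    float3 n = tex2D(normalSampler, uv).xyz * 2.0f - 1.0f;
    return normalize(n);
}
```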

It's simpler than it sounds. Basically you have a single pass before you perform SSAO where you render normals + depth out to a render target. For starting out, a D3DFMT_A16B16G16R16F target storing the normal XYZ in RGB and depth in the alpha should be fine. The shaders you use for this are really simple:

struct VSInput
{
float4 Position : POSITION;
float3 Normal : NORMAL;
};

struct VSOutput
{
float4 Position : POSITION;
float3 Normal : TEXCOORD0;
float Depth : TEXCOORD1;
};

VSOutput VSDepthNormal(in VSInput input)
{
VSOutput output;

output.Position = mul(input.Position, WorldViewProjection);

// You can store world space or view space normals, for SSAO you probably want view space
output.Normal = mul(input.Position.xyz, (float3x3)WorldView);

// View space Z is a good value to store for depth
output.Depth = mul(input.Position, WorldView).z;
return output;
}

float4 PSDepthNormal(in VSOutput input) : COLOR0
{
return float4(input.Normal, input.Depth);
}
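Since D3DFMT_A16B16G16R16F is a signed floating-point format, no [0, 1] packing is needed with this layout; reading the buffer back in the SSAO pass might look like the sketch below. The sampler and function names are placeholders, not from the article:

```hlsl
// Hypothetical sampler bound to the normal + depth render target
sampler2D normalDepthSampler;

void SampleNormalDepth(float2 uv, out float3 normal, out float depth)
{
    float4 nd = tex2D(normalDepthSampler, uv);
    normal = normalize(nd.xyz); // stored unpacked in a float RT
    depth  = nd.w;              // view-space Z written by the prepass
}
```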

Ok, this is the shader I've constructed for this purpose. I've yet to test it (I've less than an hour before my shift starts):

uniform extern float4x4 WorldViewProjection;
uniform extern float4x4 WorldView;
uniform extern float4x4 FinalXForms[35];

struct VSIN
{
float4 Position:POSITION0;
float3 Normal:NORMAL0;
};

struct VSANIIN
{
float4 Position:POSITION0;
float3 Normal:NORMAL0;
float weight0 : BLENDWEIGHT0;
int4 boneIndex : BLENDINDICES0;
};

struct VSOUT
{
// This is the standard VS projected point
float4 Position:POSITION0;
// The data we shall pass to the PS
float3 Normal:TEXCOORD0;
float4 PosData:TEXCOORD1;
float Depth:TEXCOORD2;
};

VSOUT DVertexShader(VSIN input)
{
VSOUT output = (VSOUT)0;

output.Position = mul(input.Position, WorldViewProjection);

output.PosData = mul(input.Position, WorldView);

// You can store world space or view space normals, for SSAO you probably want view space
output.Normal = mul(input.Normal, (float3x3)WorldView);

// View space Z is a good value to store for depth
output.Depth = mul(input.Position, WorldView).z;

return output;
}

VSOUT DVertexShaderAni(VSANIIN input)
{
VSOUT output = (VSOUT)0;

// Do the vertex blending calculation for posL and normalL.
float weight1 = 1.0f - input.weight0;

float4 p = input.weight0 * mul(input.Position, FinalXForms[input.boneIndex[0]]);
p += weight1 * mul(input.Position, FinalXForms[input.boneIndex[1]]);
p.w = 1.0f;

// We can use the same matrix to transform normals since we are assuming
// no scaling (i.e., rigid-body).
float4 n = input.weight0 * mul(float4(input.Normal, 0.0f), FinalXForms[input.boneIndex[0]]);
n += weight1 * mul(float4(input.Normal, 0.0f), FinalXForms[input.boneIndex[1]]);
n.w = 0.0f;

output.Position = mul(p, WorldViewProjection);

output.PosData = mul(p, WorldView);

// You can store world space or view space normals, for SSAO you probably want view space
output.Normal = mul(n.xyz, (float3x3)WorldView);

// View space Z is a good value to store for depth
output.Depth = mul(p, WorldView).z;

return output;
}

float4 DPixelShader(VSOUT input) : COLOR0
{
return float4(input.PosData.xyz, 1);
}

float4 NPixelShader(VSOUT input) : COLOR0
{
float3 viewSpaceNormalizedNormals = 0.5 * normalize (input.Normal) + 0.5;

return float4(viewSpaceNormalizedNormals, 1);
}

technique DrawPosition
{
pass P0
{
// Specify the vertex and pixel shader associated with this pass.
vertexShader = compile vs_2_0 DVertexShader();
pixelShader = compile ps_2_0 DPixelShader();
}
}

technique DrawPositionAni
{
pass P0
{
// Specify the vertex and pixel shader associated with this pass.
vertexShader = compile vs_2_0 DVertexShaderAni();
pixelShader = compile ps_2_0 DPixelShader();
}
}

technique DrawNormal
{
pass P0
{
// Specify the vertex and pixel shader associated with this pass.
vertexShader = compile vs_2_0 DVertexShader();
pixelShader = compile ps_2_0 NPixelShader();
}
}

technique DrawNormalAni
{
pass P0
{
// Specify the vertex and pixel shader associated with this pass.
vertexShader = compile vs_2_0 DVertexShaderAni();
pixelShader = compile ps_2_0 NPixelShader();
}
}


Two things. In MJP's post the position data was used to calculate the normal information; if that was incorrect then I've made the change in my shader, but I wanted to double check.
Also, the SSAO program detailed in the link in my first post uses two samplers to retrieve depth and normal information. Would the output of this shader still work as an input for that if I stored the depth information in the alpha channel? It looks like it requires the xyz position data in the RGB channels, but this is my first time implementing SSAO and I'd like to be sure before I throw myself into the deep end.
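If you do end up storing only depth in the alpha channel rather than the full xyz position, one common workaround (a sketch under my own assumptions; the article itself samples a position buffer directly) is to rebuild the view-space position from a per-pixel view ray and the stored linear depth:

```hlsl
// Hypothetical: "viewRay" is the eye-to-pixel direction interpolated from
// the frustum corners, scaled so that viewRay.z == 1
sampler2D normalDepthSampler;

float3 ReconstructViewSpacePos(float2 uv, float3 viewRay)
{
    float depth = tex2D(normalDepthSampler, uv).a; // linear view-space Z
    return viewRay * depth;                        // view-space position
}
```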

Yeah sorry, that was a typo. It should use input.Normal, like you have it.

I'm not familiar with that sample, so I'll take a look at it and see what you need to do. If you need position also, the easiest way to do it would be to have your prepass output to two render targets, and put position in the XYZ of the second RT.

I've updated the shader I'm using above, for verification. I also drew the results of the normal pass to the screen and got these results. Not sure if anyone has tried this before and can verify whether this is correct; I'm not so sure, because whenever anything sits in the top-right corner of the screen it comes out pure white. Deeply confused.

Edit: sorry, the blue/pink image is the position data being drawn; the new image I've added underneath is the normals, which looks correct to me. Not sure about anyone else.

[attachment=1386:normals.png]
[attachment=1388:normals.png]

I have a weird problem with the shader in the link in my first post now. Whenever I try to use the "ao" value I receive a compiler error stating:
"Error x4500: Overlapping register semantics not yet implemented 's0'"

I've tried everything I can think of to fix this, but I've no idea now. Googling the error doesn't turn up any useful results either. Has anyone seen this before?


> I have a weird problem with the shader in the link on my first post now. Whenever I try and use the "ao" value I receive a compiler error stating
> "Error x4500: Overlapping register semantics not yet implemented 's0'"

Can you show us the code in question and tell us which line triggers the error?

Just a guess: SM 2 is rather restrictive in the pixel shader on what you can do with the texcoords before you sample. If hardware allows, check if you can run it with SM 3.
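One more thing worth trying for the x4500 error specifically (a guess, since we haven't seen the code): if the effect declares several samplers without explicit registers, two of them can end up colliding on s0. Binding each sampler to its own register avoids the clash; the sampler names below are placeholders:

```hlsl
// Explicit, non-overlapping sampler registers
sampler2D depthSampler  : register(s0);
sampler2D normalSampler : register(s1);
sampler2D noiseSampler  : register(s2);
```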
