DX11 SSAO - Is this right? Again...

15 comments, last by Jason Z 10 years, 12 months ago

Are you familiar with the various spaces in the rendering pipeline? For example, a vertex typically goes from object space to world space to view space, and these are all simple affine transformations that just change the orientation and position of the origin relative to the previous space.

The projection matrix is different, though: it warps the geometry of the scene so that a frustum-shaped chunk of the scene fits into a cube. This non-linear behavior is what I suspect is causing your issue.
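As a hedged illustration (not from the original post) of where that non-linearity comes from, the perspective divide makes post-projection coordinates a non-linear function of view-space position:

```hlsl
// Sketch only; g_Proj is assumed to be the projection matrix alone.
// A view-space point is warped into the unit cube in two stages:
float4 clipPos = mul(float4(viewPos, 1.0f), g_Proj); // linear part
float3 ndc     = clipPos.xyz / clipPos.w;            // perspective divide
// Dividing by w (which depends on view-space z) is the non-linear step:
// equal offsets in ndc do not correspond to equal offsets in view space.
```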

So the steps in the process that you need to implement in order to find out if this is the case are all in your shader:

  1. For the pixel currently being processed, find its view-space position. You will need to instrument your shader for this - either by passing the view-space position through your vertex attributes, or by passing an inverse projection matrix in your constant buffers.
  2. When you apply the offsets for your depth samples, they are now applied to that view-space position. They will also be in your regular world units (i.e. meters or whatever unit you use), so it is more logical to think about how large the sampling radius is.
  3. However, to look up where that offset 3D view-space location falls in your depth buffer, you need to re-project the point and find its location in the depth buffer. This can either use the projection matrix directly, or you can do the simplified math on just the xy coordinates (since those are what is needed to find the depth-buffer location).
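The three steps above can be sketched roughly as follows (a hedged outline, assuming a hardware depth texture and a constant buffer with the projection matrix and its inverse; all names here are hypothetical, not from the thread):

```hlsl
// Hypothetical constant buffer; names are illustrative.
cbuffer CameraCB
{
    matrix g_Proj;     // projection matrix only (no view matrix)
    matrix g_InvProj;  // inverse of the projection matrix
};

Texture2D    DepthTex;
SamplerState PointSampler;

// Step 1: reconstruct the view-space position of the current pixel.
float3 GetViewPos(float2 texCoord)
{
    float depth = DepthTex.Sample(PointSampler, texCoord).r;
    // Texture coords [0,1] -> clip space [-1,1], with y flipped.
    float2 ndcXY = float2(texCoord.x * 2.0f - 1.0f,
                          1.0f - texCoord.y * 2.0f);
    float4 clipPos = float4(ndcXY, depth, 1.0f);
    float4 viewPos = mul(clipPos, g_InvProj);
    return viewPos.xyz / viewPos.w;   // undo the perspective divide
}

// Step 2 happens in between: offset the result of GetViewPos in view
// space, in real-world units (e.g. meters).

// Step 3: re-project the offset point to find its depth-buffer location.
float2 ViewPosToTexCoord(float3 viewPos)
{
    float4 clipPos = mul(float4(viewPos, 1.0f), g_Proj);
    float2 ndcXY = clipPos.xy / clipPos.w;      // perspective divide
    return float2(ndcXY.x * 0.5f + 0.5f,        // NDC -> texture coords
                  0.5f - ndcXY.y * 0.5f);
}
```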

Have you tried to implement any of these steps yet? If so, which ones are you getting hung up on?


So I pass the inverse projection matrix to the post-process shader (the one with the SSAO)?

And then I'm stuck on the 3rd step. So I'm supposed to somehow edit this:


// Four axis-aligned sampling directions; reflected by a random
// vector below to vary the kernel per pixel.
const float2 vec[4] = { float2( 1, 0), float2(-1, 0),
                        float2( 0, 1), float2( 0,-1) };

float3 p    = getPosition(input.Tex);  // view-space position of this pixel
float3 n    = getNormal(input.Tex);    // view-space normal
float2 rand = getRandom(input.Tex);    // random 2D vector for kernel rotation

float ao  = 0.0f;
float rad = g_sample_rad / p.z;        // shrink screen-space radius with depth

//**SSAO Calculation**//
int iterations = 4;                    // one iteration per direction in vec[]
for (int j = 0; j < iterations; ++j)
{
    float2 coord1 = reflect(vec[j], rand) * rad;
    // coord1 rotated by 45 degrees (cos 45 = sin 45 ~= 0.707)
    float2 coord2 = float2(coord1.x * 0.707 - coord1.y * 0.707,
                           coord1.x * 0.707 + coord1.y * 0.707);

    // Sample at increasing fractions of the radius along both directions.
    ao += doAmbientOcclusion(input.Tex, coord1 * 0.25, p, n);
    ao += doAmbientOcclusion(input.Tex, coord2 * 0.5,  p, n);
    ao += doAmbientOcclusion(input.Tex, coord1 * 0.75, p, n);
    ao += doAmbientOcclusion(input.Tex, coord2,        p, n);
}
ao /= (float)iterations * 4.0;
color.rgb *= ao;

But exactly how?

FastCall22: "I want to make the distinction that my laptop is a whore-box that connects to different network"

Blog about... stuff (GDNet, WordPress): www.gamedev.net/blog/1882-the-cuboid-zone/, cuboidzone.wordpress.com/

I'm sorry, but I am not going to write the shader for you. Do you have specific questions about how it works?

It's more that I don't understand exactly what I'm supposed to do...


Do you have specific questions about how it works?

That's why I'm asking if you have any specific questions about how it works! That is also why I listed the process in steps - so that you can direct questions at a particular portion of the process. You need to think about each step and ask us a question about it; there are many people here willing to help, but I doubt anyone is going to just write the shader for you and say, here is your solution.

If you have absolutely no idea what those process steps mean, then ask a question about them, don't ask for a code example showing it.

Sorry for the trouble!

It's in step 3:

However, to look up where that offset 3D view-space location falls in your depth buffer, you need to re-project the point and find its location in the depth buffer. This can either use the projection matrix directly, or you can do the simplified math on just the xy coordinates (since those are what is needed to find the depth-buffer location).

So how can I re-project a certain point and then find its position in my depth buffer?


In this case, you can either directly use a projection matrix supplied through a constant buffer (it must be the projection matrix by itself, with no view matrix multiplied in), or you can do some of the math that the projection matrix normally does in your own code. The latter is more efficient, since you are only worried about the xy coordinates needed to know where to sample the buffer.

So to do the math on only the xy coordinates, take a look at the formula for the projection matrix that you are using, and write out the equation for only the x and y components. This will show you what math is required to get back to clip-space coordinates. Once you have these clip-space coordinates, you just need to remap them to texture coordinates and sample the texture.
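For a standard left-handed D3D perspective projection, only the [0][0] and [1][1] matrix entries contribute to clip-space x and y, and clip-space w equals view-space z. A hedged sketch of that "xy only" version (parameter names are hypothetical; check the sign and layout conventions of your own matrix):

```hlsl
// P00 and P11 are the focal scale terms of the projection matrix,
// i.e. proj[0][0] and proj[1][1]. Assumes clip.w == view-space z.
float2 ProjectXY(float3 viewPos, float P00, float P11)
{
    // Only the x/y scale terms of the projection matter for clip.xy.
    float2 ndcXY = float2(viewPos.x * P00, viewPos.y * P11) / viewPos.z;
    // Remap clip space [-1,1] to texture space [0,1], flipping y.
    return float2(ndcXY.x * 0.5f + 0.5f, 0.5f - ndcXY.y * 0.5f);
}
```

The returned coordinates can then be used directly to sample the depth buffer at the offset location.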

If you aren't too familiar with matrix math, then check out the Wikipedia page for how a vector is multiplied by a matrix, and give it a shot. You can always post questions here if something isn't clear to you.

This topic is closed to new replies.
