[D3D12] SSAO Demo


Hi community,

I want to share a new update of my little DirectX 12 rendering engine. I implemented Screen Space Ambient Occlusion and improved camera movement.

Post

Video

Hope you find it useful!


Hmm... I can't see any AO there. It just looks like a directional light mounted to the camera?

The first video shows the ambient accessibility buffer.

The second video shows the scene where only ambient light is used (ambient factor * diffuse albedo * ambient accessibility; sketched in code below).

The third video shows the scene with additional light from irradiance diffuse & specular environment cube maps.

There is no extra light in any scene (i.e., there is no directional light mounted to the camera).
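
For reference, the ambient-only term from the second video boils down to something like this (a minimal HLSL sketch; the variable names are illustrative, not the engine's actual ones):

// Ambient-only shading term (illustrative names).
// ambientFactor:        scalar ambient intensity
// diffuseAlbedo:        material base color sampled from the albedo buffer
// ambientAccessibility: value read from the SSAO buffer (1 = fully open, 0 = fully occluded)
const float3 ambientColor = ambientFactor * diffuseAlbedo * ambientAccessibility;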

I agree with JoeJ, something seems a bit strange with your results. It seems to be view dependent, with the accessibility being higher in places where N dot V is higher. It particularly stands out on the floor, which is darkened despite not having any obstructions.

Either way, I'm not trying to pick on your work, just trying to help in case there's a bug. :)

@MJP and @JoeJ, the intention of my posts is to get feedback about my work (to learn and improve my knowledge). Don't worry! I take all criticism as constructive feedback.

I use a normal vector buffer, where normals are stored in view space. Could that affect the final result?

I guess all AO samples return the result of being visible. In the video the camera rotates around the scene, but the typical artifacts caused by missing screen-space information never show up.

So it seems the sampling has no effect, and the N dot V dependent look might come from elsewhere.

Or does depth sampling not work for some reason? Or are all samples at zero distance? Not sure, but I wonder what would cause the banding if the bug were something like that.

@JoeJ

I modified the occlusion radius and a lot of problems appeared. I recorded a video that shows them.

Clearly, it shows that there is a view-dependent problem (when I move the camera, the ambient accessibility changes a lot).

This is the code that computes ambient occlusion:


float SSAOVersion1(
	const float3 sampleKernel,
	const float3x3 sampleKernelMatrix, 
	const float4x4 projMatrix,
	const float occlusionRadius,
	const float3 fragPosV,
	Texture2D<float> depthTex)
{
	// Get sample position
	float3 sampleV = mul(sampleKernel, sampleKernelMatrix);
	sampleV = sampleV * occlusionRadius + fragPosV;

	// Project sample position
	float4 sampleH = float4(sampleV, 1.0f);
	sampleH = mul(sampleH, projMatrix);
	sampleH.xy /= sampleH.w;
	sampleH.xy = sampleH.xy * 0.5 + 0.5;

	// Get sample depth
	float sampleDepthV = depthTex.Load(float3(sampleH.xy, 0));
	sampleDepthV = NdcDepthToViewDepth(sampleDepthV, projMatrix);

	// Range check and ambient occlusion factor
	const float rangeCheck = abs(fragPosV.z - sampleDepthV) < occlusionRadius ? 1.0 : 0.0;
	return (sampleDepthV <= sampleV.z ? 1.0 : 0.0) * rangeCheck;
}
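
NdcDepthToViewDepth() isn't shown here. Assuming a standard perspective projection and the same row-vector convention as the mul() calls above (where ndcZ = P[2][2] + P[3][2] / viewZ), a typical implementation looks roughly like this; the engine's actual helper may differ:

// Reconstruct view-space depth from an NDC depth value, assuming a standard
// perspective projection used with row-vector math (mul(v, projMatrix)):
//   ndcZ = projMatrix[2][2] + projMatrix[3][2] / viewZ
float NdcDepthToViewDepth(const float ndcDepth, const float4x4 projMatrix)
{
	return projMatrix[3][2] / (ndcDepth - projMatrix[2][2]);
}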

where sampleKernelMatrix is:


	// Construct a change-of-basis matrix to reorient our sample kernel
	// along the origin's normal.
	const float3 noiseVec = NoiseTexture.Sample(TexSampler, NOISE_SCALE * input.mTexCoordO).xyz * 2.0f - 1.0f ;
	const float3 tangentV = normalize(noiseVec - normalV * dot(noiseVec, normalV));
	const float3 bitangentV = cross(normalV, tangentV);
	const float3x3 sampleKernelMatrix = float3x3(tangentV, bitangentV, normalV);

This is the method that generates the sample kernel:


	// Sample kernel for ambient occlusion. The requirements are that:
	// - Sample positions fall within the unit hemisphere
	// - Sample positions are more densely clustered towards the origin.
	//   This effectively attenuates the occlusion contribution
	//   according to distance from the kernel centre (samples closer
	//   to a point occlude it more than samples further away).
	void GenerateSampleKernel(const std::uint32_t numSamples, std::vector<XMFLOAT3>& kernels) {
		ASSERT(numSamples > 0U);

		kernels.resize(numSamples);
		XMFLOAT3* data(kernels.data());
		XMVECTOR vec;
		const float numSamplesF = static_cast<float>(numSamples);
		for (std::uint32_t i = 0U; i < numSamples; ++i) {
			XMFLOAT3& elem = data[i];

			// Create sample points on the surface of the hemisphere
			// oriented along the negative z axis
			const float x = MathUtils::RandF(-1.0f, 1.0f);
			const float y = MathUtils::RandF(-1.0f, 1.0f);
			const float z = MathUtils::RandF(-1.0f, 0.0f);
			elem = XMFLOAT3(x, y, z);
			vec = XMLoadFloat3(&elem);
			vec = XMVector3Normalize(vec);

			// Accelerating interpolation function so samples
			// cluster more densely towards the origin.
			float scale = i / numSamplesF;
			scale = MathUtils::Lerp(0.1f, 1.0f, scale * scale);
			vec = XMVectorScale(vec, scale);
			XMStoreFloat3(&elem, vec);
		}
	}

and this is the method that generates the noise vectors:


	// Generate a set of random values used to rotate the sample kernel,
	// which will effectively increase the sample count and minimize 
	// the 'banding' artifacts.
	void GenerateNoise(const std::uint32_t numSamples, std::vector<XMFLOAT4>& noises) {
		ASSERT(numSamples > 0U);

		noises.resize(numSamples);
		XMFLOAT4* data(noises.data());
		XMVECTOR vec;
		for (std::uint32_t i = 0U; i < numSamples; ++i) {
			XMFLOAT4& elem = data[i];

			// Create random rotation vectors in the xy plane (z = 0),
			// used to rotate the sample kernel around the normal
			const float x = MathUtils::RandF(-1.0f, 1.0f);
			const float y = MathUtils::RandF(-1.0f, 1.0f);
			const float z = 0.0f;			
			elem = XMFLOAT4(x, y, z, 0.0f);
			vec = XMLoadFloat4(&elem);
			vec = XMVector4Normalize(vec);
			XMStoreFloat4(&elem, vec);
			XMFLOAT3 mappedVec = MathUtils::MapF1(XMFLOAT3(elem.x, elem.y, elem.z));
			elem.x = mappedVec.x;
			elem.y = mappedVec.y;
			elem.z = mappedVec.z;
		}
	}

I attached a screenshot of a wall showing more errors in the ambient accessibility computation.

Any ideas?

Finally, I fixed the ambient occlusion algorithm. I attached some screenshots with the results.

The problem was here:


// Project sample position
float4 sampleH = float4(sampleV, 1.0f);
sampleH = mul(sampleH, projMatrix);
sampleH.xy /= sampleH.w;

// Get sample depth
float sampleDepthV = depthTex.Load(float3(sampleH.xy, 0));
sampleDepthV = NdcDepthToViewDepth(sampleDepthV, projMatrix);

because I was calling Load() with sampleH.xy, which is in NDC space, not in viewport (texel) space.

So I did the following:


// Convert sample position to NDC and sample depth at that position in depth buffer.
float4 samplePosH = mul(samplePosV, gFrameCBuffer.mP);
samplePosH.xy /= samplePosH.w;
	
const int2 sampleViewportSpace = NdcToViewportCoordinates(samplePosH.xy, 0.0f, 0.0f, SCREEN_WIDTH, SCREEN_HEIGHT);
const float sampleDepthNDC = Depth.Load(int3(sampleViewportSpace, 0));
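
NdcToViewportCoordinates() isn't shown above; it is essentially the standard NDC-to-viewport mapping, which scales xy from [-1, 1] into pixel coordinates and flips y (NDC y points up, viewport y points down). A minimal sketch, assuming a viewport with a top-left origin; the engine's actual helper may differ:

// Map NDC xy in [-1, 1] to integer viewport (texel) coordinates.
// topLeftX/topLeftY: viewport origin in pixels, width/height: viewport size in pixels.
int2 NdcToViewportCoordinates(const float2 ndcXY, const float topLeftX, const float topLeftY,
	const float width, const float height)
{
	const float2 viewportXY = float2(
		topLeftX + (ndcXY.x * 0.5f + 0.5f) * width,
		topLeftY + (0.5f - ndcXY.y * 0.5f) * height);
	return int2(viewportXY);
}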

I made a new post and a video.

