# DX11 having problems debugging SSAO

## Recommended Posts

Posted (edited)

Please look at my new post in this thread where I supply new information!

I'm trying to implement SSAO in my 'engine' (based on this article), but I'm getting odd results. I know I'm doing something wrong, but I can't figure out what's causing the particular issue I'm having at the moment.

" rel="external">Here's a video of what it looks like . The rendered output is the SSAO map.

As you can see, the result is heavily altered depending on the camera orientation (although it seems to be unaffected by camera translation). The fact that the occlusion itself isn't correct isn't much of a problem at this stage, since I've hardcoded a lot of stuff that shouldn't be. E.g. I don't have a random-vector texture; all I do is use one of the sample vectors to construct the TBN matrix.
One issue at a time...

//SSAO VS

struct VS_IN
{
    float3 pos : POSITION;
    float3 ray : VIEWRAY;
};

struct VS_OUT
{
    float4 pos : SV_POSITION;
    float4 ray : VIEWRAY;
};

VS_OUT VS_main( VS_IN input )
{
    VS_OUT output;
    output.pos = float4(input.pos, 1.0f); //already in NDC space, pass through
    output.ray = float4(input.ray, 0.0f); //interpolate view ray
    return output;
}

//SSAO PS

Texture2D depthTexture  : register(t0);
Texture2D normalTexture : register(t1);
SamplerState defaultSampler : register(s0);

struct VS_OUT
{
    float4 pos : SV_POSITION;
    float4 ray : VIEWRAY;
};

cbuffer cbViewProj : register(b0)
{
    float4x4 view;
    float4x4 projection;
};

float4 PS_main(VS_OUT input) : SV_TARGET
{
    //Hardcoded sample kernel
    float3 kernel[8];
    kernel[0] = float3( 1.0f,  1.0f, 1.0f);
    kernel[1] = float3(-1.0f, -1.0f, 0.0f);
    kernel[2] = float3(-1.0f,  1.0f, 1.0f);
    kernel[3] = float3( 1.0f, -1.0f, 0.0f);
    kernel[4] = float3( 1.0f,  1.0f, 0.0f);
    kernel[5] = float3(-1.0f, -1.0f, 1.0f);
    kernel[6] = float3(-1.0f,  1.0f, 0.0f);
    kernel[7] = float3( 1.0f, -1.0f, 1.0f);

    //Get texcoord using SV_POSITION
    int3 texCoord = int3(input.pos.xy, 0);

    //Fragment view-space position (non-linear depth)
    float3 origin = input.ray.xyz * (depthTexture.Load(texCoord).r);

    //World-space normal transformed to view space and normalized
    float3 normal = normalize(mul(view, float4(normalTexture.Load(texCoord).xyz, 0.0f)).xyz);

    //Grab arbitrary vector for construction of TBN matrix
    float3 rvec = kernel[3];
    float3 tangent = normalize(rvec - normal * dot(rvec, normal));
    float3 bitangent = cross(normal, tangent);
    float3x3 tbn = float3x3(tangent, bitangent, normal);

    float occlusion = 0.0;
    for (int i = 0; i < 8; ++i)
    {
        // get sample position:
        float3 samp = mul(tbn, kernel[i]);
        samp = samp * 1.0f + origin;

        // project sample position:
        float4 offset = float4(samp, 1.0);
        offset = mul(projection, offset);
        offset.xy /= offset.w;
        offset.xy = offset.xy * 0.5 + 0.5;

        // get sample depth (again, non-linear depth):
        float sampleDepth = depthTexture.Sample(defaultSampler, offset.xy).r;

        // range check & accumulate:
        occlusion += (sampleDepth <= samp.z ? 1.0 : 0.0);
    }

    //Average occlusion
    occlusion /= 8.0;

    return min(occlusion, 1.0f);
}
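For debugging outside the shader, the accumulate/average step is small enough to mirror on the CPU. Below is a minimal Python sketch (the function name is made up); note the comparison only tells you anything if sampleDepth and samp.z are expressed in the same space and units, which is worth verifying first:

```python
# Minimal CPU version of the loop's accumulate/average step.
def occlusion_term(sample_depths, sample_zs):
    """sample_depths: depth values fetched at each offset;
    sample_zs: depth of each kernel sample, in the SAME space/units."""
    occ = sum(1.0 for sd, sz in zip(sample_depths, sample_zs) if sd <= sz)
    return min(occ / len(sample_zs), 1.0)

# all samples occluded -> 1.0; half occluded -> 0.5
assert occlusion_term([0.5] * 8, [1.0] * 8) == 1.0
assert occlusion_term([1.0] * 4 + [0.0] * 4, [0.5] * 8) == 0.5
```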

I'm fairly sure my matrices are correct (view and projection) and that the input rays are correct.
I don't think the non-linear depth is the problem here either, but what do I know  I haven't fixed the linear depth mostly because I don't really understand how it's done...

Any ideas are very appreciated!

Edited by GreenGodDiary

##### Share on other sites
Posted (edited)

Bumping with new information. I'm getting quite desperate; if someone could help me out I would be forever grateful. <3

I have revamped the way I construct the view-space position. Instead of directly binding my depth-stencil as a shader resource (which, thinking back, made no sense to do), I'm now outputting 'positionVS.z / FarClipDistance' to a texture in the G-buffer pass and using that, and remaking my view rays in the following way (1000.0f is FarClipDistance):

//create corner view rays
float thfov = tan(fov / 2.0);
float verts[24]
{
    -1.0f, 1.0f, 0.0f,                              //Pos TopLeft corner
    -1.0f * thfov * aspect, 1.0f * thfov, 1000.0f,  //Ray

    1.0f, 1.0f, 0.0f,                               //Pos TopRight corner
    1.0f * thfov * aspect, 1.0f * thfov, 1000.0f,   //Ray

    -1.0f, -1.0f, 0.0f,                             //Pos BottomLeft corner
    -1.0f * thfov * aspect, -1.0f * thfov, 1000.0f, //Ray

    1.0f, -1.0f, 0.0f,                              //Pos BottomRight corner
    1.0f * thfov * aspect, -1.0f * thfov, 1000.0f,  //Ray
};

In my SSAO PS, I reconstruct view-space position like this:

float3 origin = input.ray.xyz * (depthTexture.Load(texCoord).r);
origin.x *= 1000;
origin.y *= 1000;

Why do I multiply by 1000? Because it works. Why does it work? I don't know, but this gives me the same value I had in the G-pass vertex shader. If someone knows why this works (or why it shouldn't), do tell me.
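For what it's worth, the factor of 1000 can be derived rather than guessed: the corner rays above have their x/y components scaled for z = 1 while ray.z is the far distance, so multiplying the whole ray by depth = z / far leaves x and y a factor of far too small. A quick numeric check in Python (the fov and aspect values are made up; far = 1000 matches FarClipDistance in the post):

```python
import math

# Illustrative camera values; far = 1000 matches FarClipDistance in the post.
fov, aspect, far = math.radians(60.0), 16.0 / 9.0, 1000.0
thfov = math.tan(fov / 2.0)

# A view-space point lying on the top-right corner ray.
z = 250.0
point = (thfov * aspect * z, thfov * z, z)

ray = (thfov * aspect, thfov, far)     # corner ray as defined in the vertex data
depth = z / far                        # what the G-buffer pass writes out
origin = tuple(r * depth for r in ray) # shader-style reconstruction

# z is reconstructed exactly, but x and y come out divided by far,
# which is why multiplying them by 1000 afterwards "works".
assert abs(origin[2] - point[2]) < 1e-6
assert abs(origin[0] * far - point[0]) < 1e-6
assert abs(origin[1] * far - point[1]) < 1e-6
```

If that is the cause, one way to remove the magic constant entirely would be to bake the far distance into the ray's x and y in the vertex data (e.g. 1.0f * thfov * aspect * 1000.0f), so that ray * depth reconstructs all three components directly.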

Anyway, next I get the world-space normal from the G-buffer and multiply by my view matrix to get view-space normal:

float3 normal = normalTexture.Load(texCoord).xyz;
normal = mul(view, normal);
normal = normalize(normal);

I now have a random-vector-texture that I sample.
Next I construct the TBN matrix using this vector and the view-space normal:

float3 rvec = randomTexture.Sample(randomSampler, input.pos.xy).xyz;
rvec.z = 0.0;
rvec = normalize(rvec);
float3 tangent = normalize(rvec - normal * dot(rvec, normal));
float3 bitangent = normalize(cross(normal, tangent));
float3x3 tbn = float3x3(tangent, bitangent, normal);

This is where I'm not sure I'm doing it right. I'm doing it exactly like the article in the original post, but since the author is using OpenGL, maybe something is different here?
The reason this part looks suspicious to me is that when I later use it, I get values that don't make sense to me:

float3 samp = mul(tbn, kernel[i]);
samp = samp + origin;

samp here is what looks odd to me. If the values are indeed wrong, I must be constructing my TBN matrix wrong somehow.
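One way to narrow this down: the Gram-Schmidt construction itself is API-agnostic, so it can be verified on the CPU independent of any OpenGL/D3D differences. A small Python sketch (the input vectors are arbitrary) checking that the resulting basis is orthonormal:

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

normal = normalize((0.3, 0.9, 0.2))   # arbitrary unit normal
rvec   = normalize((1.0, -1.0, 0.0))  # arbitrary "random" vector

# Gram-Schmidt step, exactly as in the shader
tangent   = normalize(tuple(r - n * dot(rvec, normal) for r, n in zip(rvec, normal)))
bitangent = cross(normal, tangent)

# basis vectors are unit length and mutually perpendicular
assert abs(dot(tangent, normal)) < 1e-6
assert abs(dot(bitangent, normal)) < 1e-6
assert abs(dot(bitangent, tangent)) < 1e-6
```

If the basis checks out, a more likely suspect may be the mul convention: HLSL's mul(matrix, vector) treats the vector as a column while mul(vector, matrix) treats it as a row, and the posts mix both orders (mul(tbn, kernel[i]) here, mul(offset, projection) below), which effectively transposes one of the transforms.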

Next up, projecting samp in order to get the offset in NDC so that I can then sample the depth of samp:

float4 offset = float4(samp, 1.0);
offset = mul(offset, projection);
offset.xy /= offset.w;
offset.xy = offset.xy * 0.5 + 0.5;

// get sample depth:
float sampleDepth = depthTexture.Sample(defaultSampler, offset.xy).r;

occlusion += (sampleDepth <= samp.z ? 1.0 : 0.0);
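The projection step can also be checked in isolation. Below is a Python sketch of the same math (the fov, aspect, and test point values are made up). One Direct3D detail worth double-checking in this step: NDC y points up while texture v points down, so the usual mapping is uv.y = -ndc.y * 0.5 + 0.5 rather than ndc.y * 0.5 + 0.5.

```python
import math

fov, aspect = math.radians(60.0), 16.0 / 9.0
f = 1.0 / math.tan(fov / 2.0)  # standard perspective focal scale

def project_to_uv(p):
    """Project a view-space point (x, y, z) to texture coordinates, with the y flip."""
    x_clip = p[0] * f / aspect
    y_clip = p[1] * f
    w_clip = p[2]              # perspective divide by view-space z
    ndc_x, ndc_y = x_clip / w_clip, y_clip / w_clip
    return (ndc_x * 0.5 + 0.5, -ndc_y * 0.5 + 0.5)

# A point straight ahead projects to the centre of the texture...
u, v = project_to_uv((0.0, 0.0, 10.0))
assert abs(u - 0.5) < 1e-6 and abs(v - 0.5) < 1e-6

# ...and a point above the view axis lands in the upper half of the texture (v < 0.5).
u, v = project_to_uv((0.0, 1.0, 10.0))
assert v < 0.5
```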

The result is still nowhere near what you'd expect. It looks slightly better than the video linked in the original post, but it's the same story: huge, odd artifacts that change heavily based on the camera's orientation.

What am I doing wrong?

Help, I'm dying here.

Edited by GreenGodDiary

Bump. (sorry)

##### Share on other sites

While I've only briefly read the code (it's quite hard to say exactly what is going on, though your SSAO calculation doesn't look correct to me), here are a few notes that might lead you to where the issue is:

• Make sure you know which space you are in at every step: world space, view space, object space, etc. Getting this wrong is one of the usual causes of view-dependent errors.
• Do NOT multiply by random constants that make it "look good". Make sure each constant has a reason to be there, and put that reason in a comment.
• Compare everything: you can write out 'view-space normals', 'view-space position', etc. when generating the G-buffer (into another buffer) and compare them against your reconstruction. This way you can prove that your input data is correct.

Now, for the SSAO:

• Make sure you're sampling in the hemisphere ABOVE the point, in the direction of the normal. With your specified vectors you will also attempt to sample in the opposite hemisphere.
• You will need some randomization (otherwise you will need a lot of samples to get anything resembling SSAO).
• I also recommend checking out other SSAO shaders, e.g. on ShaderToy: https://www.shadertoy.com/view/4ltSz2 . I'm intentionally adding it here so you can compare the actual SSAO calculation against yours, which does seem incorrect to me.
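The randomization mentioned above is typically done by generating the kernel on the CPU: random vectors constrained to the z >= 0 hemisphere, normalized, then scaled so most samples cluster near the shaded point. A Python sketch (the falloff curve here is just one common choice, not the only one):

```python
import random
import math

def make_kernel(n, seed=0):
    """Generate n sample vectors in the tangent-space hemisphere (z >= 0)."""
    rng = random.Random(seed)
    kernel = []
    for i in range(n):
        v = (rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0), rng.uniform(0.0, 1.0))
        l = math.sqrt(sum(c * c for c in v)) or 1.0
        v = tuple(c / l for c in v)
        # scale samples so more of them fall close to the shaded point
        scale = 0.1 + 0.9 * (i / (n - 1)) ** 2
        kernel.append(tuple(c * scale for c in v))
    return kernel

kernel = make_kernel(16)
assert all(v[2] >= 0.0 for v in kernel)  # hemisphere only
assert all(math.sqrt(sum(c * c for c in v)) <= 1.0 + 1e-6 for v in kernel)
```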

##### Share on other sites
8 hours ago, Vilem Otte said:


Thanks a lot for these pointers; I will definitely look into it further using your advice.
One question, though:

Quote

From your specified vectors you will also attempt to sample in the opposite hemisphere.

Are you sure that's the case? Since my kernel vectors are in the range ([-1, 1], [-1, 1], [0, 1]), won't it exclusively sample from the "upper" hemisphere? Or am I thinking about it wrong?

Thanks again
