Migi0027

DX11 - SSAO


Migi0027    4628

I'm so terribly sorry to re-re-re-open this one, but I never fully succeeded at SSAO.

 

SSAO Shader:

Texture2D t_depthmap : register(t0);
Texture2D t_normalmap : register(t1);
Texture2D t_random : register(t2);
SamplerState ss;

cbuffer SSAOBuffer : register(b0) // constant buffers bind to b# registers in D3D11, not c#
{
	float g_scale;
	float g_bias;
	float g_sample_rad;
	float g_intensity;
	float ssaoIterations;
	float3 pppspace;

	matrix view;
};

struct VS_Output
{  
	float4 Pos : SV_POSITION;              
	float2 Tex : TEXCOORD0;
};
 
VS_Output VShader(uint id : SV_VertexID)
{
	VS_Output Output;
	Output.Tex = float2((id << 1) & 2, id & 2);
	Output.Pos = float4(Output.Tex * float2(2,-2) + float2(-1,1), 0, 1);
	return Output;
}

// Helper for modifying the saturation of a color.
float4 AdjustSaturation(float4 color, float saturation)
{
	// The constants 0.3, 0.59, and 0.11 are chosen because the
	// human eye is more sensitive to green light, and less to blue.
	float grey = dot(color.rgb, float3(0.3, 0.59, 0.11));

	return lerp(grey, color, saturation);
}

// Ambient Occlusion Stuff --------------------------------------------------

float3 getPosition(in float2 uv)
{
	return t_depthmap.Sample(ss, uv).xyz;
}

float3 getNormal(in float2 uv)
{
	return normalize(t_normalmap.Sample(ss, uv).xyz * 2.0f - 1.0f);
}

float2 getRandom(in float2 uv)
{
	//return normalize(t_random.Sample(ss, uv ).xy * 2.0f - 1.0f); // ~100FPS
	return normalize(t_random.Sample(ss, float2(600, 800) * uv / float2(60, 60)).xy * 2.0f - 1.0f);
}

float doAmbientOcclusion(in float2 tcoord,in float2 uv, in float3 p, in float3 cnorm)
{
	float3 diff = getPosition(tcoord + uv) - p;
	const float3 v = normalize(diff);
	const float d = length(diff)*g_scale;
	return max(0.0,dot(cnorm,v)-g_bias)*(1.0/(1.0+d))*g_intensity;
}

// End

float4 PShader(VS_Output input) : SV_TARGET
{
	// ADD SSAO ---------------------------------------------------------------
	const float2 vec[4] = {float2(1,0),float2(-1,0),
				float2(0,1),float2(0,-1)};

	float3 p = getPosition(input.Tex);
	float3 n = getNormal(input.Tex);
	float2 rand = getRandom(input.Tex);

	float ao = 0.0f;
	float rad = g_sample_rad/p.z; // g_s_r

	//**SSAO Calculation**//
	int iterations = 4;
	for (int j = 0; j < iterations; ++j)
	{
	  float2 coord1 = reflect(vec[j], rand)*rad;
	  float2 coord2 = float2(coord1.x*0.707 - coord1.y*0.707,
				  coord1.x*0.707 + coord1.y*0.707);
	  
	  ao += doAmbientOcclusion(input.Tex, coord1*0.25, p, n);
	  ao += doAmbientOcclusion(input.Tex, coord2*0.5, p, n);
	  ao += doAmbientOcclusion(input.Tex, coord1*0.75, p, n);
	  ao += doAmbientOcclusion(input.Tex, coord2, p, n);
	}
	ao /= (float)iterations*4.0;

	ao = saturate(ao);

	return float4(ao, ao, ao, 1.0f);
}
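Two details of the shader above can be sanity-checked outside HLSL: the per-sample term in doAmbientOcclusion is max(0, dot(n, v) - bias) * 1/(1 + d) * intensity, and the 0.707 constants in the loop are cos(45°) = sin(45°), so coord2 is just coord1 rotated by 45 degrees. A minimal Python transcription (function names and default values here are illustrative, not from the original code):

```python
import math

def ao_term(p, sample_p, n, scale=1.0, bias=0.0, intensity=1.0):
    """Python transcription of doAmbientOcclusion (defaults are illustrative)."""
    diff = [s - c for s, c in zip(sample_p, p)]
    dist = math.sqrt(sum(c * c for c in diff))
    v = [c / dist for c in diff]                    # normalize(diff)
    d = dist * scale                                # distance attenuation input
    n_dot_v = sum(a * b for a, b in zip(n, v))
    return max(0.0, n_dot_v - bias) * (1.0 / (1.0 + d)) * intensity

def rotate45(x, y):
    """Matches the coord2 line: a 2D rotation by 45 degrees (0.707 = cos 45)."""
    c = s = math.cos(math.pi / 4)
    return (x * c - y * s, x * s + y * c)

# A sample straight along the surface normal, at the surface, occludes fully:
print(ao_term((0, 0, 0), (0, 0, 1), (0, 0, 1), scale=0.0))        # 1.0
# A sample lying in the surface plane contributes nothing:
print(ao_term((0, 0, 0), (1, 0, 0), (0, 0, 1)))                   # 0.0
# The rotation preserves length, so coord2 stays on the sampling radius:
print(round(math.hypot(*rotate45(1.0, 0.0)), 6))                  # 1.0
```

Between the four base directions and their 45-degree rotations, each pixel ends up sampling eight distinct directions across four staggered radii.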

How I write my depth map for the SSAO pass:

VS
output.depth = output.position.z / output.position.w;
PS
float depth = input.depth;
output.Depth = float4(depth, depth, depth, 1);
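A side note on this depth: z/w from a perspective projection is heavily non-linear, and the SSAO shader's getPosition expects a position rather than a single-channel depth, so a buffer like this won't feed it correctly as-is. A quick numeric sketch of the non-linearity (plain Python; the near/far values are assumptions):

```python
# Numeric illustration (not the poster's code): with D3D's [0, 1] depth
# convention, z/w from a perspective projection maps almost the entire
# [near, far] range to values very close to 1.
def zw_depth(z_view, near=0.1, far=1000.0):
    # Post-projection z/w for a standard perspective matrix (values assumed).
    a = far / (far - near)
    b = -far * near / (far - near)
    return a + b / z_view

for z in (0.2, 1.0, 10.0, 100.0, 500.0):
    print(z, round(zw_depth(z), 4))
# A view-space z of 1.0 already maps to ~0.9; z = 100 maps to ~0.999.
```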

How I write my normal map for the SSAO pass:

VS -- I think this is wrong
output.NormalW = mul(mul(worldMatrix, viewMatrix), float4(normal.xyz,0) );
PS
output.Normals = float4(input.NormalW, 1);

Render Result:

 

[screenshot: SSAO render result]

 

Ehh.

 

Again, I'm so sorry for re-posting this; it's just that I really want this feature completed (well, working at least).

 

Thank you, as always.

belfegor    2834

I think that SSAO shader needs the view-space position (all three components: x, y, and z); as far as I can see, you store your normals that way. Although:

output.NormalW = mul(mul(worldMatrix, viewMatrix), float4(normal.xyz,0) );

this mul order is right if the shader packs matrices as column-major. I don't know the DX11 shader systems, so I am not sure, but in DX9 I can force row-major with this statement:

#pragma pack_matrix(row_major)

because it matches my setup in the C++ code, and I always multiply vectors with matrices this way:

result = mul( vec, matrix );
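The two conventions differ only by a transpose: multiplying a row vector by M gives the same result as applying the transpose of M to a column vector. A small numeric check in plain Python (the matrix and vector here are made up for illustration):

```python
def mat_vec(m, v):
    # Column-vector convention: result = M * v
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def vec_mat(v, m):
    # Row-vector convention: result = v * M, i.e. HLSL's mul(vec, matrix)
    return [sum(v[r] * m[r][c] for r in range(4)) for c in range(4)]

def transpose(m):
    return [[m[c][r] for c in range(4)] for r in range(4)]

# A translation matrix laid out for the row-vector convention
# (translation in the last row, as row_major packing stores it):
M = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [5, 6, 7, 1]]
v = [1, 2, 3, 1]

print(vec_mat(v, M))             # [6, 8, 10, 1]
print(mat_vec(transpose(M), v))  # same result: the conventions are transposes
```

So whichever mul order the shader uses has to agree with how the C++ side packs the matrix, which is exactly what the row_major pragma controls.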

Migi0027    4628

OK, it is a bit better now, in the sense that it doesn't just randomly turn black: (SSAO map)

 

[screenshot: SSAO map after the fix]

 

 

But this isn't right yet, or at least I don't think it is. Look at the left of the cube, where the left side is rendered: it's black. Why?

 

Oh, and belfegor, about the matrices: you were right!

 

How I write my depth and normal maps for the SSAO pass now:

VS:

output.position = mul(position, worldMatrix);
output.position = mul(output.position, viewMatrix);
output.position = mul(output.position, projectionMatrix);

output.NormalW = mul(float4(-normal.xyz,0), mul(worldMatrix, viewMatrix));

// Store the position value in a second input value for depth value calculations.
output.depthPosition = output.position;
PS:

// Depth
float depth = 1.0f - (input.depthPosition.z / input.depthPosition.w);
output.Depth = float4(depth, depth, depth, 1.0f);

// Normals
output.Normals = float4(input.NormalW, 1); 

Now, is this the correct way to write the depth and normal maps?

belfegor    2834

I said to output the view-space position, so it is like this:

VS
output.depthPosition.xyz = mul(float4(position.xyz,1), mul(worldMatrix, viewMatrix)).xyz;
PS
output.Depth = float4(input.depthPosition.xyz, 1.0f);

although you must have a render target (render target "view", as they call it in DX11) with a format that can hold at least three floating-point components, e.g. DXGI_FORMAT_R16G16B16A16_FLOAT or DXGI_FORMAT_R32G32B32A32_FLOAT.

 

And why do you invert your normals now?

output.NormalW = mul(float4(-normal.xyz,0), mul(worldMatrix, viewMatrix));

Then, when you fix those issues, play with the SSAO "sampling radius" to match your world scale (in case you don't get the expected results).
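A note on that radius: the shader computes rad = g_sample_rad / p.z, which shrinks the screen-space sampling radius with view-space depth, so a fixed world-space radius stays roughly constant-sized on screen under perspective. A toy illustration (plain Python; the numbers are arbitrary):

```python
def screen_radius(g_sample_rad, view_z):
    # Mirrors the shader's: rad = g_sample_rad / p.z
    return g_sample_rad / view_z

# The same world-space radius covers less of the screen the deeper it is:
for z in (0.5, 1.0, 2.0, 10.0):
    print(z, screen_radius(0.1, z))
# Twice the depth gives half the on-screen radius:
print(screen_radius(0.1, 2.0) == screen_radius(0.1, 1.0) / 2)  # True
```

Note that rad grows without bound as p.z approaches zero, so geometry very close to the camera ends up sampling far outside its own neighbourhood.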


belfegor    2834

Let me try this one with my renderer and I'll get back to you.

 

EDIT: Here it is (on and off). All parameters are the same as yours, except the radius, which I set to 0.1f (10 cm in my world) since it feels more natural to me.

 

[screenshots: the scene with SSAO on and off]


belfegor    2834

I played with this a little more and made some changes that look better than before (at least to me), so you might want to try them:

1. I have normalized view-space normals in the texture, and I don't get why we need the scale-and-bias here. Removing it also fixes some ugly artifacts.

float3 getNormal(in float2 uv)
{
    //return normalize(tex2D(gbNormal_samp, uv).xyz * 2.0f - 1.0f);
    return tex2D(gbNormal_samp, uv).xyz;
}

So you might need to change your g-buffer normals to be normalized as well:

output.Normals = float4(normalize(input.NormalW), 1);

 

2. I had this getRandom before, which looks nice to me:

float3 getRandom(in float2 uv)
{
    return tex2D(rand_samp, uv * ScreenParams.xy / 4.0f).xyz * 2.0f - 1.0f;
}
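On the UV scale in point 2: multiplying by ScreenParams.xy / 4.0f advances the random-texture UV by one quarter per screen pixel, so with wrap addressing the noise pattern repeats every 4 pixels, a period that a 4x4-aware blur pass can remove later. A sketch of the addressing math (the screen and texture sizes here are assumptions):

```python
# Illustrative only: which texel of a small random texture a screen pixel
# samples with uv * screen_size / 4 and wrap addressing (sizes are assumed).
def random_texel(px, py, screen_w=1280, screen_h=720, rand_size=4):
    u = (px + 0.5) / screen_w          # screen UV of the pixel center
    v = (py + 0.5) / screen_h
    # uv * ScreenParams.xy / 4, wrapped into [0, 1), then scaled to texels
    tu = (u * screen_w / 4.0) % 1.0
    tv = (v * screen_h / 4.0) % 1.0
    return int(tu * rand_size), int(tv * rand_size)

# Neighbouring pixels hit different random texels...
print(random_texel(0, 0), random_texel(1, 0), random_texel(2, 0), random_texel(3, 0))
# ...and the pattern repeats every 4 pixels:
print(random_texel(4, 0) == random_texel(0, 0))  # True
```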

3. Setting scale to 0 looks better to me; since d = length(diff)*g_scale is then always zero and the 1.0/(1.0+d) factor is always one, I can even remove some calculations:

float doAmbientOcclusion(in float2 tcoord,in float2 uv, in float3 p, in float3 cnorm)
{
    float g_scale = SSAO_params.x;
    float g_bias = SSAO_params.y;
    float g_intensity = SSAO_params.z;

    float3 diff = getPosition(tcoord + uv) - p;
    const float3 v = normalize(diff);
    //const float d = length(diff)*g_scale;
    return max(0.0,dot(cnorm,v)-g_bias)*g_intensity;//(1.0/(1.0+d))*g_intensity;
}

4. I get strange "haloing" artifacts at the screen corners, so I set the position and normal textures' UV addressing mode to mirror, and that seems to fix the issue.

 

5. Changed the radius to 0.15 and the intensity to ~2.5.


Migi0027    4628

The reason for the false occlusions at the corners is simply that there is nothing in the scene there:

 

[screenshot: SSAO map showing false occlusions]

 

But as you can see at the bottom, when I get too close to the mesh these false occlusions start to appear, and when I'm really close, they're everywhere!


