  • Similar Content

    • By isu diss
I'm following rastertek tutorial 14 (http://rastertek.com/tertut14.html). The problem is that slope-based texturing doesn't work in my application. There are plenty of slopes in my terrain, but none of them get the slope color.
float4 PSMAIN(DS_OUTPUT Input) : SV_Target
{
    float4 grassColor;
    float4 slopeColor;
    float4 rockColor;
    float slope;
    float blendAmount;
    float4 textureColor;

    grassColor = txTerGrassy.Sample(SSTerrain, Input.TextureCoords);
    slopeColor = txTerMossRocky.Sample(SSTerrain, Input.TextureCoords);
    rockColor  = txTerRocky.Sample(SSTerrain, Input.TextureCoords);

    // Calculate the slope of this point.
    slope = (1.0f - Input.LSNormal.y);

    if (slope < 0.2)
    {
        blendAmount = slope / 0.2f;
        textureColor = lerp(grassColor, slopeColor, blendAmount);
    }
    if ((slope < 0.7) && (slope >= 0.2f))
    {
        blendAmount = (slope - 0.2f) * (1.0f / (0.7f - 0.2f));
        textureColor = lerp(slopeColor, rockColor, blendAmount);
    }
    if (slope >= 0.7)
    {
        textureColor = rockColor;
    }
    return float4(textureColor.rgb, 1);
}
Can anyone help me? Thanks.
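One way to narrow this down (a debugging sketch, not a confirmed fix): normals shrink below unit length when interpolated across a triangle, so it is worth renormalizing and visualizing the slope term itself. If the whole terrain renders near black, the normals reaching the pixel shader are close to straight up and the problem is upstream in how LSNormal is computed.

// Debug sketch: output the slope value directly to see what range it has.
float3 n = normalize(Input.LSNormal);
float s = 1.0f - n.y;
return float4(s, s, s, 1.0f);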

    • By cozzie
      Hi all,
As part of the debug drawing system in my engine, I want to add support for rendering simple text on screen (HUD style). From what I've read there are a few options, in short:
      1. Write your own font sprite renderer
      2. Using Direct2D/Directwrite, combine with DX11 rendertarget/ backbuffer
      3. Use an external library, like the directx toolkit etc.
I want to go for number 2, but the articles/documentation confused me a bit. Some say you need to create a DX10 device to be able to do this, because Direct2D doesn't work directly with the DX11 device. Other articles say this was 'patched' later on and should work now.
Can someone shed some light on this and ideally provide an example or article on how to set this up?
      All input is appreciated.
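For reference, with Direct2D 1.1 (Windows 8, or Windows 7 with the Platform Update) the old D3D10.1 shared-surface workaround is no longer needed: you can create the Direct2D device directly on top of the D3D11 device's DXGI device, provided the D3D11 device was created with the D3D11_CREATE_DEVICE_BGRA_SUPPORT flag. A minimal sketch of that path (the helper name and error handling are mine, not from the post):

#include <wrl/client.h>
#include <d3d11.h>
#include <d2d1_1.h>

using Microsoft::WRL::ComPtr;

// Hypothetical helper: creates a D2D device context that targets the given
// D3D11 swap chain's backbuffer, so DirectWrite text can be drawn onto it.
HRESULT CreateD2DTarget(ID3D11Device* d3dDevice, IDXGISwapChain* swapChain,
                        ComPtr<ID2D1DeviceContext>& d2dContext)
{
    ComPtr<IDXGIDevice> dxgiDevice;
    d3dDevice->QueryInterface(IID_PPV_ARGS(&dxgiDevice));

    ComPtr<ID2D1Device> d2dDevice;
    D2D1CreateDevice(dxgiDevice.Get(), nullptr, &d2dDevice);
    d2dDevice->CreateDeviceContext(D2D1_DEVICE_CONTEXT_OPTIONS_NONE, &d2dContext);

    // Wrap the swap chain's backbuffer in a D2D bitmap and make it the target.
    ComPtr<IDXGISurface> backBuffer;
    swapChain->GetBuffer(0, IID_PPV_ARGS(&backBuffer));

    D2D1_BITMAP_PROPERTIES1 props = D2D1::BitmapProperties1(
        D2D1_BITMAP_OPTIONS_TARGET | D2D1_BITMAP_OPTIONS_CANNOT_DRAW,
        D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED));

    ComPtr<ID2D1Bitmap1> targetBitmap;
    d2dContext->CreateBitmapFromDxgiSurface(backBuffer.Get(), &props, &targetBitmap);
    d2dContext->SetTarget(targetBitmap.Get());
    return S_OK;
}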
    • By stale
      I've just started learning about tessellation from Frank Luna's DX11 book. I'm getting some very weird behavior when I try to render a tessellated quad patch if I also render a mesh in the same frame. The tessellated quad patch renders just fine if it's the only thing I'm rendering. This is pictured below:
      However, when I attempt to render the same tessellated quad patch along with the other entities in the scene (which are simple triangle-lists), I get the following error:

      I have no idea why this is happening, and google searches have given me no leads at all. I use the following code to render the tessellated quad patch:
ID3D11DeviceContext* dc = GetGFXDeviceContext();

dc->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_4_CONTROL_POINT_PATCHLIST);
dc->IASetInputLayout(ShaderManager::GetInstance()->m_JQuadTess->m_InputLayout);

float blendFactors[] = { 0.0f, 0.0f, 0.0f, 0.0f }; // only used with D3D11_BLEND_BLEND_FACTOR
dc->RSSetState(m_rasterizerStates[RSWIREFRAME]);
dc->OMSetBlendState(m_blendStates[BSNOBLEND], blendFactors, 0xffffffff);
dc->OMSetDepthStencilState(m_depthStencilStates[DSDEFAULT], 0);

ID3DX11EffectTechnique* activeTech = ShaderManager::GetInstance()->m_JQuadTess->Tech;
D3DX11_TECHNIQUE_DESC techDesc;
activeTech->GetDesc(&techDesc);
for (unsigned int p = 0; p < techDesc.Passes; p++)
{
    TerrainVisual* terrainVisual = (TerrainVisual*)entity->m_VisualComponent;
    UINT stride = sizeof(TerrainVertex);
    UINT offset = 0;
    GetGFXDeviceContext()->IASetVertexBuffers(0, 1, &terrainVisual->m_VB, &stride, &offset);

    Vector3 eyePos = Vector3(cam->m_position);
    Matrix rotation = Matrix::CreateFromYawPitchRoll(entity->m_rotationEuler.x, entity->m_rotationEuler.y, entity->m_rotationEuler.z);
    Matrix model = rotation * Matrix::CreateTranslation(entity->m_position);
    Matrix view = cam->GetLookAtMatrix();
    Matrix MVP = model * view * m_ProjectionMatrix;

    ShaderManager::GetInstance()->m_JQuadTess->SetEyePosW(eyePos);
    ShaderManager::GetInstance()->m_JQuadTess->SetWorld(model);
    ShaderManager::GetInstance()->m_JQuadTess->SetWorldViewProj(MVP);

    activeTech->GetPassByIndex(p)->Apply(0, GetGFXDeviceContext());
    GetGFXDeviceContext()->Draw(4, 0);
}
dc->RSSetState(0);
dc->OMSetBlendState(0, blendFactors, 0xffffffff);
dc->OMSetDepthStencilState(0, 0);
I draw my scene by looping through the list of entities and calling the associated draw method depending on the entity's "visual type":
for (unsigned int i = 0; i < scene->GetEntityList()->size(); i++)
{
    Entity* entity = scene->GetEntityList()->at(i);
    if (entity->m_VisualComponent->m_visualType == VisualType::MESH)
        DrawMeshEntity(entity, cam, sun, point);
    else if (entity->m_VisualComponent->m_visualType == VisualType::BILLBOARD)
        DrawBillboardEntity(entity, cam, sun, point);
    else if (entity->m_VisualComponent->m_visualType == VisualType::TERRAIN)
        DrawTerrainEntity(entity, cam);
}
HR(m_swapChain->Present(0, 0));
Any help/advice would be much appreciated!
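Since the error text didn't survive the archive, this is an assumption rather than a diagnosis: a common cause of this pattern is that D3D11 requires a PATCHLIST topology whenever hull/domain shaders are bound, and a plain triangle-list topology when they are not. If the tessellation pass leaves its shaders or topology bound, the next mesh draw is invalid. Explicitly resetting the stages after the patch draw rules this out:

// After drawing the tessellated patch, return the pipeline to a plain
// triangle-list configuration so subsequent mesh draws are valid.
dc->HSSetShader(nullptr, nullptr, 0);
dc->DSSetShader(nullptr, nullptr, 0);
dc->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);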
    • By KaiserJohan
I'm trying a barebones tessellation shader and getting unexpected results when increasing the tessellation factor. I'm rendering a group of quads and trying to apply tessellation to them.
OutsideTess = (1,1,1,1), InsideTess = (1,1)

OutsideTess = (1,1,1,1), InsideTess = (2,1)

I expected 4 triangles in the quad, not two. Any idea what's wrong?
struct PatchTess
{
    float mEdgeTess[4] : SV_TessFactor;
    float mInsideTess[2] : SV_InsideTessFactor;
};

struct VertexOut
{
    float4 mWorldPosition : POSITION;
    float mTessFactor : TESS;
};

struct DomainOut
{
    float4 mWorldPosition : SV_POSITION;
};

struct HullOut
{
    float4 mWorldPosition : POSITION;
};
Hull shader:
PatchTess PatchHS(InputPatch<VertexOut, 3> inputVertices)
{
    PatchTess patch;
    patch.mEdgeTess[0] = 1;
    patch.mEdgeTess[1] = 1;
    patch.mEdgeTess[2] = 1;
    patch.mEdgeTess[3] = 1;
    patch.mInsideTess[0] = 2;
    patch.mInsideTess[1] = 1;
    return patch;
}

[domain("quad")]
[partitioning("fractional_odd")]
[outputtopology("triangle_ccw")]
[outputcontrolpoints(4)]
[patchconstantfunc("PatchHS")]
[maxtessfactor(64.0)]
HullOut hull_main(InputPatch<VertexOut, 3> verticeData, uint index : SV_OutputControlPointID)
{
    HullOut ret;
    ret.mWorldPosition = verticeData[index].mWorldPosition;
    return ret;
}
      Domain shader:
      [domain("quad")] DomainOut domain_main(PatchTess patchTess, float2 uv : SV_DomainLocation, const OutputPatch<HullOut, 4> quad) { DomainOut ret; const float MipInterval = 20.0f; ret.mWorldPosition.xz = quad[ 0 ].mWorldPosition.xz * ( 1.0f - uv.x ) * ( 1.0f - uv.y ) + quad[ 1 ].mWorldPosition.xz * uv.x * ( 1.0f - uv.y ) + quad[ 2 ].mWorldPosition.xz * ( 1.0f - uv.x ) * uv.y + quad[ 3 ].mWorldPosition.xz * uv.x * uv.y ; ret.mWorldPosition.y = quad[ 0 ].mWorldPosition.y; ret.mWorldPosition.w = 1; ret.mWorldPosition = mul( gFrameViewProj, ret.mWorldPosition ); return ret; }  
      Any ideas what could be wrong with these shaders?
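Two details in the hull shader are worth checking (assumptions on my part, not confirmed fixes): the InputPatch is declared with 3 control points while the pipeline renders 4-control-point quad patches, and fractional_odd partitioning rounds a factor of 2 up to the next odd segment count, so it never produces the clean uniform split that integer partitioning gives. A sketch with both adjusted, which should yield a uniform 2x2 grid (8 triangles) per quad:

// Sketch: patch size matches the 4-control-point quad topology, and
// "integer" partitioning makes a factor of 2 split each axis exactly in two.
PatchTess PatchHS(InputPatch<VertexOut, 4> inputVertices)
{
    PatchTess patch;
    patch.mEdgeTess[0] = patch.mEdgeTess[1] = 2;
    patch.mEdgeTess[2] = patch.mEdgeTess[3] = 2;
    patch.mInsideTess[0] = 2;
    patch.mInsideTess[1] = 2;
    return patch;
}

[domain("quad")]
[partitioning("integer")]
[outputtopology("triangle_ccw")]
[outputcontrolpoints(4)]
[patchconstantfunc("PatchHS")]
[maxtessfactor(64.0)]
HullOut hull_main(InputPatch<VertexOut, 4> verticeData, uint index : SV_OutputControlPointID)
{
    HullOut ret;
    ret.mWorldPosition = verticeData[index].mWorldPosition;
    return ret;
}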
    • By simco50
I've stumbled upon the Urho3D engine and found that it has a really nice and easy-to-read code structure.
I think the graphics abstraction looks really interesting, and I like how it defers pipeline state changes until just before the draw call to eliminate redundant state changes.
This is done by saving the state changes (blendEnabled/SRV changes/RTV changes) in member variables and applying the actual state changes through the graphics context just before the draw.
      It looks something like this (pseudo):
void PrepareDraw()
{
    if (renderTargetsDirty)
    {
        pD3D11DeviceContext->OMSetRenderTarget(mCurrentRenderTargets);
        renderTargetsDirty = false;
    }
    if (texturesDirty)
    {
        pD3D11DeviceContext->PSSetShaderResourceView(..., mCurrentSRVs);
        texturesDirty = false;
    }
    // ... some more state changes
}
This all looked like a great design at first, but I've found one big issue with it, and I don't really understand how it is solved in their case or how I would tackle it.
      I'll explain it by example, imagine I have two rendertargets: my backbuffer RT and an offscreen RT.
      Say I want to render my backbuffer to the offscreen RT and then back to the backbuffer (Just for the sake of the example).
      You would do something like this:
//Render to the offscreen RT
pGraphics->SetRenderTarget(pOffscreenRT->GetRTV());
pGraphics->SetTexture(diffuseSlot, pDefaultRT->GetSRV());
pGraphics->DrawQuad();
pGraphics->SetTexture(diffuseSlot, nullptr); //Remove the default RT from input

//Render to the default (screen) RT
pGraphics->SetRenderTarget(nullptr); //Default RT
pGraphics->SetTexture(diffuseSlot, pOffscreenRT->GetSRV());
pGraphics->DrawQuad();
The problem is that the next time the application loop comes around, the offscreen render target is still bound as an input ShaderResourceView when it gets set as a RenderTargetView. In Urho3D the RenderTargetView state is always applied before the ShaderResourceViews (see the snippet above), even when I set the SRV to nullptr before using the texture as an RTV, as above. This causes errors because a resource can't be bound both as input and as a render target.
      What is usually the solution to this?
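A common way out (a sketch of the general idea, not Urho3D's actual code) is to make the flush order match the hazard: apply pending SRV changes, including the nullptr set above, before applying render target changes, so a resource is never bound on both sides at once. Some engines additionally scan the bound SRV slots when a texture is set as a render target and null any slot that references it. Reusing the member names from the pseudo-code above (MAX_TEXTURE_SLOTS and mCurrentDSV are assumed):

void PrepareDraw()
{
    // Flush SRV changes first: the slot that was set to nullptr is unbound
    // before its resource can be set as a render target below.
    if (texturesDirty)
    {
        pD3D11DeviceContext->PSSetShaderResources(0, MAX_TEXTURE_SLOTS, mCurrentSRVs);
        texturesDirty = false;
    }
    if (renderTargetsDirty)
    {
        pD3D11DeviceContext->OMSetRenderTargets(1, mCurrentRenderTargets, mCurrentDSV);
        renderTargetsDirty = false;
    }
    // ... remaining state changes
}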

DX11 - SSAO


Recommended Posts

I'm so terribly sorry to re-open this one yet again, but I never fully succeeded at SSAO.


SSAO Shader:

Texture2D t_depthmap : register(t0);
Texture2D t_normalmap : register(t1);
Texture2D t_random : register(t2);
SamplerState ss;

cbuffer SSAOBuffer : register(b0) // constant buffers live in the b registers, not c
{
	float g_scale;
	float g_bias;
	float g_sample_rad;
	float g_intensity;
	float ssaoIterations;
	float3 pppspace; // padding to a 16-byte boundary

	matrix view;
};

struct VS_Output
{
	float4 Pos : SV_POSITION;
	float2 Tex : TEXCOORD0;
};

// Fullscreen triangle generated from the vertex ID; no vertex buffer needed.
VS_Output VShader(uint id : SV_VertexID)
{
	VS_Output Output;
	Output.Tex = float2((id << 1) & 2, id & 2);
	Output.Pos = float4(Output.Tex * float2(2, -2) + float2(-1, 1), 0, 1);
	return Output;
}

// Helper for modifying the saturation of a color.
float4 AdjustSaturation(float4 color, float saturation)
{
	// The constants 0.3, 0.59, and 0.11 are chosen because the
	// human eye is more sensitive to green light, and less to blue.
	float grey = dot(color.rgb, float3(0.3, 0.59, 0.11));

	return lerp(grey, color, saturation);
}

// Ambient Occlusion Stuff --------------------------------------------------

float3 getPosition(in float2 uv)
{
	// NOTE: the algorithm expects a view-space *position* here, not a
	// single depth value (see the replies below).
	return t_depthmap.Sample(ss, uv).xyz;
}

float3 getNormal(in float2 uv)
{
	return normalize(t_normalmap.Sample(ss, uv).xyz * 2.0f - 1.0f);
}

float2 getRandom(in float2 uv)
{
	//return normalize(t_random.Sample(ss, uv).xy * 2.0f - 1.0f); // ~100 FPS
	// Tile the 60x60 random texture across the screen.
	return normalize(t_random.Sample(ss, float2(600, 800) * uv / float2(60, 60)).xy * 2.0f - 1.0f);
}

float doAmbientOcclusion(in float2 tcoord, in float2 uv, in float3 p, in float3 cnorm)
{
	float3 diff = getPosition(tcoord + uv) - p;
	const float3 v = normalize(diff);
	const float d = length(diff) * g_scale;
	return max(0.0, dot(cnorm, v) - g_bias) * (1.0 / (1.0 + d)) * g_intensity;
}

// End ------------------------------------------------------------------------

float4 PShader(VS_Output input) : SV_TARGET
{
	// ADD SSAO -------------------------------------------------------------
	const float2 vec[4] = { float2(1, 0), float2(-1, 0),
	                        float2(0, 1), float2(0, -1) };

	float3 p = getPosition(input.Tex);
	float3 n = getNormal(input.Tex);
	float2 rand = getRandom(input.Tex);

	float ao = 0.0f;
	float rad = g_sample_rad / p.z;

	//** SSAO Calculation **//
	int iterations = 4;
	for (int j = 0; j < iterations; ++j)
	{
		float2 coord1 = reflect(vec[j], rand) * rad;
		float2 coord2 = float2(coord1.x * 0.707 - coord1.y * 0.707,
		                       coord1.x * 0.707 + coord1.y * 0.707); // coord1 rotated 45 degrees

		ao += doAmbientOcclusion(input.Tex, coord1 * 0.25, p, n);
		ao += doAmbientOcclusion(input.Tex, coord2 * 0.5,  p, n);
		ao += doAmbientOcclusion(input.Tex, coord1 * 0.75, p, n);
		ao += doAmbientOcclusion(input.Tex, coord2,        p, n);
	}
	ao /= (float)iterations * 4.0;

	ao = saturate(ao);

	return float4(ao, ao, ao, 1.0f);
}
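For completeness, here is how a pass like this is typically issued from the CPU side (a minimal sketch with assumed variable names, not code from the original post). Since the vertex shader builds a fullscreen triangle from SV_VertexID, no vertex or index buffer is needed:

// Assumed names: ssaoVS/ssaoPS are the compiled shaders above, depthSRV,
// normalSRV and randomSRV are the views bound to t0-t2, ssaoCB is the
// SSAOBuffer, and ssaoRTV is the target the AO map is rendered into.
dc->OMSetRenderTargets(1, &ssaoRTV, nullptr);
dc->IASetInputLayout(nullptr); // SV_VertexID only; no vertex buffer required
dc->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
dc->VSSetShader(ssaoVS, nullptr, 0);
dc->PSSetShader(ssaoPS, nullptr, 0);
ID3D11ShaderResourceView* srvs[3] = { depthSRV, normalSRV, randomSRV };
dc->PSSetShaderResources(0, 3, srvs);
dc->PSSetSamplers(0, 1, &samplerState);
dc->PSSetConstantBuffers(0, 1, &ssaoCB);
dc->Draw(3, 0); // one triangle covering the whole screen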

How I write my depth map for the SSAO pass:

// Vertex shader:
output.depth = output.position.z / output.position.w;

// Pixel shader:
float depth = input.depth;
output.Depth = float4(depth, depth, depth, 1);

How I write my normal map for the SSAO pass:

// Vertex shader -- I think this is wrong:
output.NormalW = mul(mul(worldMatrix, viewMatrix), float4(normal.xyz, 0));

// Pixel shader:
output.Normals = float4(input.NormalW, 1);

Render Result:

[screenshot]

Again, I'm so sorry for re-posting this; it's just that I really want this feature completed (well, working, at least).


Thank you, as always...


So how would the depth map be written correctly for the SSAO shader?


Thank you for your interest by the way, appreciate it!


I think that SSAO shader needs the view-space position (all three components: x, y and z). As far as I can see, you store normals that way, although:

output.NormalW = mul(mul(worldMatrix, viewMatrix), float4(normal.xyz,0) );

this mul order is right if the shader packs matrices as column-major. I don't know the DX11 shader systems so I am not sure, but in DX9 I can force row-major with this statement:

#pragma pack_matrix(row_major)

because it matches my matrix layout in C++ code, and I always multiply vectors with matrices this way:

result = mul( vec, matrix );
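To make that concrete (my illustration, not belfegor's code): the two call orders treat the vector as a row or a column vector respectively, so each one is only correct when it matches how the matrix was laid out by the CPU side.

// With D3DX-style matrices (built for row vectors), the position goes on
// the left; this matches #pragma pack_matrix(row_major) with matrices
// uploaded unmodified:
float4 a = mul(float4(pos, 1.0f), worldViewProj);

// The reversed order treats pos as a column vector; it gives the same
// result only if the matrix is transposed relative to the line above:
float4 b = mul(transpose(worldViewProj), float4(pos, 1.0f));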

Share this post

Link to post
Share on other sites

Ok, it is a bit better now, in the sense that it doesn't just become randomly black (SSAO map):

[screenshot]
But this isn't right yet, or at least I don't think it is. Look at the left of the cube, where the left side is rendered: it's black. Why?


Ohh, and belfegor, about the matrices, you were right!


How I write my depth and normal maps for the SSAO pass now:


// Vertex shader:
output.position = mul(position, worldMatrix);
output.position = mul(output.position, viewMatrix);
output.position = mul(output.position, projectionMatrix);

output.NormalW = mul(float4(-normal.xyz, 0), mul(worldMatrix, viewMatrix));

// Store the position value in a second input value for depth value calculations.
output.depthPosition = output.position;

// Pixel shader:
// Depth
float depth = 1.0f - (input.depthPosition.z / input.depthPosition.w);
output.Depth = float4(depth, depth, depth, 1.0f);

// Normals
output.Normals = float4(input.NormalW, 1);

Now, is this the correct way to write the depth and normal maps?


I said to output the view-space position, so it is like this:

// Vertex shader:
output.depthPosition.xyz = mul(float4(position.xyz, 1), mul(worldMatrix, viewMatrix)).xyz;

// Pixel shader:
output.Depth = float4(input.depthPosition.xyz, 1.0f);

although you must have a proper render target (or "view", whatever they call it in DX11) format that can hold at least three floating-point components.
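For example (a minimal D3D11 sketch added for reference, not belfegor's code; screenWidth/screenHeight are assumed variables), a position buffer with enough precision can be created like this:

// Texture that stores the view-space position (xyz) per pixel.
// R32G32B32A32_FLOAT guarantees full float precision;
// R16G16B16A16_FLOAT is usually enough and half the bandwidth.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = screenWidth;
desc.Height           = screenHeight;
desc.MipLevels        = 1;
desc.ArraySize        = 1;
desc.Format           = DXGI_FORMAT_R32G32B32A32_FLOAT;
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_DEFAULT;
desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* positionTex = nullptr;
device->CreateTexture2D(&desc, nullptr, &positionTex);

ID3D11RenderTargetView* positionRTV = nullptr;
device->CreateRenderTargetView(positionTex, nullptr, &positionRTV);

ID3D11ShaderResourceView* positionSRV = nullptr;
device->CreateShaderResourceView(positionTex, nullptr, &positionSRV);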


And why do you invert your normals now?

output.NormalW = mul(float4(-normal.xyz,0), mul(worldMatrix, viewMatrix));

Then, when you fix those issues, play with the SSAO "sampling radius" to match your world scale (in case you don't get the expected results).



Let me try this one with my renderer and I'll get back to you.


EDIT: Here it is (on and off). All parameters are the same as yours, except the radius, which I set to 0.1f (10 cm in my world) since it feels more natural to me.

[screenshots]

