
Cubemap Depth Sample


So, I have a dynamic cubemap (below) and I need to sample depth from it in a shader. With a 2D texture I was doing it like this:
 
struct VertexIn
{
    float3 PosL    : POSITION;
    float3 NormalL : NORMAL;
    float2 Tex     : TEXCOORD;
};
 
struct VertexOut
{
   float4 PosH : SV_POSITION;
   float2 Tex  : TEXCOORD;
};
 
VertexOut VS(VertexIn vin)
{
    VertexOut vout;
 
    vout.PosH = mul(float4(vin.PosL, 1.0f), gWorldViewProj);
    vout.Tex  = vin.Tex;
 
    return vout;
}
 
 
float4 PS(VertexOut pin, uniform int index) : SV_Target
{
    float c = gTexture.Sample(samLinear, pin.Tex).r;
 
    return float4(c.rrr, 1);
}

 
How can I do that for a cubemap created like this:
 
D3D11_TEXTURE2D_DESC texDesc;
texDesc.Width = CubeMapSize;
texDesc.Height = CubeMapSize;
texDesc.MipLevels = 1;
texDesc.ArraySize = 6;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Format = DXGI_FORMAT_R24G8_TYPELESS;
texDesc.Usage = D3D11_USAGE_DEFAULT;
texDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_DEPTH_STENCIL;
texDesc.CPUAccessFlags = 0;
texDesc.MiscFlags = D3D11_RESOURCE_MISC_GENERATE_MIPS | D3D11_RESOURCE_MISC_TEXTURECUBE;

ID3D11Texture2D* cubeTex = 0;
HR(md3dDevice->CreateTexture2D(&texDesc, 0, &cubeTex));

D3D11_RENDER_TARGET_VIEW_DESC rtvDesc;
rtvDesc.Format = texDesc.Format;
rtvDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2DARRAY;
rtvDesc.Texture2DArray.ArraySize = 1;
rtvDesc.Texture2DArray.MipSlice = 0;

for(int i = 0; i < 6; ++i)
{
    rtvDesc.Texture2DArray.FirstArraySlice = i;
    HR(md3dDevice->CreateRenderTargetView(cubeTex, &rtvDesc, &mDynamicCubeMapRTV[i]));
}

D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
srvDesc.Format = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURECUBE;
srvDesc.TextureCube.MostDetailedMip = 0;
srvDesc.TextureCube.MipLevels = 1;

HR(md3dDevice->CreateShaderResourceView(cubeTex, &srvDesc, &mDynamicCubeMapSRV));

D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc;
dsvDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
dsvDesc.Flags = 0;
dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
dsvDesc.Texture2D.MipSlice = 0;
HR(md3dDevice->CreateDepthStencilView(cubeTex, &dsvDesc, &mDynamicCubeMapDSV));

ReleaseCOM(cubeTex);

mCubeMapViewport.TopLeftX = 0.0f;
mCubeMapViewport.TopLeftY = 0.0f;
mCubeMapViewport.Width    = (float)CubeMapSize;
mCubeMapViewport.Height   = (float)CubeMapSize;
mCubeMapViewport.MinDepth = 0.0f;
mCubeMapViewport.MaxDepth = 1.0f;
Edited by MJP


Something tells me you aren't checking your HRESULTs despite having wrapped your Create calls in your HR macro.

texDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE|D3D11_BIND_DEPTH_STENCIL;

It's never valid to have D3D11_BIND_RENDER_TARGET and D3D11_BIND_DEPTH_STENCIL at the same time, so this texture won't be created.

 

You're going to have to explain what exactly isn't working and ask a question that can actually be answered.

 

Turn on the Debug Layer using D3D11_CREATE_DEVICE_DEBUG and fix any errors first.

Edited by Adam Miles


Thanks for the reply. I need to create a dynamic cube texture, but I only need depth; I want to sample this texture in a shader for point light shadows. You said:

It's never valid to have D3D11_BIND_RENDER_TARGET and D3D11_BIND_DEPTH_STENCIL at the same time, so this texture won't be created.

So how can I achieve this? I need to bind render targets to the faces, and I need a depth stencil too.

@Adam Miles

People like me get confused a lot about sampling a depth texture. Not sure if I should start a new thread, but I will try here anyway.

Consider this statement: 'A cube texture uses a unit directional vector to sample a location, while normal textures need to use projected and scale-biased texture coordinates to sample a location.'

A cube texture is essentially an array of 2D textures, I believe. Can I 'manually' sample a cube map as I do for a normal Texture2D? The crucial part here is, since a point light has six depth textures in six directions, determining which of the six depth textures to use when sampling a location. To determine this I need to find the max coordinate in the direction vector from the light source to the location and use the depth texture in that direction to sample from.

Or, instead of doing all this, we simply pass the direction vector to the cube texture and are done with it. So my question is: do those two methods give the same results?


If the texture only needs to be used as a depth buffer and as a texture in your shader, then you don't need D3D11_BIND_RENDER_TARGET. That flag is for render targets (SV_TARGET0-7) and is not required for depth buffers.


@Adam Miles

People like me get confused a lot about sampling a depth texture. Not sure if I should start a new thread, but I will try here anyway.

Consider this statement: 'A cube texture uses a unit directional vector to sample a location, while normal textures need to use projected and scale-biased texture coordinates to sample a location.'

A cube texture is essentially an array of 2D textures, I believe. Can I 'manually' sample a cube map as I do for a normal Texture2D? The crucial part here is, since a point light has six depth textures in six directions, determining which of the six depth textures to use when sampling a location. To determine this I need to find the max coordinate in the direction vector from the light source to the location and use the depth texture in that direction to sample from.

Or, instead of doing all this, we simply pass the direction vector to the cube texture and are done with it. So my question is: do those two methods give the same results?

 

Yes, you can sample a CubeTexture (whether it's storing depth or colour) using a 3D normalised unit vector as your 'texture coordinate'. There's no need to manually calculate which face to sample and the 2D coordinate on the face, the hardware does that for you. Sample from the CubeTexture as if it were a sphere: (0,1,0) samples the middle of the +Y face, (0,-1,0) samples the middle of the -Y face etc.


So how can I draw to each cube face without binding render targets to every face?

Can anyone give code which creates a cube depth texture, and an example of how to draw depth into it?

I generally don't understand how one 2D depth stencil view can contain 6 cube face views. If someone could explain it I would be obliged.

Edited by widmowyfox

You don't need a render target to write depth, you just need a depth stencil view. For rendering to the 6 faces separately, you just need 6 depth stencil views that each target a particular face. It's almost exactly like the code that you have for creating the 6 render target views, except that you create depth stencil views:
 
for(uint32_t i = 0; i < 6; ++i)
{
    D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = { };
    dsvDesc.Format = format;
    dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2DARRAY;
    dsvDesc.Texture2DArray.ArraySize = 1;
    dsvDesc.Texture2DArray.FirstArraySlice = i;
    dsvDesc.Texture2DArray.MipSlice = 0;
    dsvDesc.Flags = 0;
    DXCall(device->CreateDepthStencilView(textureResource, &dsvDesc, &arraySliceDSVs[i]));
}
The other way to do it is to have 1 depth stencil view that targets the entire array, and then use SV_RenderTargetArrayIndex from a geometry shader in order to specify which slice you want to render to.

Edited by MJP


The other way to do it is to have 1 depth stencil view that targets the entire array, and then use SV_RenderTargetArrayIndex from either your vertex shader or geometry shader in order to specify which slice you want to render to.

 

Using SV_RenderTargetArrayIndex from a Vertex Shader is not supported in D3D11, so that won't be an option for the OP.


The other way to do it is to have 1 depth stencil view that targets the entire array, and then use SV_RenderTargetArrayIndex from either your vertex shader or geometry shader in order to specify which slice you want to render to.

 
Using SV_RenderTargetArrayIndex from a Vertex Shader is not supported in D3D11, so that won't be an option for the OP.


Indeed, thank you for correcting me.

Edited by MJP


That was helpful. As I understand it, I can simply set the render target to null: ID3D11RenderTargetView* renderTargets[1] = { 0 };
Now my depth cube texture is 100% correct, but I still don't know how to sample it in a shader:
 
 

float4 PS(VertexOut pin, uniform int index) : SV_Target
{
    float c = gTexture.Sample(samLinear, pin.Tex).r;

    return float4(c.rrr, 1);
}

In a cube texture I can't use a 2D coordinate system :/

Please, can someone explain the cube texture sampling methods to me step by step?

Edited by widmowyfox


You need to use the (Light - Position) 3D vector to sample the cubemap.

This is for OpenGL, but it's pretty much the same in HLSL:

 

http://www.sunandblackcat.com/tipFullView.php?l=eng&topicid=36

vec3 fromLightToFragment = u_lightPos - o_worldPosition.xyz;
// normalized distance to the point light source
float distanceToLight = length(fromLightToFragment);
float currentDistanceToLight = (distanceToLight - u_nearFarPlane.x) /
        (u_nearFarPlane.y - u_nearFarPlane.x);
currentDistanceToLight = clamp(currentDistanceToLight, 0, 1);
// normalized direction from light source for sampling
fromLightToFragment = normalize(fromLightToFragment);

// sample shadow cube map
float referenceDistanceToLight = texture(u_shadowCubeMap, -fromLightToFragment).r;
// compare distances to determine whether the fragment is in shadow
float shadowFactor = float(referenceDistanceToLight > currentDistanceToLight);

As you can see, he calculates a 3d vector and uses it to sample the cubemap.


I calculate the shadow factor according to the instructions:

float CalcShadowFactor(SamplerComparisonState samShadow,
                       TextureCube shadowMap,
                       float3 shadowPosH)
{
    float3 l = float3(4.0f, 2.0f, 8.0f);
    float3 d = l - shadowPosH;
    float dis = length(d);
    d = normalize(d);

    float percentLit = 0.0f;
    percentLit += shadowMap.SampleCmpLevelZero(samShadow, d, dis).r;

    return percentLit;
}

It simply doesn't work; it projects shadows to wrong places. Am I doing something wrong?

l - light world position

shadowPosH - world position, calculated by mul(float4(vin.PosL, 1.0f), gWorld).xyz

 

If all the shader code is necessary:

#include "LightHelper.fx"
 
cbuffer cbPerFrame
{
	PointLight gDirLights;
	float3 gEyePosW;

	float  gFogStart;
	float  gFogRange;
	float4 gFogColor; 
};

cbuffer cbPerObject
{
	float4x4 gWorld;
	float4x4 gWorldInvTranspose;
	float4x4 gWorldViewProj;
	float4x4 gTexTransform;
	float4x4 gShadowTransform; 
	Material gMaterial;
}; 


Texture2D gDiffuseMap;
TextureCube gShadowMap;

TextureCube gCubeMap;

SamplerState samLinear
{
	Filter = MIN_MAG_MIP_LINEAR;
	AddressU = WRAP;
	AddressV = WRAP;
};

SamplerComparisonState samShadow
{
	Filter   = COMPARISON_MIN_MAG_MIP_LINEAR;
	AddressU = BORDER;
	AddressV = BORDER;
	AddressW = BORDER;
	BorderColor = float4(0.0f, 0.0f, 0.0f, 0.0f);

    ComparisonFunc = LESS_EQUAL;
};
 
struct VertexIn
{
	float3 PosL    : POSITION;
	float3 NormalL : NORMAL;
	float2 Tex     : TEXCOORD;
};

struct VertexOut
{
	float4 PosH       : SV_POSITION;
	float3 PosL: POSITION0;
    float3 PosW       : POSITION1;
    float3 NormalW    : NORMAL;
	float2 Tex        : TEXCOORD0;
	float3 ShadowPosH : TEXCOORD1;
};

VertexOut VS(VertexIn vin)
{
	VertexOut vout;
	
	
	vout.PosW    = mul(float4(vin.PosL, 1.0f), gWorld).xyz;
	vout.NormalW = mul(vin.NormalL, (float3x3)gWorldInvTranspose);
		
	
	vout.PosH = mul(float4(vin.PosL, 1.0f), gWorldViewProj);
	
	
	vout.Tex = mul(float4(vin.Tex, 0.0f, 1.0f), gTexTransform).xy;

	
	vout.ShadowPosH = vout.PosW;
	vout.PosL=vin.PosL;
	return vout;
}
 
float4 PS(VertexOut pin, 
          uniform int gLightCount, 
		  uniform bool gUseTexure, 
		  uniform bool gAlphaClip, 
		  uniform bool gFogEnabled, 
		  uniform bool gReflectionEnabled) : SV_Target
{
	
    pin.NormalW = normalize(pin.NormalW);

	
	float3 toEye = gEyePosW - pin.PosW;
float3 toEyeW = normalize(gEyePosW - pin.PosW);
	
	float distToEye = length(toEye);

	
	toEye /= distToEye;
	
    
    float4 texColor = float4(1, 1, 1, 1);
    if(gUseTexure)
	{
		// Sample the texture.
		texColor = gDiffuseMap.Sample( samLinear, pin.Tex );

		if(gAlphaClip)
		{
		
			clip(texColor.a - 0.1f);
		}
	}
	 
	
	float4 litColor = texColor;
	if( gLightCount > 0  )
	{  
		
		float4 ambient = float4(0.0f, 0.0f, 0.0f, 0.0f);
		float4 diffuse = float4(0.0f, 0.0f, 0.0f, 0.0f);
		float4 spec    = float4(0.0f, 0.0f, 0.0f, 0.0f);

		
		float3 shadow = float3(1.0f, 1.0f, 1.0f);
	shadow[0] = CalcShadowFactor(samShadow, gShadowMap, pin.ShadowPosH);

		// Sum the light contribution from the point light.
			float4 A, D, S;
			ComputePointLight(gMaterial, gDirLights, pin.PosW, pin.NormalW, toEyeW, A, D, S);

			ambient += A;    
			diffuse += shadow[0]*D;
			spec    += shadow[0]*S;
		

		litColor = texColor*(ambient + diffuse) + spec;

		if( gReflectionEnabled )
		{
			float3 incident = -toEye;
			float3 reflectionVector = reflect(incident, pin.NormalW);
			float4 reflectionColor  = gCubeMap.Sample(samLinear, reflectionVector);

			litColor += gMaterial.Reflect*reflectionColor;
		}
	}
 
	

	if( gFogEnabled )
	{
		float fogLerp = saturate( (distToEye - gFogStart) / gFogRange ); 

		
		litColor = lerp(litColor, gFogColor, fogLerp);
	}

	
	litColor.a = gMaterial.Diffuse.a * texColor.a;

    return litColor;
}
//techniques bla bla bla
Edited by widmowyfox


Have you checked that the six sides of your cubemap produce the correct shadows for their respective directions? Maybe you've gotten the axis signs wrong, so -X produces the shadowmap for +X and so on..?

 

Edit: Have you tried sampling with -d?

percentLit += shadowMap.SampleCmpLevelZero(samShadow, -d, dis).r;
Edited by vinterberg


Hmmm, the shadows are in totally wrong positions, not only on incorrect sides. I checked the depth map and it looks 100% fine. I tried with -d, and the result was the disappearance of the light. Any suggestions? Maybe I should upload a Visual Studio project here for tests?

Edited by widmowyfox

It looks like the direction that you use to sample the cubemap is backwards. You want to do "shadowPosH - l", assuming that "l" is the world space position of your point light. The code that vinterberg posted is actually incorrect in the same way: it uses the variable name "fromLightToFragment", but it's actually computing a vector from the fragment to the light (this is why it uses "-fromLightToFragment" when sampling the cube map).

Also...if you're going to use SampleCmp to sample a depth buffer, then you can't use the distance from your point light to the surface as the comparison value. Your depth buffer will contain [0, 1] values that correspond to z/w after applying your projection matrix, not the absolute world space distance from the light to the surface. This means you need to project your light->surface vector onto the axis that corresponds to the cubemap face you'll be sampling from:

float3 shadowPos = surfacePos - lightPos;
float shadowDistance = length(shadowPos);
float3 shadowDir = normalize(shadowPos);

// Taking the max of the absolute components tells us 2 things: which cubemap face
// we're going to use, and also what the projected distance is onto the major axis
// for that face.
float projectedDistance = max(max(abs(shadowPos.x), abs(shadowPos.y)), abs(shadowPos.z));

// Compute the projected depth value that matches what would be stored in the depth
// buffer for the current cube map face. "ShadowProjection" is the projection matrix
// used when rendering to the shadow map.
float a = ShadowProjection._33;
float b = ShadowProjection._43;
float z = projectedDistance * a + b;
float dbDistance = z / projectedDistance;

return ShadowMap.SampleCmpLevelZero(PCFSampler, shadowDir, dbDistance - Bias);


Thank you MJP, that was very helpful. I tested it and your solution works 100%.

Edited by widmowyfox
