DX11 Problem with Point light shadows

Hello!
 

I have an issue with my point light shadow implementation.

(attached screenshot: Capture.PNG, showing the shadow artifacts)

 

First of all, the relevant part of the pixel shader:


//....

float3 toLight = plPosW.xyz - input.posW;
float3 fromLight = -toLight;

//...

// Take the largest absolute component of the light-to-pixel vector:
// for a cube map, that is the distance along the selected face's view axis.
float depthL = abs(fromLight.x);

if(depthL < abs(fromLight.y))
    depthL = abs(fromLight.y);

if(depthL < abs(fromLight.z))
    depthL = abs(fromLight.z);

// Run that distance through the light's projection and do the perspective
// divide to get the same depth value the depth buffer stores.
float4 pH = mul(float4(0.0f, 0.0f, depthL, 1.0f), lightProj);
pH /= pH.w;

// Hardware PCF comparison against the cube depth map.
isVisible = lightDepthTex.SampleCmpLevelZero(lightDepthSampler, normalize(fromLight), pH.z).x;



This is how the lightProj matrix is created:

// 90-degree vertical FOV, near 0.01, far 1000, aspect ratio 1.0
Matrix4x4 projMat = Matrix4x4::PerspectiveFovLH(0.5f * Pi, 0.01f, 1000.0f, 1.0f);
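For reference, this is what the mul/divide pair in the shader evaluates to in closed form, assuming my Matrix4x4::PerspectiveFovLH builds the usual left-handed projection (z' = z * zf/(zf-zn) - zn*zf/(zf-zn), w' = z). It is only a sketch for checking values, not code from the renderer:

// Sketch: the depth that mul(float4(0,0,depthL,1), lightProj) / pH.w yields,
// under the projection convention stated above.
float ProjectedCubeDepth(float depthL, float zn = 0.01f, float zf = 1000.0f)
{
    // depthL closer than the near plane gives values outside [0, 1]
    return zf / (zf - zn) - (zn * zf) / ((zf - zn) * depthL);
}

If this does not match what ends up in pH.z, the SampleCmpLevelZero comparison is fed a depth in a different range than the one stored in the cube map, which typically shows up as whole faces being wrongly lit or shadowed.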

 

This is how I create the depth cube texture:
 

// One viewport shared by all six cube faces.
viewport->TopLeftX = 0.0f;
viewport->TopLeftY = 0.0f;
viewport->Width    = static_cast<float>(1024);
viewport->Height   = static_cast<float>(1024);
viewport->MinDepth = 0.0f;
viewport->MaxDepth = 1.0f;

// Typeless 24/8 format so the texture can be both a depth buffer and an SRV.
D3D11_TEXTURE2D_DESC textureDesc;
textureDesc.Width = 1024;
textureDesc.Height = 1024;
textureDesc.MipLevels = 1;
textureDesc.ArraySize = 6;
textureDesc.Format = DXGI_FORMAT_R24G8_TYPELESS;
textureDesc.SampleDesc.Count = 1;
textureDesc.SampleDesc.Quality = 0;
textureDesc.Usage = D3D11_USAGE_DEFAULT;
textureDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
textureDesc.CPUAccessFlags = 0;
textureDesc.MiscFlags = D3D11_RESOURCE_MISC_TEXTURECUBE;

ID3D11Texture2D* texturePtr;
HR(DeviceKeeper::GetDevice()->CreateTexture2D(&textureDesc, NULL, &texturePtr));

// One depth-stencil view per cube face.
for(UINT i = 0; i < 6; ++i){

  D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc;
  dsvDesc.Flags = 0;
  dsvDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
  dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2DARRAY;
  dsvDesc.Texture2DArray.MipSlice = 0;
  dsvDesc.Texture2DArray.FirstArraySlice = i;
  dsvDesc.Texture2DArray.ArraySize = 1;

  ID3D11DepthStencilView *outDsv;
  HR(DeviceKeeper::GetDevice()->CreateDepthStencilView(texturePtr, &dsvDesc, &outDsv));

  // Store the view for face i (overwriting a single edgeDsv pointer each
  // iteration would keep only the last face and leak the other five views).
  edgesDsv[i] = outDsv;
}

// SRV over the whole cube for sampling in the pixel shader.
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
srvDesc.Format = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURECUBE;
srvDesc.TextureCube.MostDetailedMip = 0;
srvDesc.TextureCube.MipLevels = 1;

ID3D11ShaderResourceView *outSRV;
HR(DeviceKeeper::GetDevice()->CreateShaderResourceView(texturePtr, &srvDesc, &outSRV));
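One more piece that is not shown above: lightDepthSampler has to be a comparison sampler for SampleCmpLevelZero to work. Mine looks roughly like this (the LESS_EQUAL function is an assumption; it has to match how the depth is written):

// Sketch: comparison sampler for hardware PCF via SampleCmpLevelZero.
D3D11_SAMPLER_DESC sampDesc = {};
sampDesc.Filter = D3D11_FILTER_COMPARISON_MIN_MAG_MIP_LINEAR;
sampDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
sampDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
sampDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
sampDesc.ComparisonFunc = D3D11_COMPARISON_LESS_EQUAL; // assumption

ID3D11SamplerState *cmpSampler;
HR(DeviceKeeper::GetDevice()->CreateSamplerState(&sampDesc, &cmpSampler));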

 

Then I create six target-oriented cameras and finally draw the scene into the cube depth map, once per camera (a sketch of the per-face pass follows the camera code).

Camera creation code:

std::vector<Vector3> camDirs = {
  { 1.0f,  0.0f,  0.0f},
  {-1.0f,  0.0f,  0.0f},
  { 0.0f,  1.0f,  0.0f},
  { 0.0f, -1.0f,  0.0f},
  { 0.0f,  0.0f,  1.0f},
  { 0.0f,  0.0f, -1.0f},
};

std::vector<Vector3> camUps = {
  {0.0f, 1.0f, 0.0f},  // +X
  {0.0f, 1.0f, 0.0f},  // -X
  {0.0f, 0.0f, -1.0f}, // +Y
  {0.0f, 0.0f, 1.0f},  // -Y
  {0.0f, 1.0f, 0.0f},  // +Z
  {0.0f, 1.0f, 0.0f}   // -Z
};

for(size_t b = 0; b < camDirs.size(); b++){
  edgesCameras[b].SetPos(pl.GetPos());
  edgesCameras[b].SetTarget(pl.GetPos() + camDirs[b]);
  edgesCameras[b].SetUp(camUps[b]);
  edgesCameras[b].SetProjMatrix(projMat);
}
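And roughly how the per-face depth pass goes (RenderSceneDepth and DeviceKeeper::GetContext are stand-in names for my actual scene-drawing call and context accessor):

// Sketch of the per-face depth pass; only the binding order matters here.
DeviceKeeper::GetContext()->RSSetViewports(1, viewport);

for(size_t b = 0; b < 6; b++){
  // Bind face b's depth view with no color targets.
  DeviceKeeper::GetContext()->ClearDepthStencilView(edgesDsv[b], D3D11_CLEAR_DEPTH, 1.0f, 0);
  DeviceKeeper::GetContext()->OMSetRenderTargets(0, nullptr, edgesDsv[b]);

  // Draw all shadow casters with camera b's view and the shared projection.
  RenderSceneDepth(edgesCameras[b]);
}

// Unbind the DSV before sampling the cube as an SRV in the lighting pass.
DeviceKeeper::GetContext()->OMSetRenderTargets(0, nullptr, nullptr);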

 

I will be very grateful for any help!

P.S. Sorry for my poor English :)

 

Edited by AlexWIN32


