
DX11 Problem with Point light shadows


I have an issue with my point light shadow implementation.



First of all, the pixel shader part:


float3 toLight = plPosW.xyz - input.posW;
float3 fromLight = -toLight;

// Take the largest absolute component as the distance along the dominant axis
// (the same distance the cube face camera saw when rendering the shadow map)
float depthL = abs(fromLight.x);

if(depthL < abs(fromLight.y))
  depthL = abs(fromLight.y);

if(depthL < abs(fromLight.z))
  depthL = abs(fromLight.z);

// Run that distance through the light projection to get the non-linear
// depth value that is actually stored in the shadow map
float4 pH = mul(float4(0.0f, 0.0f, depthL, 1.0f), lightProj);
pH /= pH.w;

isVisible = lightDepthTex.SampleCmpLevelZero(lightDepthSampler, normalize(fromLight), pH.z).x;

The lightProj matrix creation:

Matrix4x4 projMat = Matrix4x4::PerspectiveFovLH(0.5f * Pi, 0.01f, 1000.0f, 1.0f);


This is how I create the depth cube texture:

viewport->TopLeftX = 0.0f;
viewport->TopLeftY = 0.0f;
viewport->Width    = static_cast<float>(1024);
viewport->Height   = static_cast<float>(1024);
viewport->MinDepth = 0.0f;
viewport->MaxDepth = 1.0f;

D3D11_TEXTURE2D_DESC textureDesc;
textureDesc.Width = 1024;
textureDesc.Height = 1024;
textureDesc.MipLevels = 1;
textureDesc.ArraySize = 6;
textureDesc.Format = DXGI_FORMAT_R24G8_TYPELESS;
textureDesc.SampleDesc.Count = 1;
textureDesc.SampleDesc.Quality = 0;
textureDesc.Usage = D3D11_USAGE_DEFAULT;
// Both bind flags are needed: render depth into it, then sample it
textureDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
textureDesc.CPUAccessFlags = 0;
textureDesc.MiscFlags = D3D11_RESOURCE_MISC_TEXTURECUBE;

ID3D11Texture2D* texturePtr;
HR(DeviceKeeper::GetDevice()->CreateTexture2D(&textureDesc, NULL, &texturePtr));

D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc;
for(int i = 0; i < 6; ++i){

  dsvDesc.Flags = 0;
  dsvDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
  dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2DARRAY;
  dsvDesc.Texture2DArray = D3D11_TEX2D_ARRAY_DSV{0, static_cast<UINT>(i), 1};

  ID3D11DepthStencilView *outDsv;
  HR(DeviceKeeper::GetDevice()->CreateDepthStencilView(texturePtr, &dsvDesc, &outDsv));

  edgeDsv = outDsv;
}

D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
// A typed depth view over the typeless texture format
srvDesc.Format = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURECUBE;
srvDesc.TextureCube = D3D11_TEXCUBE_SRV{0, 1};

ID3D11ShaderResourceView *outSRV;
HR(DeviceKeeper::GetDevice()->CreateShaderResourceView(texturePtr, &srvDesc, &outSRV));
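Since the pixel shader calls SampleCmpLevelZero, the state bound as lightDepthSampler has to be a comparison sampler. The post does not show that creation code, so this is only a hedged sketch of how it is presumably set up (DeviceKeeper::GetDevice() and HR are reused from the snippets above):

```cpp
// Assumed setup: a comparison sampler for SampleCmpLevelZero.
// With a regular (non-comparison) sampler the compare returns 0 everywhere,
// which would make every pixel fully shadowed.
D3D11_SAMPLER_DESC sampDesc = {};
sampDesc.Filter = D3D11_FILTER_COMPARISON_MIN_MAG_LINEAR_MIP_POINT; // hardware PCF
sampDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
sampDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
sampDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
sampDesc.ComparisonFunc = D3D11_COMPARISON_LESS_EQUAL; // visible if pH.z <= stored depth

ID3D11SamplerState* cmpSampler = nullptr;
HR(DeviceKeeper::GetDevice()->CreateSamplerState(&sampDesc, &cmpSampler));
```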


Then I create six target-oriented cameras and draw the scene into the depth cube, one face per camera.

Cameras creation code:  

std::vector<Vector3> camDirs = {
  { 1.0f,  0.0f,  0.0f}, // +X
  {-1.0f,  0.0f,  0.0f}, // -X
  { 0.0f,  1.0f,  0.0f}, // +Y
  { 0.0f, -1.0f,  0.0f}, // -Y
  { 0.0f,  0.0f,  1.0f}, // +Z
  { 0.0f,  0.0f, -1.0f}  // -Z
};

std::vector<Vector3> camUps = {
  {0.0f, 1.0f, 0.0f},  // +X
  {0.0f, 1.0f, 0.0f},  // -X
  {0.0f, 0.0f, -1.0f}, // +Y
  {0.0f, 0.0f, 1.0f},  // -Y
  {0.0f, 1.0f, 0.0f},  // +Z
  {0.0f, 1.0f, 0.0f}   // -Z
};

for(size_t b = 0; b < camDirs.size(); b++){
  edgesCameras[b].SetTarget(pl.GetPos() + camDirs[b]);
}


I will be very grateful for any help!

P.S. Sorry for my poor English.


Edited by AlexWIN32
