DX11 D3D + GLM Depth Reconstruction Issues

KarimIO

I'm trying to port my engine to DirectX, and I'm currently having issues with depth reconstruction. It works perfectly in OpenGL (even though I use a somewhat expensive method). Every part besides the depth reconstruction works so far. I use GLM because it's a good math library that requires no extra dependencies or installation on the user's end.

So basically I get my GLM matrices:

struct DefferedUBO {
    glm::mat4 view;
    glm::mat4 invProj;
    glm::vec4 eyePos;
    glm::vec4 resolution;
};

DefferedUBO deffUBOBuffer;
// ...

glm::mat4 projection = glm::perspective(engine.settings.fov, aspectRatio, 0.1f, 100.0f);
// Get My Camera
CTransform *transform = &engine.transformSystem.components[engine.entities[entityID].components[COMPONENT_TRANSFORM]];
// Get the View Matrix
glm::mat4 view = glm::lookAt(
    transform->GetPosition(),
    transform->GetPosition() + transform->GetForward(),
    transform->GetUp()
);

deffUBOBuffer.invProj = glm::inverse(projection);
// Note: despite the member name, this holds the inverse view
// (camera-to-world) matrix; the HLSL cbuffer below calls it invView.
deffUBOBuffer.view = glm::inverse(view);

if (engine.settings.graphicsLanguage == GRAPHICS_DIRECTX) {
    deffUBOBuffer.invProj = glm::transpose(deffUBOBuffer.invProj);
    deffUBOBuffer.view = glm::transpose(deffUBOBuffer.view);
}

// Abstracted so I can use OGL, DX, VK, or even Metal when I get around to it.
deffUBO->UpdateUniformBuffer(&deffUBOBuffer);
deffUBO->Bind();

Then in HLSL, I simply use the following:

cbuffer MatrixInfoType {
    matrix invView;
    matrix invProj;
    float4 eyePos;
    float4 resolution;
};

float4 ViewPosFromDepth(float depth, float2 TexCoord) {
    float z = depth; // * 2.0 - 1.0;

    float4 clipSpacePosition = float4(TexCoord * 2.0 - 1.0, z, 1.0);
    float4 viewSpacePosition = mul(invProj, clipSpacePosition);
    viewSpacePosition /= viewSpacePosition.w;

    return viewSpacePosition;
}

float3 WorldPosFromViewPos(float4 view) {
    float4 worldSpacePosition = mul(invView, view);

    return worldSpacePosition.xyz;
}

float3 WorldPosFromDepth(float depth, float2 TexCoord) {
    return WorldPosFromViewPos(ViewPosFromDepth(depth, TexCoord));
}

// ...

// Sample the hardware depth buffer.
float  depth    = shaderTexture[3].Sample(SampleType[0], input.texCoord).r;
float3 position = WorldPosFromDepth(depth, input.texCoord).rgb;

Here's the result:

[image: depth reconstruction output with transposed matrices]

This just looks like random colors multiplied by the depth.

Ironically, when I remove the transposing, I get something closer to the truth, but not quite:

[image: depth reconstruction output without transposing]

You're looking at Crytek Sponza. As you can see, the green area moves and rotates with the bottom of the camera. I have no idea at all why.

The correct version, along with Albedo, Specular, and Normals:

[image: correct world-position reconstruction alongside albedo, specular, and normal buffers]

Hodgman

GL's NDC (post-projection) Z coordinates range from -1 to 1, but D3D's range from 0 to 1.

glm::perspective will create a GL-style projection matrix. You need to concatenate it with a matrix that scales z by 0.5 and translates z by 0.5 to make it valid for D3D.

In normal rendering, the effect of this bug will be quite small (your near plane appearing about twice as far forward as you intended), but it will mess with depth reconstruction too.
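
A minimal GLM sketch of that remap (an editorial example, not code from the thread; MakeD3DProjection is a made-up helper name). The key point is that the scale/bias matrix is applied on the left, i.e. after the GL-style projection:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Remaps NDC z from GL's [-1, 1] to D3D's [0, 1]: z' = 0.5 * z + 0.5.
// Pre-multiplied so it acts on clip space, after the projection.
glm::mat4 MakeD3DProjection(float fovRadians, float aspect, float zNear, float zFar) {
    glm::mat4 glProj = glm::perspective(fovRadians, aspect, zNear, zFar);
    glm::mat4 ndcRemap =
        glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, 0.5f)) *
        glm::scale(glm::mat4(1.0f), glm::vec3(1.0f, 1.0f, 0.5f));
    return ndcRemap * glProj;
}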

 

Btw, there should be no need to transpose your matrices on D3D: both GLSL and HLSL store 2D arrays in column-major element ordering by default.

KarimIO

Thanks for the quick response. Firstly, GL is column-major whereas DirectX is row-major. I've already had to transpose for my first geometry stage, and it works well.

Second, will I need to change my first stage to accommodate this change as well? Also, can I just multiply it as glm::translate(0, 0, 0.5) * projection?


EDIT: I've switched to row-major matrices in DirectX using the following:

#pragma pack_matrix( row_major )

I guess DirectX just uses row-major by default. I'm still having the same issues, though. I tried using the following in ViewPosFromDepth:

float z = depth * 0.5 + 0.5;

 


Hodgman
1 hour ago, KarimIO said:

Thanks for the quick response. Firstly, GL is column-major whereas DirectX is row-major. I've already had to transpose for my first geometry stage, and it works well.

Second, will I need to change my first stage to accommodate this change as well? Also, can I just multiply it as glm::translate(0, 0, 0.5) * projection?

1- That's old info that hasn't applied since the fixed-function graphics days. D3D/GL don't pick conventions for you; you can use any convention with either API.

GLSL and HLSL both use column-major array indexing by default (but can be told to use row-major indexing, e.g. with that pragma). Both can work with column-vector maths or row-vector maths (i.e. whether you write mul(vec, mat) or mul(mat, vec)).

IIRC, GLM uses column-major storage and column-vector maths, and DirectXMath / D3DX use row-major storage and row-vector maths... which ironically results in them storing the exact same pattern of 64 bytes in RAM, but requires the opposite multiplication order from the programmer :|

If you use the same math library, you can use the same shader code and matrix data on both APIs with no need to transpose anything. 
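
To make that concrete, here's an editorial sketch (not code from the thread; CameraConstants is a made-up struct) of why no transpose is needed: a glm::mat4 stores its four columns contiguously, which matches HLSL's default column_major cbuffer layout, so the bytes can be uploaded as-is and used with mul(mat, vec) on the shader side:

#include <cstring>
#include <glm/glm.hpp>

// CPU-side mirror of a cbuffer holding two matrices.
struct CameraConstants {
    glm::mat4 invView;
    glm::mat4 invProj;
};

void FillConstants(CameraConstants &out,
                   const glm::mat4 &invView, const glm::mat4 &invProj) {
    // Same 64-byte pattern on both sides: no transpose needed.
    std::memcpy(&out.invView, &invView, sizeof(glm::mat4));
    std::memcpy(&out.invProj, &invProj, sizeof(glm::mat4));
}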

2- Yeah, whenever you produce a projection matrix, you have to scale z by 0.5 and then translate z by 0.5. You need to do this for your vertex shaders so that rasterization is correct. When fetching from the depth buffer, don't scale/offset the fetched value.

KarimIO
1 hour ago, Hodgman said:

Yeah, whenever you produce a projection matrix, you have to scale z by 0.5 and then translate z by 0.5. You need to do this for your vertex shaders so that rasterization is correct. When fetching from the depth buffer, don't scale/offset the fetched value.

I've tried this, but now it's far too zoomed in. Originally, it did look quite like my OpenGL results. Is there a reason for this?

projection *= glm::translate(glm::vec3(0.0f,0.0f,0.5f)) * glm::scale(glm::vec3(1.0f,1.0f,0.5f));
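
A likely explanation, hedged since the thread never states it: glm's *= post-multiplies, so the line above computes projection * remap, which applies the scale/bias to view-space z before the projection rather than to clip-space z after it. The pre-multiplied order would be:

// Remap on the left so it acts on clip space, after the projection.
projection = glm::translate(glm::vec3(0.0f, 0.0f, 0.5f)) *
             glm::scale(glm::vec3(1.0f, 1.0f, 0.5f)) *
             projection;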

 


KarimIO

Okay, thank you a lot, Hodgman, I finally got it to work! But I do have a question: my main vertex.hlsl, which takes the actual geometry and pushes it into the gbuffer, requires row-major, whereas the rest works fine using column-major. Do you have any idea why that could be?

