KarimIO

DX11 D3D + GLM Depth Reconstruction Issues


I'm trying to port my engine to DirectX and I'm currently having issues with depth reconstruction. It works perfectly in OpenGL (even though I use a bit of an expensive method). Every part besides the depth reconstruction works so far. I use GLM because it's a good math library that doesn't require the user to install any extra dependencies.

So basically I get my GLM matrices:

struct DefferedUBO {
    glm::mat4 view;
    glm::mat4 invProj;
    glm::vec4 eyePos;
    glm::vec4 resolution;
};

DefferedUBO deffUBOBuffer;
// ...

glm::mat4 projection = glm::perspective(engine.settings.fov, aspectRatio, 0.1f, 100.0f);
// Get My Camera
CTransform *transform = &engine.transformSystem.components[engine.entities[entityID].components[COMPONENT_TRANSFORM]];
// Get the View Matrix
glm::mat4 view = glm::lookAt(
    transform->GetPosition(),
    transform->GetPosition() + transform->GetForward(),
    transform->GetUp()
);

deffUBOBuffer.invProj = glm::inverse(projection);
deffUBOBuffer.view = glm::inverse(view);

if (engine.settings.graphicsLanguage == GRAPHICS_DIRECTX) {
    deffUBOBuffer.invProj = glm::transpose(deffUBOBuffer.invProj);
    deffUBOBuffer.view = glm::transpose(deffUBOBuffer.view);
}

// Abstracted so I can use OGL, DX, VK, or even Metal when I get around to it.
deffUBO->UpdateUniformBuffer(&deffUBOBuffer);
deffUBO->Bind();

Then in HLSL, I simply use the following:

cbuffer MatrixInfoType {
    matrix invView;
    matrix invProj;
    float4 eyePos;
    float4 resolution;
};

float4 ViewPosFromDepth(float depth, float2 TexCoord) {
    float z = depth; // * 2.0 - 1.0;

    float4 clipSpacePosition = float4(TexCoord * 2.0 - 1.0, z, 1.0);
    float4 viewSpacePosition = mul(invProj, clipSpacePosition);
    viewSpacePosition /= viewSpacePosition.w;

    return viewSpacePosition;
}

float3 WorldPosFromViewPos(float4 view) {
    float4 worldSpacePosition = mul(invView, view);

    return worldSpacePosition.xyz;
}

float3 WorldPosFromDepth(float depth, float2 TexCoord) {
    return WorldPosFromViewPos(ViewPosFromDepth(depth, TexCoord));
}

// ...

// Sample the hardware depth buffer.
float  depth    = shaderTexture[3].Sample(SampleType[0], input.texCoord).r;
float3 position = WorldPosFromDepth(depth, input.texCoord).rgb;

Here's the result:
[image 1]

This just looks like random colors multiplied with the depth.

Ironically, when I remove the transposing, I get something closer to the truth, but not quite:
[image 2]

You're looking at Crytek Sponza. As you can see, the green area moves and rotates with the bottom of the camera. I have no idea at all why.

The correct version, along with Albedo, Specular, and Normals.

[image 3]


GL's NDC (post projection) Z coordinates range from -1 to 1, but D3D's range from 0 to 1.

glm::perspective will create a GL style projection matrix. You need to concatenate this with a matrix that scales z by 0.5 and translates by 0.5 to make it valid for D3D.

In normal rendering, the effect of this bug will be quite small - your near plane appearing about twice as far forward as you intended... But it will mess with depth reconstruction too.
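
In GLM that means multiplying the extra scale/translate on the left of the GL-style projection (column-vector math, so the transform applied last goes first in the multiply). A minimal sketch, reusing the fov/aspect/near/far values from your snippet:

// Remap clip-space Z from GL's [-1, 1] to D3D's [0, 1].
// Needs <glm/gtc/matrix_transform.hpp>.
glm::mat4 glProj  = glm::perspective(engine.settings.fov, aspectRatio, 0.1f, 100.0f);
glm::mat4 zRemap  = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, 0.5f)) *
                    glm::scale(glm::mat4(1.0f), glm::vec3(1.0f, 1.0f, 0.5f));
glm::mat4 d3dProj = zRemap * glProj; // fix-up applied AFTER the projection

(Depending on your GLM version, GLM_FORCE_DEPTH_ZERO_TO_ONE / glm::perspectiveZO may also be available to build a 0-to-1 projection directly.)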

 

Btw there should be no need to transpose your matrices on D3D - both GLSL and HLSL store 2D arrays in column-major element ordering.


Thanks for the quick response. Firstly, GL is column-major whereas DirectX is row-major. I've already had to transpose for my first geometry stage and it works well.

Second, will I need to change my first stage to accommodate this change as well? Also, can I just multiply it like glm::translate(0,0,0.5) * projection?


EDIT: I've switched to row-major matrix packing in DirectX using the following:

#pragma pack_matrix( row_major )

I guess DirectX just uses row major by default. I'm still having the same issues though. I tried using the following in ViewPosFromDepth:

float z = depth * 0.5 + 0.5;

 

1 hour ago, KarimIO said:

Thanks for the quick response. Firstly, GL is column-major whereas DirectX is row-major. I've already had to transpose for my first geometry stage and it works well.

Second, will I need to change my first stage to accommodate this change as well? Also, can I just multiply it like glm::translate(0,0,0.5) * projection?

1- That's old info that hasn't applied since the fixed function graphics days. D3D/GL don't pick conventions for you. You can use any conventions on either API.

GLSL and HLSL both use column-major array indexing by default (but can be told to use row-major indexing such as with that pragma). Both can work with column-vector maths or row-vector maths (i.e. whether you write mul(vec,mat) or mul(mat,vec)) 

IIRC, GLM uses column-major storage and column-vector maths, and DirectXMath / D3DX use row-major storage and row-vector math... Which ironically results in them storing the exact same pattern of 64 bytes in RAM, but requires opposite multiplication order by the programmer :|
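
To see that "same 64 bytes" point concretely, here's a tiny standalone check (just an illustration, assuming GLM on the CPU side):

#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

int main() {
    // GLM: column-major storage, column-vector math -> translation sits in the
    // 4th column, i.e. float indices 12, 13, 14 of the contiguous array.
    glm::mat4 t = glm::translate(glm::mat4(1.0f), glm::vec3(10.0f, 20.0f, 30.0f));
    const float *f = glm::value_ptr(t);
    for (int i = 0; i < 16; ++i)
        std::printf("%5.1f%c", f[i], (i % 4 == 3) ? '\n' : ' ');
    // A row-major/row-vector library (D3DX, DirectXMath) puts its translation
    // in the 4th row -- which is also indices 12..14, so the bytes in RAM match.
    return 0;
}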

If you use the same math library, you can use the same shader code and matrix data on both APIs with no need to transpose anything. 

2- yeah whenever you produce a projection matrix you have to scale in z by 0.5 and then translate in z by 0.5. You need to do this for your vertex shaders so that the rasterization is correct. When fetching from the depth buffer, don't scale/offset the fetched value.

1 hour ago, Hodgman said:

yeah whenever you produce a projection matrix you have to scale in z by 0.5 and then translate in z by 0.5. You need to do this for your vertex shaders so that the rasterization is correct. When fetching from the depth buffer, don't scale/offset the fetched value

I've tried this, but now it's far too zoomed in. Originally, it did look quite like my OpenGL results. Is there a reason for this?

projection *= glm::translate(glm::vec3(0.0f,0.0f,0.5f)) * glm::scale(glm::vec3(1.0f,1.0f,0.5f));
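
EDIT: I think this might just be the multiplication order. Since GLM uses column-vector math, projection *= x multiplies on the right, so the scale/translate gets applied in view space before the projection rather than in clip space after it; halving view-space z also roughly halves w, which roughly doubles everything after the perspective divide, hence the zoom. Putting it on the left instead, something like:

projection = glm::translate(glm::vec3(0.0f, 0.0f, 0.5f)) *
             glm::scale(glm::vec3(1.0f, 1.0f, 0.5f)) *
             projection;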

 


Okay, thank you a lot, Hodgman, I finally got it to work! But I do have a question: my main vertex.hlsl, which takes the actual geometry and pushes it into the g-buffer, requires row-major, whereas the rest works fine using column-major. Do you have any idea why that could be?


@Hodgman Sorry! Didn't see the response until now! Keep in mind that since I use GLM, it's column-major. Here's the code that works:

////////////////////////////////////////////////////////////////////////////////
// Filename: mainVert.vs
////////////////////////////////////////////////////////////////////////////////

/////////////
// GLOBALS //
/////////////
#pragma pack_matrix( row_major )

cbuffer MatrixBuffer
{
    matrix worldMatrix;
    matrix viewMatrix;
    matrix projectionMatrix;
};

//////////////
// TYPEDEFS //
//////////////
struct VertexInputType {
    float3 position : POSITION;
    float3 normal : NORMAL;
    float3 tangent : TANGENT;
    float2 texCoord : TEXCOORD0;
};

struct PixelInputType {
    float4 position : SV_POSITION;
    float3 worldPosition : POSITION;
    float3 normal : NORMAL;
    float3 tangent : TANGENT;
    float2 texCoord : TEXCOORD0;
};

////////////////////////////////////////////////////////////////////////////////
// Vertex Shader
////////////////////////////////////////////////////////////////////////////////
PixelInputType main(VertexInputType input) {
    float4 position;
    PixelInputType output;

    // Change the position vector to be 4 units for proper matrix calculations.
    position = float4(input.position, 1.0f);

    // Calculate the position of the vertex against the world, view, and projection matrices.
    position = mul(position, worldMatrix);
    output.worldPosition = position.xyz;
    position = mul(position, viewMatrix);
    output.position = mul(position, projectionMatrix);
    
    output.normal = normalize(mul(float4(input.normal, 0.0), worldMatrix).xyz);   // w = 0 so the world translation doesn't skew the direction
    output.tangent = normalize(mul(float4(input.tangent, 0.0), worldMatrix).xyz);
    output.texCoord = float2(input.texCoord.x, -input.texCoord.y);
    
    return output;
}
////////////////////////////////////////////////////////////////////////////////
// Filename: pointLightFrag.ps
////////////////////////////////////////////////////////////////////////////////

#pragma pack_matrix( column_major )

#include "inc_transform.hlsl"
#include "inc_light.hlsl"

//////////////
// TYPEDEFS //
//////////////
struct PixelInputType {
    float4 position : SV_POSITION;
    float2 texCoord : TEXCOORD0;
    float3 viewRay : POSITION;
};

Texture2D shaderTexture[4];
SamplerState SampleType[4];

cbuffer MatrixInfoType {
    matrix invView;
    matrix invProj;
    float4 eyePos;
    float4 resolution;
};

cbuffer Light {
    float3 lightPosition;
    float lightAttenuationRadius;
    float3 lightColor;
    float lightIntensity;
};

////////////////////////////////////////////////////////////////////////////////
// Pixel Shader
////////////////////////////////////////////////////////////////////////////////
float4 main(PixelInputType input) : SV_TARGET {
    float  depth        = shaderTexture[3].Sample(SampleType[0], input.texCoord).r;
    float3 Position = WorldPosFromDepth(invProj, invView, depth, input.texCoord);
    //return float4(position, 1.0);
    /*float near = 0.1;
    float far = 100;
    float ProjectionA = far / (far - near);
    float ProjectionB = (-far * near) / (far - near);
    depth = ProjectionB / ((depth - ProjectionA));
    float4 position = float4(input.viewRay * depth, 1.0);*/
    // Convert to World Space:
    // position = mul(invView, position);
    float3 Albedo       = shaderTexture[0].Sample(SampleType[0], input.texCoord).rgb;
    float3 Normal       = shaderTexture[1].Sample(SampleType[0], input.texCoord).rgb;
    float4 Specular     = shaderTexture[2].Sample(SampleType[0], input.texCoord);

    float3 lightPow = lightColor * lightIntensity;
    float3 outColor = LightPointCalc(Albedo.rgb, Position.xyz, Specular, Normal.xyz, lightPosition, lightAttenuationRadius, lightPow, eyePos.xyz); // hdrGammaTransform()
    return float4(hdrGammaTransform(outColor), 1.0f);
}
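
(inc_transform.hlsl isn't pasted here; it's essentially the reconstruction helpers from my first post with the matrices passed in as parameters, something along these lines -- sketch only, the real file may differ slightly:)

// Hypothetical sketch of the relevant helpers in inc_transform.hlsl,
// based on the functions posted earlier.
float4 ViewPosFromDepth(matrix invProj, float depth, float2 texCoord) {
    // D3D hardware depth is already 0..1, so only x/y get remapped to NDC.
    float4 clipSpacePosition = float4(texCoord * 2.0 - 1.0, depth, 1.0);
    float4 viewSpacePosition = mul(invProj, clipSpacePosition);
    return viewSpacePosition / viewSpacePosition.w;
}

float3 WorldPosFromDepth(matrix invProj, matrix invView, float depth, float2 texCoord) {
    return mul(invView, ViewPosFromDepth(invProj, depth, texCoord)).xyz;
}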

 


So just to make things clear, there are two ways to do matrix math on paper - putting the basis vectors in the rows, treating vectors as rows, and doing left-to-right multiplication: \(\begin{bmatrix} Vx & Vy & Vz & 1 \end{bmatrix} \cdot \begin{bmatrix} Xx & Xy & Xz & 0\\ Yx & Yy & Yz & 0\\ Zx & Zy & Zz & 0\\ Px & Py & Pz & 1 \end{bmatrix}\)

Or putting basis vectors in the columns, treating the vectors as columns, and doing right-to-left multiplication: \(\begin{bmatrix} Xx & Yx & Zx & Px\\ Xy & Yy & Zy & Py\\ Xz & Yz & Zz & Pz\\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} Vx \\ Vy \\ Vz \\ 1 \end{bmatrix} \)

Then a completely separate issue is how you decide to store 2D arrays in linear memory. Row-major: \(\begin{bmatrix} 0&1&2&3\\ 4&5&6&7\\ 8&9&10&11\\ 12&13&14&15 \end{bmatrix}\)

Or column-major: \(\begin{bmatrix} 0&4&8&12\\ 1&5&9&13\\ 2&6&10&14\\ 3&7&11&15 \end{bmatrix}\)

That results in four different conventions for doing matrix math in a computer (row-major/column-major array indexing, and row-vector/column-vector math).

GLM uses column-vector math and column-major array indexing.

In the HLSL code that you posted, your math is written assuming row-vector math (left to right multiplication ordering), which is the opposite convention to what GLM uses. Your HLSL code also expects column-major array ordered data.

If, on the CPU side before the shader runs, you rearrange your data from column-major to row-major array ordering, then HLSL is going to accidentally interpret your data wrong -- it will read rows as columns and columns as rows... which has the same effect as doing a mathematical transpose operation, which cancels out the fact that you're using the opposite mathematical conventions.
i.e. your mathematical conventions in the vertex shader are the opposite of what GLM expects, but by also using the opposite array storage conventions, these cancel each other out and two wrongs make a right, so it works.

You should be able to get rid of all your transposing and just rewrite the VS to use right-to-left multiplication order,
e.g. position = mul(worldMatrix, position);
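
i.e. an untested sketch of the relevant vertex shader lines, keeping the default column_major packing (which matches GLM's memory layout) and dropping the pragma:

position = mul(worldMatrix, float4(input.position, 1.0f));
output.worldPosition = position.xyz;
position = mul(viewMatrix, position);
output.position = mul(projectionMatrix, position);

// w = 0 for directions so the world matrix's translation doesn't affect them.
output.normal  = normalize(mul(worldMatrix, float4(input.normal,  0.0f)).xyz);
output.tangent = normalize(mul(worldMatrix, float4(input.tangent, 0.0f)).xyz);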

On 9/23/2017 at 5:45 AM, Hodgman said:

If, on the CPU side before the shader runs, you rearrange your data from column-major to row-major array ordering, then HLSL is going to accidentally interpret your data wrong -- it will read rows as columns and columns as rows... which has the same effect as doing a mathematical transpose operation, which cancels out the fact that you're using the opposite mathematical conventions.

The problem is, I'm doing no such thing. GLM outputs the same column-major matrices, but for some reason one shader requires row-major and the other column-major. I understand everything you're talking about; it's just that DirectX likes it one way in one shader and another way in another shader.

6 hours ago, KarimIO said:

one shader requires row major and the other column major.

What do you mean by this exactly? How are you switching between those formats?

You don't rearrange the data at all, you've just found that you have to do the math backwards in the VS code?

On 9/28/2017 at 1:56 AM, Hodgman said:

What do you mean by this exactly? How are you switching between those formats?

You don't rearrange the data at all, you've just found that you have to do the math backwards in the VS code?

In the main prepass vertex shader, I need to use this:

#pragma pack_matrix( row_major )

Multiplication should be column-major by default because I use GLM. Yet, for one specific file, I need to use row_major.


