KarimIO

DX11 D3D + GLM Depth Reconstruction Issues


 

I'm trying to port my engine to DirectX and I'm currently having issues with depth reconstruction. It works perfectly in OpenGL (even though I use a somewhat expensive method), and every part besides the depth reconstruction works so far. I use GLM because it's a good math library that doesn't require the user to install any dependencies.

So basically I get my GLM matrices:

struct DefferedUBO {
    glm::mat4 view;
    glm::mat4 invProj;
    glm::vec4 eyePos;
    glm::vec4 resolution;
};

DefferedUBO deffUBOBuffer;
// ...

glm::mat4 projection = glm::perspective(engine.settings.fov, aspectRatio, 0.1f, 100.0f);
// Get My Camera
CTransform *transform = &engine.transformSystem.components[engine.entities[entityID].components[COMPONENT_TRANSFORM]];
// Get the View Matrix
glm::mat4 view = glm::lookAt(
    transform->GetPosition(),
    transform->GetPosition() + transform->GetForward(),
    transform->GetUp()
);

deffUBOBuffer.invProj = glm::inverse(projection);
deffUBOBuffer.view = glm::inverse(view);

if (engine.settings.graphicsLanguage == GRAPHICS_DIRECTX) {
    deffUBOBuffer.invProj = glm::transpose(deffUBOBuffer.invProj);
    deffUBOBuffer.view = glm::transpose(deffUBOBuffer.view);
}

// Abstracted so I can use OGL, DX, VK, or even Metal when I get around to it.
deffUBO->UpdateUniformBuffer(&deffUBOBuffer);
deffUBO->Bind();

Then in HLSL, I simply use the following:

cbuffer MatrixInfoType {
    matrix invView;
    matrix invProj;
    float4 eyePos;
    float4 resolution;
};

float4 ViewPosFromDepth(float depth, float2 TexCoord) {
    float z = depth; // * 2.0 - 1.0;

    float4 clipSpacePosition = float4(TexCoord * 2.0 - 1.0, z, 1.0);
    float4 viewSpacePosition = mul(invProj, clipSpacePosition);
    viewSpacePosition /= viewSpacePosition.w;

    return viewSpacePosition;
}

float3 WorldPosFromViewPos(float4 view) {
    float4 worldSpacePosition = mul(invView, view);

    return worldSpacePosition.xyz;
}

float3 WorldPosFromDepth(float depth, float2 TexCoord) {
    return WorldPosFromViewPos(ViewPosFromDepth(depth, TexCoord));
}

// ...

// Sample the hardware depth buffer.
float  depth    = shaderTexture[3].Sample(SampleType[0], input.texCoord).r;
float3 position = WorldPosFromDepth(depth, input.texCoord).rgb;

Here's the result:
image1

This just looks like random colors multiplied with the depth.

Ironically when I remove transposing, I get something closer to the truth, but not quite:
image2

You're looking at Crytek Sponza. As you can see, the green area moves and rotates with the bottom of the camera. I have no idea at all why.

The correct version, along with Albedo, Specular, and Normals.

image3


GL's NDC (post projection) Z coordinates range from -1 to 1, but D3D's range from 0 to 1.

glm::perspective will create a GL style projection matrix. You need to concatenate this with a matrix that scales z by 0.5 and translates by 0.5 to make it valid for D3D.

In normal rendering, the effect of this bug will be quite small - your near plane appearing about twice as far forward as you intended... But it will mess with depth reconstruction too.
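
For concreteness, the remap described above is z' = z * 0.5 + 0.5 (x and y untouched). A rough sketch of building that matrix with GLM (the variable name is just illustrative; needs <glm/gtc/matrix_transform.hpp>):

// GL -> D3D clip-space remap: scale z by 0.5, then offset z by 0.5.
// With GLM's column-vector convention the transforms read right-to-left.
const glm::mat4 zRemap =
    glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, 0.5f)) *
    glm::scale(glm::mat4(1.0f), glm::vec3(1.0f, 1.0f, 0.5f));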

 

Btw there should be no need to transpose your matrices on D3D - both GLSL and HLSL store 2D arrays in column-major element ordering.


Thanks for the quick response. Firstly, GL is column-major whereas DirectX is row-major. I've already had to transpose for my first geometry stage and it works well.

Second, will I need to change my first stage to accommodate this change as well? Also, can I just multiply it as glm::translate(0,0,0.5) * projection?


EDIT: I've switched to row-major matrices in DirectX using the following:

#pragma pack_matrix( row_major )

I guess DirectX just uses row major by default. I'm still having the same issues though. I tried using the following in ViewPosFromDepth:

float z = depth * 0.5 + 0.5;

 

Edited by KarimIO
Update

1 hour ago, KarimIO said:

Thanks for the quick response. Firstly, GL is column-major whereas DirectX is row-major. I've already had to transpose for my first geometry stage and it works well.

Second, will I need to change my first stage to accommodate this change as well? Also, can I just multiply it as glm::translate(0,0,0.5) * projection?

1 - That's old info that hasn't applied since the fixed-function graphics days. D3D/GL don't pick conventions for you; you can use any conventions on either API.

GLSL and HLSL both use column-major array indexing by default (but can be told to use row-major indexing, such as with that pragma). Both can work with column-vector maths or row-vector maths (i.e. whether you write mul(vec,mat) or mul(mat,vec)).

IIRC, GLM uses column-major storage and column-vector maths, and DirectXMath / D3DX use row-major storage and row-vector math... Which ironically results in them storing the exact same pattern of 64 bytes in RAM, but requires opposite multiplication order by the programmer :|
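
To illustrate that last point, a tiny sketch (GLM side only; the DirectXMath layout is only described in the comments; needs <glm/gtc/matrix_transform.hpp> and <glm/gtc/type_ptr.hpp>):

// GLM: column-major storage, column-vector math (v' = M * v).
glm::mat4 t = glm::translate(glm::mat4(1.0f), glm::vec3(10.0f, 20.0f, 30.0f));

// The translation sits in column 3, which column-major storage places at
// float offsets 12..14 of the 16-float block -- the same offsets where a
// row-major, row-vector library (e.g. DirectXMath) keeps its translation row.
const float *f = glm::value_ptr(t);
// f[12] == 10.0f, f[13] == 20.0f, f[14] == 30.0f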

If you use the same math library, you can use the same shader code and matrix data on both APIs with no need to transpose anything. 

2 - Yeah, whenever you produce a projection matrix you have to scale in z by 0.5 and then translate in z by 0.5. You need to do this for your vertex shaders so that the rasterization is correct. When fetching from the depth buffer, don't scale/offset the fetched value.

1 hour ago, Hodgman said:

yeah whenever you produce a projection matrix you have to scale in z by 0.5 and then translate in z by 0.5. You need to do this for your vertex shaders so that the rasterization is correct. When fetching from the depth buffer, don't scale/offset the fetched value

I've tried this, but now it's far too zoomed in. Originally, it did look quite like my OpenGL results. Is there a reason for this?

projection *= glm::translate(glm::vec3(0.0f,0.0f,0.5f)) * glm::scale(glm::vec3(1.0f,1.0f,0.5f));

 

Edited by KarimIO
Formatting


If GLM uses column-vector maths, your multiplication order might be backwards there. Try:

projection = glm::translate(glm::vec3(0.0f,0.0f,0.5f)) * glm::scale(glm::vec3(1.0f,1.0f,0.5f)) * projection;
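
Putting it together with the snippet from the first post, the projection setup might look roughly like this (a sketch reusing the original names, not a tested drop-in):

glm::mat4 projection = glm::perspective(engine.settings.fov, aspectRatio, 0.1f, 100.0f);

if (engine.settings.graphicsLanguage == GRAPHICS_DIRECTX) {
    // Remap NDC z from [-1, 1] (GL) to [0, 1] (D3D): scale z by 0.5, then
    // offset it by 0.5, applied on the left for column-vector math.
    const glm::mat4 zRemap =
        glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, 0.5f)) *
        glm::scale(glm::mat4(1.0f), glm::vec3(1.0f, 1.0f, 0.5f));
    projection = zRemap * projection;
    // No transpose needed when the HLSL side uses column-major packing.
}

deffUBOBuffer.invProj = glm::inverse(projection);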
 


Okay, thank you a lot, Hodgman, I finally got it to work! But I do have a question: my main vertex.hlsl, which takes the actual geometry and pushes it into the G-buffer, requires row-major, whereas the rest works fine using column-major. Do you have any idea why that could be?


Do you use the exact same VS math in your HLSL and GLSL versions?

Post some VS shader code and we'll have a look :)


@Hodgman Sorry! Didn't see the response until now! Keep in mind that since I use GLM, the data is column-major. Here's the code that works:

////////////////////////////////////////////////////////////////////////////////
// Filename: mainVert.vs
////////////////////////////////////////////////////////////////////////////////

/////////////
// GLOBALS //
/////////////
// With untransposed (column-major) GLM data, row_major packing reads each
// matrix as its transpose, which matches the mul(vector, matrix) order used below.
#pragma pack_matrix( row_major )

cbuffer MatrixBuffer
{
    matrix worldMatrix;
    matrix viewMatrix;
    matrix projectionMatrix;
};

//////////////
// TYPEDEFS //
//////////////
struct VertexInputType {
    float3 position : POSITION;
    float3 normal : NORMAL;
    float3 tangent : TANGENT;
    float2 texCoord : TEXCOORD0;
};

struct PixelInputType {
    float4 position : SV_POSITION;
    float3 worldPosition : POSITION;
    float3 normal : NORMAL;
    float3 tangent : TANGENT;
    float2 texCoord : TEXCOORD0;
};

////////////////////////////////////////////////////////////////////////////////
// Vertex Shader
////////////////////////////////////////////////////////////////////////////////
PixelInputType main(VertexInputType input) {
    float4 position;
    PixelInputType output;

    // Change the position vector to be 4 units for proper matrix calculations.
    position = float4(input.position, 1.0f);

    // Calculate the position of the vertex against the world, view, and projection matrices.
    position = mul(position, worldMatrix);
    output.worldPosition = position.xyz;
    position = mul(position, viewMatrix);
    output.position = mul(position, projectionMatrix);
    
    // Use w = 0.0 for normal/tangent so the world matrix's translation doesn't affect these directions.
    output.normal = normalize(mul(float4(input.normal, 0.0), worldMatrix).xyz);
    output.tangent = normalize(mul(float4(input.tangent, 0.0), worldMatrix).xyz);
    output.texCoord = float2(input.texCoord.x, -input.texCoord.y);
    
    return output;
}
////////////////////////////////////////////////////////////////////////////////
// Filename: pointLightFrag.ps
////////////////////////////////////////////////////////////////////////////////

// column_major is the HLSL default, so the GLM matrices are read as-is here
// and used with mul(matrix, vector) style math.
#pragma pack_matrix( column_major )

#include "inc_transform.hlsl"
#include "inc_light.hlsl"

//////////////
// TYPEDEFS //
//////////////
struct PixelInputType {
    float4 position : SV_POSITION;
    float2 texCoord : TEXCOORD0;
    float3 viewRay : POSITION;
};

Texture2D shaderTexture[4];
SamplerState SampleType[4];

cbuffer MatrixInfoType {
    matrix invView;
    matrix invProj;
    float4 eyePos;
    float4 resolution;
};

cbuffer Light {
    float3 lightPosition;
    float lightAttenuationRadius;
    float3 lightColor;
    float lightIntensity;
};

////////////////////////////////////////////////////////////////////////////////
// Pixel Shader
////////////////////////////////////////////////////////////////////////////////
float4 main(PixelInputType input) : SV_TARGET {
    float  depth        = shaderTexture[3].Sample(SampleType[0], input.texCoord).r;
    float3 Position = WorldPosFromDepth(invProj, invView, depth, input.texCoord);
    //return float4(position, 1.0);
    /*float near = 0.1;
    float far = 100;
    float ProjectionA = far / (far - near);
    float ProjectionB = (-far * near) / (far - near);
    depth = ProjectionB / ((depth - ProjectionA));
    float4 position = float4(input.viewRay * depth, 1.0);*/
    // Convert to World Space:
    // position = mul(invView, position);
    float3 Albedo       = shaderTexture[0].Sample(SampleType[0], input.texCoord).rgb;
    float3 Normal       = shaderTexture[1].Sample(SampleType[0], input.texCoord).rgb;
    float4 Specular     = shaderTexture[2].Sample(SampleType[0], input.texCoord);

    float3 lightPow = lightColor * lightIntensity;
    float3 outColor = LightPointCalc(Albedo.rgb, Position.xyz, Specular, Normal.xyz, lightPosition, lightAttenuationRadius, lightPow, eyePos.xyz); // hdrGammaTransform()
    return float4(hdrGammaTransform(outColor), 1.0f);
}

 
