Dx11 math and matrices

5 comments, last by Jason Z 9 years, 9 months ago

I'm porting my renderer from OpenGL 4 to DX11. It's going fine so far, once you get over the initial setup.

Originally, for OpenGL 4, I used GLM (http://glm.g-truc.net/0.9.5/code.html) - very intuitive, with good syntax. In DX11 I first tried the "DirectXMath" math library, but all the boilerplate code got me annoyed. Then I used the recommended "SimpleMath", but I just realised it uses row-major matrices while HLSL matrices default to column-major storage (just like GLSL). It does use a right-handed coordinate system, though.

1. Who thought it would be a good idea to mix row/column major-ness between code and HLSL in DX?

2. Frankly I'm thinking of sticking with GLM. It uses column-major matrices and right-handed coordinate system. Am I right in that it should simply work as expected with HLSL? No hidden quirks?

3. Is there any reason to pick one or the other of "SimpleMath" (row-major, right handed) and "GLM" (column-major, right handed)? Any opinions?


1) HLSL and GLSL support either row-major or column-major storage (and row-vectors or column-vectors), but they both default to column-major storage.

2) Yes, I'd just use GLM everywhere.

3) You're already using GLM, so there's no reason to switch ;)

"Row-major transforms" are there because of legacy, and (at least for me) they are much more intuitive. Unfortunately I'm forced to use column-major :(


So I went ahead with using GLM for math, but one thing is screwing me over: the construction of the perspective matrix. AFAIK, GL (and GLM) maps depth to [-1, 1] in NDC while DX maps it to [0, 1]. Any easy fix for this?

You have to construct the projection matrix differently to accommodate the difference in coordinate spaces. I think you'll just want to write your own function for creating a DX-style projection. It should be fairly trivial: if you look at the old D3DX docs they show the math for how the matrices are constructed (you'll just have to transpose if you want column-major ordering).

You should be aware that the D3DX functions create a perspective matrix that essentially negates the Z coordinate. This is so that you can work in a right-handed coordinate space such that -Z is the area directly in front of the camera in view space, and then those coordinates get mapped to the positive [0, 1] range in NDC space. If this is not how you set up your view space, then you'll need to adjust things accordingly.

OK, so I've rolled my own perspective matrix function. I'm using GLM through and through, so it is right-handed, column major matrices.

Here's the perspective function - looks correct?

Mat4 DX11PerspectiveMatrixFov(const float fovDegrees, const float aspectRatio, const float zNear, const float zFar)
{
    const float fovRadians = glm::radians(fovDegrees);
    const float yScale = 1.0f / std::tan(fovRadians / 2.0f);
    const float xScale = yScale / aspectRatio;

    Mat4 ret(0.0f);

    ret[0][0] = xScale;
    ret[1][1] = yScale;
    ret[2][2] = zFar / (zNear - zFar);
    ret[2][3] = -1.0f;
    ret[3][2] = zNear * zFar / (zNear - zFar);

    return ret;
}

As for the view matrix, I could simply use the glm::lookAt, no special deal with the view matrix and directx 11?


As for the view matrix, I could simply use the glm::lookAt, no special deal with the view matrix and directx 11?
No, there isn't anything special about it. In the modern, non-fixed-function pipeline, you are responsible for putting your vertices into NDC space for the rasterizer, like MJP mentioned above. Anything you do prior to that just has to match your input data and the effect you are trying to achieve. You can pass in data in object space, world space, view space, or a wacky inverted and sheared view space - it doesn't really matter, as long as you end up with proper NDC coordinates in the rasterizer.

