Vector and matrix multiplication order in DirectX and OpenGL


Recommended Posts

This is driving me absolutely crazy. I have been researching this topic for days and the more I read about it the more I get confused.

 

Let's all agree that in math there are two ways to multiply a vector and a matrix.

 

You can do this 

P = Mv

 

where P is the resulting vector, M is a matrix, and v is a vector. This would mean that this is a column-major matrix, which means that your translation vector in your matrix would look like this

 

[Xx, Xy, Xz, Tx]
[Yx, Yy, Yz, Ty]
[Zx, Zy, Zz, Tz]
[ 0,  0,  0,  1]

 

where Tx, Ty, and Tz are your translation vector.

 

On the other hand you can also do this

P = vM

 

This would mean that this is a row-major matrix, which means that your translation vector in your matrix would look like this

 

[Xx, Xy, Xz, 0]
[Yx, Yy, Yz, 0]
[Zx, Zy, Zz, 0]
[Tx, Ty, Tz, 1]

 

Ok, this makes total sense. Now let's implement that in HLSL and GLSL.

 

 

Now this HLSL code makes sense. The position vector is on the left side of the multiplication and the model matrix is on the right side. That means this is a row-major matrix.

struct VOut
{
	float4 position : SV_POSITION;
	float4 color : COLOR;
};

VOut main(float4 position : POSITION, float4 color : COLOR)
{
	float4x4 buffer_modelMatrix =
	{
		{ 1,    0,    0,    0 },
		{ 0,    1,    0,    0 },
		{ 0,    0,    1,    0 },
		{ 0.5f, 0.1f, 0.4f, 1 },
	};

	VOut output;

	output.position = mul(position, buffer_modelMatrix);
	output.color = color;

	return output;
}

Now let's move the position vector to the right side of the multiplication and the model matrix to the left side. We will also move the translation vector into the last column instead of the bottom row of the matrix. That is valid, and now the matrix is a column-major matrix. Everything works as intended and we can still translate the triangle just fine.

struct VOut
{
	float4 position : SV_POSITION;
	float4 color : COLOR;
};

VOut main(float4 position : POSITION, float4 color : COLOR)
{
	float4x4 buffer_modelMatrix =
	{
		{ 1, 0, 0, 0.5f },
		{ 0, 1, 0, 0.1f },
		{ 0, 0, 1, 0.4f },
		{ 0, 0, 0, 1    },
	};

	VOut output;

	output.position = mul(buffer_modelMatrix, position);
	output.color = color;

	return output;
}
 
Now let's try to do the same with GLSL:
 
#version 450 core
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inColor;

out vec3 fragmentColor;

mat4 buffer_modelMatrix =
{
	{ 1, 0, 0, 0.5f },
	{ 0, 1, 0, 0    },
	{ 0, 0, 1, 0    },
	{ 0, 0, 0, 1    },
};

void main()
{
	gl_Position = buffer_modelMatrix * vec4(inPosition.xyz, 1.0f);
	fragmentColor = inColor;
}

If you try to run this code, the triangle will not be translated correctly, even though it should be. However, if you move the inPosition to the left and the model matrix to the right side of the multiplication, the translation will work correctly. But it shouldn't, since the translation is still in the last column, not the bottom row.

gl_Position = vec4(inPosition.xyz, 1.0f) * buffer_modelMatrix;

Now that does not make any sense, because as we agreed above, if the vector (in this case inPosition) is on the left side of the multiplication and the matrix is on the right side, then this is row-major matrix order and the translation should be in the bottom row, not the last column.

 

I hope I'm making sense. Can someone please explain why GLSL is not following the above rules for vector and matrix multiplication?

Edited by FantasyVII


You're using incorrect syntax for the mat4 constructor, and there's no need for the f suffix on floats in GLSL. It should be:

mat4 buffer_modelMatrix = mat4(1, 0, 0, 0.5,
                               0, 1, 0, 0,
                               0, 0, 1, 0,
                               0, 0, 0, 1);

The HLSL matrix constructor takes rows of values, but the GLSL matrix constructor takes columns of values.
So your HLSL code is putting a translation in the right column, but your GLSL code is putting a translation in the bottom row. Fun quirks...
Seeing as it's rare to construct a matrix in a shader, it's very easy to overlook these differences between the two languages :|


 

You're using incorrect syntax for the mat4 constructor, and there's no need for the f suffix on floats in GLSL. It should be:

mat4 buffer_modelMatrix = mat4(1, 0, 0, 0.5,
                               0, 1, 0, 0,
                               0, 0, 1, 0,
                               0, 0, 0, 1);

Off the top of my head, that might not work either. I recall GLSL being really picky when you start mixing integers with decimals. You might have to put .0 on the end of all the numbers, not just the 0.5.

 

As for what is actually going on, I can't say for sure, but I don't think you can actually change whether a matrix is row- or column-major in either of those, since that is a matter of how it is laid out in memory, which is out of your hands at that point. What you can change is the matrix you build to account for that: you are transposing those matrices, which effectively accounts for it. As far as I can tell, when you swap the order of multiplication you are transposing the (in)position vector, since there is a mathematical limit on the shapes of matrices that can be multiplied together. When the vector is on the right it needs to be vertical, but when it is on the left it needs to be horizontal. I believe both OpenGL and DirectX handle that for you transparently.

 

According to this:

https://www.opengl.org/wiki/Data_Type_(GLSL)#Constructors

For multiple values, matrices are filled in in column-major order

The matrix built using Supremecy's code (and, I assume, what you intended) is suitable for multiplication with a horizontal vector, meaning the vector needs to be on the left. If you want it on the right, you will need to transpose the matrix.


The HLSL matrix constructor takes rows of values, but the GLSL matrix constructor takes columns of values.
So your HLSL code is putting a translation in the right column, but your GLSL code is putting a translation in the bottom row. Fun quirks...
Seeing as it's rare to construct a matrix in a shader, it's very easy to overlook these differences between the two languages :|

 

This makes so much sense now. Thank you. I was going insane.

The reason I'm defining the matrix inside HLSL and GLSL is to help me understand how both shading languages handle matrices. I didn't want to send the matrix from C++ to the shader and add more confusion to an already confusing topic.

 

Alright, to summarize:

 

Row-major order:

The vector is always on the left side of the multiplication with a matrix.

P = vM

The translation vector is always in elements 12, 13, and 14.

 

Column-major order:

The vector is always on the right side of the multiplication with a matrix.

P = Mv

The translation vector is always in elements 3, 7, and 11.

 

So the only difference between HLSL and GLSL is how they lay out this data in memory.

HLSL reads the matrix row by row. GLSL reads the matrix column by column.

 

So this is how HLSL lays out the data in memory:

0  1  2  3 
4  5  6  7 
8  9  10 11
12 13 14 15

And this is how GLSL lays out the data in memory:

0 4 8  12 
1 5 9  13 
2 6 10 14 
3 7 11 15

Alright. This makes sense. Now I have to understand how I should layout the matrix in C++ and send it to HLSL and GLSL correctly.

 

Man..... Why couldn't OpenGL just read things row by row...... WHY !! :P


Wow, this is how I imagine hell looks.

 

Alright, I think I understand what you are saying.

 

So let me try one more time.

 

Math row-major and column-major ≠ CS row-major and column-major.

Math talks about the multiplication order, and CS talks about the indexing order; the two are not the same thing.

 

Both HLSL and GLSL use this order

0 4 8  12 
1 5 9  13 
2 6 10 14 
3 7 11 15

But the HLSL matrix constructor takes its values this way:

0  1  2  3 
4  5  6  7 
8  9  10 11
12 13 14 15

So, in C++ if you have a 1D array of 16 elements, the order in memory should be "column order indexing" even though in the shader you are doing "math row major" multiplication.

 

and if you are doing "math column major" multiplication your C++ memory layout for the array should be in "row major indexing".

 

Yeah, this is definitely hell.


C++ column order indexing               HLSL/GLSL
  [0 4 8  12]                  =
  [1 5 9  13]                  =         P = vM (row major multiplication "Math")
  [2 6 10 14]                  =
  [3 7 11 15]                  =

C++ array indexing         [0, 1, 2, 3   -  4, 5, 6, 7   -  8, 9, 10, 11  -  12, 13, 14, 15] 
C++ array memory layout    [0, 4, 8, 12  -  1, 5, 9, 13  -  2, 6, 10, 14  -  3,  7,  11, 15] 

Translation vector is at 
array[3]
array[7]
array[11]

------------------------------------------------------------------------------------------

C++ row order indexing                 HLSL/GLSL
  [0  1  2  3]                 =
  [4  5  6  7]                 =        P = Mv (column major multiplication "Math")
  [8  9  10 11]                =
  [12 13 14 15]                =

C++ array indexing         [0, 1, 2, 3   -  4, 5, 6, 7   -  8, 9, 10, 11  -  12, 13, 14, 15] 
C++ array memory layout    [0, 1, 2, 3   -  4, 5, 6, 7   -  8, 9, 10, 11  -  12, 13, 14, 15] 

Translation vector is at 
array[12]
array[13]
array[14]

Share this post


Link to post
Share on other sites

So, in C++ if you have a 1D array of 16 elements, the order in memory should be "column order indexing" even though in the shader you are doing "math row major" multiplication.

Yes, whether you're doing "math row major" multiplication or not is irrelevant.
The only time you should use row-major ordering in C++ is if you've also used the row_major keyword to tell your shaders to interpret the memory using that convention.

and if you are doing "math column major" multiplication your C++ memory layout for the array should be in "row major indexing".

No. There's no connection between whether you should use a particular "comp sci majorness" and a "math majorness". Comp-sci-row-major and math-column-major will work together just fine.

You just need to make sure that:
* If you use comp-sci column-major memory layout in the C++ side, then your shaders should work out of the box (just avoid the row_major keyword!).
* If you use comp-sci row-major memory layout in the C++ side, then use the row_major keyword in your shaders so that they interpret your memory correctly.
And separately:
* That your math makes sense, from a purely mathematical perspective  :)
* i.e. The choice of row-vectors / column-vectors, basis vectors in rows / basis vectors in columns, pre-multiply / post-multiply all depend on which mathematical conventions you want to use. These are all well defined and work as long as you're consistent.
* The math conventions that you choose have no impact whatsoever on which comp-sci conventions you can use.

Edited by Hodgman


Yeah, this is definitely hell.
 

 

I gotta agree on that one. I find the best method of dealing with this is to start at the end and work back. I think of my vectors as a vertical column, so mathematically they then have to be the second argument of a multiplication (with a 4x4 matrix) or else they can't be multiplied. Once that's set, I know that I have to build my transforms a certain way (e.g. the translation needs to be in the last column).

 

Then you need to hide everything away behind functions and avoid creating a matrix directly from values. That way you never really care what the ordering is; it'll just work as you expect it to. The only confusion then comes when you send it to your shader, but you can document that step thoroughly in your code, so once you've done it it shouldn't be an issue. You might need to transpose before sending, but that's about it.

 

I believe the mathematical convention is actually to go down columns first and then across rows (the opposite of most other things in maths, where you tend to go across first), so things being column-major does make sense in that regard.

 

This has probably been the topic that causes me the most confusion too. Thinking backwards, trying to stick with mathematical convention and using functions/abstraction helps a lot.
