tscott1213

OpenGL Setting Shader Registers


In D3D, when we set a GPU register with the WorldViewProjection matrix, we first have to transpose the matrix. Is that also true in OpenGL? Thanks

AFAIK all matrices in OpenGL stay column-major all the time, with no transposes in shaders (which makes me wonder if OGL reflects the underlying hardware better...)

Thanks.

This gets me to the question that is really bugging me.

If D3D uses row vectors and OpenGL uses column vectors, then when D3D transposes its WorldViewProj matrix before sending it to the shader, it is sending the same matrix as OpenGL. However, OpenGL is column-major (as you mentioned) and D3D is row-major. That means if registers c0-c3 were set with the WorldViewProj matrix from D3D they would hold the rows of the matrix, and if they were set from OpenGL they would hold the columns. So why don't you have to write two versions of the same shader?

Thanks
Todd

Here is my understanding, which I believe to be flawed, but I don't understand why it is flawed:

An OpenGL translation matrix would be as follows:

1 0 0 tx
0 1 0 ty
0 0 1 tz
0 0 0 1

A D3D translation matrix would be as follows:

1 0 0 0
0 1 0 0
0 0 1 0
tx ty tz 1

So, when you transpose the D3D matrix it becomes the same as the OpenGL matrix.

But D3D is row-major and OpenGL is column-major. So when you set the registers using D3D each register holds a row, but when you set them using OpenGL each register holds a column. Therefore, if we set registers c0-c3 they would hold different values for OpenGL and D3D.


Thanks for the help.
Todd

You're right about the transposing of the matrix. As to why the loading works, I don't know; I'd have to see the D3D code for how you load it and how you access it. Chances are it's just driver voodoo making sure things work as you expect.

Hey Phantom,

Here is the answer to my question. I think I just confused everyone through my lack of understanding/use of OpenGL!

The following comment was made by Joakim Hårsman on FlipCode:

"LHS vs RHS isn't the only difference between D3D and OpenGL that can cause differences. First of all, OpenGL treats vertices as column vectors, so transformations are concatenated by left multiplying: M'=TM, v'=Mv, and this causes the typical transformation matrix to be the transpose of the Direct3D standard. However, since OpenGL treats matrices as if they were stored in column-major order (vs D3D's row-major), everything works out the same anyway (ignoring coord system orientation) provided you're consistent in how you construct the matrix you load into OpenGL/D3D."

Thanks for your help.
Todd

What some people don't seem to realise is that the actual memory configuration of OGL and D3D matrices is the same; it's just that one represents a column-major matrix and the other a row-major matrix.

http://www.opengl.org/resources/faq/technical/transformations.htm#tran0005

9.005 Are OpenGL matrices column-major or row-major?

For programming purposes, OpenGL matrices are 16-value arrays with base vectors laid out contiguously in memory. The translation components occupy the 13th, 14th, and 15th elements of the 16-element matrix.

Column-major versus row-major is purely a notational convention. Note that post-multiplying with column-major matrices produces the same result as pre-multiplying with row-major matrices. The OpenGL Specification and the OpenGL Reference Manual both use column-major notation. You can use any notation, as long as it's clearly stated.

Sadly, the use of column-major format in the spec and blue book has resulted in endless confusion in the OpenGL programming community. Column-major notation suggests that matrices are not laid out in memory as a programmer would expect.

A summary of Usenet postings on the subject can be found here.

There is actually a point to treating vectors as column vectors, as OpenGL does. It means you can view transformations like function application: v'=P*V*M*v is the same thing as v'=P(V(M(v))), which is the standard convention in linear algebra. So originally the spec was just supposed to be mathematically elegant, and saying the matrices are stored in column-major order was just a clever way of maintaining backwards compatibility with IrisGL. The actual matrices you provide are identical! The whole story is here: http://stevehollasch.com/cgindex/math/matrix/column-vec.html
