
Calculating the Final Vertex Position with MVP Matrix and another Object's Trans, Rotation, Scaling Calc...



#1 tmason   Members   -  Reputation: 296


Posted 08 May 2014 - 07:05 AM

Hello,

 

So I figured out the MVP matrix thing and I get things on the screen all well and good.

 

I searched for this additional piece of information but haven't found it yet.

 

The question now is that each object has its own rotation, translation, and scaling, separate from the MVP for the screen.

 

So I know I will need another uniform in my vertex shader code below, but how do I alter the second-to-last line to incorporate the "ObjectOffsetMatrix", which contains the translation, rotation, and scale?

 

Thank you for your time.

#version 140

#extension GL_ARB_explicit_attrib_location : require

in vec4 vPosition;
in vec3 vNormal;
in vec2 vUV;

out vec3 SurfaceNormal;

uniform mat4 ModelViewProjectionMatrix;
uniform mat4 ObjectOffsetMatrix; // per-object translation/rotation/scale; not yet applied below
uniform mat3 NormalMatrix;

void main () {
	SurfaceNormal = normalize(NormalMatrix * vNormal);
	gl_Position = ModelViewProjectionMatrix * vPosition;
}



#2 haegarr   Crossbones+   -  Reputation: 4587


Posted 08 May 2014 - 07:54 AM


The question now is that each object has its own rotation, translation, and scaling, separate from the MVP for the screen.

The abbreviation MVP means a matrix composed of a model's local-to-world transform (M), the world-to-view transform (V, also known as inverse camera transform), and the perspective transform (P). So there is no need for another model transform, because there is already one there (namely M).

 

What would ObjectOffsetMatrix bring in that is not already contained in M?



#3 tmason   Members   -  Reputation: 296


Posted 08 May 2014 - 08:15 AM

 


The question now is that each object has its own rotation, translation, and scaling, separate from the MVP for the screen.

The abbreviation MVP means a matrix composed of a model's local-to-world transform (M), the world-to-view transform (V, also known as inverse camera transform), and the perspective transform (P). So there is no need for another model transform, because there is already one there (namely M).

 

What would ObjectOffsetMatrix bring in that is not already contained in M?

 

 

Thanks for the information.

 

What is happening is that I import several mesh buffers, and each has its own rotation, scale, and translation relative to the scene, which is represented by the MVP matrix.

 

This is stored in a separate matrix. That's why I am calling it the Object Offset Matrix.

 

So, what I imagine I need to do, and what I am asking for assistance with, is to incorporate that offset into the equation until my C++ code updates the object uniform again for a different mesh with a new offset relative to the scene's MVP matrix.

 

Makes sense?

 

One MVP matrix; different objects, each with its own offset relative to the scene.


Edited by tmason, 08 May 2014 - 08:17 AM.


#4 haegarr   Crossbones+   -  Reputation: 4587


Posted 08 May 2014 - 09:11 AM

Do you speak about sub-meshes, each with its own placement relative to the model, where all sub-meshes together define the model? Or do you speak about correcting meshes that do not fit your scene-defined purposes? What does "relative to the scene" mean exactly? "Scene" usually means the entire composition of models and the camera in the world...



#5 Sponji   Members   -  Reputation: 1355


Posted 08 May 2014 - 09:17 AM

You could do ViewProjectionMatrix * ModelMatrix * vPosition, if that's what you mean? Your "ObjectOffsetMatrix" is probably the same thing as ModelMatrix. There can't be just one MVP matrix, because that M should be different for each model. I'd calculate the final matrix on the CPU side, because then the GPU wouldn't need to do the matrix multiplication for each vertex. But of course, if you need to use the ModelMatrix in your shader code, send it too.
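As an illustration of that CPU-side approach, here is a minimal sketch assuming the GLM math library, GLEW, and an existing OpenGL context; the Mesh type, drawScene function, and MVPLocation uniform handle are hypothetical names for illustration, not from this thread:

#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <vector>

struct Mesh {
    glm::mat4 objectOffset; // per-object translation * rotation * scale
    // ... vertex buffers, material, etc. ...
};

void drawScene(const std::vector<Mesh>& meshes,
               const glm::mat4& projection, const glm::mat4& view,
               GLint MVPLocation)
{
    const glm::mat4 viewProjection = projection * view; // shared by all objects
    for (const Mesh& mesh : meshes) {
        // Fold the per-object transform in on the CPU; the shader then still
        // performs a single ModelViewProjectionMatrix * vPosition multiply.
        const glm::mat4 mvp = viewProjection * mesh.objectOffset;
        glUniformMatrix4fv(MVPLocation, 1, GL_FALSE, glm::value_ptr(mvp));
        // ... issue the draw call for this mesh ...
    }
}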


Edited by Sponji, 08 May 2014 - 09:17 AM.


#6 tmason   Members   -  Reputation: 296


Posted 08 May 2014 - 04:37 PM

Do you speak about sub-meshes, each with its own placement relative to the model, where all sub-meshes together define the model? Or do you speak about correcting meshes that do not fit your scene-defined purposes? What does "relative to the scene" mean exactly? "Scene" usually means the entire composition of models and the camera in the world...

 

Do you speak about sub-meshes, each with its own placement relative to the model, where all sub-meshes together define the model?

 

Yes, perhaps that was the confusion. Each sub-mesh has its own displacement, rotation, and scale, but together, of course, I want to render the entire model/scene.

 

Or do you speak about correcting meshes that do not fit your scene-defined purposes?

 

No, see above.

 

What does "relative to the scene" mean exactly?

 

"Relative to the scene" meaning: I have five boxes, each with its own set of coordinates coming in from a file. Box 1 is scaled twice as big and placed in front of the camera, Box 2 is scaled normally but sitting to the left of the camera, Box 3 is sitting behind the camera and rotated 45 degrees, etc.

 

Each box has its own group of vertices coming from the file I am importing, as well as its own rotation, scaling, and translation.

 

All of this makes up one scene.

 

"Scene" usually means the entire composition of models and the camera in the world...

 

I agree, so what would be the correct way of building the MVP matrix to incorporate what I have described above?

 

What I have difficulty understanding is how to calculate and incorporate the Normal Matrix, etc. in the final set of uniforms to be passed to the shaders.

 

Thank you for your time.


Edited by tmason, 08 May 2014 - 04:41 PM.


#7 tmason   Members   -  Reputation: 296


Posted 08 May 2014 - 04:41 PM

You could do ViewProjectionMatrix * ModelMatrix * vPosition, if that's what you mean? Your "ObjectOffsetMatrix" is probably the same thing as ModelMatrix. There can't be just one MVP matrix, because that M should be different for each model. I'd calculate the final matrix on the CPU side, because then the GPU wouldn't need to do the matrix multiplication for each vertex. But of course, if you need to use the ModelMatrix in your shader code, send it too.

 

Hmmm, I will try this. Seems like it would work.

 

I was calculating the normal matrix against the entire MVP matrix.

 

Would I calculate this matrix instead against the ViewProjection matrix?



#8 Kaptein   Prime Members   -  Reputation: 2180


Posted 08 May 2014 - 05:17 PM

Even with scaling, you can usually get away with the 3x3 part of the modelview matrix:

 

vec3 norm = mat3(modelView) * in_normal.xyz;

 

If you have scaling or other effects, normalize it:

norm = normalize(norm);

 

By the way, assuming the modelview matrix is really just the rotation of your camera, you can use it to transform any vector from the regular coordinate system into eye space.

We can call the mat3() part the "camera rotation matrix"; the fact that you are using it with a normal is irrelevant. :)

Note that multiplying a vector by a rotation matrix (or such a normal matrix) preserves its length, so it will still be normalized.



#9 Sponji   Members   -  Reputation: 1355


Posted 08 May 2014 - 08:20 PM

This is how I would do this stuff usually:
mat4 projection = camera->projection();
mat4 view = camera->view(); // should be same as inverse(camera->transform());
mat4 model = model->transform();

mat4 vp = projection * view; // shared by every model in the frame
mat4 mvp = vp * model;       // per-model
mat4 mv = view * model;      // used for the normal matrix below

mat3 normal_matrix = transpose(inverse(mat3(mv)));

 

For the child objects, you just multiply by the child's transform, so it goes like this:

mat4 mvp = vp * model0 * model1 * model2;
mat4 mv = view * model0 * model1 * model2;


#10 tmason   Members   -  Reputation: 296


Posted 08 May 2014 - 08:23 PM

 

This is how I would do this stuff usually:
mat4 projection = camera->projection();
mat4 view = camera->view(); // should be same as inverse(camera->transform());
mat4 model = model->transform();

mat4 vp = projection * view; // shared by every model in the frame
mat4 mvp = vp * model;       // per-model
mat4 mv = view * model;      // used for the normal matrix below

mat3 normal_matrix = transpose(inverse(mat3(mv)));

 

For the child objects, you just multiply by the child's transform, so it goes like this:

mat4 mvp = vp * model0 * model1 * model2;
mat4 mv = view * model0 * model1 * model2;

 

Thank you! I will give this a shot.



#11 haegarr   Crossbones+   -  Reputation: 4587


Posted 09 May 2014 - 01:49 AM

1.) Situation A: We have sub-meshes. This is usually the case if the model has more than a single material and the rendering system cannot blend materials. So the model is divided into said sub-meshes where each sub-mesh has its own material. Further, each sub-mesh has its local placement S relative to the model. If we still name the model's placement in the world M, then the local-to-world matrix W of the sub-mesh is given by

    W := M * S

 

2.) Situation B: We have sub-meshes as above, but the sub-meshes are statically transformed, so that S is already applied to the vertices during pre-processing. The computation at runtime is then just

    W := M

 

3.) Situation C: We have models composed by parenting sub-models, e.g. rigid models of arms and legs structured in a skeleton, or rigid models of turrets on a tank. So a sub-model has a parent model, and the parent model may itself be a sub-model with another parent model, up until the main model is reached. So each sub-model has its own local placement S_i relative to its parent. Corresponding to the depth of parenting, we get a chain of transformations like so:

    W := M * S_n * S_(n-1) * … * S_0

where S_0 belongs to the innermost sub-model (one that is not itself a parent), and S_n belongs to the outermost sub-model (one whose only parent is the main model).
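To make the chain concrete, here is a small C++ sketch (GLM assumed; the Node type and localToWorld function are hypothetical illustrations, not from this thread) that composes W by walking from a leaf sub-model up to the main model:

#include <glm/glm.hpp>

// Hypothetical scene node: stores its local placement (S_i, or M at the root)
// and a pointer to its parent (null for the main model).
struct Node {
    glm::mat4 local;
    const Node* parent;
};

// Walking from the leaf upwards multiplies parents on the left, which
// reproduces W := M * S_n * S_(n-1) * ... * S_0 for column vectors.
glm::mat4 localToWorld(const Node& leaf)
{
    glm::mat4 w = leaf.local;                         // S_0
    for (const Node* p = leaf.parent; p != nullptr; p = p->parent)
        w = p->local * w;                             // prepend S_1 ... S_n, then M
    return w;
}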

 

4.) View transformation: Now, having our models given relative to the global "world" space due to W, we have placed a camera with transform C relative to the world. Because we see in view space and not in world space, we need the inverse of C, and hence the view transform V is

    V := C^-1

 

5.) The projection P is applied in view space and yields the normalized device co-ordinates.

 

6.) All together, the transform so far looks like

    P * V * W

(still using column vectors) to go from a model's local space to device co-ordinates.

 

Herein P perhaps never changes during the entire runtime of the game, V usually changes from frame to frame (leaving things like portals and mirrors aside), and W changes from model to model during a frame. This can be exploited by computing P * V once per frame and re-using it throughout that frame. So the question is what to do with W.

 

7.) Comparing situation C with situation A shows that A looks like C with just a single level. Situation A would simply allow for setting the constant M once per model and applying the varying S per sub-mesh (i.e. per render call). Using parentheses to express what I mean:

    ( ( P * V ) * M ) * S

 

But thinking about the performance of render calls, where switching textures, shaders, and so on has its costs, frequently switching materials in particular may be a no-go, so render calls with the same material are to be batched. Obviously this contradicts the simple re-use of M as shown above. Instead, we get a situation where each sub-mesh's own W is computed on the CPU, and the render call gets

    ( P * V ) * W

 

BTW, this is also the typical way to deal with parenting, partly due to the same reasons.

 

Another drawback of using both M and S is that either you supply both matrices also for stand-alone models (for which M alone would be sufficient), or else you have to double your set of shaders.

 

So in the end it is usual to deal with all 3 situations in a common way: computing the MODEL matrix on the CPU and sending it to the shader. (Just a suggestion.) ;)
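A sketch of that common way, assuming GLM, GLEW, and an OpenGL context (the renderFrame function and the uniform locations pvLocation/wLocation are hypothetical names): P * V is computed once per frame, and each render call only supplies its own W.

#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <vector>

void renderFrame(const glm::mat4& projection,           // P
                 const glm::mat4& cameraTransform,       // C
                 const std::vector<glm::mat4>& worlds,   // one W per render call
                 GLint pvLocation, GLint wLocation)
{
    // V := C^-1, and P * V is computed once per frame (see step 6).
    const glm::mat4 pv = projection * glm::inverse(cameraTransform);
    glUniformMatrix4fv(pvLocation, 1, GL_FALSE, glm::value_ptr(pv));

    for (const glm::mat4& w : worlds) {
        // Per render call: W was computed on the CPU, covering situations A-C alike.
        glUniformMatrix4fv(wLocation, 1, GL_FALSE, glm::value_ptr(w));
        // ... issue the draw call ...
    }
}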



#12 tmason   Members   -  Reputation: 296


Posted 09 May 2014 - 01:51 AM

Thank you for this write-up!

 

I will go through it in detail but I wanted to say thank you first.



#13 haegarr   Crossbones+   -  Reputation: 4587


Posted 09 May 2014 - 02:35 AM


I was calculating the normal matrix against the entire MVP matrix.
 
Would I calculate this matrix instead against the ViewProjection matrix?

How exactly the normal matrix is to be computed depends on the space in which you want to transform the normals. Often normals are used in eye space for lighting calculations. This means that the projection matrix P does not play a role when computing the normal matrix.

 

Normals are direction vectors and as such invariant to translation. That is why it is sufficient to deal with only the rotational and scaling part of the transformation matrix. So let us say that

    O := mat3( V * W )

defines that portion in eye space. (Maybe you want it in world space instead, in which case you would drop V here.)

 

The correct way of computing the normal matrix is to apply the transpose of the inverse:

    N := ( O^-1 )^T

 

In your situation you want to support both rotation and scaling (as said, normals are invariant to translation, so there is no need to consider translation here). Writing O = R * S and applying some mathematical rules, we get

    ( O^-1 )^T = ( ( R * S )^-1 )^T = ( S^-1 * R^-1 )^T = ( R^-1 )^T * ( S^-1 )^T

 

Considering that R is an orthonormal basis (so ( R^-1 )^T = R) and S is a diagonal matrix (so ( S^-1 )^T = S^-1), this can be simplified to

    N = R * S^-1

 

From this you can see that ...

 

a) … if you have no scaling, so S == I, then the normal matrix is identical to the mat3 of the model-view matrix.

 

b) ... if you have uniform scaling, i.e. the scaling factor is the same in all 3 principal directions, then the inverse scaling is applied to the normal vector, which can also be seen as a multiplication by a scalar:

     S^-1 = S( 1/s, 1/s, 1/s ) = 1/s * I

 

This is the case mentioned by Kaptein above. The difference is that Kaptein points out that, instead of ensuring the length of the normal vector through the matrix transformation itself, you can use the mat3 of the model-view matrix as the normal matrix and undo the scaling it introduces simply by re-normalizing the resulting vector.

 

c) … if you have non-uniform scaling, i.e. the scaling factor is not the same in all 3 principal directions,

     S^-1 = S( 1/s_x, 1/s_y, 1/s_z )

then you have no choice but to either compute the transposed inverse of O or else compose R and S^-1 directly (if those are known).
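For completeness, the general case as a small C++ sketch (GLM assumed; the normalMatrix helper is a hypothetical name): it computes N as the transposed inverse of O := mat3( V * W ), which covers non-uniform scaling as well.

#include <glm/glm.hpp>

// General normal matrix: the transposed inverse of the upper-left 3x3
// of the model-view matrix, valid even under non-uniform scaling.
glm::mat3 normalMatrix(const glm::mat4& view, const glm::mat4& world)
{
    const glm::mat3 o = glm::mat3(view * world);   // O := mat3( V * W )
    return glm::transpose(glm::inverse(o));        // N := ( O^-1 )^T
}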

Edited by haegarr, 09 May 2014 - 03:50 AM.




