tmason

Calculating the Final Vertex Position with MVP Matrix and another Object's Trans, Rotation, Scaling Calc...


Hello,

So I figured out the MVP matrix thing and I get things on the screen all well and good.

I searched for this additional piece of information but haven't found it yet.

The question: each object now has its own rotation, translation, scaling, etc., separate from the MVP for the screen.

So I know I will need another uniform in my vertex shader code below, but how do I alter the near-last line to incorporate the "ObjectOffsetMatrix", which contains the translation, rotation, and scale?

Thank you for your time.

#version 140
#extension GL_ARB_explicit_attrib_location : require

in vec4 vPosition; // object-space vertex position
in vec3 vNormal;   // object-space vertex normal
in vec2 vUV;       // texture coordinates (unused here)

out vec3 SurfaceNormal;

uniform mat4 ModelViewProjectionMatrix; // the scene's combined MVP matrix
uniform mat4 ObjectOffsetMatrix;        // per-object translation/rotation/scale (not yet used)
uniform mat3 NormalMatrix;              // transforms normals for lighting

void main () {
	SurfaceNormal = normalize(NormalMatrix * vNormal);
	gl_Position = ModelViewProjectionMatrix * vPosition;
}

The question: each object now has its own rotation, translation, scaling, etc., separate from the MVP for the screen.

The abbreviation MVP means a matrix composed of a model's local-to-world transform (M), the world-to-view transform (V, also known as inverse camera transform), and the perspective transform (P). So there is no need for another model transform, because there is already one there (namely M).

 

What would ObjectOffsetMatrix bring in that is not already contained in M?
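
For illustration, a minimal sketch in C++ with GLM (column vectors, applied right to left; the function name is illustrative) of how M is already part of the product:

#include <glm/glm.hpp>

// "model" plays the role of the per-object transform the OP calls
// ObjectOffsetMatrix; it is the M in MVP.
glm::mat4 makeMVP(const glm::mat4& projection,
                  const glm::mat4& view,
                  const glm::mat4& model)
{
    return projection * view * model; // P * V * M
}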


 



Thanks for the information.

 

What is happening is that I import a set of mesh buffers in one go, and each mesh has its own rotation, scale, and translation relative to the scene, which is represented by the MVP matrix.

This is stored in a separate matrix. That's why I am calling it the Object Offset Matrix.

So what I imagine I need to do, and what I am asking for help with, is to incorporate that offset into the equation until my C++ code updates the object uniform again for a different mesh with a new offset relative to the scene's MVP matrix.

Does that make sense?

One MVP matrix, different objects each with their own offset within the scene.

Edited by tmason

Do you speak about sub-meshes, each with its own placement relative to the model, where all sub-meshes together define the model? Or do you speak about correction of meshes that do not fit your scene-defined purposes? What does "relative to the scene" mean exactly? "Scene" usually means the entire composition of models and the camera in the world...


You could do ViewProjectionMatrix * ModelMatrix * vPosition, if that's what you mean? Your "ObjectOffsetMatrix" is probably the same thing as ModelMatrix. There can't be just one MVP matrix, because that M should be different for each model. I'd calculate the final matrix on the CPU side, because then the GPU wouldn't need to do the matrix multiplication for each vertex. But of course, if you need to use the ModelMatrix in your shader code, send it too.
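
As a sketch of that (assuming a hypothetical Object struct that stores each mesh's transform):

#include <glm/glm.hpp>
#include <vector>

struct Object {
    glm::mat4 transform; // per-object translation * rotation * scale
    // ... vertex buffers etc.
};

void render(const glm::mat4& projection, const glm::mat4& view,
            const std::vector<Object>& objects)
{
    const glm::mat4 vp = projection * view;  // computed once per frame
    for (const Object& obj : objects) {
        glm::mat4 mvp = vp * obj.transform;  // final matrix on the CPU
        // upload mvp as the ModelViewProjectionMatrix uniform, then draw
    }
}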

Edited by Sponji

 

Do you speak about sub-meshes, each with its own placement relative to the model, where all sub-meshes together define the model?

 

Yes, perhaps that was the confusion. Each sub-mesh has its own displacement, rotation, and scale, but together, of course, I want to render the entire model/scene.

 

Or do you speak about correction of meshes that do not fit your scene defined purposes?

 

No, see above.

 

What does "relative to the scene" mean exactly?

 

Relative to the scene meaning: I have five boxes, each with its own set of coordinates coming in from a file. Box 1 is scaled twice as big and placed in front of the camera, Box 2 is scaled normally but sitting to the left of the camera, Box 3 is sitting behind the camera and rotated 45 degrees, etc.

 

Each box has its own group of vertices coming from the file I am importing, as well as its own rotation, scaling, and translation.

 

All of this is making up one scene.

 

"Scene" usually means the entire composition of models and the camera in the world...

 

I agree, so what would be the correct way of building the MVP matrix to incorporate what I have described above?

 

What I have difficulty understanding is how to calculate the Normal Matrix, etc., and incorporate them in the final set of uniforms to be passed to the shaders.

 

Thank you for your time.

Edited by tmason

You could do ViewProjectionMatrix * ModelMatrix * vPosition, if that's what you mean? Your "ObjectOffsetMatrix" is probably the same thing as ModelMatrix. [...]

 

Hmmm, I will try this. Seems like it would work.

 

I was calculating the normal matrix against the entire MVP matrix.

 

Would I calculate this matrix against the ViewProjection matrix instead?


Even with scaling, you can usually just use the 3x3 part of the modelview matrix:

 

vec3 norm = mat3(modelView) * in_normal.xyz;

 

If you have scaling or other effects, normalize it:

norm = normalize(norm);

 

By the way: assuming the modelview matrix really is just the rotation of your camera, you can use it to transform any vector from the regular coordinate system into eye space.

We can call the mat3() part the "camera rotation matrix"; the fact that you are using it with a normal is irrelevant. :)

Note that multiplying a vector by a pure rotation matrix (or such a normal matrix) retains the vector's length, so it will still be normalized.
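
A quick way to convince yourself of that last point, as a self-contained sketch (the angle and axis are arbitrary):

#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    // A pure rotation: 45 degrees around an arbitrary axis.
    glm::mat3 rot = glm::mat3(glm::rotate(glm::mat4(1.0f),
                                          glm::radians(45.0f),
                                          glm::normalize(glm::vec3(1, 2, 3))));
    glm::vec3 n = glm::normalize(glm::vec3(0, 1, 0));
    // Both lengths print as 1.0: rotation preserves vector length.
    std::printf("before: %f, after: %f\n", glm::length(n), glm::length(rot * n));
}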

This is how I would do this stuff usually:

mat4 projection = camera->projection();
mat4 view = camera->view(); // should be same as inverse(camera->transform());
mat4 model = model->transform();

mat4 vp = projection * view;
mat4 mvp = vp * model;
mat4 mv = view * model;

// transpose of the inverse handles non-uniform scaling
mat3 normal_matrix = transpose(inverse(mat3(mv)));

For the child objects, you just multiply by the child's transform, so it goes like this:

mat4 mvp = vp * model0 * model1 * model2;
mat4 mv = view * model0 * model1 * model2;

 


 

Thank you! I will give this a shot.


1.) Situation A: We have sub-meshes. This is usually the case if the model has more than a single material and the rendering system cannot blend materials. So the model is divided into said sub-meshes where each sub-mesh has its own material. Further, each sub-mesh has its local placement S relative to the model. If we still name the model's placement in the world M, then the local-to-world matrix W of the sub-mesh is given by

    W := M * S

 

2.) Situation B: We have sub-meshes as above, but the sub-meshes are statically transformed, so that S is already applied to the vertices during pre-processing. The computation at runtime is then just

    W := M

 

3.) Situation C: We have models composed by parenting sub-models, e.g. rigid models of arms and legs structured in a skeleton, or rigid models of turrets on a tank. So a sub-model has a parent model, and the parent model may itself be a sub-model with another parent model, up until the main model is reached. So each sub-model has its own local placement S_i relative to its parent. Corresponding to the depth of parenting, we get a chain of transformations like so:

    W := M * S_n * S_{n-1} * … * S_0

where S_0 belongs to the sub-model that is not itself a parent, and S_n to the sub-model whose direct parent is the main model.
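
As a sketch, that chain can be evaluated by walking up a parent pointer (the Node struct here is illustrative, not from the thread):

#include <glm/glm.hpp>

struct Node {
    glm::mat4 local;    // S_i: placement relative to the parent
    const Node* parent; // null for the main model, whose local is M
};

// Accumulates W = M * S_n * ... * S_0 for a leaf sub-model
// (column vectors, so each parent's transform is prepended on the left).
glm::mat4 worldTransform(const Node* node)
{
    glm::mat4 w(1.0f);
    for (; node != nullptr; node = node->parent)
        w = node->local * w;
    return w;
}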

 

4.) View transformation: Now, having our models given relative to the global "world" space due to W, we have placed a camera with transform C relative to the world. Because we see in view space and not in world space, we need the inverse of C and hence obtain the view transform V:

    V := C^-1

 

5.) The projection P is applied in view space and yields the normalized device co-ordinates.

 

6.) Altogether, the transform so far looks like

    P * V * W

(still using column vectors) to go from a model's local space into device co-ordinates.

 

Herein P perhaps never changes during the entire runtime of the game, V usually changes from frame to frame (leaving things like portals and mirrors aside), and W changes from model to model during a frame. This can be realized by computing P * V once per frame and using it again and again during that frame. So the question is what to do with W.

 

7.) Comparing situation C with situation A shows that A looks like C with just a single level. Situation A would simply allow for setting the constant M once per model, and applying the varying S per sub-mesh (i.e. render call). Using parentheses to express what I mean:

    ( ( P * V ) * M ) * S

 

But thinking about the performance of render calls, where switching textures, shaders, and whatever else has its costs, frequently switching materials especially may be a no-go, so render calls with the same material are to be batched. Obviously this contradicts the simple re-use of M as shown above. Instead, we get a situation where for each sub-mesh its own W is computed on the CPU, and the render call gets

    ( P * V ) * W

 

BTW, this is also the typical way to deal with parenting, partly due to the same reasons.

 

Another drawback of using both M and S is that either you supply both matrices also for stand-alone models (for which M alone would be sufficient), or else you have to double your set of shaders.

 

So in the end it is usual to deal with all 3 situations in a common way: compute the model matrix W on the CPU and send it to the shader. (Just a suggestion.) ;)
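
In OpenGL terms that could look like the following sketch (the uniform name matches the shader above; error handling is omitted):

#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>
// plus your GL loader of choice for GLuint, glGetUniformLocation, ...

void uploadMVP(GLuint program, const glm::mat4& viewProjection,
               const glm::mat4& world)
{
    glm::mat4 mvp = viewProjection * world; // W composed on the CPU
    GLint loc = glGetUniformLocation(program, "ModelViewProjectionMatrix");
    glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(mvp));
}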


Thank you for this write-up!

 

I will go through it in detail but I wanted to say thank you first.


I was calculating the normal matrix against the entire MVP matrix.

Would I calculate this matrix against the ViewProjection matrix instead?

How exactly the normal matrix is to be computed depends on the space in which you want to transform the normals. Often normals are used in eye space for lighting calculations. This means that the projection matrix P does not play a role when computing the normal matrix.

 

Normals are direction vectors and as such invariant to translation. That is the reason why the rotational and scaling parts of the transformation matrix are sufficient to deal with. So let us say that

    O := mat3( V * W )

defines that portion in eye space. (Maybe you want to have it in world space instead, in which case you would drop V herein.)

 

The correct way of computing the normal matrix is to apply the transpose of the inverse:

    N := ( O^-1 )^T

 

In your situation you want to support both rotation and scaling (as said, it is invariant to translation, so no need to consider translation here). With some mathematical rules at hand, we get

    ( O^-1 )^T = ( ( R * S )^-1 )^T = ( S^-1 * R^-1 )^T = ( R^-1 )^T * ( S^-1 )^T

 

Considering that R is an orthonormal basis (so ( R^-1 )^T = R) and S is a diagonal matrix (so ( S^-1 )^T = S^-1), this can be simplified to

    N = R * S^-1

 

From this you can see that ...

 

a) … if you have no scaling, so S == I, then the normal matrix is identical to the mat3 of the model-view matrix.

 

b) ... if you have uniform scaling, i.e. the scaling factor in all 3 principal directions is the same, then the inverse scaling is applied to the normal vector, which can also be seen as a multiplication with a scalar:

     S^-1 = S( 1/s, 1/s, 1/s ) = 1/s * I

 

This is the case mentioned by Kaptein above. The difference is that, instead of ensuring the length of the normal vector through the matrix transformation itself, Kaptein uses the mat3 of the model-view matrix as the normal matrix and undoes the scaling it introduces simply by re-normalizing the resulting vector.

 

c) … if you have non-uniform scaling, i.e. the scaling factors in the 3 principal directions are not all the same,

     S^-1 = S( 1/s_x, 1/s_y, 1/s_z )

then you have no choice but to either compute the transposed inverse of O or else compose R and S^-1 (if those are known).
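
With GLM, the general case is a one-liner (a sketch; view and world stand for V and W):

#include <glm/glm.hpp>

// N = ( (mat3(V * W))^-1 )^T; handles non-uniform scaling.
// For a pure rotation this reduces to mat3(V * W) itself.
glm::mat3 normalMatrix(const glm::mat4& view, const glm::mat4& world)
{
    return glm::transpose(glm::inverse(glm::mat3(view * world)));
}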
Edited by haegarr
