Calculating the Final Vertex Position with MVP Matrix and another Object's Trans, Rotation, Scaling Calc...

11 comments, last by haegarr 9 years, 11 months ago

Hello,

So I figured out the MVP matrix thing and I get things on the screen all well and good.

I searched for this additional piece of information but I didn't find it as of yet.

The question now is that each object has its own rotation, translation, scaling, etc., separate from the MVP for the screen.

So I know I will need another uniform in my vertex shader code below, but how do I alter the gl_Position line to incorporate the "ObjectOffsetMatrix", which contains the translation, rotation, and scale?

Thank you for your time.


#version 140

#extension GL_ARB_explicit_attrib_location : require

in vec4 vPosition;
in vec3 vNormal;
in vec2 vUV;

out vec3 SurfaceNormal;

uniform mat4 ModelViewProjectionMatrix;
uniform mat4 ObjectOffsetMatrix;
uniform mat3 NormalMatrix;

void main () {
	SurfaceNormal = normalize(NormalMatrix * vNormal);
	gl_Position = ModelViewProjectionMatrix * vPosition;
}


The question now is that each object has its own rotation, translation, scaling, etc., separate from the MVP for the screen.

The abbreviation MVP means a matrix composed of a model's local-to-world transform (M), the world-to-view transform (V, also known as the inverse camera transform), and the perspective transform (P). So there is no need for another model transform, because there is already one there (namely M).

What should ObjectOffsetMatrix bring in that is not already contained in M?


The question now is that each object has its own rotation, translation, scaling, etc., separate from the MVP for the screen.

The abbreviation MVP means a matrix composed of a model's local-to-world transform (M), the world-to-view transform (V, also known as the inverse camera transform), and the perspective transform (P). So there is no need for another model transform, because there is already one there (namely M).

What should ObjectOffsetMatrix bring in that is not already contained in M?

Thanks for the information.

What is happening is that I import several mesh buffers, and each has its own rotation, scale, and translation relative to the scene, which is represented by the MVP matrix.

This is stored in a separate matrix. That's why I am calling it the Object Offset Matrix.

So, what I imagine I need to do, and what I am asking for help with, is to incorporate that offset into the equation until my C++ code updates the uniform again for a different mesh with a new offset relative to the scene's MVP matrix.

Does that make sense?

One MVP matrix, different objects with their own offsets against what is in the scene.

Do you speak about sub-meshes, each with its own placement relative to the model, where all sub-meshes together define the model? Or do you speak about correcting meshes that do not fit your scene's defined purposes? What does "relative to the scene" mean exactly? "Scene" usually means the entire composition of models and the camera in the world...

You could do ViewProjectionMatrix * ModelMatrix * vPosition if that's what you mean? Your "ObjectOffsetMatrix" is probably the same thing as ModelMatrix. There can't be just one MVP matrix, because that M should be different for each model. I'd calculate the final matrix on CPU side, because then the GPU wouldn't need to do the matrix multiplication for each vertex. But of course, if you need to use the ModelMatrix in your shader code, send it too.

Derp

Do you speak about sub-meshes, each with its own placement relative to the model, where all sub-meshes together define the model? Or do you speak about correcting meshes that do not fit your scene's defined purposes? What does "relative to the scene" mean exactly? "Scene" usually means the entire composition of models and the camera in the world...

Do you speak about sub-meshes, each with its own placement relative to the model, where all sub-meshes together define the model?

Yes, perhaps that was the confusion. Each sub-mesh has its own displacement, rotation, and scale, but together, of course, I want to render the entire model/scene.

Or do you speak about correction of meshes that do not fit your scene defined purposes?

No, see above.

What does "relative to the scene" mean exactly?

Relative to the scene means: I have five boxes, each with its own set of coordinates coming in from a file. Box 1 is scaled twice as big and placed in front of the camera, Box 2 is scaled normally but sitting to the left of the camera, Box 3 is sitting behind the camera and rotated 45 degrees, etc.

Each box has its own group of vertices coming from the file I am importing, as well as its own rotation, scaling, and translation.

All of this is making up one scene.

"Scene" usually means the entire composition of models and the camera in the world...

I agree, so what would be the correct way of rendering the MVP matrix incorporating what I have described above?

What I have difficulty understanding is how to calculate and incorporate the Normal Matrix, etc. into the final set of uniforms to be passed to the shaders.

Thank you for your time.

You could do ViewProjectionMatrix * ModelMatrix * vPosition if that's what you mean? Your "ObjectOffsetMatrix" is probably the same thing as ModelMatrix. There can't be just one MVP matrix, because that M should be different for each model. I'd calculate the final matrix on CPU side, because then the GPU wouldn't need to do the matrix multiplication for each vertex. But of course, if you need to use the ModelMatrix in your shader code, send it too.

Hmmm, I will try this. Seems like it would work.

I was calculating the normal matrix against the entire MVP matrix.

Would I calculate this matrix instead against the ViewProjection matrix?

Even with uniform scaling, you can usually use the upper-left 3x3 of the modelview matrix:

vec3 norm = mat3(modelView) * in_normal.xyz;

If you have uniform scaling or other length-changing effects, normalize it:

normalize(norm);

(Non-uniform scaling additionally requires the inverse-transpose; see the normal_matrix calculation below.)

By the way, assuming the modelview matrix is really just the rotation of your camera, you can use it to transform any vector from the regular coordinate system into eye space.

We can call the mat3() part the "camera rotation matrix." The fact that you are using it with a normal is irrelevant. :)

Note that multiplying a vector by a pure rotation (or an orthonormal normal matrix) retains its length, so it will still be normalized.

This is how I would do this stuff usually:

mat4 projection = camera->projection();
mat4 view = camera->view(); // should be same as inverse(camera->transform());
mat4 model = model->transform();

mat4 vp = projection * view;
mat4 mvp = vp * model;
mat4 mv = view * model;

mat3 normal_matrix = transpose(inverse(mat3(mv)));

For the child objects, you just multiply by the child's transform, so it goes like this:


mat4 mvp = vp * model0 * model1 * model2;
mat4 mv = view * model0 * model1 * model2;

Derp

This is how I would do this stuff usually:

mat4 projection = camera->projection();
mat4 view = camera->view(); // should be same as inverse(camera->transform());
mat4 model = model->transform();

mat4 vp = projection * view;
mat4 mvp = vp * model;
mat4 mv = view * model;

mat3 normal_matrix = transpose(inverse(mat3(mv)));

For the child objects, you just multiply by the child's transform, so it goes like this:


mat4 mvp = vp * model0 * model1 * model2;
mat4 mv = view * model0 * model1 * model2;

Thank you! I will give this a shot.

This topic is closed to new replies.
