EternityZA

Morph targets.


Hi.

I want to add support for morph targets to my engine, and I'm looking for some pointers on how I should pack the data into my VBOs, how I should set up vertex attributes, etc.

A link to a good resource or a quick high level explanation would be very much appreciated!

Thanks in advance!

Okay, so there are generally two (actually three) basic ways to do it:
1) Use the CPU to perform the morphing and upload the blended geometry to the GPU through a VBO every frame (quite resource-hungry, but if you need the geometry in RAM as well as on the GPU, it is the best way)
2) Use the GPU via vertex shaders
3) Use the GPU via OpenCL

I'll try to describe 1 and 2.

1.) What you actually do in your code:

Let CModel be a class containing mNumVertices (the number of vertices in the model) and mVertices (the actual vertices of the model, using a CVector3 class with x, y and z float members). And let g_mInterp be a value between 0.0 and 1.0 holding the morph phase between CModel1 and CModel2. Pseudo-code:


// You have to load 2 models (they must have the same number of vertices, and each vertex in CModel1.mVertices has to have its morph target at the same index in CModel2.mVertices)

// ... During initialization ...
assert(CModel1.mNumVertices == CModel2.mNumVertices); // Check that we have the same number of vertices on both sides
CModel ModelResult;
ModelResult.mNumVertices = CModel1.mNumVertices;
ModelResult.mVertices = new CVector3[ModelResult.mNumVertices];

// ... During rendering loop ...
for(int i = 0; i < ModelResult.mNumVertices; ++i)
{
    ModelResult.mVertices[i] = CModel1.mVertices[i] * (1.0f - g_mInterp) + CModel2.mVertices[i] * g_mInterp;
}

// Now you have the morph result stored in ModelResult; you just need to render it (you can create a VBO from its vertices and use glDrawArrays, for example)


Okay, but this is quite a waste of resources - because if you're only going to use the result for rendering on the GPU, you can do most of the work in the vertex shader...


2.) Doing it all in the vertex shader is quite straightforward.

Let all the variables stay the same, except that our CModel class now also contains mVbo (an unsigned integer), which is the VBO holding our vertices (strictly speaking it is the ID of a VBO living in VRAM on the GPU - but well...)


// ... During initialization ...
assert(CModel1.mNumVertices == CModel2.mNumVertices); // Check that we have the same number of vertices on both sides

// Load the shader, and don't forget to set up these attributes for it (before linking the program)
glBindAttribLocation(ShaderProgram, 0, "Model1_Vertex");
glBindAttribLocation(ShaderProgram, 1, "Model2_Vertex");

// ... During rendering ...
// Turn on your shader (glUseProgram)
glUniform1f(glGetUniformLocation(ShaderProgram, "Interp"), g_mInterp);

glBindBuffer(GL_ARRAY_BUFFER, CModel1.mVbo);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);

glBindBuffer(GL_ARRAY_BUFFER, CModel2.mVbo);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(1);

// Render (e.g. using glDrawArrays)
glDrawArrays(GL_TRIANGLES, 0, CModel1.mNumVertices);

glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);

// Turn off your shader



But we're still not done - we need the shader source to actually perform the morphing (I'll post just the vertex shader).



// Don't forget the GLSL version directive and whatever other boilerplate your shader needs (it won't be as short as mine)
in vec3 Model1_Vertex;
in vec3 Model2_Vertex;

uniform float Interp;

void main()
{
// This could also be done with the built-in mix() function, but written out this way the code mirrors what was done on the CPU
vec3 Morphed_Vert = Model1_Vertex * (1.0 - Interp) + Model2_Vertex * Interp;

gl_Position = gl_ModelViewProjectionMatrix * vec4(Morphed_Vert, 1.0);
}



I hope everything is easy to understand... if not, feel free to ask.

Btw, I actually wanted to write a morph-target library a long time ago, and I'm really thinking about it again... thanks :D

Thanks.

I just have one concern. I see that you bind two sets of vertex coords and morph between them. I have a detailed character model with 31 facial expressions (each one a morph target). Should I create 31 VBOs + vertex attributes? And what if I later have a model with more than 31? Also, each facial-expression morph target only affects a small fraction of the total model, so storing all the vertex coords of the detailed character model 31 times seems a bit bad.

I was thinking of just separating the character's face from the rest of its body so that the rest of the body doesn't get replicated unnecessarily, but I'm not sure if this is the best way to do it. And I'd still be using 31 VBOs, imposing a hard limit of 31 morph targets per model...

Actually, there is no game or engine that blends between, e.g., 31 morph targets at once. They ignore the targets that have zero weight and blend only those that have some weight in the interpolation (I showed a simple example with just a linear interpolation between two targets). So you might have 31 VBOs in RAM and 31 corresponding morph weights (i.e. how much each one affects the geometry) - you select just the 4 with the highest effect (actually I think most games use just 2 morph targets at once) and ignore the others. You lose something, and I'm not sure you'd even be able to bind 31 VBOs' worth of attributes at once; considering performance, it is probably better to stick with just 4 active morph targets at a time.

That said, if you actually need high-precision morphing (i.e. including even the targets with little or almost no effect), it might be better to perform CPU-based morphing (although I presume you're developing a game or demo, so you probably won't need that).

To sum up: you optimize by using only the morph targets that have a significant effect (the fewer there are, the better the performance).



You can only bind one VBO to GL_ARRAY_BUFFER at a time (each glVertexAttribPointer call captures whatever buffer is bound when it is made). My suggestion is to have one big VBO: put the texture coords first, then each morph target's data consecutively after that, in the same VBO.

Then when you draw, you just bind that one big VBO, and when you set the attributes you simply change the pointer offsets to wherever each morph target's data starts.

OK, that makes sense. I'll use the 4 morphs that have the most significant impact on the model (the highest interpolation values). One last thing about storing all this data in VBOs: if the model I load has 31 morph targets, it would still mean I need to store the coords of all the vertices 31 times, even if only a small fraction of the vertices are affected by the morphs. Also, since I pack more than one model into a VBO, it would mean I'd replicate the coords even of a model that doesn't have any morphs - unless I always make sure that a big model with lots of morphs gets its own VBO. Or is there a better way? I guess I could even take all the vertices that have morph targets and place them in their own VBO, to ensure that no coords get replicated unnecessarily. Does this make sense?


Also, since I pack more than one model into a VBO, it would mean I'd replicate the coords even of a model that doesn't have any morphs.

So don't do that...


I guess I could even take all the vertices that have morph targets and place them in their own VBO, to ensure that no coords get replicated unnecessarily. Does this make sense?


Yes, it does...

I would probably say no to separating the morphed verts from the unmorphed ones - it depends on how many verts we are talking about. Do you really want to switch your normal, vertex and texcoord pointers and your shader over to the non-morphed portion of the model and issue another draw call? That's 5 or more OpenGL calls you will have to make. Unless the model is a million verts and most are unaffected, I would probably just run the extra morph and shader on the static vertices too. If your static verts number around 1-2,000, you're very close to (or better off) just drawing them rather than making those 5 GL calls.


I would probably say no to separating the morphed verts from the unmorphed ones - it depends on how many verts we are talking about. Do you really want to switch your normal, vertex and texcoord pointers and your shader over to the non-morphed portion of the model and issue another draw call? That's 5 or more OpenGL calls you will have to make. Unless the model is a million verts and most are unaffected, I would probably just run the extra morph and shader on the static vertices too. If your static verts number around 1-2,000, you're very close to (or better off) just drawing them rather than making those 5 GL calls.


That's 5 calls to make sure you aren't just filling your GPU memory with 50+ copies of exactly the same data.
That's 5 more calls to alleviate a potential memory bottleneck on low-end GPUs (e.g. laptops/netbooks), in cases where they use system memory.
That's 5 calls that will give a significant improvement in how you are utilising the GPU and its RAM.

Remember: always optimise for memory before you optimise for computational performance. If a memory read always had zero overhead, you might have a point; since that's not the case, it's best not to make that assumption.
