chibitotoro0_0

OpenGL Vertex Frame Animation


Recommended Posts

Hi, I've started programming the vertex frame animator for my engine, and so far it works with interpolation. My question is about the speed of the interpolation. I have roughly 20,000 vertices, and I'm doing linear interpolation on the vertex positions and their normals, e.g.

Frame1.pos = Frame1.pos + scale(Frame2.pos - Frame1.pos);

Rendering a single frame of my .obj is fast. However, when I tell it to render an interpolated frame, it bottlenecks at the calculation of the new vertex positions/normals. I'm assuming this is because those calculations are done on the CPU. Are there any ways to speed this up, i.e. move the interpolation calculations onto the video card? I'm using OpenGL on a 2.4 GHz dual-core CPU with 4 GB RAM and an NVIDIA card with 256 MB of DDR3 memory.
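For reference, the CPU-side loop being described looks roughly like this (a minimal sketch with illustrative names, not the poster's actual code):

```cpp
#include <cstddef>
#include <vector>

// Plain position layout; the same loop would run again for normals.
struct Vec3 { float x, y, z; };

// Linearly interpolate every vertex of frameA towards frameB by t (0..1).
// With ~20,000 vertices this runs for every displayed frame, which is
// where the CPU bottleneck appears.
std::vector<Vec3> lerpFrames(const std::vector<Vec3>& frameA,
                             const std::vector<Vec3>& frameB,
                             float t) {
    std::vector<Vec3> out(frameA.size());
    for (std::size_t i = 0; i < frameA.size(); ++i) {
        out[i].x = frameA[i].x + t * (frameB[i].x - frameA[i].x);
        out[i].y = frameA[i].y + t * (frameB[i].y - frameA[i].y);
        out[i].z = frameA[i].z + t * (frameB[i].z - frameA[i].z);
    }
    return out;
}
```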

Well, I can think of a way of doing it, though it might not be optimal. Use a vertex shader:
in the vertex format, store the vertex's position at frame n and at frame n+1, then interpolate between them in the vertex shader.
This would mean uploading a new model to the card for each frame, but that is probably still faster than interpolating on the CPU. This method could (maybe) be sped up by using textures to store the per-frame vertex offsets and sampling those texture(s) in the vertex shader to get the current vertex positions.
Alternatively, you could speed up the CPU-side operation using SIMD instructions (SSE/2/3/4) and by spreading the work over more than one core of your multi-core CPU. Your algorithm is probably fairly easy to parallelise.
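A hedged sketch of the SIMD suggestion, assuming positions and normals are flattened into contiguous float arrays and SSE intrinsics are available (names are illustrative):

```cpp
#include <cstddef>
#include <xmmintrin.h>  // SSE intrinsics

// Interpolate `count` floats four at a time: out = a + t * (b - a).
// `count` is assumed to be a multiple of 4 here; a real version would
// handle the remaining tail elements with scalar code.
void lerpSSE(const float* a, const float* b, float* out,
             std::size_t count, float t) {
    __m128 vt = _mm_set1_ps(t);
    for (std::size_t i = 0; i < count; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        __m128 d  = _mm_sub_ps(vb, va);
        _mm_storeu_ps(out + i, _mm_add_ps(va, _mm_mul_ps(vt, d)));
    }
}
```

The same loop also splits cleanly across threads, since each vertex is independent of the others.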

Thanks for the reply!

Right now, at every in-between frame, I'm creating a new model that reuses the same texture/material information and only creates new vertex positions and normals, which follows your idea of creating a new model at every frame. At first this didn't seem feasible, but it's my solution at the moment: it works, but it's incredibly slow.

I was thinking of working around this by reducing the size of the arrays in my models, but because I'm using glDrawElements, I need to store new normals and new texture coordinates, which means redundancy. The upside of glDrawElements is faster rendering; the downside is that I still have to recalculate the interpolated vertices.

I was also thinking of storing only the information pertaining to the vertices that changed, but at times all vertices change, so that isn't much of a speed-up.

Sorry, I didn't mean create a new model for every displayed frame, but a model for every keyframe (the frames you are interpolating between), and then interpolate between those in a vertex shader. This kind of animation (interpolating entire meshes) is never really going to be optimal; you are better off composing animations from a fixed set of blend shapes, or using some kind of skinning technique.
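For context, blend-shape composition amounts to adding weighted per-vertex offsets onto a neutral base mesh; a minimal sketch (illustrative names, flattened xyz arrays):

```cpp
#include <cstddef>
#include <vector>

// Each blend shape stores per-vertex offsets from the base mesh; the
// final vertex stream is base + sum(weight_i * offset_i). A handful of
// shapes and weights replaces per-frame whole-mesh interpolation.
std::vector<float> applyBlendShapes(
        const std::vector<float>& base,                       // flattened xyz
        const std::vector<std::vector<float>>& shapeOffsets,  // one per shape
        const std::vector<float>& weights) {
    std::vector<float> out = base;
    for (std::size_t s = 0; s < shapeOffsets.size(); ++s)
        for (std::size_t i = 0; i < out.size(); ++i)
            out[i] += weights[s] * shapeOffsets[s][i];
    return out;
}
```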



The simplest way to do this would be to use a vertex shader and let the GPU do the interpolating for you. Then you just pipe through the keyframe data and a scale factor, and each vertex is blended just before rendering.

In fact here's a vertex shader in GLSL that I use in my project:

uniform float blend;

varying vec3 normal;
varying vec3 light;

vec3 mylerp(vec3 a, vec3 b, float blend) {
    return b * blend + a * (1.0 - blend);
}

void main()
{
    vec3 vertex1 = gl_Vertex.xyz;
    vec3 vertex2 = gl_MultiTexCoord1.xyz;
    vec3 normal1 = gl_Normal.xyz;
    vec3 normal2 = gl_MultiTexCoord2.xyz;

    vec4 vertex = vec4(mylerp(vertex1, vertex2, blend), 1.0);
    normal = gl_NormalMatrix * normalize(mylerp(normal1, normal2, blend));

    light = gl_NormalMatrix * normalize(vec3(0.4, 0.4, 1.0));

    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = gl_ModelViewProjectionMatrix * vertex;
}



The first key frame is sent through as normal, whilst the second key frame makes use of the other texture coordinate channels.
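The `blend` uniform itself is cheap to compute on the CPU each frame; one possible sketch (assumed names, evenly spaced looping keyframes) of mapping an animation time to a keyframe pair and blend factor:

```cpp
#include <cmath>

// Which pair of keyframes to bind, and the blend factor to upload as
// the shader's `blend` uniform.
struct BlendState { int frameA; int frameB; float blend; };

// Assumes keyframes are spaced `frameDuration` seconds apart and the
// animation loops back to frame 0 after the last keyframe.
BlendState blendAt(double time, int frameCount, double frameDuration) {
    double f = std::fmod(time / frameDuration, (double)frameCount);
    int a = (int)f;
    return { a, (a + 1) % frameCount, (float)(f - a) };
}
```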

As mentioned, it would be more efficient to use a skinning method, since less data has to be pushed to the GPU. Letting the GPU do the interpolation is definitely the best course of action, though, if you want to keep things simple.

If you are going to use this method, I would also suggest using vertex buffer objects (VBOs), which store the vertex data in the card's memory. If you are already using vertex arrays it should be trivial to switch, and almost all cards support this extension now. With VBOs, the vertex data does not need to be transferred across the memory bus every frame.

Hope that helps
James

Looks like I've got some homework to do with vertex shaders =).

I started my project in VC++ 6.0 and have since ported it to Java using LWJGL, because I wanted to make a program that is cross-platform and compatible with Wiimotes. So I have to look into whether LWJGL has such capabilities, but off the top of my head I'm guessing it doesn't, and I'd have to resort to going back to .NET and learning DirectX.

This is no big deal, but here's a related question.

Systems such as the Nintendo DS, PlayStation 1, and Nintendo 64 don't have vertex shaders (I'm guessing). How do they do their interpolation? Or is there no interpolation involved, and is everything on those systems strictly keyframed?

I think that the platforms you mentioned all used forms of matrix palette skinning, where each bone in the skeleton has a matrix to transform parts of the model. This method doesn't require shaders and is generally more efficient.
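As a rough illustration of the idea (a simplified CPU sketch with made-up types, not any particular console's API): each vertex stores bone indices and weights, and its skinned position is the weighted sum of its bind-pose position transformed by each bone's palette matrix.

```cpp
#include <cstddef>

// A 3x4 affine bone matrix (rotation + translation), row-major.
struct BoneMatrix { float m[3][4]; };

struct Vec3 { float x, y, z; };

// Transform a point by one bone matrix.
static Vec3 transform(const BoneMatrix& b, Vec3 p) {
    return {
        b.m[0][0]*p.x + b.m[0][1]*p.y + b.m[0][2]*p.z + b.m[0][3],
        b.m[1][0]*p.x + b.m[1][1]*p.y + b.m[1][2]*p.z + b.m[1][3],
        b.m[2][0]*p.x + b.m[2][1]*p.y + b.m[2][2]*p.z + b.m[2][3],
    };
}

// Skin one vertex against the palette: the weighted sum of its
// bind-pose position transformed by each influencing bone.
Vec3 skinVertex(const Vec3& bindPos,
                const BoneMatrix* palette,
                const int* boneIndex, const float* weight,
                std::size_t influences) {
    Vec3 out{0.0f, 0.0f, 0.0f};
    for (std::size_t i = 0; i < influences; ++i) {
        Vec3 t = transform(palette[boneIndex[i]], bindPos);
        out.x += weight[i] * t.x;
        out.y += weight[i] * t.y;
        out.z += weight[i] * t.z;
    }
    return out;
}
```

Only the handful of bone matrices changes per frame, so animation becomes a matrix update rather than a whole-mesh rewrite.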
