# OpenGL Vertex Frame Animation


## Recommended Posts

Hi, I've started programming the vertex frame animator for my engine, and so far it works with interpolation. My question is about the speed of that interpolation. I have roughly 20,000 vertices, and I'm doing linear interpolation on the vertex positions and their normals, e.g. `interpolated.pos = Frame1.pos + scale * (Frame2.pos - Frame1.pos)`.

Rendering a single frame of my .obj is fast. However, when I tell it to render an interpolated frame, it bottlenecks at the calculation of the new vertex positions/normals. I'm assuming this is because those calculations are done on the CPU. Are there any ways to speed this up, i.e. move the interpolation calculations to the video card? I'm using OpenGL, and I have a 2.4 GHz dual-core CPU, an NVIDIA card with 256 MB DDR3, and 4 GB of RAM.
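For reference, the CPU-side interpolation described above boils down to a tight loop over the vertex arrays. Here is a minimal C sketch (the function and parameter names are illustrative, not from the poster's engine):

```c
#include <stddef.h>

/* Linearly interpolate between two keyframes of vertex data.
 * src0 and src1 each hold x,y,z triples for vertex_count vertices;
 * dst receives the blended result. t is the blend factor in [0, 1].
 * The same loop works for normals (renormalize afterwards). */
static void lerp_vertices(const float *src0, const float *src1,
                          float *dst, size_t vertex_count, float t)
{
    for (size_t i = 0; i < vertex_count * 3; ++i)
        dst[i] = src0[i] + t * (src1[i] - src0[i]);
}
```

With 20,000 vertices this is 60,000 multiply-adds per attribute per frame, which is exactly the work the replies below suggest moving to the GPU.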

##### Share on other sites
Well, I can think of a way of doing it, though it might not be optimal. Use a vertex shader:
in the vertex format, include the vertex's position at frame n and at frame n+1, then interpolate between them in the vertex shader.
This means you'd have to upload a new model to the card for each keyframe, but that is probably still faster than interpolating on the CPU. The method could (maybe) be sped up by using textures to store the per-frame vertex offsets and sampling those textures in the vertex shader to get the current vertex positions.
Alternatively, you could speed up the CPU-side operation using SIMD instructions (SSE/2/3/4) and by spreading the work over more than one core of your multi-core CPU. Your algorithm is probably fairly easy to parallelise.
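To illustrate the SIMD suggestion, here is a sketch of the same interpolation done four floats at a time with SSE intrinsics (x86/x86-64 only; names are illustrative, and a real version would need a scalar tail for counts that aren't a multiple of four):

```c
#include <emmintrin.h>  /* SSE2 intrinsics, x86/x86-64 only */
#include <stddef.h>

/* Interpolate float_count floats (positions or normals), four at a
 * time. float_count must be a multiple of 4 in this sketch. */
static void lerp_floats_sse(const float *src0, const float *src1,
                            float *dst, size_t float_count, float t)
{
    __m128 vt = _mm_set1_ps(t);
    for (size_t i = 0; i < float_count; i += 4) {
        __m128 a = _mm_loadu_ps(src0 + i);
        __m128 b = _mm_loadu_ps(src1 + i);
        /* a + t * (b - a), four lanes at once */
        __m128 r = _mm_add_ps(a, _mm_mul_ps(vt, _mm_sub_ps(b, a)));
        _mm_storeu_ps(dst + i, r);
    }
}
```

Splitting the vertex range across threads on top of this is straightforward, since each output float depends only on the matching inputs.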

##### Share on other sites

Right now, at every in-between frame, I'm creating a new Model that reuses the same texture/material information and only creates new vertex positions and normals. This follows your idea of creating a new model every frame. At first it didn't seem feasible, but it's my solution at the moment; it works, but it's incredibly slow.

I was thinking of working around this by reducing the size of the arrays in my models, but because I'm using glDrawElements I need to store new normals and new texture coordinates, which means redundancy. The upside of glDrawElements is faster rendering, but the downside is that I have to recalculate the interpolated vertices.

I was thinking of another approach where I store only the information pertaining to the vertices that changed. But at times all the vertices change, so it isn't that big of a speed-up.

##### Share on other sites
Sorry, I did not mean create a new model for every displayed frame, but a model for every keyframe (the frames you are interpolating between), and then interpolate between those in a vertex shader. This kind of animation (interpolating entire meshes) is never really going to be optimal; you are better off composing animations from a fixed set of blend shapes, or using some kind of skinning technique.
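For what the blend-shape suggestion means in practice: instead of whole keyframe meshes, you store one neutral base mesh plus per-shape offset arrays, and compose a pose as a weighted sum. A minimal CPU-side sketch (illustrative names, not a full system):

```c
#include <stddef.h>

/* Compose a pose from a neutral base mesh plus weighted blend-shape
 * deltas: out[i] = base[i] + sum over s of weights[s] * deltas[s][i].
 * Each delta array stores per-float offsets from the base mesh. */
static void apply_blend_shapes(const float *base,
                               const float *const *deltas,
                               const float *weights,
                               size_t shape_count,
                               float *out, size_t float_count)
{
    for (size_t i = 0; i < float_count; ++i) {
        float v = base[i];
        for (size_t s = 0; s < shape_count; ++s)
            v += weights[s] * deltas[s][i];
        out[i] = v;
    }
}
```

The win is that a handful of shapes can describe many poses, so far less data moves per frame than with one full mesh per keyframe.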

##### Share on other sites
The simplest way to do this would be to use a vertex shader to let the GPU do the interpolating for you. Then you just pipe through the key frame data and scale factor and each vertex can be blended just before rendering.

In fact here's a vertex shader in GLSL that I use in my project:
```glsl
uniform float blend;

varying vec3 normal;
varying vec3 light;

vec3 mylerp(vec3 a, vec3 b, float blend)
{
    return b * blend + a * (1.0 - blend);
}

void main()
{
    vec3 vertex1 = gl_Vertex.xyz;
    vec3 vertex2 = gl_MultiTexCoord1.xyz;
    vec3 normal1 = gl_Normal.xyz;
    vec3 normal2 = gl_MultiTexCoord2.xyz;

    vec4 vertex = vec4(mylerp(vertex1, vertex2, blend), 1.0);
    normal = normalize(vec4(mylerp(normal1, normal2, blend), 1.0)).xyz;
    light  = gl_NormalMatrix * normalize(vec3(0.4, 0.4, 1.0));

    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = gl_ModelViewProjectionMatrix * vertex;
}
```

The first key frame is sent through as normal, whilst the second key frame makes use of the other texture coordinate channels.

As mentioned it would be more efficient to use skinning methods as less data has to be pushed through to the GPU. Letting the GPU do the interpolation will definitely be the best course of action though if you want to keep things simple.

If you are going to use this method, I would also suggest using vertex buffer objects (VBOs), which store the vertex data in the card's memory. If you are already using vertex arrays it should be trivial to switch, and almost all cards support this extension now. With VBOs, the vertex data does not need to be transferred across the memory bus every frame.

Hope that helps
James

##### Share on other sites
Looks like I've got some homework to do with vertex shaders =).

I started my project in VC++ 6.0, and I've since ported it to Java using LWJGL because I wanted a program that's compatible with Wiimotes and cross-platform. So I have to look into whether LWJGL has such capabilities, but I'm guessing off the top of my head that it doesn't, and that I'd have to go back to .NET and learn DirectX.

This is no big deal but a similar question.

Systems such as the Nintendo DS, PlayStation 1, and Nintendo 64 don't have vertex shaders (I'm guessing). How do they do their interpolation? Or is there no interpolation involved, and is everything on those systems strictly keyframed?
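On hardware with no programmable vertex stage, interpolation like this is generally done on the CPU (or on dedicated geometry hardware), and on machines without an FPU it would be done in integer fixed-point math. As an illustration only (not how any specific console actually implemented it), here is a 16.16 fixed-point lerp:

```c
#include <stdint.h>

/* 16.16 fixed-point linear interpolation: the kind of integer-only
 * math a CPU without an FPU could use for keyframe blending.
 * t is the blend factor in 16.16 format (0..65536 maps to 0.0..1.0).
 * The intermediate product is kept in 64 bits to avoid overflow. */
static int32_t fixed_lerp(int32_t a, int32_t b, int32_t t)
{
    /* a + t * (b - a) */
    return a + (int32_t)(((int64_t)t * (b - a)) >> 16);
}
```

For example, blending halfway (t = 32768) between 10.0 and 20.0 in 16.16 format yields 15.0.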