Rendering MD3 in Modern OpenGL

9 comments, last by ongamex92 8 years, 5 months ago

Does anyone have a nice tutorial on this? How to load and display animated MD3 in Modern OpenGL?


Just use ASSIMP https://github.com/assimp/assimp?
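
For reference, a minimal Assimp load might look something like the sketch below (the path, flags, and error handling are just placeholders, and whether Assimp exposes MD3's per-frame vertex animation the way you need is worth checking for your use case):

```cpp
// Minimal sketch: loading an MD3 (or any supported format) with Assimp.
#include <assimp/Importer.hpp>
#include <assimp/scene.h>
#include <assimp/postprocess.h>
#include <cstdio>

const aiScene* LoadModel(Assimp::Importer& importer, const char* path)
{
    // The returned scene is owned by the importer and stays valid
    // for as long as the importer does.
    const aiScene* scene = importer.ReadFile(
        path, aiProcess_Triangulate | aiProcess_GenSmoothNormals);
    if (!scene || !scene->mRootNode)
    {
        std::printf("Assimp error: %s\n", importer.GetErrorString());
        return nullptr;
    }
    return scene; // meshes live in scene->mMeshes[0 .. mNumMeshes-1]
}
```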

Perhaps "renderergl2" would be good to look at, a renderer for ioq3 using (relatively) modern OpenGL

Engine, graphics, low-level, security and cross-platform programming. xsngine

An interesting idea could be to store all the vertex positions for your key frames in a texture, with one row holding all the positions for a single key frame and each subsequent row holding the next key frame. When you come to draw your animated frame, you set a single float uniform for which frame to use, then sample your vertex data from the texture using that float as the y coordinate and gl_VertexID as the x coordinate. That way you get your frame interpolation for free through bilinear texture filtering.
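
In GLSL that could look something like the sketch below (uniform names like uFrames, uFrame, uFrameCount and uVertexCount are invented for illustration; the texture is assumed to hold one vertex per texel and one key frame per row):

```cpp
// Vertex shader for the "key frames in a texture" idea, kept as a C++
// string constant. Bilinear filtering between rows does the blending.
const char* kKeyframeVS = R"GLSL(
#version 330 core

uniform sampler2D uFrames;      // row = key frame, column = vertex index
uniform float uFrame;           // fractional frame, e.g. 3.25
uniform float uFrameCount;
uniform float uVertexCount;
uniform mat4  uMVP;

void main()
{
    // Sample at the texel centre in x, and between row centres in y so
    // that the filter blends the two neighbouring key frames.
    vec2 uv = vec2((float(gl_VertexID) + 0.5) / uVertexCount,
                   (uFrame            + 0.5) / uFrameCount);
    vec3 position = texture(uFrames, uv).xyz;
    gl_Position = uMVP * vec4(position, 1.0);
}
)GLSL";
```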

The fun part is writing the MD3 loader and the animation renderer. :)

An interesting idea could be to store all the vertex positions for your key frames in a texture, with one row holding all the positions for a single key frame and each subsequent row holding the next key frame. When you come to draw your animated frame, you set a single float uniform for which frame to use, then sample your vertex data from the texture using that float as the y coordinate and gl_VertexID as the x coordinate. That way you get your frame interpolation for free through bilinear texture filtering.

I'm not sure if I follow... you mean storing all the vertex positions for the key frames in a texture via glTexImage2D?

Xycaleth means to use the texture as a LUT/lookup table, where each "pixel" contains a vertex position, i.e. R, G, B correspond to X, Y, Z.
mrwonko pointed out that the MD3 format uses 16-bit integer coordinates, so the texture format should be GL_RGB / GL_SHORT. The maximum number of vertices per surface is 4096 (typically under 1k verts), which is about as large as you'd want to go at one texture per surface (a maximum of 32 surfaces, typically 1 or 2).

If you need any other information, it wouldn't be difficult to pack it further down in the texture and calculate the offset to it.
You can exploit bilinear filtering by sampling between texels in the vertex shader; the texture filtering will interpolate the values, hence smooth animation.
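
A rough sketch of building such a lookup texture, assuming a GL 3.x context, a loader header (glad, GLEW, ...) already included, and hypothetical numVerts/numFrames/xyz values taken from a loaded MD3 surface:

```cpp
#include <cstdint>

// One row per key frame, one texel per vertex, R/G/B = X/Y/Z.
// MD3 positions are signed 16-bit, so GL_RGB16_SNORM with GL_SHORT client
// data is used here; MD3's 1/64 fixed-point scale still has to be
// reapplied in the shader, since SNORM maps the shorts to [-1, 1].
GLuint CreateFrameTexture(const int16_t* xyz, int numVerts, int numFrames)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 2);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16_SNORM,
                 numVerts, numFrames, 0,
                 GL_RGB, GL_SHORT, xyz);
    // GL_LINEAR filtering is what gives the "free" frame interpolation.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    return tex;
}
```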

Engine, graphics, low-level, security and cross-platform programming. xsngine

The texture idea isn't going to work if it's blending between two non-consecutive frames, though.

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

True, it won't actually be blending animations. You could probably get fancy and construct a histogram for blending between sequences.

This is the point where you'd reconsider your decision to render MD3s in modern OpenGL, and write a tool to convert the format offline to something that complements modern OpenGL.

Engine, graphics, low-level, security and cross-platform programming. xsngine

True, it won't actually be blending animations. You could probably get fancy and construct a histogram for blending between sequences.

This is the point where you'd reconsider your decision to render MD3s in modern OpenGL, and write a tool to convert the format offline to something that complements modern OpenGL.

MD3 (and keyframe animation in general) is so easy in modern OpenGL that there's no need to even think about doing anything fancy. It's just one VBO and two glVertexAttribPointer calls - one for the previous frame, one for the current frame - then send the interpolation factor as a uniform and blend the frames in a single line of shader code. Do the same for normals, boom, done, next.
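
As a rough sketch of that setup (names are invented; the VBO is assumed to hold every frame's positions back to back as tightly packed floats, so frameSizeBytes = numVerts * 3 * sizeof(float)):

```cpp
#include <cstddef>

// Assumes a GL 3.x context and a loader header (glad, GLEW, ...) included.
// Two attribute pointers into the same VBO, offset to the previous and
// current key frames; the shader blends them with a single mix().
void BindKeyframes(GLuint vbo, int prevFrame, int currFrame,
                   std::size_t frameSizeBytes)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    // attribute 0: position in the previous key frame
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0,
                          (const void*)(prevFrame * frameSizeBytes));
    glEnableVertexAttribArray(0);

    // attribute 1: position in the current key frame
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0,
                          (const void*)(currFrame * frameSizeBytes));
    glEnableVertexAttribArray(1);
}

// The vertex shader side: the whole "animation system" is one mix().
const char* kLerpVS = R"GLSL(
#version 330 core
layout(location = 0) in vec3 aPrevPosition;
layout(location = 1) in vec3 aCurrPosition;
uniform float uLerp;   // 0..1 between the two key frames
uniform mat4  uMVP;
void main()
{
    gl_Position = uMVP * vec4(mix(aPrevPosition, aCurrPosition, uLerp), 1.0);
}
)GLSL";
```

Normals can go through the exact same path: two more attributes pointing at the previous and current frame's normals, blended with the same uLerp.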

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

