But it doesn't quite look the same. Besides, in the Planetside 2 pics you can see the tinting doesn't always follow the same direction (sometimes it's reddish to blueish from right to left, sometimes from top to bottom, and so on).
The wireframe is the actual geometry. I bake the bind pose onto the vertices. The garbled polygons and pink pixels (NaNs) are the result of my awesome skeletal animation. Those garbled polygons should resemble the humanoid walking, but they kinda don't resemble that... at all.
My understanding is that each joint can be represented with a rotation matrix, and that by multiplying matrices along the hierarchy of joints you get the final position.
What I do is keep a two-dimensional array of Mat3f: the first index selects the joint, the second indexes into a specific frame of that joint. The idea is to lerp between frames, according to the animation timer and current delta, and multiply the resulting interpolated matrix by the parent bone's matrix (which should already be an interpolated matrix of the parent joint's current frames).
Now pardon my Java but this is my code to compute each joint:
protected void processEntities ( ImmutableIntBag entities )
{
	final float currDelta = (float) ((GameWorld) world).deltaSeconds;
	final int esize = entities.size();
	final int[] eids = ((IntBag) entities).data();
	final GeometryAnimation[] anims = animations.data();
	final Mat3f tmp = OpsCmp.TMP_MATS_3f.get().next();

	for ( int i = 0; i < esize; ++i )
	{
		// Entity ID.
		final int eid = eids[i];
		// Animation component.
		final GeometryAnimation anim = anims[eid];
		// Advance animation timer.
		anim.currentTime += currDelta;
		// Loop the animation.
		anim.currentTime %= anim.totalTime;
		// Joint-tracks arrays.
		final Mat3f[][] tracks = anim.tracks;
		// Current computed joint results.
		final Mat3f[] currJoints = anim.currentJoints;
		final int jointCount = anim.parentIndices.length;
		final int maxFrame = anim.frames - 1;
		// Current frame time.
		final float currtFrame = clamp( 0.0f, 1.0f, anim.currentTime / anim.totalTime );
		// Lerp factor between frames.
		final float lerpFactor = OpsMath.frac( anim.currentTime * (anim.frames / anim.totalTime) );
		// Previous (or current) track index.
		final int prevTracki = round( currtFrame * maxFrame );
		// Next track index.
		final int nextTracki = Math.min( prevTracki + 1, maxFrame );

		// jointi -> joint index.
		for ( int jointi = 0; jointi < jointCount; ++jointi )
		{
			final Mat3f prevFrame = tracks[jointi][prevTracki];
			final Mat3f nextFrame = tracks[jointi][nextTracki];
			// Lerp between the current and next frame, store in 'tmp'.
			tmp.lerp( prevFrame, nextFrame, lerpFactor );
			// Fetch the joint where to store the result.
			final Mat3f currJoint = currJoints[jointi];
			// Fetch the index of the parent of the current joint.
			final int parenti = anim.parentIndices[jointi];

			if ( parenti < 0 )
			{
				// If index is negative, there is no parent, just copy the result.
				currJoint.copyFrom( tmp );
			}
			else
			{
				// Multiply 'tmp' result with the parent joint.
				currJoint.mul( currJoints[parenti], tmp );
			}
		}
	}
}
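To make the frame-index math above easier to eyeball in isolation, here's the same computation as a standalone snippet (hypothetical class and helper names, plain values instead of my components):

```java
// Standalone check of the frame-index / lerp-factor math used above.
// 'FrameIndexCheck', 'clamp' and 'frac' are hypothetical stand-ins for
// the helpers in my codebase.
public class FrameIndexCheck
{
	public static float clamp ( float min, float max, float v )
	{
		return Math.max( min, Math.min( max, v ) );
	}

	public static float frac ( float v )
	{
		return v - (float) Math.floor( v );
	}

	public static void main ( String[] args )
	{
		final float totalTime = 1.0f;
		final int frames = 4;
		final int maxFrame = frames - 1;
		final float currentTime = 0.3f;

		// Same expressions as in processEntities.
		final float currtFrame = clamp( 0.0f, 1.0f, currentTime / totalTime );
		final float lerpFactor = frac( currentTime * (frames / totalTime) );
		final int prevTracki = Math.round( currtFrame * maxFrame );
		final int nextTracki = Math.min( prevTracki + 1, maxFrame );

		System.out.println( prevTracki + " -> " + nextTracki + ", lerp " + lerpFactor );
	}
}
```

At currentTime 0.3 of a 1-second, 4-frame clip this picks tracks 1 and 2 with a lerp factor around 0.2; note the index math uses maxFrame (frames - 1) while the lerp factor uses frames, so the two are computed on slightly different scales.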
Aaand this is the vertex shader:
// Input parameters.
layout ( location = 4 ) in vec4 inWeights;
layout ( location = 5 ) in uvec4 inJoints;
// Output parameters.
out vec2 passTexCoord;
out vec3 passNormal;
out vec3 passViewPos;
// Instance index.
uniform int instancei;

void main ( void )
{
	vec3 trnsPosition = vec3( 0.0 );
	vec3 trnsNormal = vec3( 0.0 );
	vec4 weights = inWeights;
	uvec4 indices = inJoints;
	// Testing with one model here, will be 0.
	uint uinsti = uint( instancei );

	for ( int i = 0; i < 4; ++i )
	{
		// Index into joints UBO, and fetch the joint.
		mat3 joint = joints[uinsti + indices.x];
		trnsPosition += joint * inPosition * weights.x;
		trnsNormal += joint * inNormal * weights.x;
		// Shift indices and weights.
		weights = weights.yzwx;
		indices = indices.yzwx;
	}

	// Fetch this model's transforms from the UBO.
	mat4 mvp = trns[instancei].mvp;
	mat3 mv = mat3( trns[instancei].mv );

	passTexCoord = inTexCoord;
	// Normal in view space.
	passNormal = normalize( mv * trnsNormal );
	// Position in view space.
	passViewPos = mv * inPosition;
	// Projected position.
	gl_Position = mvp * vec4( trnsPosition, 1.0 );
}
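Since the joints live in a UBO as mat3, I pack them with vec4-aligned columns before uploading: under std140, each mat3 column is padded to 16 bytes, so one mat3 occupies 12 floats, not 9. A minimal sketch of that packing (hypothetical class name, not my actual upload code):

```java
// Packs a column-major 3x3 matrix into the std140 layout a mat3 gets in
// a uniform block: three columns, each padded to vec4 alignment.
public class Std140Mat3Packer
{
	// 'm' holds 9 floats, column-major. Returns 12 floats, the last
	// component of each column left as padding (zero).
	public static float[] packMat3 ( float[] m )
	{
		final float[] out = new float[12];
		for ( int col = 0; col < 3; ++col )
		{
			out[col * 4 + 0] = m[col * 3 + 0];
			out[col * 4 + 1] = m[col * 3 + 1];
			out[col * 4 + 2] = m[col * 3 + 2];
			// out[col * 4 + 3] stays 0: std140 padding.
		}
		return out;
	}

	public static void main ( String[] args )
	{
		final float[] identity = { 1, 0, 0,  0, 1, 0,  0, 0, 1 };
		System.out.println( java.util.Arrays.toString( packMat3( identity ) ) );
	}
}
```

Forgetting this padding is a classic way to get garbage joints past index 0, since every subsequent matrix ends up read at the wrong offset.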
Lights look weird (light volume doing stenciling on pixels in front of the volume).
Black squares on stuff (no idea on that one).
Normals are inverted (the light comes from above and to the right, yet the walls look lit from below and to the left).
That one is running on an ATi HD 4250 with the most recent drivers available (legacy, 2013). This works fine on an HD 5770, a GTX 560 Ti and an ION 2 (aka GeForce GT 210).
It's a deferred renderer; I use:
RGBA8 for RGB albedo, with A for specular.
RG16F for encoded normals.
R11_G11_B10F for the light accumulation buffer.
Another R11_G11_B10F target, currently using the x component for storing shininess.
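For reference, storing normals in an RG16F target means encoding/decoding them somehow; a spheremap-style (Lambert azimuthal equal-area) transform is a common choice. This is a hypothetical sketch of that kind of round trip, not necessarily the exact encoding in my code:

```java
// Hypothetical spheremap-style encode/decode for view-space normals
// stored in two channels (e.g. an RG16F target). Degenerates at
// n = (0, 0, -1), which view-space normals rarely reach.
public class NormalEncoding
{
	// Unit view-space normal -> two encoded components.
	public static float[] encode ( float nx, float ny, float nz )
	{
		final float f = (float) Math.sqrt( 8.0f * nz + 8.0f );
		return new float[] { nx / f + 0.5f, ny / f + 0.5f };
	}

	// Two encoded components -> unit view-space normal.
	public static float[] decode ( float ex, float ey )
	{
		final float fx = ex * 4.0f - 2.0f;
		final float fy = ey * 4.0f - 2.0f;
		final float f = fx * fx + fy * fy;
		final float g = (float) Math.sqrt( 1.0f - f / 4.0f );
		return new float[] { fx * g, fy * g, 1.0f - f / 2.0f };
	}

	public static void main ( String[] args )
	{
		final float[] enc = encode( 0.0f, 0.0f, 1.0f );
		final float[] dec = decode( enc[0], enc[1] );
		System.out.println( dec[0] + " " + dec[1] + " " + dec[2] );
	}
}
```

A sign flip anywhere in this round trip (or in the view matrix applied before it) would produce exactly the "lit from the wrong side" symptom above.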
For the lighting pass I do one fullscreen pass for the directional light, and two passes per point light: a stencil pass that marks the pixels in front of the light volume, then a regular pass that computes lighting only on those marked pixels (using the stencil increment operator in the first pass, stencil decrement in the second).
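Roughly, the state for those two point-light passes looks like this, sketched with LWJGL-style GL11 calls; this is an illustration of the idea described above, not my exact code, and drawLightVolume() is a hypothetical helper:

```java
// Pass 1: mark pixels in front of the light volume.
// No color or depth writes; depth-fail on the volume increments stencil.
glEnable( GL_STENCIL_TEST );
glColorMask( false, false, false, false );
glDepthMask( false );
glStencilFunc( GL_ALWAYS, 0, 0xFF );
glStencilOp( GL_KEEP, GL_INCR, GL_KEEP );
drawLightVolume();

// Pass 2: shade only the marked pixels, decrementing stencil back to 0.
glColorMask( true, true, true, true );
glStencilFunc( GL_NOTEQUAL, 0, 0xFF );
glStencilOp( GL_KEEP, GL_DECR, GL_KEEP );
drawLightVolume();
```

The "lights look weird" symptom above is consistent with one of these ops or masks behaving differently on the HD 4250 driver.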
I have no idea what exactly is failing, but maybe some of you have seen these glitches and can point me in the right direction. I saw the black-square errors on the GTX 560 Ti if I had a bloom pass enabled (downscale, blur, upscale, output high pass + original), but they were much less pronounced, and that bloom pass is disabled in that picture.
I have debug output enabled and I see no errors.
EDIT: Oh, I'm using OpenGL 3.3 core by the way, with two GLSL extensions:
I declare both as "require", and I'm actually only using the first one, to define fixed UBO bindings.