About Pilpel

  • Rank

Personal Information

  • Interests

Recent Profile Visitors

The recent visitors block is disabled and is not being shown to other users.

  1. The code attached was 2 years old. I'll be sure to watch the video to refresh my memory. Thanks a lot!!
  2. Yes. Consider the following model made in Blender. This is how assimp imports it: Now if I render the scene without setting ModelMatrix to each node's toRoot matrix (setting it to the identity instead), both submeshes are drawn at (0,0,0). This was a static mesh, but I'm pretty sure it works the same for animated meshes. Can you address the last thing I said in my previous post?
  3. I think I understand, but I'm not sure. Please don't give up on me 😂 First off, my code runs well; I just don't know exactly why, and I want to be 100% sure about it before I improve it. Each aiNode has a transformation attached to it. When I multiply all these transformations according to the node hierarchy, I get a global transformation (toRoot). Each aiNode also has a number of submeshes attached to it. Each vertex of these submeshes, before I multiply it by the global transformation (in the vertex shader, named ModelMatrix), is in local space. That might be why I get confused, because basically a vertex's local space isn't its rest pose position (at least in assimp). Does that make any sense? BUT assuming it is in local space, tell me if I got this right: say I have two joints (bones are actually joints, right?) J1 and J2, where J2 is a child of J1, and a vertex affected by both. I rotate each joint 10 degrees upwards. Now, calculating the rotation of the vertex for J2, for example: I want the vertex to be rotated 10 degrees with J2 as the pivot, and that's why I multiply it by the inverse matrix of J2 before applying the rotation matrix of J2, right? Then I continue with J1 and do the same. One little thing that doesn't sit right with me is that J2's rotation matrix is actually about 20 degrees, because I went down the hierarchy and multiplied J2.matrix = J1.matrix * J2.matrix beforehand. How does it work with the approach I described above?
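     To make the two-joint case concrete, here is a minimal 2D sketch (my own toy 3x3 homogeneous-matrix helpers, not assimp or GLM; the joint placements and the 90-degree angles are made-up example values). It shows that you never rotate "about each joint" by hand: the accumulated animation matrix of J2 already contains J1's rotation *and* J2's animated position, and multiplying by J2's inverse bind pose first is exactly what makes the rotation happen about the right pivot.

     ```cpp
     #include <array>
     #include <cassert>
     #include <cmath>
     #include <cstdio>

     // Toy 2D homogeneous transforms (3x3, row-major) -- stand-ins for mat4.
     using Mat3 = std::array<double, 9>;

     Mat3 mul(const Mat3& a, const Mat3& b) {
         Mat3 r{};
         for (int i = 0; i < 3; ++i)
             for (int j = 0; j < 3; ++j)
                 for (int k = 0; k < 3; ++k)
                     r[i * 3 + j] += a[i * 3 + k] * b[k * 3 + j];
         return r;
     }

     Mat3 rotation(double deg) {
         const double rad = deg * 3.14159265358979323846 / 180.0;
         const double c = std::cos(rad), s = std::sin(rad);
         return {c, -s, 0,  s, c, 0,  0, 0, 1};
     }

     Mat3 translation(double x, double y) { return {1, 0, x,  0, 1, y,  0, 0, 1}; }

     void apply(const Mat3& m, double x, double y, double& ox, double& oy) {
         ox = m[0] * x + m[1] * y + m[2];
         oy = m[3] * x + m[4] * y + m[5];
     }

     int main() {
         // Bind pose: J1 at the origin, J2 at (1,0), vertex at (2,0).
         const Mat3 inverseBindJ2 = translation(-1, 0); // mesh space -> J2 bone space

         // Animate: rotate each joint 90 degrees about its own pivot.
         const Mat3 animJ1 = rotation(90);              // J1 is the root: local == global
         const Mat3 animJ2 = mul(animJ1, mul(translation(1, 0), rotation(90)));
         // ^ accumulated down the hierarchy, so animJ2 rotates "about 180 degrees"
         //   total -- that is fine, because it also carries J2's animated position.

         // Final palette entry, same shape as _animationMatrix * _inverseBindPosMatrix:
         const Mat3 finalJ2 = mul(animJ2, inverseBindJ2);

         double x, y;
         apply(finalJ2, 2, 0, x, y);
         std::printf("skinned vertex: (%.1f, %.1f)\n", x, y); // expect (-1.0, 1.0)

         // Geometric check: J1's 90-degree turn puts the vertex at (0,2) and J2 at
         // (0,1); a further 90 degrees about J2's new position gives (-1,1).
         assert(std::fabs(x + 1.0) < 1e-9 && std::fabs(y - 1.0) < 1e-9);
         return 0;
     }
     ```

     The point of the inverse bind pose is that it expresses the vertex relative to the joint, so one single matrix per joint (accumulated animation times inverse bind) does the whole job for a full weight of 1.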
  4. Can you elaborate on bone space? How does it differ from local space? @TeaTreeTim But from what I understand 0,0,0 in local space @lawnjelly "So the inverse rest pose transform is to get a vertex from the rest pose position to bone space FOR THAT PARTICULAR BONE." But the vertex starts in local space, which means it's not in rest pose... no? A vertex gets to rest pose only after I multiply it by the node transformation matrix (the toRootTransformation matrix in my post).
  5. The way I'm doing things right now is: for static meshes, I calculate a "toRootTransformation" 4x4 matrix for every node, based on the hierarchy of the nodes. Then for every node that contains a mesh, I render the mesh with a shader looking like this:

     ```glsl
     #version 430

     uniform mat4 ProjectionMatrix;
     uniform mat4 CameraMatrix;
     uniform mat4 ModelMatrix;

     layout(location = 0) in vec3 vertex;
     layout(location = 1) in vec2 uvs;
     layout(location = 2) in vec3 normal;

     out vec2 fsUvs;
     out vec3 fsNormal;

     void main()
     {
         gl_Position = ProjectionMatrix * CameraMatrix * ModelMatrix * vec4(vertex, 1.0);
         fsUvs = uvs;
         fsNormal = normal;
     }
     ```

     Before that I set the ModelMatrix uniform to that toRootTransformation. For skinning, I upload the bone matrix array uniform to the shader, calculate each vertex position based on it, and multiply by the same toRootTransformation as before. Here's the main part of the vertex shader:

     ```glsl
     void main()
     {
         vec4 final = vec4(0);
         for (int i = 0; i < NUM_INFLUENCE_BONES; i++) {
             vec4 v = Bones[boneIndices[i]] * vec4(vertex, 1.0) * vertexWeights[i];
             final += v;
         }
         gl_Position = ProjectionMatrix * CameraMatrix * ModelMatrix * final;
     }
     ```

     Every bone matrix is the product of the bone's inverseBindPos matrix and the matrix that accumulates all the rotations/translations of the bone at the current time, based on the node hierarchy. In code:

     ```cpp
     void Bone::calculateFinalMatrix()
     {
         // matrix for the shader; _animationMatrix is already calculated by AnimationController
         _finalMatrix = _animationMatrix * _inverseBindPosMatrix;
     }
     ```

     To be honest, this is the product of about a week of struggling to get skinning done, and I'm not sure I'm doing it right (please correct me if not). The question I keep asking myself is why I need to multiply each bone by the inverseBindPos first. It transforms each vertex from world to local space, before multiplying by the animation matrix. But the vertex isn't in world space to begin with; it's in local space. So basically we're going from local space to local space (then animating), which doesn't make sense.
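     For reference, the CPU-side pass that produces each node's toRootTransformation can be sketched like this (a hypothetical Node struct standing in for aiNode, and toy 2D 3x3 matrices standing in for mat4 -- not real assimp types):

     ```cpp
     #include <array>
     #include <cassert>
     #include <cstdio>
     #include <vector>

     // Toy 2D homogeneous transforms (3x3, row-major) -- stand-ins for mat4.
     using Mat3 = std::array<double, 9>;

     Mat3 mul(const Mat3& a, const Mat3& b) {
         Mat3 r{};
         for (int i = 0; i < 3; ++i)
             for (int j = 0; j < 3; ++j)
                 for (int k = 0; k < 3; ++k)
                     r[i * 3 + j] += a[i * 3 + k] * b[k * 3 + j];
         return r;
     }

     Mat3 identity() { return {1, 0, 0,  0, 1, 0,  0, 0, 1}; }
     Mat3 translation(double x, double y) { return {1, 0, x,  0, 1, y,  0, 0, 1}; }

     // Hypothetical stand-in for aiNode: a local transform plus children.
     struct Node {
         Mat3 local;                // aiNode::mTransformation's role
         Mat3 toRoot = identity();  // filled in by the traversal below
         std::vector<Node*> children;
     };

     // Depth-first pass: a node's toRoot is its parent's toRoot times its own local.
     void computeToRoot(Node* n, const Mat3& parentToRoot) {
         n->toRoot = mul(parentToRoot, n->local);
         for (Node* c : n->children) computeToRoot(c, n->toRoot);
     }

     int main() {
         Node root{translation(1, 0)};
         Node child{translation(0, 2)};
         root.children.push_back(&child);

         computeToRoot(&root, identity());

         // child.toRoot should place the child's origin at (1, 2).
         assert(child.toRoot[2] == 1.0 && child.toRoot[5] == 2.0);
         std::printf("child origin in root space: (%.0f, %.0f)\n",
                     child.toRoot[2], child.toRoot[5]);
         return 0;
     }
     ```

     The ModelMatrix uniform above is then just the toRoot of whichever node owns the mesh.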
  6. Pilpel

    Assimp bones' relation to nodes

    What do you mean? These bones don't exist, as in no aiMesh ever refers to them. Just to be sure, I don't need the mTransformation matrix of the aiNodes anymore when doing skinning, right? Edit: I think I was actually wrong in the statement above. :p Edit2: I'll just start a new topic about that issue; hopefully you can help me (no spam intended).
  7. Pilpel

    Assimp bones' relation to nodes

    So what's the deal with the nodes that don't have corresponding bones?
  8. I noticed that when importing animated meshes, the total number of bones (aiBone) in the scene might be less than the total number of nodes (aiNode), so my initial thought was "okay, their hierarchies are probably not related... whatever", but then I noticed that an aiBone object doesn't contain a pointer to a parent bone. So my question is: how do I calculate the final matrix palette for the bones without knowing the hierarchy between them? Edit: I forgot to mention that according to some book I read, every joint (bone) must store an index to its parent joint.
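     As far as I know, assimp links bones to nodes only by name: each aiBone's mName matches an aiNode somewhere in the hierarchy, so you can recover the bone hierarchy by walking a bone's node up through its parents until you hit another node whose name is also a bone. A sketch with hypothetical stand-in structs (not real assimp types):

     ```cpp
     #include <cassert>
     #include <set>
     #include <string>

     // Hypothetical stand-in for aiNode; assimp ties bones to nodes by name only.
     struct Node {
         std::string name;
         Node* parent = nullptr;
     };

     // Walk up the node hierarchy from a bone's node until we reach another node
     // that is also a bone; that node is the parent bone. Empty string = root bone.
     std::string parentBone(const Node* boneNode,
                            const std::set<std::string>& boneNames) {
         for (const Node* n = boneNode->parent; n; n = n->parent)
             if (boneNames.count(n->name))
                 return n->name;
         return "";
     }

     int main() {
         // Scene -> Armature -> Hips -> Spine; only Hips and Spine are real bones,
         // which is why the scene has more nodes than bones.
         Node scene{"Scene"};
         Node armature{"Armature", &scene};
         Node hips{"Hips", &armature};
         Node spine{"Spine", &hips};
         const std::set<std::string> bones{"Hips", "Spine"};

         assert(parentBone(&spine, bones) == "Hips"); // direct parent is a bone
         assert(parentBone(&hips, bones).empty());    // Armature/Scene aren't bones
         return 0;
     }
     ```

     Non-bone nodes like "Armature" or "Scene" simply get skipped; their transforms still contribute because they sit on the node path you accumulate when animating.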
  9. I'm designing an animation system. I already implemented skinning a long time ago (with C++ and OpenGL), but it was very basic and lacked some important features like blending (e.g. transitioning from walking to running) and playing several animations at the same time (e.g. waving while running). Both might be called "animation blending", but I'm not sure. Anyway, I'm asking for ways to implement all these things. One thing I have to figure out is that when, for example, the character is running, he obviously moves his upper body, but if I want to add a "waving hello" clip while he is running, I'll probably have to somehow tell all the upper body bones to ignore the running transformations (just a thought).
  10. Suppose I have an animation clip of a character waving his hands, and another clip of the same character running. What approaches exist that allow me to implement such a thing? (talking about skeletal animation)
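     One common approach to both questions is per-bone blend masks: evaluate each clip into its own local pose, then blend the poses per bone with a weight that is 1 for the bones a clip should own (the upper body for waving) and 0 elsewhere. A deliberately simplified sketch, with one float per bone instead of a quaternion/translation pair (real code would nlerp/slerp quaternions per bone):

     ```cpp
     #include <cassert>
     #include <vector>

     // Simplified pose: one rotation angle per bone. A real pose holds a
     // quaternion and translation per bone, blended with nlerp/slerp.
     using Pose = std::vector<double>;

     // mask[i] = 0 -> take pose a, 1 -> take pose b, in between -> crossfade.
     // An "upper body" mask sets 1 on spine/arm bones and 0 on the legs.
     Pose blendPoses(const Pose& a, const Pose& b, const Pose& mask) {
         Pose out(a.size());
         for (std::size_t i = 0; i < a.size(); ++i)
             out[i] = (1.0 - mask[i]) * a[i] + mask[i] * b[i];
         return out;
     }

     int main() {
         // Two bones: [0] = a leg, [1] = an arm.
         const Pose running{30.0, 45.0};  // running swings both
         const Pose waving{0.0, 90.0};    // waving only cares about the arm
         const Pose upperBodyMask{0.0, 1.0};

         const Pose blended = blendPoses(running, waving, upperBodyMask);
         assert(blended[0] == 30.0); // leg keeps the running motion
         assert(blended[1] == 90.0); // arm takes the wave

         // The same function also does walk->run style transitions: use a uniform
         // mask whose value ramps from 0 to 1 over the transition time.
         const Pose halfway = blendPoses(running, waving, Pose{0.5, 0.5});
         assert(halfway[0] == 15.0 && halfway[1] == 67.5);
         return 0;
     }
     ```

     After blending the local poses, you run the usual hierarchy pass and inverse-bind multiplication on the blended result, so the rest of the skinning pipeline is unchanged.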
  11. Haha. My piano teacher told me there is no shame in copying from good pianists, so I thought about doing the same here. Thanks
  12. I've read many tutorials on OpenGL and how to render neat stuff, but I can't find any article on how to merge all these techniques together into an "engine" or a "renderer". My thinking is that in the end I need this 3D renderer, physics engine, sound, networking, etc. all connected through an "Engine" class, so right now my question is how the renderer works (where do I put the shadow mapping code, culling, etc.). I'm pretty clueless, so any piece of information would be great.
  13. Pilpel

    Is MSAA affected by drawing order?

    This only happens inside a triangle, right?
  14. Pilpel

    Is MSAA affected by drawing order?

    I'm not sure I get it. Does MSAA differ from SSAA in that the pixel shader is run, most of the time, fewer times per pixel? Memory usage (number of elements in the color buffer) seems to be the same for both MSAA and SSAA, no?