About silvia_steven_2000

  1. Hi all, please excuse me for the long question; I meant it to make things clearer. Skeletal animation in an MS3D file is based on transforming the model's vertices by a skeleton of joints. The absolute transform of a joint is the product of its relative transform with the absolute transform of its parent joint. I have no problem with the theory behind that. I was able to load the mesh, and I was also able to animate the model, but something is going wrong. In summary, here is what I did:

    1- I calculated the absolute transform for each joint, treating the translation and rotation provided in each joint as its relative translation and rotation.
    2- I used the inverse of the above transform to transform all the vertices, making them ready for transformation in the animation function.
    3- In the animation function we need to calculate the new absolute joint transforms; this is done by interpolating the key translations and rotations.

    Here is the confusion: I need to know what these key translations and rotations are relative to. I tried treating the interpolated translation and rotation as the current joint's relative translation and rotation, and calculated the absolute transform from that, but it did not work. I then tried treating the joint's initial rotation multiplied by the current interpolated rotation as the joint's new relative rotation, and similarly the joint's initial translation added to the current interpolated translation as its new relative translation. This animated the model, but the animation was not 100% correct. I tried some test joints I defined myself and was able to verify that the procedure I am using is correct, so I am afraid I am not interpreting the keyframe information the right way.
    By the way, I was reading in the book Focus On 3D Models that the joint's final transform is calculated as (joint_local * joint_current_interpolation_matrix) if it has no parent, and as parent_matFinal * (joint_local * joint_current_interpolation_matrix) if it has a parent, where joint_local is the joint's initial relative transform. I guess what I mentioned above is basically what the book says, so I do not know what is going wrong. Any hints? I came across this on the web:

        //
        // Mesh Transformation:
        //
        // 0. Build the transformation matrices from the rotation and position
        // 1. Multiply the vertices by the inverse of local reference matrix (lmatrix0)
        // 2. then translate the result by (lmatrix0 * keyFramesTrans)
        // 3. then multiply the result by (lmatrix0 * keyFramesRot)
        //
        // For normals skip step 2.
        //
        // NOTE: this file format may change in future versions!
        //
        // - Mete Ciragan
        //

    In point 2, why the local matrix and not the absolute transform of the joint? Thanks a lot.
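The transform chain described above (absolute = parent-absolute × local × interpolated key, with vertices pre-multiplied by the inverse bind transform) can be sketched with translation-only joints, which keeps the idea testable without a full matrix class. All names here are hypothetical; a real loader would use 4×4 matrices or quaternion-plus-vector pairs, but the composition order is the same:

```cpp
#include <cassert>
#include <vector>

// Hypothetical, translation-only stand-in for a real joint: composing
// transforms reduces to adding translations, so parent*child == parent+child.
struct Joint {
    int   parent;   // index of the parent joint, -1 for the root
    float localT;   // bind-pose translation relative to the parent
    float keyT;     // interpolated keyframe value, relative to localT
    float absBind;  // computed: absolute bind-pose transform
    float absAnim;  // computed: absolute animated transform
};

// Joints must be ordered so parents precede children.
// Pass 1: absolute bind pose = parent-absolute * local.
// Pass 2: animated pose = parent-animated * (local * key) -- the keyframe
// is applied on top of the joint's own local transform, not in place of it.
void computePoses(std::vector<Joint>& joints) {
    for (Joint& j : joints) {
        float parentBind = (j.parent < 0) ? 0.0f : joints[j.parent].absBind;
        float parentAnim = (j.parent < 0) ? 0.0f : joints[j.parent].absAnim;
        j.absBind = parentBind + j.localT;            // parent * local
        j.absAnim = parentAnim + (j.localT + j.keyT); // parent * (local * key)
    }
}

// Skinning: move the vertex into joint space with the inverse bind
// transform, then back out with the animated transform.
float skinVertex(float v, const Joint& j) {
    return j.absAnim + (v - j.absBind); // absAnim * inverse(absBind) * v
}
```

This also shows why the quoted comment multiplies by lmatrix0: the stored key is relative to the joint's own local transform, and the parent's contribution arrives through the recursion rather than through the key itself.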
  2. Multiple animations in an MS3D file

    Can we say that an MS3D model has only one animation at a time?
  3. Dear all, how can an MS3D model store more than one animation in the file? Each joint has a predefined number of translation and rotation keyframes, and I would assume that interpolating these keyframes generates a single animation. If the model can have more than one animation stored in the file, how can one know where a certain animation starts and ends, and what its name is? In other words, I am trying to understand whether MS3D uses, for example, translation keyframes 1 -> n and rotation keyframes 1 -> m for animation 1, translation keyframes n+1 -> k and rotation keyframes m+1 -> j for animation 2, and so on.
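The MS3D format itself stores a single continuous timeline per joint and no animation table, so one common approach, purely an engine-side convention and not part of the file format, is to keep an external list of named clips defined as time ranges on that shared timeline and remap playback time into the active clip before interpolating keyframes. A minimal sketch:

```cpp
#include <cassert>
#include <string>
#include <vector>

// A named clip is just a [start, end) range on the model's single timeline.
struct Clip {
    std::string name;
    float start; // seconds on the shared MS3D timeline
    float end;
};

// Map a clip-local playback time (assumed >= 0) onto the shared timeline,
// looping when it runs past the clip's length.
float clipTime(const Clip& c, float t) {
    float len = c.end - c.start;
    while (t >= len) t -= len;
    return c.start + t;
}
```

The clip names and boundaries have to come from somewhere outside the .ms3d file (a config file, or conventions agreed with the artist), which is consistent with the observation that nothing in the keyframe data itself marks where one animation ends and the next begins.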
  4. Why use a vertex buffer and an index buffer?

    Can I assume that shared vertices in an MS3D mesh group have the same texture coordinates and normals?
  5. Why use a vertex buffer and an index buffer?

    I have a model with ~500 vertices and ~3000 indices. Look at the numbers: going from 3000 vertices down to 500 is a big saving, but all of it is wasted, because you cannot simply create a vertex buffer of 500 vertices: you cannot draw them unless they all have the same vertex format, which means a vertex shared by more than one triangle must have the same texture coordinates and normals, and this is not always the case. Then you find yourself forced to create a buffer of 3000 vertices, and in that case it does not make any sense to use indices, since you are using the whole set. If you could convert the 500 into something between 500 and 3000, say 1500, then it would make sense, but I do not think this is doable.
  6. Drawing with a vertex buffer and an index buffer is really confusing me. It is known that one of the advantages of this method is that vertices can be shared by triangles, which saves memory. That is correct, but what is confusing is that shared vertices might not have the same texture coordinates and normals. Before you draw from a vertex buffer you must specify the vertex format, which means assigning each vertex x, y, z (this is fine, since positions are shared), nx, ny, nz (these might differ between two adjacent triangles) and texture coordinates (which may also differ). So you need to repeat vertices to account for these differences, and this defeats the purpose of using vertex buffers.

    I am trying to load an MS3D model. It stores the vertices in one place and then indexes them when defining the triangles. Each triangle has three sets of normals and texture coordinates, one per corner, plus three indices into the actual vertices, as mentioned earlier. I cannot just copy the vertices into a vertex buffer, since a vertex may be shared by two triangles with different texture coordinates. The solution is to create a vertex buffer that contains the whole expanded set of the model's vertices, but if that is the case, why use an indexed vertex buffer in the first place? Thanks.
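A common middle ground for the problem described in the two posts above is to deduplicate full (position, normal, texture-coordinate) corner tuples rather than raw positions: corners that agree in all attributes collapse to one vertex, while corners that differ get their own entry. A minimal sketch (the Corner fields are stand-ins for whatever attribute indices the loader has):

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <tuple>
#include <vector>

// One triangle corner: indices into the model's position / normal / uv pools.
struct Corner { int pos; int normal; int uv; };

// Build a deduplicated vertex list plus an index buffer: each distinct
// (pos, normal, uv) combination appears once in outVerts, and outIndices
// holds one entry per input corner (3 per triangle).
void buildIndexed(const std::vector<Corner>& corners,
                  std::vector<Corner>& outVerts,
                  std::vector<std::uint32_t>& outIndices) {
    std::map<std::tuple<int, int, int>, std::uint32_t> seen;
    for (const Corner& c : corners) {
        auto key = std::make_tuple(c.pos, c.normal, c.uv);
        auto it = seen.find(key);
        if (it == seen.end()) {
            it = seen.emplace(key, static_cast<std::uint32_t>(outVerts.size())).first;
            outVerts.push_back(c);
        }
        outIndices.push_back(it->second);
    }
}
```

For typical smooth-shaded meshes most corners do collapse, so the result lands between the two extremes (e.g. the 500-vertex model might expand to something like 700-1500 unique vertices rather than 3000), which is what makes the index buffer worthwhile again.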
  7. What animations are there in an MD2 model?

    How can one know where an animation starts and where it ends? If you check the animation names you might find names like: stand001, attack02, idle1, spy. I could identify the start and end based on the frame name, but I was experimenting with different MD2 files and found the combinations above: some files have a 3-digit sequence number at the end of the name, some have 2 digits, some 1, and some none. If you do not know how to extract the name from the frame, there is no way to automatically detect where animations start and end. A standard MD2 file with 198 frames has predefined animations, so you can know them, but what about models with other animations?
  8. Hi all, I have read that MD2 models define a set of 21 animations (199 frames), such as STAND (frames 0-39) and so on. I have dealt with models that have fewer than 199 frames. How can one identify what animations are in a certain model? Each MD2 frame has a name, something like attack001; from this information one can identify the total number of animations and the number of frames per animation. But how can one know the number of frames per second for a certain animation? Thanks a lot.
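MD2 likewise has no animation table in the file; the usual convention is to strip trailing digits from each frame name and group consecutive frames that share the same stem into one animation, which handles the mixed 0/1/2/3-digit suffixes mentioned above. A sketch of the stem extraction:

```cpp
#include <cassert>
#include <cctype>
#include <string>

// Strip trailing digits from an MD2 frame name to get the animation stem,
// e.g. "stand001" -> "stand", "attack02" -> "attack", "spy" -> "spy".
// Consecutive frames with the same stem are then grouped into one animation.
std::string animStem(const std::string& frameName) {
    std::size_t end = frameName.size();
    while (end > 0 && std::isdigit(static_cast<unsigned char>(frameName[end - 1])))
        --end;
    return frameName.substr(0, end);
}
```

As for playback rate: it is not stored in the file at all. Quake II-era players conventionally run MD2 animations at around 8-10 frames per second, so the engine has to choose a rate itself.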
  9. Do we need vertex buffers? + MD2 animation

    I do not think I have problems with memcpy; I have used it before and it works fine. I think it is somehow related to what you said (timing), but I still suspect there is something related to the buffers. Let me describe in more detail the way I animate my MD2 model. I thought about it twice and I think I have some known problems.

    Each frame is a mesh that has its own vertex and index buffers. At load time I load the model from file and fill these frames, and I was able to render any frame I choose using these buffers. So in total I created 198 vertex buffers and 198 index buffers for a model of 198 frames, and this is not correct: I do not have to create buffers per frame; the vertex and index buffers need to be created only once, for the currently drawn interpolated frame. I think there is another problem: every time I update the current frame, I copy the new vertices from memory to the buffer, which is correct, but I used to copy the indices as well, and I guess this is not correct, since the indices do not change across frames.

    The above two problems are what I can see right now, but I STILL think there is a timing issue. Here is what I do (recall that this is not the final thing, it is just for testing):

        void AnimatedModelNode::onPostRender(unsigned int time)
        {
            static Real x = 0; // interpolation percent between frames i and j
            static int i = 0;  // frame i
            static int j = 1;  // frame j

            // Update the currently drawn frame every 1 ms
            if ((time) % 1 == 0)
            {
                updateCurrentFrame(i, j, x);
            }

            x += 0.01;
            if (x > 1) { x = 0; i++; j++; }
            if (j == 197) { i = 0; j = 1; }
        }
  10. Why do we need to define vertex buffers ourselves in DirectX when we can use drawIndexedPrimitiveListUp directly from memory? As long as DirectX copies the data from memory to the hardware buffer anyway, why do we need to create one? By the way, I am trying to animate an MD2 model by interpolating frames as usual. I treat each interpolated frame as an indexed hardware vertex buffer whose vertices alone need to change through interpolation; the indices and texture coordinates stay the same. The fps crawls when I update the currently drawn frame every 1 ms. If I update every 100 ms, the fps improves, but the animation is jerky. Each time I update the vertex buffer, I copy the in-memory vertex array to the hardware vertex buffer using memcpy. Where is the bottleneck, given that animating an MD2 model is done like this: you just update the vertices of the interpolation buffer?
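One likely timing issue in the approach above: advancing the interpolation by a fixed step on every render call ties playback speed to the framerate, and "update every 1 ms" checked with a millisecond counter degenerates into "update every call". A time-based sketch (kFramesPerSecond is an assumed engine constant, not something the MD2 file stores) derives the frame pair and blend factor from elapsed time instead:

```cpp
#include <cassert>
#include <cmath>

// Typical MD2 playback rate; chosen by the engine, not read from the file.
const float kFramesPerSecond = 8.0f;

// Given elapsed milliseconds since the animation started, compute the two
// frames to blend (i and j) and the interpolation factor t in [0, 1).
// The result depends only on wall-clock time, so rendering at 30 or 300
// fps produces the same animation speed.
void framePair(unsigned ms, int frameCount, int& i, int& j, float& t) {
    float f = (ms / 1000.0f) * kFramesPerSecond; // position in frames
    i = static_cast<int>(f) % frameCount;
    j = (i + 1) % frameCount;                    // wrap at the last frame
    t = f - std::floor(f);                       // fractional part
}
```

With this, updating the dynamic vertex buffer once per rendered frame (not on a fixed millisecond schedule) is enough, and the copy itself stays as a single memcpy of the interpolated vertices.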
  11. Why are people scared of quaternions?

    Thanks DrGUI for the code; actually I have no problem with that part, since I already have code that finds the quaternion rotating one vector onto another. But there is something wrong in the way I rotate the look and up vectors. Regarding the DX quaternion class: thanks for the hint, but I doubt it would change anything, since I have a good quaternion class. I think my problem is in the logic I am using. Look at this statement: the view transform is basically the inverse of the transform needed to translate the camera position to the origin and re-orient its local axes with the world x, y and z. I am trying to implement that: rotate the look vector to -z and the up vector to y. It works in some cases and fails in others. There must be something silly I am not paying attention to, since in the failing cases I can still see the objects I am looking at, but they are not in the middle of the screen; the camera is shifted or misoriented.
  12. Hi all, I have spent a lot of time trying to resolve my problem but could not find a solution. I posted my question on the game programming forum but nobody helped; maybe they are scared of quaternions. I am using DirectX, so the coordinate system is left-handed and the positive z axis goes into the screen. I am trying to build a simple camera. I am given the up vector, the position of the camera and the target to look at. With this information in hand it is very easy to build a left-handed look-at matrix; I already did that, but it is not what I want. My scene nodes, including cameras, store their orientations as quaternions. Whenever I need to animate a node I manipulate the orientation quaternion, and it finally gets converted into a world transform. This works fine for me, but I could not make it work for the camera: in the same way, I need to manipulate the camera's orientation quaternion and finally convert it to a view transform. Sorry for the long explanation, but I think it might help.
    Here is what I did. The view transform is the inverse of the transform needed to move the camera position to the origin and realign the camera's look, up and right vectors with the z, y and x axes:

        // Set the camera position
        m_relPos = camPos;
        // Normalize the up vector
        m_up.Normalize();
        // Calculate the look vector
        // (m_relPos below should be the absolute position, but here
        // I am assuming the camera has no parent)
        m_look = m_target - m_relPos;
        m_look.Normalize();

        // Quaternion to rotate the look vector to z
        core::Quaternion look2z;
        // Quaternion to rotate the up vector to y
        core::Quaternion up2y;
        look2z = look2z.getRotationTo(m_look, core::Vector3::UNITZ);
        up2y = up2y.getRotationTo(m_up, core::Vector3::UNITY);

        // Update the camera orientation
        m_relOrient = look2z * up2y;

    At this point the camera's m_relPos and m_relOrient are set. I convert m_relOrient to a rotation matrix, set its translation to m_relPos, and store the result in m_relTransform. Then the view transform is calculated as:

        m_view = m_relTransform.getInverse();

    What is really confusing me is that I suspect there is something tricky here. For example, looking at the origin from these positions:

        camPos
        ======
        (-10,  0,   0): ok
        (-10, 10,   0): ok
        (-10,  0, -10): ok
        (-10, 10, -10): view is shifted or rotated
        (  0,  0, -10): ok
        (  0, 10, -10): view is shifted or rotated
        ( 10,  0, -10): ok
        ( 10, 10, -10): view is shifted or rotated
        ( 10,  0,   0): ok
        ( 10, 10,   0): ok

    It fails only in the cases where the y and z of the look vector are both non-zero, and I cannot find an explanation for this. I am suspicious about the fact that DirectX is a left-handed system while the quaternion math assumes a right-handed one. Sorry for the long message; I intended it that way to make things clear. Any help is appreciated. Thanks again.
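A plausible explanation for the failing cases above: look2z and up2y are each computed independently in world space, so applying the second rotation disturbs the axis the first one just aligned, except when the up correction happens to be a pure roll about the look axis, which is why only some camera positions appear to work. A standard fix is to build the whole orthonormal camera basis at once (exactly what a look-at matrix does internally) and convert that basis to a quaternion if one is needed. A sketch with a minimal hypothetical Vec3:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3  normalize(Vec3 v)     { float l = std::sqrt(dot(v, v)); return {v.x/l, v.y/l, v.z/l}; }

// Left-handed look-at basis (DirectX convention, +z into the screen).
// The three vectors form the rows of the view rotation; because they are
// orthonormal, the view rotation is just the transpose of the camera's
// world rotation.
void cameraBasis(Vec3 pos, Vec3 target, Vec3 worldUp,
                 Vec3& right, Vec3& up, Vec3& look) {
    look  = normalize(sub(target, pos));
    right = normalize(cross(worldUp, look)); // re-orthogonalize against look
    up    = cross(look, right);              // true up, not the raw worldUp
}
```

The key point is the last line: the basis uses a recomputed up that is genuinely perpendicular to look, instead of rotating the raw worldUp independently. That distinction only matters when worldUp is not already perpendicular to look, i.e. precisely the cases where the look vector has both y and z non-zero.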
  13. Quaternion based view matrix

    FYI: in DirectX the negative z axis points out of the screen. My camera is a scene node. To define a visible scene node you set the world transform (in DirectX, world and view are separate), but the camera is an invisible scene node: you position it by specifying the view transform. The view transform for a camera can easily be calculated from the position, up and look vectors, and I have done that. However, I need to store the camera orientation as a quaternion, because the rest of my project deals with nodes as positions and orientations. So finding this orientation quaternion is the end result: I save it as the camera's orientation for use by the rest of the project, convert it to a matrix, and set the view transform to the inverse of the converted matrix.

    Let me try to explain what I am doing in different words. If you have a camera positioned in space, looking at a target, with the up vector set, then the camera's position and orientation are fully defined. To find them, we move the camera position to the origin, rotate the look vector so it points into the screen, and rotate the up vector so it points along the y axis: basically one translation and two rotations. After doing that, the view transform is the inverse of the total transform we just applied to the camera.
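One detail worth noting about the "inverse of the total transform" step described above: for a rigid transform (rotation R followed by translation to the camera position p), the inverse never needs a general matrix inversion, because inverse(T(p) * R) = transpose(R) * T(-p). A sketch with a minimal, hypothetical row-major rotation-plus-position pair:

```cpp
#include <cassert>

// A rigid transform: rotate by r, then translate by p.
struct Rigid {
    float r[3][3]; // rotation, row-major
    float p[3];    // translation (the camera position)
};

// Cheap rigid inverse: view rotation is the transpose of the world
// rotation, and the view translation is -(R^T * p).
Rigid invertRigid(const Rigid& w) {
    Rigid v{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            v.r[i][j] = w.r[j][i];              // transpose
    for (int i = 0; i < 3; ++i) {
        v.p[i] = 0.0f;
        for (int j = 0; j < 3; ++j)
            v.p[i] -= v.r[i][j] * w.p[j];       // -(R^T * p)
    }
    return v;
}
```

This is exactly what a getInverse() call ends up computing for a camera transform, but doing it explicitly makes clear that only the rotation part is transposed and only the translation changes sign (in rotated coordinates).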
  14. Hi guys, I am building a simple quaternion-based camera. Using the up, look and position vectors of the camera I can easily build the view matrix; I have the formula and it works fine. But that is not what I need: I want to calculate the view matrix by converting the up, look and position into quaternions. Here is what I do. Goal: find the inverse of the transform that translates the camera to the origin and re-aligns it so it looks down the negative z axis with the up vector along the y axis. Here is my try:

        m_up.Normalize();
        m_look = m_target - camPos;
        m_look.Normalize();

        // The quat needed to rotate the look vector and
        // align it with the negative z axis (into the screen in DX9)
        Quaternion qlook;
        // The quat to rotate the up vector to the y axis
        Quaternion qup;

        qlook = qlook.getRotationTo(m_look, -z_axis);
        qup = qup.getRotationTo(m_up, y_axis);

        // Total rotation quaternion
        camOrientation = qlook * qup;
        // Convert to rotation matrix
        Matrix4 rot = camOrientation.quat2Mat4();
        // Translate to the origin
        rot.setTranslation(-camPos);
        // Invert the matrix
        m_view = rot.getInverse();

    Assuming the matrix and quaternion math is correct, can anybody see the wrong part in this? The above seems to work if I (1) use qlook = qlook.getRotationTo(m_look, z_axis) instead of qlook = qlook.getRotationTo(m_look, -z_axis), and (2) do not include qup; but even then, (3) with target = origin, pos = (-5, 5, -5) and up = (0, 1, 0), I can see the object at the origin, but it looks rotated by an angle about the look vector. There must be something tricky I am not figuring out, or wrong math.
  15. Spin the cube on 2 axes

    Thanks for the input. I figured out what the problem was: I was not interpolating the quaternions between rotations.