Buckeye

GDNet+ Basic
  • Content count

    5215
  • Joined

  • Last visited

Community Reputation

10747 Excellent

About Buckeye

  • Rank
    Legend

Personal Information

  • Interests
    Programming
  1. You might try simulating it from an energy standpoint. The projectile, at impact, provides a total energy E = (1/2)mv². After the size of each chunk that will move (that will have velocity) has been determined, determine the total mass (M) that will then be moving - i.e., the sum of the masses of the moving chunks. The energy of each chunk is then E * chunk-mass / M. Given the chunk's energy, chunk-velocity = sqrt( 2 * chunk-energy / chunk-mass ). I haven't a clue whether that will give you the results you've specified, but it may give you a basis for playing around with proportions, tweaking constants, etc.

     Something similar could be done with momentum - incoming projectile momentum = mv. The sum total of outgoing momentum must equal mv, using the same method described above to proportion each chunk's momentum.
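     As a rough C++ sketch of the energy-split idea (the Chunk struct and function name are just made up for illustration, not from any particular engine):

         #include <cmath>
         #include <vector>

         struct Chunk {
             float mass;      // kg
             float velocity;  // m/s, speed imparted along the chunk's outgoing direction
         };

         // Distribute the projectile's kinetic energy E = (1/2)*m*v^2 across the
         // moving chunks in proportion to each chunk's mass, then back out a speed
         // for each chunk from its share of the energy.
         void DistributeImpactEnergy(float projMass, float projSpeed, std::vector<Chunk>& chunks)
         {
             const float impactEnergy = 0.5f * projMass * projSpeed * projSpeed;

             float totalMass = 0.0f;
             for (const Chunk& c : chunks) totalMass += c.mass;
             if (totalMass <= 0.0f) return;

             for (Chunk& c : chunks) {
                 const float chunkEnergy = impactEnergy * (c.mass / totalMass);
                 c.velocity = std::sqrt(2.0f * chunkEnergy / c.mass);
             }
         }

     With this particular proportioning every chunk comes out at the same speed, which is one more reason to treat it only as a starting point for tweaking.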
  2. As you've already decided what behavior you want from the system, it's not a matter of physics. You need to set up a model to do what you want - i.e., make up a system of rules that produces the desired results.

     Just a guess at what may give you what you're looking for: rather than having a fixed adhesion between bricks, consider a probability (a bit of randomness around a fixed value) that a crack will form between bricks based on distance from the impact. If a system of cracks (or a sufficient number of cracks) forms around a chunk, consider that chunk to be free; it then becomes another projectile.
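     A rough sketch of that distance-based crack probability (the exponential falloff and the tuning constant are arbitrary choices for illustration):

         #include <cmath>
         #include <random>

         // Probability that a crack forms between two bricks, falling off with
         // distance from the impact point. 'falloff' is an arbitrary tuning constant.
         bool CrackForms(float distanceFromImpact, float falloff, std::mt19937& rng)
         {
             std::uniform_real_distribution<float> roll(0.0f, 1.0f);
             // Base probability decays with distance; the random roll supplies the
             // "bit of randomness around a fixed value" mentioned above.
             const float probability = std::exp(-distanceFromImpact / falloff);
             return roll(rng) < probability;
         }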
  3. You said the file you're using (unspecified) works in the Assimp viewer, so you do have a model that "works." If you're recoding from OpenGL to DirectX 11, you're definitely going to need known data to test your code, as, at this point, it appears you may be writing code you're not sure of and hoping it works with data you don't understand. Not a good thing. Others may have better suggestions, but writing code without being able to test it is definitely not the way to go.

     1. Use a very basic model, perhaps something with just 1 or 2 bones, with a very simple animation, perhaps just 1 or 2 frames of simple rotation. That may require working with a modeling program, such as Blender. Export it in a format that Assimp accepts, and which you can examine as text - i.e., look at the matrices in the file and see whether that's what you import.

     2. Google for something like "directx 11 animation example" and find code that is likely to work (rather than writing your own).

     Just my opinion, but debugging code without a way to tell if the code is good or bad, with data that you don't know is good or bad ... futile.
  4. Trying to help you out here - but you didn't respond to the questions asked. However, from the wall of code you did post in response, and your comments indicating a lot of confusion, it appears you need to take one thing at a time. You may want to take a look at this debugging method to help your focus. Don't use a shotgun approach, posting bunches of stuff "to fill out possibly all what could be raising issue." You need to determine where the problem may lie, and that will require you to understand what data you should have, and what the code should do with it.

     Take a breath and, as suggested in the debugging link, start at one point in your code. Determine whether both the data and the code at that point are correct. Continue slowly, one step at a time, until, at a minimum, you can post something (brief) like "At this point in my code, I have such-and-such data." Then ask a specific question about the code and/or the data that you don't understand.

     EDIT: FYI, the link you posted about "skeletal reading" is an OpenGL tutorial. You have to be careful and ensure you understand the differences between implementations in OpenGL and DirectX - things like column- versus row-major matrices, etc.
  5.   Sounds like you're almost there. What global transformation do you have, and what transformation should you have? You should be able to compare the two, and, at least, determine if it's a scale problem, or orientation about the wrong axis (or axes), etc.   If you set the global to the identity matrix, how do things look? I.e., is the animation correct, but the mesh isn't oriented correctly? Is the scaling correct?
  6. For the purpose of error checking, a better alternative is, as you mention, to initialize member pointers to nullptr in the constructor. Then implement another class function such as bool Init(). In that Init() call, you can perform the required allocations and other initialization as desired. The benefit of a separate Init() call: you can return an error indication if any part of the initialization fails.
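     A minimal sketch of that two-phase pattern (the class and member names are placeholders, and copy control is omitted for brevity):

         #include <cstddef>
         #include <new>

         class MeshBuffer {
         public:
             MeshBuffer() : m_vertices(nullptr), m_indices(nullptr) {}   // no allocation here
             ~MeshBuffer() { delete[] m_vertices; delete[] m_indices; }

             // Perform allocations here so a failure can be reported to the caller.
             bool Init(std::size_t vertexCount, std::size_t indexCount)
             {
                 m_vertices = new (std::nothrow) float[vertexCount * 3];
                 m_indices  = new (std::nothrow) unsigned[indexCount];
                 return m_vertices != nullptr && m_indices != nullptr;
             }

         private:
             float*    m_vertices;
             unsigned* m_indices;
         };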
  7. Not sure what you mean by "maintain angular momentum." If you mean maintain constant, that's incorrect. As mentioned, angular momentum is constant for a system, not a single object, and angular momentum is equal to I * w, not I⁻¹ * w. As a result, for a single object, angular momentum can change if torque is applied externally.

     The current angular velocity is the sum of the previous angular velocity and the change in angular velocity. The change in angular velocity is torque / I * dt. The angular momentum is I * w, but, because w changed, the angular momentum for that single object changed.

     That code implements (or appears to implement) w += T / I * dt, which is correct. That is, as mentioned** above, T / I = dw/dt, so delta-w = T / I * dt. The latter is the change in angular velocity, which is added to the angular velocity to update it.

     ** My previous post was incomplete, in that I did not include the time step. Sorry about that.
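     In code, that update looks something like the following (scalar, single-axis form for clarity; variable names are illustrative):

         // One integration step: torque T and moment of inertia I give angular
         // acceleration dw/dt = T / I; multiply by dt to get the change in w.
         void IntegrateAngularVelocity(float torque, float inertia, float dt, float& w)
         {
             const float angularAcceleration = torque / inertia;  // dw/dt
             w += angularAcceleration * dt;                       // w += T / I * dt
             // Angular momentum for this single object is then L = I * w, which
             // changes because w changed under the external torque.
         }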
  8. First: I'm not familiar with the book you're talking about. And it's not clear exactly what you mean in a mathematical sense by a tensor being "applied" to the torque "before applying" it to angular velocity. If you mean something like T/I = delta-w, that's true.

     I think so. In general, you may get a better feeling for torque, angular momentum and angular velocity by considering them as parallels to classical Newtonian linear force, linear momentum and linear velocity - i.e., stuff like linear momentum P = m * v, and F = m * a. So, angular momentum L = I * w, and torque = I * dw/dt (angular acceleration).

     That applies to a system, not a single object. Obviously, an object's angular momentum can change through the application of torque. But, for the torque applied, an equal and opposite torque must be applied to something else in the system. That's similar to linear Newtonian physics: linear momentum is conserved for a system. The linear momentum of a rigid object, if no external force is applied, remains constant. If an external force is applied, an equal and opposite force must be applied to something else. In that case, both the rigid object and the "something else" must be considered as the system in question, and the momentum of the system remains constant.

     Your assumptions are incorrect. If no external torque is applied, then there will be no angular acceleration due to external torque. Drawing your arms in changes the moment of inertia. So, because L = I * w, and I has changed, w must change. That is, I is the integral of mass * radius-from-axis². Mass is conserved, so when the radius decreases, the moment of inertia decreases. If I decreases, w must increase. The change in w results from the torque that results from the force the skater applies to the arms to draw them in. Force-at-a-distance is torque.

     I suspect you know all that. Perhaps if you provide the exact context of your question, a better explanation can be provided.
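     A tiny numeric sketch of the skater example, assuming no external torque so L = I * w is held constant (the values are made up):

         #include <cstdio>

         int main()
         {
             // Arms out: larger moment of inertia, slower spin.
             float I1 = 4.0f;        // kg*m^2
             float w1 = 2.0f;        // rad/s
             float L  = I1 * w1;     // angular momentum, conserved here

             // Arms pulled in: moment of inertia drops, so w must rise to keep L fixed.
             float I2 = 1.0f;        // kg*m^2
             float w2 = L / I2;      // = 8 rad/s

             std::printf("L = %.1f, w goes from %.1f to %.1f rad/s\n", L, w1, w2);
             return 0;
         }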
  9. FPS Scope/Zoom

    You can do that or you can keep the same camera position and use a projection matrix with a smaller fov. That simulates more closely what a telescope does.   The "problem" with moving the camera itself rather than zooming the fov is when there are intervening objects between the observer and the target. You still want obstructions to .. well .. obstruct the view.
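     For example, a minimal DirectXMath sketch of that FOV-based zoom (zoomFactor and the parameter names are just placeholders):

         #include <DirectXMath.h>
         using namespace DirectX;

         // Narrowing the vertical field of view by 'zoomFactor' magnifies the image
         // without moving the camera, so intervening geometry still occludes the target.
         XMMATRIX MakeZoomedProjection(float baseFovY, float zoomFactor,
                                       float aspect, float nearZ, float farZ)
         {
             const float fovY = baseFovY / zoomFactor;   // e.g. zoomFactor = 4 for a 4x scope
             return XMMatrixPerspectiveFovLH(fovY, aspect, nearZ, farZ);
         }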
  10. DX11 Why is this 1?

      That's correct with regard to range.   Just to be clear, the ( x, y ) screen coordinates that result are ( -1, +1 ) at the upper-left to ( +1, -1 ) at the lower-right. That represents a Cartesian system with the origin at center-screen, with -X to the left, +Y up, +X to the right, and -Y down.
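      For illustration, a hypothetical helper (not from the thread) that maps a pixel position into that coordinate system:

          // Convert a pixel position to the post-projection (NDC) x/y range described
          // above: (-1, +1) at the upper-left, (+1, -1) at the lower-right.
          void PixelToNdc(float px, float py, float screenWidth, float screenHeight,
                          float& ndcX, float& ndcY)
          {
              ndcX =  (px / screenWidth)  * 2.0f - 1.0f;    // left -> -1, right -> +1
              ndcY = -((py / screenHeight) * 2.0f - 1.0f);  // top  -> +1, bottom -> -1
          }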
  11. Visual Studio 2013 Graphics Debug

      There is no reason whatsoever to uninstall the old SDK. It's not clear why you would recommend that. If the poster has other applications that were built with it, a lot of valuable work could be lost.   Potential conflicts between the current SDK and older versions are eliminated by simply not using inappropriate references.
  12. The animation matrices you describe appear, in general, to be pretty standard, though I think your terminology is non-standard. More commonly, an "offset matrix" is understood to be a matrix that transforms a vertex in pose position from model space to joint space. A joint's animation matrix, when applied to a vertex in joint space, transforms the vertex to an animated position in the joint's space.

      Concatenating the animation matrices - call the result a "combined" matrix (assuming your order is joint-anim-matrix X joint-parent-anim-matrix X joint-parent-parent-anim-matrix X ... X rootFrame-matrix) - can then be understood as animate-vertex-in-joint-space-and-transform-to-model-space. As a result, offset-matrix X combined-matrix transforms the vertex from pose position to joint space, moves it to the animated position in joint space, and transforms the animated vertex position back into model space.

      If your animations are as I assume, you can't arbitrarily apply animations from one skeleton to another. To apply animations from one skeleton to another, the two skeletons have to have the same child-to-parent relationships and (more importantly) have to have the same distance from joint to parent. That joint-to-parent distance is part of each joint's animation matrix. I.e., that's what makes concatenating (or combining) work - it moves the animated vertex position in joint space to model space, which involves rotation and translation (distance).

      Again, if your animations are as I assume, the offset matrices for each bone (joint) are most commonly simple translations. E.g., a vertex is 6 units above the "ground" and a joint which is to influence it is 7 units above the "ground" in pose position; the offset matrix merely translates the vertex -7 units, positioning the vertex at -1 units in joint space. Then the vertex (for example) is rotated in joint space and translated +7 units, putting it back in model space.

      If that animation is applied to a vertex in another mesh that, for example, is 8 units above the "ground," the offset matrix still translates it -7 units. Now the rotation is applied in joint space to a vertex at +1 in joint space. That rotation may be in a direction opposite to what is desired. That MAY be why you see "jittering" in the animation on the left in the pix you posted. Several of the bones on the left are longer than the corresponding bones on the right. By bone length, I mean the distance between joints.
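      Expressed as code, the per-joint final matrix described above might be built like this (DirectXMath, row-vector convention, parents stored before children; the Joint struct and names are illustrative, and x64 is assumed so XMMATRIX elements in std::vector get the required 16-byte alignment):

          #include <DirectXMath.h>
          #include <vector>
          using namespace DirectX;

          struct Joint {
              int      parentIndex;   // -1 for the root
              XMMATRIX offset;        // model space (pose position) -> joint space
              XMMATRIX animation;     // animated transform relative to the parent joint
          };

          void BuildSkinningMatrices(const std::vector<Joint>& joints,
                                     std::vector<XMMATRIX>& skinMatrices)
          {
              std::vector<XMMATRIX> combined(joints.size());
              skinMatrices.resize(joints.size());

              for (std::size_t i = 0; i < joints.size(); ++i) {
                  // combined = this joint's animation concatenated up the chain to the root
                  combined[i] = (joints[i].parentIndex < 0)
                      ? joints[i].animation
                      : XMMatrixMultiply(joints[i].animation, combined[joints[i].parentIndex]);

                  // offset X combined: pose position -> joint space -> animated model space
                  skinMatrices[i] = XMMatrixMultiply(joints[i].offset, combined[i]);
              }
          }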
  13. For cryin' out loud - that's 400-some downloads just during the month of July 2015. There are over 46,000 downloads all-time.
  14. As Mona2000 mentions, you don't appear to be checking for errors. I.e., device functions such as CreateGeometryShaderWithStreamOutput return a result code of type HRESULT. You should be checking every such call for success - e.g.:

          HRESULT hr = dev->CreateGeometryShaderWithStreamOutput(...);
          if ( FAILED(hr) ) { /* announce the error to the user and exit gracefully */ }

      If you made that change to the D3D11_SO_DECLARATION_ENTRY, that's incorrect. The semantic name should not include the semantic index. The semantic index follows the semantic name in the next field of the entry. That's why you need to enable the debug layer and read the error messages in the debug output window. Using "TEXCOORD0" as you have it above will result in a message telling you that the semantic cannot end in a number.

      I.e., try the following:

          D3D11_SO_DECLARATION_ENTRY pDecl[] = {
              { 0, "SV_POSITION", 0, 0, 4, 0 },
              { 0, "TEXCOORD", 0, 0, 2, 0 },
          };

      If you wanted to use TEXCOORD1 in the output structure, you would do the following:

          D3D11_SO_DECLARATION_ENTRY pDecl[] = {
              { 0, "SV_POSITION", 0, 0, 4, 0 },
              { 0, "TEXCOORD", 1, 0, 2, 0 },
          };
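      For reference, a minimal sketch of enabling the debug layer at device creation (hardware driver and default feature levels assumed):

          #include <d3d11.h>

          ID3D11Device*        device  = nullptr;
          ID3D11DeviceContext* context = nullptr;
          D3D_FEATURE_LEVEL    featureLevel;

          // D3D11_CREATE_DEVICE_DEBUG turns on the debug layer so descriptive error
          // messages (e.g. about bad SO declaration entries) appear in the output window.
          HRESULT hr = D3D11CreateDevice(
              nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
              D3D11_CREATE_DEVICE_DEBUG,
              nullptr, 0, D3D11_SDK_VERSION,
              &device, &featureLevel, &context);
          if (FAILED(hr)) { /* report the failure and exit gracefully */ }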