
Search the Community

Showing results for tags 'Linear Algebra'.




Found 15 results

  1. Original Post: Limitless Curiosity

     Of the various parts of a physics engine, constraint resolution was personally the hardest for me to understand. I had to read many different papers and articles to fully grasp how it works, so I decided to write this article to help me understand it more easily in the future, for example if I forget how it works. This article tackles the problem by working through an example and then deriving a general formula from it.

     Let us delve into a pretty common scenario: two of our rigid bodies collide and penetrate each other, as depicted below. From this scenario we can formulate a constraint: we don't want our rigid bodies to intersect each other, so the penetration depth must be at least zero: \(C: d >= 0\). This is an inequality constraint; we can transform it into a simpler equality constraint by solving it only when the two bodies are penetrating. If the two rigid bodies don't collide, we don't need any constraint resolution. So: if d >= 0, do nothing; else if d < 0, solve C: d = 0.

     Now we can solve this equation by calculating the \( \Delta \vec{p1}, \Delta \vec{p2}, \Delta \vec{r1}\), and \( \Delta \vec{r2}\) that satisfy the constraint above. This is called the position-based method. It satisfies the constraint immediately in the current frame, but it can cause a jittery effect. A more modern and preferable method, used in Box2D, Chipmunk, Bullet, and my physics engine, is the impulse-based method. In this method, we derive a velocity constraint equation from the position constraint equation above. We are working in 2D, so angular velocities and the cross products of two vectors are scalars.

     Next, we need to find the \(\Delta V\), or impulse, that satisfies the velocity constraint. This \(\Delta V\) is caused by a force we call the 'constraint force'.
     The constraint force only exerts force along the direction of illegal movement, in our case the penetration normal; we don't want this force to do any work, or to contribute to or restrict any motion in a legal direction. \(\lambda\) is a scalar called the Lagrange multiplier. To understand why the constraint force acts in the \(J^{T}\) direction (remember that J is a 1 by 12 matrix, so \(J^{T}\) is a 12 by 1 matrix, i.e. a 12-dimensional vector), recall the equation of a plane in three dimensions. We can draw a similarity between equation (1) and equation (2), where \(\vec{n}^{T}\) is analogous to J and \(\vec{v}\) is analogous to V. So we can interpret equation (1) as a 12-dimensional plane and conclude that \(J^{T}\) is the normal of this plane. If a point lies off a plane, the shortest path from the point to the plane is along the normal direction. After we calculate the Lagrange multiplier, we can recover the impulse from equation (3) and apply it to each rigid body.

     Baumgarte Stabilization

     Note that solving the velocity constraint does not mean we satisfy the position constraint. By the time we solve the velocity constraint, there is already a violation of the position constraint; we call this violation position drift. What we achieve is stopping the two bodies from penetrating any deeper (the penetration depth stops growing). This may be fine for a slow-moving object, where the drift is not noticeable, but it becomes a problem as objects move faster. The animation below demonstrates what happens when we only solve the velocity constraint. So instead of purely solving the velocity constraint, we add a bias term to fix any violation of the position constraint. What should the value of this bias be? As mentioned, we need it to fix positional drift, so we want it to be proportional to the penetration depth.
     This method is called Baumgarte stabilization, and \(\beta\) is the Baumgarte term. The right value for this term may differ between scenarios; we need to tweak it between 0 and 1 to find a value that keeps our simulation stable.

     Sequential Impulse

     If our world consisted of only two rigid bodies and one contact constraint, the method above would work decently. But in most games there are more than two rigid bodies, and one body can collide and penetrate with two or more others, so we need to satisfy all the contact constraints simultaneously. For a real-time application, solving all these constraints simultaneously is not feasible. Erin Catto proposed a practical solution called sequential impulse. The idea is similar to projected Gauss-Seidel: we calculate \(\lambda\) and \(\Delta V\) for each constraint one by one, from constraint 1 to constraint n (n = the number of constraints). After we finish iterating through the constraints, we repeat the process from constraint 1 to constraint n until the specified number of iterations is reached. This algorithm converges to the actual solution: the more we repeat the process, the more accurate the result. In Box2D, Erin Catto uses ten as the default number of iterations. Another thing to notice is that while satisfying one constraint we might unintentionally satisfy another. Say we have two different contact constraints on the same rigid body; when we solve \(\dot{C_1}\), we might incidentally make \(\dot{d_2} >= 0\). Remember that equation (5) is a formula for \(\dot{C}: \dot{d} = 0\), not \(\dot{C}: \dot{d} >= 0\), so we don't need to apply it to \(\dot{C_2}\) anymore. We can detect this by looking at the sign of \(\lambda\): a negative \(\lambda\) means the constraint is already satisfied, and using it as an impulse would pull the bodies closer together instead of pushing them apart.
     It is fine for an individual \(\lambda\) to be negative, but we need to make sure the accumulation of \(\lambda\) is not negative. In each iteration, we add the current lambda to normalImpulseSum, then clamp normalImpulseSum between 0 and positive infinity. The actual Lagrange multiplier that we use to calculate the impulse is the difference between the new normalImpulseSum and the previous normalImpulseSum.

     Restitution

     Okay, we have now successfully resolved contact penetration in our physics engine. But what about simulating objects that bounce when a collision happens? The property of bouncing on collision is called restitution. The coefficient of restitution, denoted \(C_{r}\), is the ratio of the separating speed after the collision to the closing speed before the collision. The coefficient of restitution only affects the velocity along the normal direction, so we need to take the dot product with the normal vector. Notice that in this specific case \(V_{initial}\) is similar to JV. If we look back at our constraint above, we set \(\dot{d}\) to zero because we assumed the object does not bounce back (\(C_{r}=0\)). So if \(C_{r} \neq 0\), we can modify our constraint so the desired velocity is \(V_{final}\) instead of 0. We can merge our old bias term with the restitution term to get a new bias value.

```cpp
// init constraint
// Calculate J(M^-1)(J^T). This term is constant, so we can calculate it first.
for (int i = 0; i < constraint->numContactPoint; i++)
{
    ftContactPointConstraint *pointConstraint = &constraint->pointConstraint[i];

    pointConstraint->r1 = manifold->contactPoints[i].r1 -
        (bodyA->transform.center + bodyA->centerOfMass);
    pointConstraint->r2 = manifold->contactPoints[i].r2 -
        (bodyB->transform.center + bodyB->centerOfMass);

    real kNormal = bodyA->inverseMass + bodyB->inverseMass;

    // Calculate r X normal
    real rnA = pointConstraint->r1.cross(constraint->normal);
    real rnB = pointConstraint->r2.cross(constraint->normal);

    // Calculate J(M^-1)(J^T).
    kNormal += (bodyA->inverseMoment * rnA * rnA +
                bodyB->inverseMoment * rnB * rnB);

    // Save inverse of J(M^-1)(J^T).
    pointConstraint->normalMass = 1 / kNormal;

    pointConstraint->positionBias = m_option.baumgarteCoef *
        manifold->penetrationDepth;

    ftVector2 vA = bodyA->velocity;
    ftVector2 vB = bodyB->velocity;
    real wA = bodyA->angularVelocity;
    real wB = bodyB->angularVelocity;

    ftVector2 dv = (vB + pointConstraint->r2.invCross(wB) -
                    vA - pointConstraint->r1.invCross(wA));

    // Calculate JV
    real jnV = dv.dot(constraint->normal);
    pointConstraint->restitutionBias = -restitution *
        (jnV + m_option.restitutionSlop);
}
```

```cpp
// solve constraint
while (numIteration > 0)
{
    for (int i = 0; i < m_constraintGroup.nConstraint; ++i)
    {
        ftContactConstraint *constraint = &(m_constraintGroup.constraints[i]);
        int32 bodyIDA = constraint->bodyIDA;
        int32 bodyIDB = constraint->bodyIDB;
        ftVector2 normal = constraint->normal;
        ftVector2 tangent = normal.tangent();

        for (int j = 0; j < constraint->numContactPoint; ++j)
        {
            ftContactPointConstraint *pointConstraint =
                &(constraint->pointConstraint[j]);

            ftVector2 vA = m_constraintGroup.velocities[bodyIDA];
            ftVector2 vB = m_constraintGroup.velocities[bodyIDB];
            real wA = m_constraintGroup.angularVelocities[bodyIDA];
            real wB = m_constraintGroup.angularVelocities[bodyIDB];

            // Calculate JV. (jnV = JV; dv = derivative of d;
            //  JV = derivative(d) dot normal)
            ftVector2 dv = (vB + pointConstraint->r2.invCross(wB) -
                            vA - pointConstraint->r1.invCross(wA));
            real jnV = dv.dot(normal);

            // Calculate lambda
            real nLambda = (-jnV + pointConstraint->positionBias / dt +
                            pointConstraint->restitutionBias) *
                           pointConstraint->normalMass;

            // Add lambda to normalImpulseSum and clamp
            real oldAccumI = pointConstraint->nIAcc;
            pointConstraint->nIAcc += nLambda;
            if (pointConstraint->nIAcc < 0) {
                pointConstraint->nIAcc = 0;
            }

            // Find the real lambda
            real I = pointConstraint->nIAcc - oldAccumI;

            // Calculate linear impulse
            ftVector2 nLinearI = normal * I;

            // Calculate angular impulse
            real rnA = pointConstraint->r1.cross(normal);
            real rnB = pointConstraint->r2.cross(normal);
            real nAngularIA = rnA * I;
            real nAngularIB = rnB * I;

            // Apply linear impulse
            m_constraintGroup.velocities[bodyIDA] -= constraint->invMassA * nLinearI;
            m_constraintGroup.velocities[bodyIDB] += constraint->invMassB * nLinearI;

            // Apply angular impulse
            m_constraintGroup.angularVelocities[bodyIDA] -=
                constraint->invMomentA * nAngularIA;
            m_constraintGroup.angularVelocities[bodyIDB] +=
                constraint->invMomentB * nAngularIB;
        }
    }
    --numIteration;
}
```

     General Step to Solve Constraint

     In this article we have learned how to resolve contact penetration by defining it as a constraint and solving it. But this framework is not only used for contact penetration; we can do many more cool things with constraints, such as implementing hinge joints, pulleys, springs, etc. So this is the step-by-step of constraint resolution:

     1. Define the constraint in the form \(\dot{C}: JV + b = 0\). V is always \(\begin{bmatrix} \vec{v1} \\ w1 \\ \vec{v2} \\ w2\end{bmatrix}\) for every constraint, so we need to find J, the Jacobian matrix, for that specific constraint.
     2. Decide the number of iterations for the sequential impulse.
     3. Find the Lagrange multiplier by inserting the velocity, the mass, and the Jacobian matrix into this equation:
     4. Do step 3 for each constraint, and repeat the process for the chosen number of iterations.
     5. Clamp the Lagrange multiplier if needed.

     This marks the end of this article. Feel free to ask if something is still unclear, and please inform me if there are inaccuracies in my article. Thank you for reading.

     NB: Box2D uses sequential impulse, but does not use Baumgarte stabilization anymore; it uses full NGS to resolve the position drift. Chipmunk still uses Baumgarte stabilization.

     References:
     Allen Chou's post on Constraint Resolution
     A Unified Framework for Rigid Body Dynamics
     An Introduction to Physically Based Modeling: Constrained Dynamics
     Erin Catto's Box2d and presentation on constraint resolution
  2. Hello guys, My math is failing and I can't get my orthographic projection matrix to work in Vulkan 1.0 (my implementation works great in D3D11 and D3D12). Specifically, nothing is drawn on the screen when using an ortho matrix, but my perspective projection matrix works fantastically! I use glm with the defines GLM_FORCE_LEFT_HANDED and GLM_FORCE_DEPTH_ZERO_TO_ONE (to handle 0 to 1 depth). This is how I define my matrices:

```cpp
m_projection_matrix = glm::perspective(glm::radians(fov), aspect_ratio, 0.1f, 100.0f);
m_ortho_matrix = glm::ortho(0.0f, (float)width, (float)height, 0.0f, 0.1f, 100.0f);
// I also tried 0.0f and 1.0f for depth near and far, the same values I set
// (and that work) in D3D, but in Vulkan it doesn't work either.
```

     Then I premultiply both matrices with a "fix matrix" to invert the Y axis:

```cpp
glm::mat4 matrix_fix =
   {1.0f,  0.0f, 0.0f, 0.0f,
    0.0f, -1.0f, 0.0f, 0.0f,
    0.0f,  0.0f, 1.0f, 0.0f,
    0.0f,  0.0f, 0.0f, 1.0f};

m_projection_matrix = m_projection_matrix * matrix_fix;
m_ortho_matrix = m_ortho_matrix * matrix_fix;
```

     This fix matrix works well in tandem with GLM_FORCE_DEPTH_ZERO_TO_ONE. The model/world matrix is the identity matrix:

```cpp
glm::mat4 m_world_matrix(1.0f);
```

     Then finally this is how I set my view matrix:

```cpp
// Yes, I use Euler angles (don't bring the gimbal lock topic here, lol).
// They work great with my cameras in D3D too!
m_view_matrix = glm::yawPitchRoll(glm::radians(m_rotation.y),
                                  glm::radians(m_rotation.x),
                                  glm::radians(m_rotation.z));
m_view_matrix = glm::translate(m_view_matrix, -m_position);
```

     That's all, guys. In my shaders I correctly multiply all 3 matrices with the position vector, and as I said, the perspective matrix works really well but the ortho matrix displays no geometry. EDIT: My vertex data is also on the right track; I use the same geometry in D3D and it works great: 256.0f units means 256 points/dots/pixels wide. What could I possibly be doing wrong or missing? Big thanks guys, any help would be greatly appreciated. Keep on coding, cheers.
  3. I've been puzzling over this for days now and have hit a wall, so I need some help! I'm using VR motion controllers in Unreal Engine. I need an object to follow along with the motion controller, but to maintain a certain orientation. If the motion controller / object were to move around on the flat XY plane (UE4 is Z-up, so the XY plane is the horizontal plane), the top of the object would point upwards, and the object would just turn corners following the direction of movement (turning much like a car would when driving around). At each tick, I take the current position of the motion controller and make a normalised direction vector using (CurrentPositionXYZ - PreviousPositionXYZ). Using this direction (forward vector), I then make a rotation from the forward and up vectors. To do that, I first find the right vector by taking the cross product of a global up vector (0,0,1) with the forward vector. Then I take the cross product of the right and forward vectors to get the actual up vector, and make the rotation with the forward and up vectors. Here is what this looks like if the motion controller movement were locked to the XY plane and moved in a circle. All looks good, and exactly what I was hoping for. However, if the circular movement occurs in either of the other planes (XZ or YZ), then we have a problem at the two points on the circle where the forward (direction) vector lines up with the global up vector I'm using. Because these calculations have no way of knowing that the object is moving in a certain direction, when the direction changes to move from negative Y to positive Y, for example, the whole thing flips. Here's what this looks like with a circle drawn in the YZ plane: This is obviously no good, as it breaks the consistency of movement. As such, I wrote some logic that detects whether the X or Y value crosses 0 while the direction is pointing up/down, and then reverses the global up vector.
     Assuming there is no movement/value in X, the movement graph for this looks like this: Red = X; Green = Y; Blue = Z || Circle in the YZ plane, starting movement in the -Y direction. And with the up-vector flip logic added, it looks like this: Looking good! Except there's a massive problem. This all works fine if the movement is constrained to a plane (in this case the YZ plane, but it also works perfectly in the XZ plane), but it breaks down if the circular movement is drawn even slightly off-axis. Here's what happens: There's a section where everything flips. The size of that section is determined by the magnitude of X, and relates to the part of the graph pointed to in this diagram: Here X is set at a constant -0.2. As soon as the Y value passes the value of X and approaches zero, the value in X starts having greater and greater influence over the direction, and then as Y moves further away from zero again, that influence drops away. Although this makes sense, it's not what I want, and I can't figure out how to achieve what I want. I'm having trouble even defining the nature of the problem. Perhaps (probably) I'm doing this in a really stupid way, and there's a much simpler solution for what I'm trying to achieve. I could really use some help! Given a progression of known locations through 3D space, what is the best way to move an object along them such that its axes bank (if that's the correct term) realistically and don't do any weird flipping around? All help massively appreciated!
  4. Gaussian based Weights

     Hi. I was studying the Bloom tutorial on learnopengl.com: https://learnopengl.com/#!Advanced-Lighting/Bloom I understand the benefits of using a Gaussian blur for achieving a bloom effect, and that the author of this tutorial uses the Gaussian values as weights which determine how much neighboring pixels should affect the current pixel when blurred. In this tutorial, the author chooses the following weights: 0.227027, 0.1945946, 0.1216216, 0.054054, 0.016216, where 0.227027 + (0.1945946 * 2.0) + (0.1216216 * 2.0) + (0.054054 * 2.0) + (0.016216 * 2.0) comes out to about 1. The author takes the same 4 samples to the left and right of the current pixel, for a total sample count of 9 per axis. All that makes sense, but what I don't understand is how to calculate the weights myself. For example, let's say I wanted a sample count of some arbitrary number like 5. With a sample count of 5, the formula would then need to be A + (B * 2.0) + (C * 2.0) + (D * 2.0) + (E * 2.0) = 1. What I want to do is figure out how to calculate the weights A, B, C, D, and E given the sample count. Thanks!
  5. I'm trying to implement a constraint that fixes the relative orientation and position of two rigid bodies. My implementation is based on this document: http://danielchappuis.ch/download/ConstraintsDerivationRigidBody3D.pdf The resulting velocity adjustments do not satisfy the constraint and I don't know what I'm doing wrong.

```cpp
#include <Eigen/Eigen>

using namespace Eigen;

typedef double Real;
typedef Eigen::Matrix<Real, 3, 3> Matrix3r;
typedef Eigen::Matrix<Real, 3, 1> Vector3r;
typedef Eigen::Quaternion<Real> Quaternionr;
typedef Eigen::AlignedBox<Real, 3> AlignedBox3r;

template <typename Scalar>
inline Eigen::Matrix<Scalar, 3, 3> crossProductMatrix(const Eigen::Matrix<Scalar, 3, 1> &v)
{
    Eigen::Matrix<Scalar, 3, 3> v_hat;
    v_hat << 0, -v(2), v(1),
             v(2), 0, -v(0),
             -v(1), v(0), 0;
    return v_hat;
}

struct RigidBody
{
    Matrix3r R;      // world-space rotation of the rigid body
    Vector3r x;      // world-space position of the rigid body
    Vector3r v;      // linear velocity
    Vector3r w;      // angular velocity
    Matrix3r Iinv;   // local-space inertia tensor
    Real invmass;
};

Matrix3r compute_inertia_tensor(const AlignedBox3r &box, Real m)
{
    const Real a = box.sizes().x();
    const Real b = box.sizes().y();
    const Real c = box.sizes().z();
    return (Real(1.0 / 3.0) * m *
            Vector3r{ b * b + c * c, a * a + c * c, a * a + b * b }).asDiagonal();
}

void computeMatrixK(
    const Vector3r &connector,
    const Real invMass,
    const Vector3r &x,
    const Matrix3r &inertiaInverseW,
    Matrix3r &K)
{
    const Vector3r r = connector - x;
    K.setZero();
    K += invMass * Matrix3r::Identity();
    K += crossProductMatrix(r) * inertiaInverseW * crossProductMatrix(r).transpose();
}

bool init_RigidBodyFixedConstraint(
    const Real invMass0,               // inverse mass is zero if body is static
    const Vector3r &x0,                // center of mass of body 0
    const Vector3r &v0,                // velocity of body 0
    const Matrix3r &inertiaInverseW0,  // inverse inertia tensor (world space) of body 0
    const Vector3r &omega0,            // angular velocity of body 0
    const Real invMass1,               // inverse mass is zero if body is static
    const Vector3r &x1,                // center of mass of body 1
    const Vector3r &v1,                // velocity of body 1
    const Matrix3r &inertiaInverseW1,  // inverse inertia tensor (world space) of body 1
    const Vector3r &omega1,            // angular velocity of body 1
    const Vector3r &connector,         // world-space coordinates of anchor
    Eigen::Matrix<Real, 3, 7> &constraintInfo)
{
    Matrix3r K1, K2;
    computeMatrixK(connector, invMass0, x0, inertiaInverseW0, K1);
    computeMatrixK(connector, invMass1, x1, inertiaInverseW1, K2);
    const Matrix3r inv_K_trans = (K1 + K2).inverse();
    const Matrix3r inv_K_rot = (inertiaInverseW0 + inertiaInverseW1).inverse();

    constraintInfo.block<3, 1>(0, 0) = connector;
    constraintInfo.block<3, 3>(0, 1) = inv_K_trans;
    constraintInfo.block<3, 3>(0, 4) = inv_K_rot;
    return true;
}

bool velocitySolve_RigidBodyFixedConstraint(
    const Real invMass0,               // inverse mass is zero if body is static
    const Vector3r &x0,                // center of mass of body 0
    const Vector3r &v0,                // velocity of body 0
    const Matrix3r &inertiaInverseW0,  // inverse inertia tensor (world space) of body 0
    const Vector3r &omega0,            // angular velocity of body 0
    const Real invMass1,               // inverse mass is zero if body is static
    const Vector3r &x1,                // center of mass of body 1
    const Vector3r &v1,                // velocity of body 1
    const Matrix3r &inertiaInverseW1,  // inverse inertia tensor (world space) of body 1
    const Vector3r &omega1,            // angular velocity of body 1
    Eigen::Matrix<Real, 3, 7> &constraintInfo,  // precomputed constraint info
    Vector3r &corr_v0, Vector3r &corr_omega0,
    Vector3r &corr_v1, Vector3r &corr_omega1)
{
    const Vector3r connector = constraintInfo.block<3, 1>(0, 0);
    const Matrix3r invKt = constraintInfo.block<3, 3>(0, 1);
    const Matrix3r invKr = constraintInfo.block<3, 3>(0, 4);

    const Vector3r r0 = connector - x0;
    const Vector3r r1 = connector - x1;

    const Vector3r J_trans_v = (v1 + omega1.cross(r1)) - (v0 + omega0.cross(r0));
    const Vector3r J_rot_v = omega1 - omega0;

    const Vector3r lambda_trans = -invKt * J_trans_v;
    const Vector3r lambda_rot = -invKr * J_rot_v;

    corr_v0.setZero(); corr_omega0.setZero();
    corr_v1.setZero(); corr_omega1.setZero();

    if (invMass0 != 0.0f) {
        corr_v0 -= invMass0 * lambda_trans;
        corr_omega0 -= inertiaInverseW0 * r0.cross(lambda_trans);
        corr_omega0 -= inertiaInverseW0 * lambda_rot;
    }
    if (invMass1 != 0.0f) {
        corr_v1 += invMass1 * lambda_trans;
        corr_omega1 += inertiaInverseW1 * r1.cross(lambda_trans);
        corr_omega1 += inertiaInverseW1 * lambda_rot;
    }
    return true;
}

Real evalError(
    const Vector3r &x0,     // center of mass of body 0
    const Vector3r &v0,     // velocity of body 0
    const Vector3r &omega0, // angular velocity of body 0
    const Vector3r &x1,     // center of mass of body 1
    const Vector3r &v1,     // velocity of body 1
    const Vector3r &omega1, // angular velocity of body 1
    const Vector3r &connector)
{
    const Vector3r r0 = connector - x0;
    const Vector3r r1 = connector - x1;
    const Vector3r J_trans_v = v1 + omega1.cross(r1) - v0 - omega0.cross(r0);
    const Vector3r J_rot_v = -omega0 + omega1;
    return J_trans_v.norm() + J_rot_v.norm();
}

int main(int argc, char **argv)
{
    const Real extents = 2.0f; // each cube is 2x2x2

    RigidBody A, B;
    A.x = Vector3r{ 0.0f, 1.0f, 0.0f };
    B.x = A.x + Vector3r::UnitY() * (1.0f * extents);

    const AlignedBox3r box{ Vector3r{ -1.0f, -1.0f, -1.0f },
                            Vector3r{ +1.0f, +1.0f, +1.0f } };
    const Real massA = 20.0f;
    const Real massB = 40.0f;
    A.Iinv = compute_inertia_tensor(box, massA).inverse();
    B.Iinv = compute_inertia_tensor(box, massB).inverse();
    A.invmass = massA == 0.0 ? 0.0 : 1.0 / massA;
    B.invmass = massB == 0.0 ? 0.0 : 1.0f / massB;

    B.v = Vector3r::UnitX() * +1.0f;
    A.v = Vector3r::UnitY() * -1.0f;
    //A.w.setZero();
    //B.w.setZero();
    B.w = { 0.1f, 0.2f, 0.3f };
    A.w = { 0.4f, 0.1f, 0.7f };
    A.R.setIdentity();
    B.R.setIdentity();

    // world-space inverse inertia tensors
    const Matrix3r inertiaInverseW1 = A.R * A.Iinv * A.R.transpose();
    const Matrix3r inertiaInverseW2 = B.R * B.Iinv * B.R.transpose();

    const Vector3r connector{ 0.0f, 2.0f, 0.0f };

    Eigen::Matrix<Real, 3, 7> constraintInfo;
    init_RigidBodyFixedConstraint(
        A.invmass, A.x, A.v, inertiaInverseW1, A.w,
        B.invmass, B.x, B.v, inertiaInverseW2, B.w,
        connector, constraintInfo);

    for (int i = 0; i < 1; ++i) {
        Vector3r corr_v0, corr_omega0;
        Vector3r corr_v1, corr_omega1;
        velocitySolve_RigidBodyFixedConstraint(
            A.invmass, A.x, A.v, inertiaInverseW1, A.w,
            B.invmass, B.x, B.v, inertiaInverseW2, B.w,
            constraintInfo,
            corr_v0, corr_omega0, corr_v1, corr_omega1);
        A.v += corr_v0; A.w += corr_omega0;
        B.v += corr_v1; B.w += corr_omega1;
    }

    const Real error = evalError(A.x, A.v, A.w, B.x, B.v, B.w, connector);
    return 0;
}
```
  6. Sorry for making a new thread about this, but I have a specific question which I couldn't find an answer to in any of the other threads I've looked at. I've been trying to get the method shown here to work for several days now and I've run out of things to try. I've more or less resorted to using the barebones example shown there (with some very minor modifications, as it wouldn't run otherwise), but I still can't get it to work. Either I have misunderstood something completely, or there's a mistake somewhere. My shader code looks like this:

     Vertex shader:

```glsl
#version 330 core
//Vertex shader

//Half the size of the near plane {tan(fovy/2.0) * aspect, tan(fovy/2.0)}
uniform vec2 halfSizeNearPlane;

layout (location = 0) in vec3 clipPos;
//UV for the depth buffer/screen access.
//(0,0) in bottom left corner, (1,1) in top right corner
layout (location = 1) in vec2 texCoord;

out vec3 eyeDirection;
out vec2 uv;

void main()
{
    uv = texCoord;
    eyeDirection = vec3((2.0 * halfSizeNearPlane * texCoord) - halfSizeNearPlane, -1.0);
    gl_Position = vec4(clipPos.xy, 0, 1);
}
```

     Fragment shader:

```glsl
#version 330 core
//Fragment shader
layout (location = 0) out vec3 fragColor;

in vec3 eyeDirection;
in vec2 uv;

uniform mat4 persMatrix;
uniform vec2 depthrange;
uniform sampler2D depth;

vec4 CalcEyeFromWindow(in float windowZ, in vec3 eyeDirection, in vec2 depthrange)
{
    float ndcZ = (2.0 * windowZ - depthrange.x - depthrange.y) / (depthrange.y - depthrange.x);
    float eyeZ = persMatrix[3][2] / ((persMatrix[2][3] * ndcZ) - persMatrix[2][2]);
    return vec4(eyeDirection * eyeZ, 1);
}

void main()
{
    vec4 eyeSpace = CalcEyeFromWindow(texture(depth, uv).x, eyeDirection, depthrange);
    fragColor = eyeSpace.rbg;
}
```

     Where my camera settings are:

```cpp
float fov = glm::radians(60.0f);
float aspect = 800.0f / 600.0f;
```

     And my uniforms equal:

```cpp
uniform mat4 persMatrix = glm::perspective(fov, aspect, 0.1f, 100.0f)
uniform vec2 halfSizeNearPlane = glm::vec2(glm::tan(fov/2.0) * aspect, glm::tan(fov/2.0))
uniform vec2 depthrange = glm::vec2(0.0f, 1.0f)
```

     uniform sampler2D depth is a GL_DEPTH24_STENCIL8 texture which has depth values from an earlier pass (if I linearize it and set fragColor = vec3(linearizedZ), it shows up like it should, so nothing seems wrong there). I can confirm that it's wrong because it doesn't give me results similar to saving the position in the G-buffer or reconstructing using inverse matrices. Is there something obvious I'm missing? To me the logic seems sound, and from the description on the Khronos wiki I can't see where I go wrong. Thanks!
  7. Hi, there's a great tutorial on frustum culling, but it's impossible to compile because it uses old D3DX types (D3DXPLANE instead of XMVECTOR, etc). Can someone please help me update this one class - FrustumClass - to the new DirectX 11 types (XMMATRIX, XMVECTOR, etc)? http://www.rastertek.com/dx11tut16.html Furthermore, can anyone please explain how he gets the minimum Z distance from the projection matrix by dividing one element by another? He leaves no explanation for this math and it's extremely frustrating.

```cpp
// Calculate the minimum Z distance in the frustum.
zMinimum = -projectionMatrix._43 / projectionMatrix._33;
r = screenDepth / (screenDepth - zMinimum);
projectionMatrix._33 = r;
projectionMatrix._43 = -r * zMinimum;
```

     Also not sure how to use an XMVECTOR instead of the old plane class that he uses. Confused as to how the m12, m13, etc. correspond to the elements of an XMVECTOR. I thought you're not supposed to even access the elements of an XMVECTOR directly! So incredibly frustrating. Please help, thanks.
  8. I am doing a little physics project, with circle-circle collisions for now, and have tried to do impulse resolution for collisions between 2 circles, using the following code:

```python
relativeVelocity = (other.doVerletVelocity()).subtract(self.doVerletVelocity())
normDirecVel = relativeVelocity.dotProduct(collisionNormal)
restitution = -1 - min(self.restitution, other.restitution)
numerator = normDirecVel * restitution
impulseScalar = numerator / float(1 / self.mass) + float(1 / other.mass)
selfVel = self.doVerletVelocity()
otherVel = other.doVerletVelocity()
impulse = collisionNormal.scalarMult(impulseScalar)
selfDV = impulse.scalarMult(1 / self.mass)
otherDV = impulse.scalarMult(1 / other.mass)
newSelfVel = selfVel.subtract(selfDV)
newOtherVel = otherVel.add(otherDV)
self.oldPos = (self.center).subtract(newSelfVel.scalarMult(dt))
other.oldPos = (other.center).subtract(newOtherVel.scalarMult(dt))
```

     The problem seems to be that whatever value I give to self.mass and other.mass, the output stays exactly the same. The values that I used are:

```python
center = Vector(0, 0)
radius = 1
oldPos = Vector(0, 0)
accel = Vector(0, 0)
mass = 100
restitution = 0.001

center2 = Vector(0, 3.20)
radius2 = 1
oldPos2 = Vector(0, 3.201)
accel2 = Vector(0, -1)
mass2 = 1
restitution2 = 1
```

     The output was:

```
0.0 0.0 0.0 2.165000000000114
0.0 0.0 0.0 2.1360000000001174
0.0 0.0 0.0 2.1066000000001206
0.0 0.0 0.0 2.076800000000124
0.0 0.0 0.0 2.046600000000127
0.0 0.0 0.0 2.0160000000001306
0.0 0.0 0.0 1.985000000000134
CIRCLE INTERSECTION
0.0 -1.985000000000134 0.0 3.938600000000271
0.0 -3.970000000000268 0.0 5.891800000000408
0.0 -5.9550000000004015 0.0 7.844600000000544
0.0 -7.940000000000535 0.0 9.797000000000681
```

     I changed the values for the masses to make them higher, but the output still remained the same. If you could get to the bottom of this, it would be much appreciated.
  9. How do we rotate the camera 360 degrees around the X axis without getting the strange effect shown in my video below? Mine behaves exactly the way spherical coordinates would; I'm using Euler angles. I tried googling but couldn't find a proper answer - I'm guessing I don't know what exactly to google for; searching 'rotate 360 around x axis' got no proper answers. References: Code: https://pastebin.com/Hcshj3FQ The video shows the difference between Blender and my rotation:
  10. Hi all, More than a decade ago, a problem came up on this forum for computing a fast transpose of a 3x3 matrix using SSE. The most sensible implementation stores the matrix internally as a 3x4 matrix (so, one row stores 4 elements, aligned in a vector). A version, which I believe to be the fastest currently known, was presented: I am pleased to report that I have been able to come up with a version which should be faster: inline void transpose(__m128& A, __m128& B, __m128& C) { //Input rows in __m128& A, B, and C. Output in same. __m128 T0 = _mm_unpacklo_ps(A,B); __m128 T1 = _mm_unpackhi_ps(A,B); A = _mm_movelh_ps(T0,C); B = _mm_shuffle_ps( T0,C, _MM_SHUFFLE(3,1,3,2) ); C = _mm_shuffle_ps( T1,C, _MM_SHUFFLE(3,2,1,0) ); } This should be 5 instructions instead of ajas95's 8 instructions. Of course, to get that level of performance with either version, you need to inline everything, or else you spend tons of time on moving floating point arguments to/from input registers. The other thing that is crucial is that the instruction set be VEX encoded. This allows generating instructions that take three arguments, like `vunpcklps`, instead of instructions like `unpcklps` that take only two. VEX is only available in AVX and higher (usually passing e.g. `-mavx` is sufficient to get the compiler to generate VEX instructions). -G
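For anyone wanting to sanity-check the shuffle pattern without firing up an SSE build, here is a small Python simulation of the four intrinsics involved (lane semantics only - element 0 is the lowest lane, and the function names just mirror the intrinsics):

```python
def unpacklo(a, b):   # _mm_unpacklo_ps: interleave the low halves
    return [a[0], b[0], a[1], b[1]]

def unpackhi(a, b):   # _mm_unpackhi_ps: interleave the high halves
    return [a[2], b[2], a[3], b[3]]

def movelh(a, b):     # _mm_movelh_ps: low half of a, low half of b
    return [a[0], a[1], b[0], b[1]]

def shuffle(a, b, z, y, x, w):  # _mm_shuffle_ps with _MM_SHUFFLE(z, y, x, w)
    return [a[w], a[x], b[y], b[z]]

# 3x3 matrix stored as three 4-lane rows; the 4th lane is padding.
A = [1, 2, 3, 0]
B = [4, 5, 6, 0]
C = [7, 8, 9, 0]

T0 = unpacklo(A, B)            # [1, 4, 2, 5]
T1 = unpackhi(A, B)            # [3, 6, pad, pad]
A2 = movelh(T0, C)             # [1, 4, 7, ...]
B2 = shuffle(T0, C, 3, 1, 3, 2)  # [2, 5, 8, ...]
C2 = shuffle(T1, C, 3, 2, 1, 0)  # [3, 6, 9, ...]
# The first three lanes of A2, B2, C2 are the rows of the transpose.
```

The fourth lane of each output row is garbage, which is fine for the 3x4 storage scheme described above.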
  11. I'm trying to implement OBB vs. OBB SAT collision from Randy Gaul's article. I have been playing with it for days, but I can't figure out why it doesn't work. I store all vertex positions of each box, and also each face normal, in two lists, then use them in the algorithm. I've tried a lot of changes to make it work, but no luck. This is my C++ code:

QVector2D Entity::getSupport(QVector2D dir)
{
    float bestProjection = -std::numeric_limits<float>::max();
    QVector2D bestVertex;

    for (int i = 0; i < verticesList.count(); ++i)
    {
        QVector2D v = verticesList.at(i);
        float projection = QVector2D::dotProduct(v, dir);

        if (projection > bestProjection)
        {
            bestVertex = v;
            bestProjection = projection;
        }
    }
    return bestVertex;
}

qreal Collision::FindAxisLeastPenetration(Entity *A, Entity *B)
{
    float bestDistance = -std::numeric_limits<float>::max();

    for (int k = 0; k < A->verticesList.count(); ++k)
    {
        QVector2D n = A->normalList.at(k);
        QVector2D s = B->getSupport(-n);
        QVector2D v = A->verticesList.at(k);
        QVector2D r = s - v;
        qreal d = QVector2D::dotProduct(n, r);

        if (d > bestDistance)
        {
            bestDistance = d;
        }
    }
    return bestDistance;
}

if (coli->FindAxisLeastPenetration(player, player2) < 0 && coli->FindAxisLeastPenetration(player2, player) < 0)
{
    qDebug() << "Collision detected ";
}
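One thing worth checking: the pairing of normals to vertices matters - normalList[k] has to be the outward normal of the face that starts at verticesList[k], and both lists have to be in world space, recomputed whenever the boxes move or rotate. Stale or mismatched lists make this test silently fail. A minimal Python sketch of the same support-point SAT test to compare against (hypothetical helper names, not Qt code):

```python
def support(verts, d):
    # Furthest vertex along direction d (the getSupport equivalent).
    return max(verts, key=lambda v: v[0] * d[0] + v[1] * d[1])

def least_penetration(verts_a, normals_a, verts_b):
    # For each face (vertex k, outward normal k) of A, measure B's support
    # point against the face plane; > 0 on any face means a separating axis.
    best = -float("inf")
    for n, v in zip(normals_a, verts_a):
        s = support(verts_b, (-n[0], -n[1]))
        d = n[0] * (s[0] - v[0]) + n[1] * (s[1] - v[1])
        best = max(best, d)
    return best

def sat_overlap(verts_a, normals_a, verts_b, normals_b):
    # Overlap only if no separating axis exists on either shape.
    return (least_penetration(verts_a, normals_a, verts_b) < 0 and
            least_penetration(verts_b, normals_b, verts_a) < 0)

# CCW unit square; normals[k] is the outward normal of the face at verts[k].
square  = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
normals = [(0.0, -1.0), (1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
overlapping = [(x + 0.5, y + 0.5) for x, y in square]
separated   = [(x + 2.0, y) for x, y in square]

hit  = sat_overlap(square, normals, overlapping, normals)
miss = sat_overlap(square, normals, separated, normals)
```

If this logic matches yours, the bug is most likely in how the vertex and normal lists are built or updated rather than in the SAT loop itself.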
  12. Hi guys, I'm trying to build my own 3x3 rotation matrix algorithm, for the sake of understanding it as thoroughly as I can. The code below is what I just wrote from memory in an attempt to explain it to myself and test what I understand and what I do not. I'm roughly starting to get it (I saw it last week for the first time, and it kind of crushed my soul at the time), and if I decompose it into small single pieces as I did below, it actually works: my point is rotated to the appropriate coordinates (or almost, since the y component ends up getting zeroed out, so I think I just need to add the p_proj_to_rA vector back to the final result to have it right). That said, I completely missed the point of building a rotation matrix out of it, even though the whole time I was convinced I was moving toward that goal. I mean, I've made my matrix class with all the relevant operations/methods, but I just don't know how to do the final step of converting what I have below into a rotation matrix. And I don't mean the formula, which I have in the book and could copy-paste and have it work if I wanted to; what I need is to actually see how to derive that final matrix formula from what I have below. I need to understand how it correlates, in a way that if I were to reason about the final rotation matrix formula in the book I could say "of course, it's obvious".
So everyone who deeply understands rotation matrices and how to end up with one (without just copy-pasting the formula from a book or using a function that hands you one) is welcome to take on the challenge of making it look obvious to me, so that I can finish my program and end up with the correct matrix for whatever axis of rotation I choose. Or just link me to whatever online resource helped you in the past. Thanks.

Vector p  = { 1, 1, 1 }; // point that needs to rotate
Vector rA = { 0, 1, 0 }; // rotation axis

p.computeMagnitude();  // sqrt( x*x + y*y + z*z )
rA.computeMagnitude(); // sqrt( x*x + y*y + z*z )

// Projection of our point onto the rotation axis, which corresponds to the
// dot product since rA is of length 1:
// Formula 1: dotProduct(p,rA) = px*rAx + py*rAy + pz*rAz
// Formula 2: dotProduct(p,rA) = |p| * |rA| * cos(angleBetween)
// proj_rA(p) = dotProduct(p,rA) = |p| * 1 * cos(angleBetween) = |p| * cos(angleBetween)

// Solve Formula 2 for angleBetween:
// |p| * cos(angleBetween) = dotProduct(p,rA)
// cos(angleBetween) = dotProduct(p,rA) / |p|
// angleBetween = acos( dotProduct(p,rA) / |p| )
// proj_rA(p) = |p| * cos(angleBetween)
//            = |p| * cos( acos( dotProduct(p,rA) / |p| ) )

// Projection of p onto rA:
double proj_to_rA = p.magnitude * cos(Vector::angleBetween(p, rA));

// Vector p projected onto rA, which is basically a longer or shorter version of rA:
Vector p_proj_to_rA = rA * proj_to_rA;

// Subtracting from p the (projection of p on rA) zeroes out dotProduct(p,rA),
// so the resulting vector must be orthogonal to rA:
Vector p_perp_to_rA1 = p - p_proj_to_rA;

// The cross product of two vectors is a vector orthogonal to both of them.
// Formula 1: |crossProduct(A,B)| = |A|*|B|*sin(angleBetween)
// Formula 2: ( v1.y*v2.z - v1.z*v2.y,   // X
//              v1.z*v2.x - v1.x*v2.z,   // Y
//              v1.x*v2.y - v1.y*v2.x )  // Z
Vector p_perp_to_rA2 = Vector::crossProduct(rA, p_perp_to_rA1);

// Since rA, p_perp_to_rA1 and p_perp_to_rA2 are all perpendicular to each other,
// p_perp_to_rA1 and p_perp_to_rA2 describe a 2D plane perpendicular to rA,
// and we rotate in that plane. The desired point in the plane is the sum
// (X along p_perp_to_rA1 + Y along p_perp_to_rA2):
double desiredRotation = toRad(-30);
double X = cos(desiredRotation);
double Y = sin(desiredRotation);

// As noted above, the component along the axis has to be added back
// to complete the rotation:
Vector finalPoint = Vector(p_perp_to_rA1 * X) + Vector(p_perp_to_rA2 * Y) + p_proj_to_rA;

cout << "x,y,z{ 1, 1, 1 } -30 degree on the x,y,z{ 0, 1, 0 } Axis = " << finalPoint << endl;
return 0;

output:
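One way to bridge the gap between this decomposition and the book's closed-form matrix: write the rotated point as p' = p_proj_to_rA + cos(t)*p_perp_to_rA1 + sin(t)*p_perp_to_rA2, expand each term in the components of the unit axis k, and collect the coefficients of px, py, pz. What falls out is Rodrigues' rotation formula, R = cos(t)*I + sin(t)*K + (1 - cos(t))*k*k^T, where K is the cross-product matrix of k - and that is exactly the matrix books quote. A Python sketch of the collected matrix (illustrative, not the poster's Vector class):

```python
import math

def rotation_matrix(axis, theta):
    # Rodrigues' formula, coefficients collected:
    # R = cos(t)*I + sin(t)*K + (1 - cos(t)) * k k^T
    kx, ky, kz = axis                    # axis must be a unit vector
    c, s = math.cos(theta), math.sin(theta)
    t = 1 - c
    return [
        [c + kx*kx*t,    kx*ky*t - kz*s, kx*kz*t + ky*s],
        [ky*kx*t + kz*s, c + ky*ky*t,    ky*kz*t - kx*s],
        [kz*kx*t - ky*s, kz*ky*t + kx*s, c + kz*kz*t   ],
    ]

def apply(m, v):
    # Row-vector-free 3x3 matrix * column vector.
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

# Rotating (1, 1, 1) by -30 degrees about (0, 1, 0): the component along
# the axis (the y component here) is preserved, as the decomposition predicts.
p_rot = apply(rotation_matrix((0, 1, 0), math.radians(-30)), [1, 1, 1])
```

The three columns of R are just where the basis vectors (1,0,0), (0,1,0), (0,0,1) land under the perp1/perp2/projection construction, which is why collecting coefficients works.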
  13. Has anyone tried finding isometric bounds programmatically? What I'm trying to achieve is to use the Cartesian width and height of the object to find its isometric bounds, but I think that just doesn't work once the image gets complicated. The attachments should explain clearly what I'm trying to achieve (note: the bounds drawing isn't perfect).
  14. Hello, I wrote a math library for C and saw a thread about this, so I wanted to share it. Maybe it helps those who, like me, work with C and are looking for a 3D math library for C89/C99. It is really easy to use and really fast! Repo: https://github.com/recp/cglm All funcs are inlined, but you can opt to use pre-compiled versions of them: for instance, glm_mat4_mul is inline and glmc_mat4_mul is pre-compiled. If you don't need the pre-compiled versions, then you don't need to build the library. It uses SIMD/AVX if available; NEON may be supported in the future. All functions are optimized and there are lots of convenience funcs. The features are listed in the repo's README, so I think there is no need to put them here. Just wanted to share my stuff to help people who are looking for a new lib, or a lib for C.
  15. Quaternion Powers

    Version 1.2, February 2003

Updates

1.2: A minor correction to the formula for converting from a quaternion to axis-angle. The scale was missing a square root. Thanks to Shi for pointing that out.

From version 1.0 - 1.1: The norm of a quaternion should be the square root of q.q. The mistake was brought to my attention by several kind readers, and upon checking the definition of the Euclidean properties for complex numbers, I realized the norm property [bquote] || u+v || [/bquote] was violated by the previous definition of the magnitude. The code in the samples is updated as well.

Foreword

To me, the term 'Quaternion' sounds out of this world, like some term from quantum theory about dark matter, having dark secret powers. If you, too, are enthralled by this dark power, this article will bring enlightenment (I hope). The article will show you how to do rotations using quaternions, and bring you closer to understanding quaternions (and their powers). If you do spot a mistake please email me at robin@cyberversion.com. Also, if you intend to put this on your site, please send me a mail. I like to know where this ends up.

Why use Quaternions?

To answer the question, let's first discuss some common orientation implementations.

Euler representation

This is by far the simplest method of implementing orientation. For each axis, there is a value specifying the rotation around the axis. Therefore, we have 3 variables [bquote] x, y, z [/bquote] that vary between 0 and 360 degrees (or 0 - 2pi). They are the roll, pitch, and yaw (or pitch, roll, and yaw - whatever) representation. Orientation is obtained by multiplying the 3 rotation matrices generated from the 3 angles together (in a specific order that you define). Note: The rotations are specified with respect to the global coordinate axis frame. This means the first rotation does not change the axis of rotation for the second and third rotations. This causes a situation known as gimbal lock, which I will discuss later.
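The fixed-axis behaviour can be demonstrated numerically: with the middle (Y) rotation at 90 degrees, the outer X and Z rotations collapse into a single degree of freedom - only their sum matters. A quick check (standard column-vector rotation matrices; a sketch for illustration, not the article's sample code):

```python
import math

def rx(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def ry(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rz(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler(a, b, c):
    # RotX * RotY * RotZ, the order used in this article.
    return matmul(rx(a), matmul(ry(b), rz(c)))

# Gimbal lock: with the Y angle at 90 degrees, only a + c matters,
# so these two very different (a, c) pairs give the same orientation.
m1 = euler(0.3, math.pi / 2, 0.5)
m2 = euler(0.6, math.pi / 2, 0.2)

# Away from 90 degrees, the same pairs give different orientations.
m3 = euler(0.3, 0.5, 0.5)
m4 = euler(0.6, 0.5, 0.2)
```

In other words, at that configuration a whole family of (X, Z) angle pairs maps to one orientation: a lost degree of freedom.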
Angle Axis representation

This implementation method is better than the Euler representation, as it avoids the gimbal lock problem. The representation consists of a unit vector representing an arbitrary axis of rotation, and another variable (0 - 360) representing the rotation around that vector: [bquote] x, y, z, angle [/bquote]

Why are these representations bad?

Gimbal Lock

As rotations in the Euler representation are done with respect to the global axes, a rotation in one axis can 'override' a rotation in another, making you lose a degree of freedom. This is gimbal lock. Say the rotation about the Y axis rotates a vector (parallel to the x axis) so that the vector is parallel to the z axis. Then any rotation about the z axis has no effect on the vector. Later, I will show you an example of gimbal lock and how you can use quaternions to overcome it.

Interpolation Problems

Though the angle axis representation does not suffer from gimbal lock, there are problems when you need to interpolate between two rotations. The calculated interpolated orientations may not be smooth, so you will get jerky rotation movements. The Euler representation suffers from this problem as well.

Let's get started

Before we begin, let's establish some assumptions I'll be making. I hate the way many articles leave this important section out, causing a great deal of confusion when it comes to the mathematics.

Coordinate System - This article assumes a right-handed coordinate system, like OpenGL. If you are using a left-handed coordinate system like Direct3D, you may need to transpose the matrices. Note that the Direct3D samples have a quaternion library already, though I recommend you check through their implementation before using it.

Rotation Order - The sequence of rotations in the Euler representation is X, then Y, and then Z. In matrix form: [bquote] RotX * RotY * RotZ (very important) [/bquote]

Matrix - Matrices are in column major format, like they are in OpenGL.
[bquote]
Example:  [ 0  4   8  12
            1  5   9  13
            2  6  10  14
            3  7  11  15 ]
[/bquote]

Vectors and Points - Implemented as a 4x1 matrix, so applying a transformation is of the order

[bquote]
Rotation Matrix * [ vx
                    vy
                    vz
                    1  ]
[/bquote]

This does not imply that I prefer OpenGL over Direct3D. It just happened that I learned OpenGL first, and so my quaternion knowledge was gained in OpenGL. Note: If you specify rotations in another order, certain quaternion functions will be implemented differently, especially those that deal with the Euler representation.

What is a Quaternion?

A complex number is built on i, the imaginary number, which is defined such that i * i = -1. A quaternion is an extension of the complex numbers. Instead of just i, we have three numbers that are all square roots of -1, denoted by i, j, and k. This means that [bquote] i * i = -1, j * j = -1, k * k = -1 [/bquote] So a quaternion can be represented as [bquote] q = w + xi + yj + zk [/bquote] where w, x, y, and z are real numbers. Another common representation is [bquote] q = [ w, v ] [/bquote] where v = (x, y, z) is called the "vector" part and w is called the "scalar" part. Although v is called a vector, don't think of it as a typical 3 dimensional vector.
It is a vector in 4D space, which is totally unintuitive to visualize.

Identity Quaternions

Unlike vectors, there are two identity quaternions. The multiplication identity quaternion is [bquote] q = [1, (0, 0, 0)] [/bquote] So any quaternion multiplied by this identity quaternion will not be changed. The addition identity quaternion (which we do not use) is [bquote] q = [0, (0, 0, 0)] [/bquote]

Using quaternions as orientations

The first thing I want to point out is that quaternions are not vectors, so please don't use your preconceived vector mathematics on what I'm going to show. This is going to get very mathematical, so please bear with me. We need to first define the magnitude of a quaternion: [bquote] || q || = Norm(q) = sqrt(w^2 + x^2 + y^2 + z^2) [/bquote] A unit quaternion has the following property: [bquote] w^2 + x^2 + y^2 + z^2 = 1 [/bquote] So to normalize a quaternion q, we do [bquote] q = q / || q || = q / sqrt(w^2 + x^2 + y^2 + z^2) [/bquote] What is so special about the unit quaternion is that it represents an orientation in 3D space. So you can use a unit quaternion to represent an orientation instead of the two methods discussed previously. To use them as orientations, you will need methods to convert them to other representations (e.g. matrices) and back, which will be discussed soon.

Visualizing a unit quaternion

You can visualize unit quaternions as a rotation in 4D space where the (x, y, z) components form the arbitrary axis and the w relates to the angle of rotation. All the unit quaternions form a sphere of unit length in 4D space. Again, this is not very intuitive, but what I'm getting at is that you can get a 180 degree rotation of a quaternion by simply inverting the scalar (w) component. Note: Only unit quaternions can be used for representing orientations. All discussions from here on will assume unit quaternions.

Conversion from Quaternions

To be able to use quaternions effectively, you will eventually need to convert them to some other representation.
You cannot interpret keyboard presses as quaternions, can you? Well, not yet.

Quaternion to Matrix

Since OpenGL and Direct3D allow rotations to be specified as matrices, this is probably the most important conversion function, since homogeneous matrices are the standard 3D representation. The rotation matrix equivalent to a quaternion is

[bquote]
Matrix = [ w^2+x^2-y^2-z^2    2xy-2wz           2xz+2wy
           2xy+2wz            w^2-x^2+y^2-z^2   2yz-2wx
           2xz-2wy            2yz+2wx           w^2-x^2-y^2+z^2 ]
[/bquote]

Using the unit quaternion property w^2 + x^2 + y^2 + z^2 = 1, we can reduce the matrix to

[bquote]
Matrix = [ 1-2y^2-2z^2   2xy-2wz       2xz+2wy
           2xy+2wz       1-2x^2-2z^2   2yz-2wx
           2xz-2wy       2yz+2wx       1-2x^2-2y^2 ]
[/bquote]

Quaternion to Axis Angle

To change a quaternion to a rotation around an arbitrary axis in 3D space, we do the following:

[bquote]
If the axis of rotation is  (ax, ay, az)
and the angle is            theta (radians)
then:
    angle = 2 * acos(w)
    ax    = x / scale
    ay    = y / scale
    az    = z / scale
where scale = sqrt(x^2 + y^2 + z^2)
[/bquote]

Another variation I have seen is scale = sin(acos(w)). The two are equivalent for unit quaternions, since sin(acos(w)) = sqrt(1 - w^2) = sqrt(x^2 + y^2 + z^2). Anyway, if the scale is 0, it means there is no rotation, and the axis computation would divide by zero. So whenever the scale is 0, just set the rotation axis to any unit vector with a rotation angle of 0.

A Simple Example

In case you are getting confused about what I'm getting at, I will show you a simple example here. Say the camera orientation is represented as Euler angles. Then, in the rendering loop, we position the camera using [bquote] RotateX * RotateY * RotateZ * Translate [/bquote] where each component is a 4x4 matrix.
So if we are using a unit quaternion to represent the camera orientation, we have to convert the quaternion to a matrix first: [bquote] Rotate (from Quaternion) * Translate [/bquote] A more specific example in OpenGL:

Euler:
glRotatef( angleX, 1, 0, 0 );
glRotatef( angleY, 0, 1, 0 );
glRotatef( angleZ, 0, 0, 1 );
// translate

Quaternion:
// convert Euler to quaternion
// convert quaternion to axis angle
glRotatef( theta, ax, ay, az );
// translate

The above implementations are equivalent. The point I'm trying to get across is that using quaternions for orientation is the same as using the Euler or axis angle representation, and that they can be interchanged through the conversion functions I've described. Note that the above quaternion representation will still incur gimbal lock, like the Euler method. Of course, you do not yet know how to produce the quaternion in the first place, but we will get to that shortly. Note: If you are using Direct3D or OpenGL, you may not have to deal with matrices directly, but matrix concatenation is something the API does, so it's worth learning about.

Multiplying Quaternions

Since a unit quaternion represents an orientation in 3D space, the multiplication of two unit quaternions will result in another unit quaternion that represents the combined rotation. Amazing, but it's true. Given two unit quaternions [bquote] Q1 = (w1, x1, y1, z1); Q2 = (w2, x2, y2, z2) [/bquote] the combined rotation of the two unit quaternions is achieved by [bquote] Q1 * Q2 = ( w1*w2 - v1.v2, w1*v2 + w2*v1 + v1 x v2 ) [/bquote] where [bquote] v1 = (x1, y1, z1); v2 = (x2, y2, z2) [/bquote] and . and x are the standard vector dot and cross product.
However, an optimization can be made by expanding the terms to produce

[bquote]
w = w1w2 - x1x2 - y1y2 - z1z2
x = w1x2 + x1w2 + y1z2 - z1y2
y = w1y2 + y1w2 + z1x2 - x1z2
z = w1z2 + z1w2 + x1y2 - y1x2
[/bquote]

Of course, the resulting unit quaternion can be converted to other representations, just like the two original unit quaternions. This is the real beauty of quaternions - the multiplication of two unit quaternions in 4D space solves gimbal lock because the unit quaternions lie on a sphere. Be aware that the order of multiplication is important. Quaternion multiplication is not commutative, meaning [bquote] q1 * q2 does not equal q2 * q1 [/bquote] Note: Both quaternions must refer to the same coordinate axes. I made the mistake of combining two quaternions from different coordinate axes, and I had a very hard time wondering why the resulting quaternion failed at certain angles only.

Conversion To Quaternions

Now we learn how to convert other representations to quaternions. Although I do not use all the conversions in the sample program, there are times when you'll need them, such as when you want to use quaternion orientation for more advanced stuff like inverse kinematics.
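The expanded component form and the (w1*w2 - v1.v2, w1*v2 + w2*v1 + v1 x v2) form above are the same product. Here is a quick Python check of both, plus the non-commutativity (a verification sketch, not library code):

```python
import math

def quat_mul(q1, q2):
    # Expanded component form of the quaternion product.
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 + y1*w2 + z1*x2 - x1*z2,
            w1*z2 + z1*w2 + x1*y2 - y1*x2)

def quat_mul_vector_form(q1, q2):
    # ( w1*w2 - v1.v2,  w1*v2 + w2*v1 + v1 x v2 )
    w1, v1 = q1[0], q1[1:]
    w2, v2 = q2[0], q2[1:]
    dot = v1[0]*v2[0] + v1[1]*v2[1] + v1[2]*v2[2]
    cross = (v1[1]*v2[2] - v1[2]*v2[1],
             v1[2]*v2[0] - v1[0]*v2[2],
             v1[0]*v2[1] - v1[1]*v2[0])
    return (w1*w2 - dot,
            w1*v2[0] + w2*v1[0] + cross[0],
            w1*v2[1] + w2*v1[1] + cross[1],
            w1*v2[2] + w2*v1[2] + cross[2])

q1 = (0.5, 0.5, 0.5, 0.5)                      # 120 degrees about (1,1,1)/sqrt(3)
q2 = (math.cos(0.3), math.sin(0.3), 0.0, 0.0)  # 0.6 rad about the x axis

a = quat_mul(q1, q2)
b = quat_mul_vector_form(q1, q2)
c = quat_mul(q2, q1)   # swapped order: a genuinely different orientation
```

The difference between a and c comes entirely from the v1 x v2 cross term, which flips sign when the operands swap.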
Axis Angle to Quaternion

A rotation around an arbitrary axis in 3D space can be converted to a quaternion as follows:

[bquote]
If the axis of rotation is  (ax, ay, az) - must be a unit vector
and the angle is            theta (radians)
then:
    w = cos(theta/2)
    x = ax * sin(theta/2)
    y = ay * sin(theta/2)
    z = az * sin(theta/2)
[/bquote]

The axis must first be normalized. If the axis is a zero vector (meaning there is no rotation), the quaternion should be set to the multiplication identity quaternion.

Euler to Quaternion

Converting from Euler angles to a quaternion is slightly more tricky, as the order of operations must be correct. Since you can convert each Euler angle to an independent quaternion by setting the arbitrary axis to the corresponding coordinate axis, you can then multiply the three quaternions together to obtain the final quaternion. So if you have three Euler angles (a, b, c), you can form three independent quaternions [bquote] Qx = [ cos(a/2), (sin(a/2), 0, 0) ]; Qy = [ cos(b/2), (0, sin(b/2), 0) ]; Qz = [ cos(c/2), (0, 0, sin(c/2)) ] [/bquote] And the final quaternion is obtained by Qx * Qy * Qz.

Demo - Avoiding Gimbal Lock

Finally, we've reached what you've all been waiting for: "How can quaternions avoid gimbal lock?" The basic idea is: use a quaternion to represent the rotation; generate a temporary quaternion for the change from the current orientation to the new orientation; then post-multiply the temporary quaternion with the orientation quaternion. This results in a new orientation that combines both rotations.
Finally, convert the quaternion to a matrix and use matrix multiplication as normal.

First, I want to make a disclaimer regarding the sample code: the code is ugly and very poorly organized. But do remember, this is just a cut-down version of the program I used when testing quaternions, and I'm not getting paid for this. There are two executable samples included.

The first program, CameraEuler.exe, is an example of a camera implementation using Euler angles. The main concern should be the Main_Loop function in main.cpp. The main things you should take note of (in the while loop) are: there are 3 angles I keep track of for rotation about the X, Y, and Z axes; with every key press, I adjust the corresponding rotation variable; and in the while loop, I translate and then just convert the 3 Euler angles to rotation matrices and multiply them into the final transformation matrix. Use the up/down keys to rotate around the X axis, left/right to rotate around the Y axis, and Insert/PageUp to rotate around the Z axis. This program suffers from gimbal lock. If you want to see it in action, rotate the camera so that the yaw is 90 degrees, then try rotating in the X and Z directions and see what happens.

Now for the quaternion solution. The program is CameraQuat.exe, and it is a slight modification of the previous program. The main points you should take note of (in the while loop) are: the orientation of the camera is a quaternion; there are 3 angles corresponding to the keypresses (note the angles are meant to be an on/off switch, not accumulative - I reset them inside the while loop; of course this is not the best way to do it but, as I said, it was a quick job); I convert the 3 angles to a temporary quaternion; I multiply the temporary quaternion into the camera quaternion to obtain the combined orientation (note the order of multiplication); and the camera rotation is then converted to the axis angle representation for transforming the final matrix.
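The whole update loop condenses to a few lines. This Python sketch (illustrative names, not the sample's C code) builds a small delta quaternion per keypress, post-multiplies it into the camera orientation, and converts back to axis-angle for glRotatef:

```python
import math

def quat_from_axis_angle(ax, ay, az, theta):
    # (ax, ay, az) must be a unit vector; theta in radians.
    s = math.sin(theta / 2)
    return (math.cos(theta / 2), ax * s, ay * s, az * s)

def quat_mul(q1, q2):
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 + y1*w2 + z1*x2 - x1*z2,
            w1*z2 + z1*w2 + x1*y2 - y1*x2)

def quat_to_axis_angle(q):
    w, x, y, z = q
    scale = math.sqrt(x*x + y*y + z*z)
    if scale < 1e-12:
        return (1.0, 0.0, 0.0, 0.0)   # no rotation: pick any unit axis
    return (x / scale, y / scale, z / scale, 2 * math.acos(w))

# Camera loop sketch: accumulate small per-keypress rotations.
camera = (1.0, 0.0, 0.0, 0.0)         # identity orientation
for _ in range(90):                   # e.g. 90 presses of "pitch up 1 degree"
    delta = quat_from_axis_angle(1, 0, 0, math.radians(1))
    camera = quat_mul(camera, delta)  # post-multiply, as in the article

ax, ay, az, angle = quat_to_axis_angle(camera)
# ...then e.g. glRotatef(degrees(angle), ax, ay, az) before translating.
```

Because each delta is applied relative to the current orientation, the accumulated quaternion never pins two rotation axes together the way the global-axis Euler loop does.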
When a key is pressed, I generate a temporary quaternion corresponding to the key for a small rotation about that particular axis. I then multiply the temporary quaternion into the camera quaternion. This concatenation of rotations in 4D space avoids gimbal lock. Try it and see for yourself. The camera quaternion has to be changed into a matrix or an equivalent form so that you can concatenate it into the final transformation matrix. You have to do this for every quaternion you use, as 4D space and 3D space just don't mix. In the case of OpenGL, I just changed the quaternion to an axis angle representation and let the API do the rest.

Although I did not use the global Euler angles for rotations in the second program, I have left them there as a guide for you to see the corresponding Euler rotations from the first program. Note that the Euler angles will be incorrect if you rotate about more than 1 axis (because the program counts keypresses rather than extracting the Euler angles from the camera quaternion). It is just a reference for you to see that when you rotate the yaw to 90 degrees after the program starts, the gimbal lock problem is no more.

Note: I don't recommend you use my math library as it is. Understand the quaternions and write your own. For your information, I am going to throw all of that away and rewrite it too; it is just too messy and ugly for my taste.

What I did not show

If you noticed, I did not show how to convert from a quaternion to Euler angles. That's because I have yet to find a conversion that works perfectly. The only way I know is to obtain a matrix from the quaternion and try to extract the Euler angles from the matrix. However, as Euler-to-matrix conversion is a many-to-one relationship (due to sin and cos), I do not know how to get the reverse even using atan2. If anyone knows how to extract Euler angles from a matrix accurately, please do share with me. The other thing I did not show is the conversion of a matrix to a quaternion.
I didn't think I needed that conversion, as you can convert the Euler and axis angle representations to quaternions directly, without needing to go through a matrix.

More you can do - SLERP

If you think you are a quaternion master now, think again. There is still more to learn about them. Remember I said something about why the axis angle representation is bad? Does the word 'interpolation' ring a bell? I don't have the time to write about interpolation using quaternions; this article has taken much longer than I anticipated. But I can give you the basic idea of SLERP (Spherical Linear Interpolation), which generates a series of quaternions between two quaternion orientations (which you specify). The series of quaternions results in smooth motion between the first and last quaternion (something which neither the Euler nor the axis angle representation can achieve consistently).

Final Words

I hope this article has cleared up some of the mystery behind quaternion theory. A final word of caution again: don't multiply two quaternions from different coordinate frames. Nothing but pain and hair loss will result from it. With your new-found powers, I bid thee farewell. Take care... and watch your back.