Showing results for tags 'Linear Algebra'.

Found 24 results

  1. I control the camera in my program by changing the transformation matrix (which alters the other underlying matrices). Note this is all in 2D. The transformation matrix is pretty standard: a 4x4 matrix with the diagonal acting as the scale factor (x, y, z) and the right-hand column acting as the offset coordinates (this is ignoring shear and rotation, which aren't that important to me, though I would like the method to still work with rotation). So essentially I am currently zooming by altering the x and y scale in the transformation matrix. This seems to act from the top-left corner (which is the origin in my coordinate space). I would like the zoom to act towards the mouse. So, knowing the x and y position of the mouse when I apply my transformation (every frame), how should I alter the zoom so that it zooms towards whatever the mouse position was? I can share code if you would like, but I am writing this in Rust. Also let me know if you need more information; I'm happy to provide it, I just didn't want to overshare.
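     For reference, a minimal sketch of the usual fix (C++ here for illustration, though the original is in Rust; all names are made up): keep the world point under the mouse fixed across the zoom and solve for the new offset.

     // Transform assumed to be screen = world * scale + offset, per axis.
     struct Camera2D { float scaleX, scaleY, offsetX, offsetY; };

     // zoomFactor = newScale / oldScale. The world point under the mouse must
     // map to the same screen point before and after: m = w*s + o = w*s' + o',
     // which gives o' = m - (m - o) * (s'/s).
     void zoomAtPoint(Camera2D& cam, float mouseX, float mouseY, float zoomFactor)
     {
         cam.offsetX = mouseX - (mouseX - cam.offsetX) * zoomFactor;
         cam.offsetY = mouseY - (mouseY - cam.offsetY) * zoomFactor;
         cam.scaleX *= zoomFactor;
         cam.scaleY *= zoomFactor;
     }

     With rotation present, the same idea holds in matrix form: T' = translate(m) * scale(zoomFactor) * translate(-m) * T, which leaves the screen point m fixed by construction.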
  2. Hello, I am trying to recreate a feature that exists in Unity called Stretched Billboarding. However, I am having a hard time figuring out how to use a particle's velocity to rotate and stretch the particle quad accordingly. Here's a screenshot of Unity's example: Depending on the velocity of the particle, the quad rotates and stretches, but it still always faces the camera. In my current solution I have normal billboarding, and velocities and particle-local rotations are working fine. I generate my quads in a geometry shader, if that makes any difference. So does anyone have any thoughts on how to achieve this? Best regards Hampus
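     One common construction, sketched here in C++ rather than in the geometry shader (names are mine, and this is an assumption about the technique, not Unity's actual code): work in view space, where any quad whose spanning axes have z = 0 faces the camera, and align the long axis with the velocity projected onto that plane.

     #include <cmath>

     struct Vec3 { float x, y, z; };

     // viewSpaceVel is the particle velocity transformed into view space;
     // speedScale controls how much the quad stretches per unit of speed.
     void stretchedBillboardAxes(const Vec3& viewSpaceVel, float speedScale,
                                 Vec3& axisLong, Vec3& axisSide)
     {
         float vx = viewSpaceVel.x, vy = viewSpaceVel.y;
         float len = std::sqrt(vx * vx + vy * vy);
         if (len < 1e-6f) {                    // no lateral motion: plain billboard
             axisLong = { 1.0f, 0.0f, 0.0f };
             axisSide = { 0.0f, 1.0f, 0.0f };
             return;
         }
         float stretch = 1.0f + speedScale * len;
         axisLong = { vx / len * stretch, vy / len * stretch, 0.0f }; // follows velocity
         axisSide = { -vy / len, vx / len, 0.0f };                    // perpendicular, in view plane
     }

     Emitting the four corners as position +- axisSide +- axisLong in the geometry shader then gives a quad that rotates with the velocity and stretches with speed while remaining camera-facing.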
  3. Hey, I'm dealing with ribbons following the shape of multiple spline segments. It's straightforward to compute the direction at any point along the spline. However, the ribbon also has a flat cross-section, and I'm struggling to find a way to compute the angle of the ribbon in the plane perpendicular to the direction. To illustrate what I mean, here's a piece of code that almost worked:

     float3x3 rotMtxFromSpline;
     rotMtxFromSpline[1] = normalize(splineDir);
     rotMtxFromSpline[0] = normalize(cross(float3(1, 0, 0), rotMtxFromSpline[1]));
     rotMtxFromSpline[2] = cross(rotMtxFromSpline[0], rotMtxFromSpline[1]);
     // Rotate rotMtxFromSpline[0] in the rotMtxFromSpline[0]-rotMtxFromSpline[2] plane to align with the float3(0, 0, 1) dir
     rotMtxFromSpline[0] = normalize(dot(rotMtxFromSpline[0], float3(0, 0, 1)) * rotMtxFromSpline[0] + dot(rotMtxFromSpline[2], float3(0, 0, 1)) * rotMtxFromSpline[2]);
     rotMtxFromSpline[2] = cross(rotMtxFromSpline[0], rotMtxFromSpline[1]);

     The problem with this code is when the spline segment becomes perpendicular to the (0,0,1) dir, as the orientation switches from one side to the other very easily. The approach above is kind of a global approach, and I'm wondering if there's a way to append some info to each spline segment to remedy the issue. Anyhow, I wanted to post this question in case anyone has had a similar problem that they solved, or maybe someone knows a web resource dealing with this issue? Thanks!
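     One standard remedy (a sketch of parallel transport, my suggestion rather than anything from the post, with plain C++ stand-ins for the HLSL types): carry the previous segment's side vector along the spline instead of deriving it from a fixed global axis, so the frame changes continuously and has no preferred direction to flip around.

     #include <cmath>

     struct V3 { float x, y, z; };
     static float dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
     static V3 sub(V3 a, V3 b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
     static V3 mul(V3 a, float s) { return { a.x*s, a.y*s, a.z*s }; }
     static V3 normalize(V3 a) { float l = std::sqrt(dot(a, a)); return mul(a, 1.0f / l); }

     // Given the side (flat-direction) vector of the previous segment and the new
     // segment's unit tangent, remove the tangential component and renormalize.
     // Seed the very first segment with any side vector not parallel to its tangent.
     V3 transportSide(V3 prevSide, V3 newTangent)
     {
         V3 side = sub(prevSide, mul(newTangent, dot(prevSide, newTangent)));
         return normalize(side);
     }

     Storing the transported side vector per segment is exactly the kind of per-segment info the post asks about; the third frame axis is then the cross product of tangent and side.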
  4. This is in reference to "Stable Constrained Dynamics": https://hal.inria.fr/hal-01157835/document equations (18) / (21). I'm having trouble understanding how to build this K "geometric stiffness" term: K = (∂J^T / ∂x) λ, where J is the constraint Jacobian and λ is the vector of constraint force magnitudes. What I do know: based on its usage in (21), K should be a (3n x 3n) matrix in 3D, where n is, let's say, the number of particles. What I'm confused about: the Jacobian J is a (C x 3n) matrix, where C is the number of constraints, and λ is (C x 1). This doesn't seem to work out in terms of the matrix dimensions... What am I missing here? If I consider only a single constraint, then it does appear to work out: I end up with λ being a scalar and K ultimately being (3n x 3n). However, that leads to the question of how to then build K such that it contains all of the individual constraint K's (one K for each constraint, I guess)?
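     One consistent reading of the dimensions (my reading, not a quote from the paper): expand the derivative constraint by constraint,

     \[ K \;=\; \frac{\partial\left(J^{T}\lambda\right)}{\partial x} \;=\; \sum_{c=1}^{C} \lambda_{c}\,\frac{\partial J_{c}^{T}}{\partial x}, \]

     where \(J_{c}\) is the c-th row of J (size 1 x 3n), so \(\partial J_{c}^{T}/\partial x\) is (3n x 3n) and each term, scaled by the scalar \(\lambda_{c}\), is (3n x 3n). Since \(J_{c} = \partial C_{c}/\partial x\), each term is just that constraint's Hessian weighted by its multiplier, and the total K is the sum of the per-constraint K's, which matches the single-constraint case worked out above.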
  5. Agustin Andreucci

    Align sight to camera

    Hi everyone, I got stuck with my approach to aiming down sights. I've set up my first-person arms mesh to be a child of the camera, added a socket on the right hand of the skeleton to which I attach my weapon, and there is a socket in the weapon used to mark the point at which the camera should sit while aiming down sights. When the player aims, holding left click, what I'm trying to do is calculate the offset of the sight socket to the root of the FPP mesh while in the aiming-down-sights pose, and move the camera based on that. I'm using the code attached below. It's Unreal Engine, but the problem seemed generic enough to qualify as an algebra one. What I'm doing is obtaining the transformations from the camera to the sight bone. That gives me the sight transform relative to the camera. After that I just negate the translation vector and add it to the first child of the camera. I was expecting the sight bone to sit at the exact same point as the camera. What happens is that it is almost there, but it's not correctly aligned. I believe my math and the general idea are right. Any ideas why this could be failing? Here is how I apply the resulting transform, and below it, how I obtain it. Thank you very much for reading.

    FVector Translation = DeltaTransform.GetTranslation();
    Translation.Z = 165.0f;
    /*Translation.X += Translation.Y;
    Translation.Y = Translation.X - Translation.Y;
    Translation.X = Translation.X - Translation.Y;*/
    DeltaTransform *= DefaultMesh1PTransform;
    DeltaTransform.SetTranslation(Translation);
    Mesh1PCamOffsetLocationDelta = DeltaTransform.GetTranslation();
    Mesh1PCamOffsetRotationDelta = DeltaTransform.GetRotation();
    bInterpMesh1PCamOffset = bInterp;
    UE_LOG(LogTemp, Warning, TEXT("DeltaInverse Location x: %f, y: %f, z: %f"), (DeltaTransform).GetLocation().X, (DeltaTransform).GetLocation().Y, (DeltaTransform).GetLocation().Z);
    Mesh1P->SetRelativeLocation(DeltaTransform.GetLocation() * -1);
    UE_LOG(LogTemp, Warning, TEXT("Rotation yaw: %f, pitch: %f, roll: %f"), DeltaTransform.GetRotation().Rotator().Yaw, DeltaTransform.GetRotation().Rotator().Pitch, DeltaTransform.GetRotation().Rotator().Roll);

    FTransform HandToComponent;
    HandToComponent.SetIdentity();
    FTransform BoneTransform;
    USkeletalMeshComponent* Mesh1P = CarryingPlayer->Mesh1P;
    // Get transform to weapon attach socket space
    HandToComponent *= FirearmMesh->GetSocketTransform(FName("sight_point_socket"), RTS_ParentBoneSpace);
    HandToComponent *= Mesh1P->GetSocketTransform(Firearm->GetCarrySocket(), RTS_ParentBoneSpace);
    // Get transform from the right hand bone relative to component (root bone)
    FName CurrentBoneName = FName("hand_r");
    while (CurrentBoneName != NAME_None)
    {
        ArmsADSIdleAnimSequence->GetBoneTransform(BoneTransform, Mesh1P->GetBoneIndex(CurrentBoneName), 0.0f, true);
        HandToComponent *= BoneTransform;
        CurrentBoneName = Mesh1P->GetParentBone(CurrentBoneName);
    }
    return HandToComponent;
  6. Hi guys, to describe the position and orientation of a rigid body in a world coordinate system, we usually attach a coordinate system to the body and then give a description of this coordinate system relative to the world coordinate system. One way to describe the body-attached coordinate system is to write the three orthonormal vectors of its three principal axes in terms of the world coordinate system. It is convenient to stack these three vectors together as the columns of a 3x3 matrix, which we call a rotation matrix. But what is the order of the three vectors? Different orders will give different results. The determinant of the matrix may even come out as -1, an improper rotation matrix (a reflection), yet the three axes still represent the orientation of the rigid body. So how do I determine the order of the three vectors in the rotation matrix? Can anyone clarify? Thank you!
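     A short worked note (assuming right-handed conventions): if \(\hat{x}, \hat{y}, \hat{z}\) are the body's principal axes expressed in world coordinates, the conventional order is

     \[ R = \begin{pmatrix} \hat{x} & \hat{y} & \hat{z} \end{pmatrix}, \qquad \det R = \hat{x} \cdot (\hat{y} \times \hat{z}) = +1, \]

     i.e. the columns are ordered so the three axes form a right-handed triad. Any ordering that yields det = -1 describes the same three directions but as a reflection rather than a rotation; swapping two columns (or negating one axis) restores det = +1.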
  7. I want to implement the XYZ, XZY, YXZ, YZX, ZXY, ZYX intrinsic Tait-Bryan rotations as matrices (right-handed), so I implemented the formulas in this PDF: https://www.geometrictools.com/Documentation/EulerAngles.pdf The problem is that some formulas (or components) appear to be different on Wikipedia: https://en.wikipedia.org/wiki/Euler_angles See the "Rotation matrix" section and the XZY order. XYZ seems the same, but for XZY the Wikipedia version uses s1s3 + c1c3s2 where the PDF uses sxsy + cxcysz for m01, and −s2 on the wiki versus −sz in the PDF for m10... I'm confused. Does anyone know which one is correct? Thanks
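     The two may in fact agree (my reading of the two notations, worth double-checking against both sources): Wikipedia numbers the angles by their position in the rotation sequence (1, 2, 3), while the Geometric Tools PDF names them by axis (x, y, z). For the XZY order, rotation 1 is about x, rotation 2 about z, and rotation 3 about y, so the substitution is \(1 \to x,\ 2 \to z,\ 3 \to y\), giving

     \[ s_1 s_3 + c_1 c_3 s_2 \;=\; s_x s_y + c_x c_y s_z, \qquad -s_2 \;=\; -s_z, \]

     which is exactly the pair of entries quoted above. For XYZ the substitution is the identity, which would explain why that order "seems same".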
  8. I'm having problems rotating GameObjects in my engine. I'm trying to rotate in two ways, using MathGeoLib to do the math.

     First way: rotates correctly around the axes, but rotating back only works if I follow the inverse order. E.g.: rotate X axis 50 degrees, rotate Y axis 30 degrees -> rotate Y axis -30 degrees, rotate X axis -50 degrees works; rotate X axis 50 degrees, rotate Y axis 30 degrees -> rotate X axis -50 degrees, rotate Y axis -30 degrees doesn't. Code:

     void ComponentTransform::SetRotation(float3 euler_rotation)
     {
         float3 diff = euler_rotation - editor_rotation;
         editor_rotation = euler_rotation;
         math::Quat mod = math::Quat::FromEulerXYZ(diff.x * DEGTORAD, diff.y * DEGTORAD, diff.z * DEGTORAD);
         quat_rotation = quat_rotation * mod;
         UpdateMatrix();
     }

     Second way: starts out rotating correctly around the axes, but after rotating a few times it stops rotating correctly around them; on the other hand, rotating back works regardless of the rotation order, unlike the first way. Code:

     void ComponentTransform::SetRotation(float3 euler_rotation)
     {
         editor_rotation = euler_rotation;
         quat_rotation = math::Quat::FromEulerXYZ(euler_rotation.x * DEGTORAD, euler_rotation.y * DEGTORAD, euler_rotation.z * DEGTORAD);
         UpdateMatrix();
     }

     Rest of code:

     #define DEGTORAD 0.0174532925199432957f

     void ComponentTransform::UpdateMatrix()
     {
         if (!this->GetGameObject()->IsParent())
         {
             // Get parent transform component
             ComponentTransform* parent_transform = (ComponentTransform*)this->GetGameObject()->GetParent()->GetComponent(Component::CompTransform);
             // Create matrix from position, rotation (quaternion) and scale
             transform_matrix = math::float4x4::FromTRS(position, quat_rotation, scale);
             // Multiply the object transform by the parent transform
             transform_matrix = parent_transform->transform_matrix * transform_matrix;
             // If the object has children, call this function on them
             for (std::list<GameObject*>::iterator it = this->GetGameObject()->childs.begin(); it != this->GetGameObject()->childs.end(); it++)
             {
                 ComponentTransform* child_transform = (ComponentTransform*)(*it)->GetComponent(Component::CompTransform);
                 child_transform->UpdateMatrix();
             }
         }
         else
         {
             // Create matrix from position, rotation (quaternion) and scale
             transform_matrix = math::float4x4::FromTRS(position, quat_rotation, scale);
             // If the object has children, call this function on them
             for (std::list<GameObject*>::iterator it = this->GetGameObject()->childs.begin(); it != this->GetGameObject()->childs.end(); it++)
             {
                 ComponentTransform* child_transform = (ComponentTransform*)(*it)->GetComponent(Component::CompTransform);
                 child_transform->UpdateMatrix();
             }
         }
     }

     MathGeoLib:

     Quat MUST_USE_RESULT Quat::FromEulerXYZ(float x, float y, float z)
     {
         return (Quat::RotateX(x) * Quat::RotateY(y) * Quat::RotateZ(z)).Normalized();
     }

     Quat MUST_USE_RESULT Quat::RotateX(float angle) { return Quat(float3(1,0,0), angle); }
     Quat MUST_USE_RESULT Quat::RotateY(float angle) { return Quat(float3(0,1,0), angle); }
     Quat MUST_USE_RESULT Quat::RotateZ(float angle) { return Quat(float3(0,0,1), angle); }

     Quat(const float3 &rotationAxis, float rotationAngleRadians)
     {
         SetFromAxisAngle(rotationAxis, rotationAngleRadians);
     }

     void Quat::SetFromAxisAngle(const float3 &axis, float angle)
     {
         assume1(axis.IsNormalized(), axis);
         assume1(MATH_NS::IsFinite(angle), angle);
         float sinz, cosz;
         SinCos(angle*0.5f, sinz, cosz);
         x = axis.x * sinz;
         y = axis.y * sinz;
         z = axis.z * sinz;
         w = cosz;
     }

     Any help? Thanks.
  9. Introduction: The impetus for this research topic is a data structure that efficiently stores and queries scattered point data. These points may originally be neatly arranged on a grid, or randomly scattered across the surface of a sphere (e.g. the earth), but the structure makes no assumption in that regard. It could have member functions such as add(point) or add(points, amount). The points themselves contain three scalar values: the longitude, the latitude, and the data value itself. This data value could be geographical data such as land height or weather data such as air temperature, but again, the data structure is agnostic about the data itself other than as a scalar value.

     Data Structure: The purpose of the data structure is ultimately to efficiently and accurately query for the point(s) nearby another point, the reason being to compute the average value at that point. Suppose a requirement is to create an image representing the air temperature (or rainfall, cloud cover, whatever the scalar value of each point represents): this data structure could be queried for each point (i.e. pixel) in the image. Once nearby points are queried, the algorithm to average their scalar values is outside the scope of this, but a good candidate is the inverse distance weighted algorithm. Within the data structure, the data points will not be stored in spherical coordinates, but as n-vectors. The reason is that the sampling algorithm, finding the N closest data points, is more efficient and accurate using vector algebra. While it is outside the scope of this, to summarize: it is easier to determine how close two points are on a sphere if these points are represented by unit vectors than by latitude and longitude. For example, two points sharing the same longitude are farther apart at the equator than above or below the equator. Even if we are just concerned with relative distances, and do not need to compute the square root of the distance formula, it is still not as accurate, for this reason. Also, the international date line, where longitude "resets", presents a problem that would have to be dealt with, as does either pole. But you get the idea: vectors are more stable, accurate, and efficient. I've always been a fan of binary space partitions: each branch node of a BSP has two areas, one below and one above (or one left and one right, or one in and one out, however you like to think of it). To go along with storing points as vectors, branch nodes within the structure also partition their space using a vector, which in this case forms the normal of a plane that bisects the unit sphere. All data points in a node (or children of the node) will either be above or below this plane (points that lie on the plane can be allocated to either side as is appropriate). Points that are below the plane are placed in the first child of the node, and points that are above are placed in the second child. We call this plane normal the split axis of the node. Querying the structure for the closest N points then becomes trivial. For any branch node, compute the dot product of the point in question and the split axis of the node. If it is negative (the point is below the split plane), recursively query the first child node; if positive (the point is above the split plane), the second child node. For a leaf node, compute the dot product of the point in question with each data point contained in the node, keeping a sorted list of the closest N points.

     The one caveat is that in branch nodes, after recursing into one child node, it may be necessary to recurse into the other child if the farthest point found so far is farther away than the other child node could be, since there may be closer points in that child. But this is trivial, as we are comparing dot products. No expensive computations are necessary to find the N closest points, just a bunch of dot products. As dot products of unit vectors range from -1 to 1 (-1 being farthest apart and 1 being equal), two points are closer if their dot product is higher. Once complete, and the list of N points found, the actual distances can be calculated if necessary, which in this case is done by calculating the angles from the already-computed dot products. This angle can then be multiplied by the radius of the earth to get an exact distance, again only if necessary; for averaging, these extra calculations are not needed. As a final note, the data structure lends itself well to graphics hardware. In my particular case, rendering an image using such a data structure on the CPU may take several minutes, but on the GPU it takes a fraction of a second.

     Problem: The problem, common to any space-partitioning tree, is how to initially create the data structure. Again, the points are not assumed to be arranged in any specific way and, as far as we are concerned, are a "point soup". They can be specified one at a time, addPoint(point), or all at once, addPoints(points), or both. Specifically, how can the determination of the split axis of any branch be efficient and provide an even split (the same or a similar number of points on either side)? The problem is unique here because the points are not arranged on a two-dimensional surface or in three-dimensional space, but lie on a unit sphere. My current solution to the problem is not terribly elegant. For each pair of data points, compute the axis that splits them evenly. This is done by computing the vector between the two (subtract one from the other) and normalizing it, then crossing this with the cross product of the two points. The result comprises the normal of the plane that evenly splits the two points. This normal is then tested against every other point in the node to get 1) the number of points on either side of the plane and 2) the distances of the points to the plane. A good split plane is one that 1) splits the points evenly, 2) has the same or similar distances on either side, and 3) has large distances on either side. As this test is performed for each pair of data points, a little big-O analysis shows that determining the split plane for nodes containing a large number of points becomes prohibitive. However, I have not been able to find a better solution, and the advantages of such a data structure still outweigh this expense. In my particular case, the time spent initially creating the tree is worth the time saved during queries. I should mention that if the data points are known ahead of time, it is faster to specify them all at once, so the tree re-builds itself once, rather than one at a time, which may cause the tree to re-build itself multiple times.
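     To make the query concrete, here is a compact sketch of the recursion described above (C++, with assumed names; the author's actual structure may differ). It keeps the running list sorted by dot product and uses the plane bound mentioned in the caveat.

     #include <algorithm>
     #include <cmath>
     #include <utility>
     #include <vector>

     struct Vec3 { float x, y, z; };
     static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

     struct Node {
         bool leaf = false;
         Vec3 splitAxis{};             // branch: normal of the plane bisecting the unit sphere
         Node* child[2] = {};          // branch: [0] below the plane, [1] above
         std::vector<Vec3> points;     // leaf: data points stored as unit vectors
     };

     // 'best' is kept sorted by descending dot product (closest first).
     using Hits = std::vector<std::pair<float, Vec3>>;

     static void queryNearest(const Node* n, const Vec3& q, size_t N, Hits& best)
     {
         if (n->leaf) {
             for (const Vec3& p : n->points) {
                 best.emplace_back(dot(q, p), p);
                 std::sort(best.begin(), best.end(),
                           [](const auto& a, const auto& b) { return a.first > b.first; });
                 if (best.size() > N) best.pop_back();
             }
             return;
         }
         const float side = dot(q, n->splitAxis);
         const int first = side < 0.0f ? 0 : 1;
         queryNearest(n->child[first], q, N, best);
         // The best dot product any point on the far side of the plane can achieve
         // is sqrt(1 - side^2), attained on the plane itself, so only descend into
         // the other child if that bound beats the worst point kept so far.
         const float worst = best.size() < N ? -2.0f : best.back().first;
         const float bound = std::sqrt(std::max(0.0f, 1.0f - side * side));
         if (bound > worst)
             queryNearest(n->child[1 - first], q, N, best);
     }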
  10. Original Post: Limitless Curiosity. Out of the various phases of a physics engine, constraint resolution was the hardest for me to understand personally. I needed to read a lot of different papers and articles to fully understand how it works, so I decided to write this article to help me understand it more easily in the future if, for example, I forget how it works. This article tackles the problem by giving an example and then making a general formula out of it. So let us delve into a pretty common scenario: two of our rigid bodies collide and penetrate each other, as depicted below. From the scenario above we can formulate: we don't want our rigid bodies to intersect each other, thus we construct a constraint where the penetration depth must be greater than or equal to zero. \(C: d>=0\) This is an inequality constraint; we can transform it into a simpler equality constraint by only solving it when the two bodies are penetrating. If two rigid bodies don't collide, we don't need any constraint resolution. So: if d >= 0, do nothing; else if d < 0, solve C: d = 0. Now we can solve this equation by calculating \( \Delta \vec{p1},\Delta \vec{p2},\Delta \vec{r1}\), and \( \Delta \vec{r2}\) that make the constraint above satisfied. This is called the position-based method. It satisfies the constraint immediately in the current frame, and might cause a jittery effect. A much more modern and preferable method, used in Box2D, Chipmunk, Bullet, and my physics engine, is the impulse-based method. In this method, we derive a velocity constraint equation from the position constraint equation above. We are working in 2D, so angular velocity and the cross product of two vectors are scalars. Next, we need to find \(\Delta V\), or the impulse, that satisfies the velocity constraint. This \(\Delta V\) is caused by a force we call the 'constraint force'. The constraint force only exerts force in the direction of illegal movement, in our case the penetration normal. We don't want this force to do any work, or to contribute to or restrict motion in any legal direction. \(\lambda\) is a scalar, called the Lagrangian multiplier. To understand why the constraint force acts in the \(J^{T}\) direction (remember J is a 1 by 12 matrix, so \(J^{T}\) is a 12 by 1 matrix, i.e. a 12-dimensional vector), try to remember the equation of a three-dimensional plane. Drawing the similarity between equation (1) and equation (2), where \(\vec{n}^{T}\) plays the role of J and \(\vec{v}\) the role of V, we can interpret equation (1) as a 12-dimensional plane and conclude that \(J^{T}\) is the normal of this plane. If a point is outside a plane, the shortest path from the point to the plane is along the normal direction. After we calculate the Lagrangian multiplier, we can recover the impulse from equation (3) and apply it to each rigid body.

      Baumgarte Stabilization: Note that solving the velocity constraint does not mean we satisfy the position constraint. When we solve the velocity constraint, there is already a violation of the position constraint, which we call position drift. What we achieve is stopping the two bodies from penetrating deeper (the penetration depth stops growing). That might be fine for a slow-moving object, as the position drift is not noticeable, but it becomes a problem as the object moves faster. The animation below demonstrates what happens when we solve only the velocity constraint. So instead of purely solving the velocity constraint, we add a bias term to fix any violation of the position constraint. So what is the value of the bias? As mentioned before, we need the bias to fix positional drift, so we want it to be proportional to the penetration depth. This method is called Baumgarte stabilization, and \(\beta\) is the Baumgarte term. The right value for this term differs between scenarios; we need to tweak it between 0 and 1 to find the value that keeps our simulation stable.

      Sequential Impulse: If our world consisted of only two rigid bodies and one contact constraint, the above method would work decently. But in most games there are more than two rigid bodies; one body can collide and penetrate two or more others, and we need to satisfy all the contact constraints simultaneously. For a real-time application, solving all these constraints simultaneously is not feasible. Erin Catto proposed a practical solution called sequential impulse. The idea is similar to projected Gauss-Seidel: we calculate \(\lambda\) and \(\Delta V\) for each constraint one by one, from constraint 1 to constraint n (n = number of constraints). After we finish iterating through the constraints and calculating \(\Delta V\), we repeat the process from constraint 1 to constraint n until the specified number of iterations. The algorithm converges to the actual solution; the more we repeat the process, the more accurate the result. In Box2D, Erin Catto set ten as the default number of iterations. Another thing to notice is that while satisfying one constraint we might unintentionally satisfy another. Say we have two different contact constraints on the same rigid body. When we solve \(\dot{C1}\), we might incidentally make \(\dot{d2} >= 0\). Remember that equation (5) is a formula for \(\dot{C}: \dot{d} = 0\), not \(\dot{C}: \dot{d} >= 0\), so we don't need to apply it to \(\dot{C2}\) anymore. We can detect this by looking at the sign of \(\lambda\): if it is negative, the constraint is already satisfied, and using the negative lambda as an impulse would pull the bodies closer together instead of pushing them apart. It is fine for an individual \(\lambda\) to be negative, but we need to make sure the accumulation of \(\lambda\) is not negative. In each iteration we add the current lambda to normalImpulseSum, then clamp normalImpulseSum between 0 and positive infinity. The actual Lagrangian multiplier we use to calculate the impulse is the difference between the new normalImpulseSum and the previous one.

      Restitution: Okay, now we have successfully resolved contact penetration in our physics engine. But what about simulating objects that bounce when a collision happens? The property of bouncing on collision is called restitution. The coefficient of restitution, denoted \(C_{r}\), is the ratio of the parting speed after the collision to the closing speed before the collision. The coefficient of restitution only affects the velocity along the normal direction, so we need to take the dot product with the normal vector. Notice that in this specific case \(V_{initial}\) is just JV. If we look back at our constraint above, we set \(\dot{d}\) to zero because we assumed the object does not bounce back (\(C_{r}=0\)). So, if \(C_{r} \neq 0\), we can modify our constraint so the desired velocity is \(V_{final}\) instead of 0. We can merge our old bias term with the restitution term to get a new bias value.

      // init constraint
      // Calculate J(M^-1)(J^T). This term is constant, so we can calculate it first.
      for (int i = 0; i < constraint->numContactPoint; i++)
      {
          ftContactPointConstraint *pointConstraint = &constraint->pointConstraint[i];
          pointConstraint->r1 = manifold->contactPoints[i].r1 - (bodyA->transform.center + bodyA->centerOfMass);
          pointConstraint->r2 = manifold->contactPoints[i].r2 - (bodyB->transform.center + bodyB->centerOfMass);
          real kNormal = bodyA->inverseMass + bodyB->inverseMass;
          // Calculate r X normal
          real rnA = pointConstraint->r1.cross(constraint->normal);
          real rnB = pointConstraint->r2.cross(constraint->normal);
          // Calculate J(M^-1)(J^T).
          kNormal += (bodyA->inverseMoment * rnA * rnA + bodyB->inverseMoment * rnB * rnB);
          // Save inverse of J(M^-1)(J^T).
          pointConstraint->normalMass = 1 / kNormal;
          pointConstraint->positionBias = m_option.baumgarteCoef * manifold->penetrationDepth;
          ftVector2 vA = bodyA->velocity;
          ftVector2 vB = bodyB->velocity;
          real wA = bodyA->angularVelocity;
          real wB = bodyB->angularVelocity;
          ftVector2 dv = (vB + pointConstraint->r2.invCross(wB) - vA - pointConstraint->r1.invCross(wA));
          // Calculate JV
          real jnV = dv.dot(constraint->normal);
          pointConstraint->restitutionBias = -restitution * (jnV + m_option.restitutionSlop);
      }

      // solve constraint
      while (numIteration > 0)
      {
          for (int i = 0; i < m_constraintGroup.nConstraint; ++i)
          {
              ftContactConstraint *constraint = &(m_constraintGroup.constraints[i]);
              int32 bodyIDA = constraint->bodyIDA;
              int32 bodyIDB = constraint->bodyIDB;
              ftVector2 normal = constraint->normal;
              ftVector2 tangent = normal.tangent();
              for (int j = 0; j < constraint->numContactPoint; ++j)
              {
                  ftContactPointConstraint *pointConstraint = &(constraint->pointConstraint[j]);
                  ftVector2 vA = m_constraintGroup.velocities[bodyIDA];
                  ftVector2 vB = m_constraintGroup.velocities[bodyIDB];
                  real wA = m_constraintGroup.angularVelocities[bodyIDA];
                  real wB = m_constraintGroup.angularVelocities[bodyIDB];
                  // Calculate JV (dv is the relative velocity at the contact; jnV = JV = dv dot normal)
                  ftVector2 dv = (vB + pointConstraint->r2.invCross(wB) - vA - pointConstraint->r1.invCross(wA));
                  real jnV = dv.dot(normal);
                  // Calculate lambda
                  real nLambda = (-jnV + pointConstraint->positionBias / dt + pointConstraint->restitutionBias) * pointConstraint->normalMass;
                  // Add lambda to normalImpulseSum and clamp
                  real oldAccumI = pointConstraint->nIAcc;
                  pointConstraint->nIAcc += nLambda;
                  if (pointConstraint->nIAcc < 0) {
                      pointConstraint->nIAcc = 0;
                  }
                  // Find the real lambda
                  real I = pointConstraint->nIAcc - oldAccumI;
                  // Calculate linear impulse
                  ftVector2 nLinearI = normal * I;
                  // Calculate angular impulse
                  real rnA = pointConstraint->r1.cross(normal);
                  real rnB = pointConstraint->r2.cross(normal);
                  real nAngularIA = rnA * I;
                  real nAngularIB = rnB * I;
                  // Apply linear impulse
                  m_constraintGroup.velocities[bodyIDA] -= constraint->invMassA * nLinearI;
                  m_constraintGroup.velocities[bodyIDB] += constraint->invMassB * nLinearI;
                  // Apply angular impulse
                  m_constraintGroup.angularVelocities[bodyIDA] -= constraint->invMomentA * nAngularIA;
                  m_constraintGroup.angularVelocities[bodyIDB] += constraint->invMomentB * nAngularIB;
              }
          }
          --numIteration;
      }

      General Steps to Solve a Constraint: In this article, we have learned how to resolve contact penetration by defining it as a constraint and solving it. But this framework is not only for contact penetration; we can do many more cool things with constraints, for example hinge joints, pulleys, springs, etc. So this is the step-by-step of constraint resolution: 1. Define the constraint in the form \(\dot{C}: JV + b = 0\). V is always \(\begin{bmatrix} \vec{v1} \\ w1 \\ \vec{v2} \\ w2\end{bmatrix}\) for every constraint, so we need to find J, the Jacobian matrix, for that specific constraint. 2. Decide the number of iterations for the sequential impulse. 3. Find the Lagrangian multiplier by inserting velocity, mass, and the Jacobian matrix into this equation (see the closed form quoted after the references below). 4. Do step 3 for each constraint, and repeat the process for the chosen number of iterations. 5. Clamp the Lagrangian multiplier if needed. This marks the end of this article. Feel free to ask if something is still unclear, and please let me know if there are inaccuracies in my article. Thank you for reading. NB: Box2D uses sequential impulse, but no longer uses Baumgarte stabilization; it uses full NGS to resolve the position drift. Chipmunk still uses Baumgarte stabilization. References: Allen Chou's post on Constraint Resolution; A Unified Framework for Rigid Body Dynamics; An Introduction to Physically Based Modeling: Constrained Dynamics; Erin Catto's Box2D and his presentation on constraint resolution.
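      For reference, the multiplier in step 3 has the standard sequential-impulse closed form (consistent with the nLambda line in the solver code above, up to the sign conventions of the bias terms):

      \[ \lambda = -\frac{JV + b}{J M^{-1} J^{T}}, \]

      where \(J M^{-1} J^{T}\) is the scalar effective-mass term (kNormal in the code, whose inverse is cached as normalMass) and b combines the Baumgarte and restitution biases.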
  11. Hello guys, my math is failing and I can't get my orthographic projection matrix to work in Vulkan 1.0 (my implementation works great in D3D11 and D3D12). Specifically, nothing is drawn on the screen when using the ortho matrix, but my perspective projection matrix works fantastically! I use glm with the defines GLM_FORCE_LEFT_HANDED and GLM_FORCE_DEPTH_ZERO_TO_ONE (to handle 0 to 1 depth). This is how I define my matrices:

     m_projection_matrix = glm::perspective(glm::radians(fov), aspect_ratio, 0.1f, 100.0f);
     m_ortho_matrix = glm::ortho(0.0f, (float)width, (float)height, 0.0f, 0.1f, 100.0f);
     // I also tried 0.0f and 1.0f for depth near and far, the same values I set (and which work) for D3D, but in Vulkan it doesn't work either.

     Then I multiply both matrices with a "fix matrix" to invert the Y axis:

     glm::mat4 matrix_fix =
         {1.0f,  0.0f, 0.0f, 0.0f,
          0.0f, -1.0f, 0.0f, 0.0f,
          0.0f,  0.0f, 1.0f, 0.0f,
          0.0f,  0.0f, 0.0f, 1.0f};
     m_projection_matrix = m_projection_matrix * matrix_fix;
     m_ortho_matrix = m_ortho_matrix * matrix_fix;

     This fix matrix works well in tandem with GLM_FORCE_DEPTH_ZERO_TO_ONE. The model/world matrix is the identity matrix:

     glm::mat4 m_world_matrix(1.0f);

     Finally, this is how I set my view matrix:

     // Yes, I use Euler angles (don't bring the gimbal lock topic here, lol). They work great with my cameras in D3D too!
     m_view_matrix = glm::yawPitchRoll(glm::radians(m_rotation.y), glm::radians(m_rotation.x), glm::radians(m_rotation.z));
     m_view_matrix = glm::translate(m_view_matrix, -m_position);

     That's all, guys; in my shaders I correctly multiply all 3 matrices with the position vector and, as I said, the perspective matrix works really well but the ortho matrix displays no geometry. EDIT: My vertex data is also on the right track; I use the same geometry in D3D and it works great: 256.0f units means 256 points/dots/pixels wide. What could I possibly be doing wrong or missing? Big thanks guys, any help would be greatly appreciated. Keep on coding, cheers.
  12. I've been puzzling over this for days now and have hit a wall, so I need some help! I'm using VR motion controllers in Unreal Engine. I need an object to follow along with the motion controller, but to maintain a certain orientation. If the motion controller / object were to move around on the flat XY plane (UE4 is Z-up, so the XY plane is the horizontal plane), the top of the object would point upwards, and the object would just turn corners following the direction of movement (turning much like a car does when driving around). At each tick, I take the current position of the motion controller and make a normalised direction vector using (CurrentPositionXYZ - PreviousPositionXYZ). Using this direction (forward vector), I then make a rotation from the forward and up vectors. To do that, I first find the right vector by taking the cross product of a global up vector (0,0,1) with the forward vector. Then I take the cross product of the right and forward vectors to get the actual up vector. Then I make the rotation from the forward and up vectors. Here is what this looks like if the motion controller movement is locked to the XY plane and moves in a circle: all looks good, and exactly what I was hoping for. However, if the circular movement occurs in either of the other planes (XZ or YZ), then we have a problem at the two points on the circle where the forward (direction) vector lines up with the global up vector I'm using. Because these calculations have no way of knowing that the object is moving in a certain direction, when the direction changes to move from negative Y to positive Y, for example, the whole thing flips. Here's what this looks like with a circle drawn in the YZ plane: this is obviously no good, as it breaks the consistency of movement. As such, I wrote some logic that detects whether the X or Y value crosses 0 while the direction is pointing up/down, and then reverses the global up vector. Assuming there is no movement/value in X, the movement graph for this looks like this: Red = X; Green = Y; Blue = Z || Circle in YZ plane starting movement in the -Y direction. And with the up-vector flip logic added, it looks like this: looking good! Except there's a massive problem. This all works fine if the movement is constrained to a plane (in this case the YZ plane, but it also works perfectly in the XZ plane), but it breaks down if the circular movement is drawn even slightly off-axis. Here's what happens: there's a section where everything flips. The size of that section is determined by the magnitude of X, and relates to the part of the graph pointed to in this diagram: here X is set at a constant -0.2. As soon as the Y value passes the value of X and approaches zero, the X value starts having greater and greater influence over the direction, and then as Y moves further away from zero again, the influence drops away. Although this makes sense, it's not what I want, and I can't figure out how to achieve what I want; I'm having trouble even defining the nature of the problem. Perhaps (probably) I'm doing this in a really stupid way, and there's a much simpler solution for what I'm trying to achieve. I could really use some help! Given a progression of known locations through 3D space, what is the best way to move an object along them such that its axes bank (if that's the correct term) realistically and don't do any weird flipping around? All help massively appreciated!
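     One way out that avoids picking a global up vector at all (a sketch of the parallel-transport idea, with made-up types rather than UE4's API): keep last tick's orientation and rotate it by the minimal rotation between the old and new movement directions. Because each update is a small relative rotation, there is no axis for the frame to flip around, and the banking carries over naturally from tick to tick.

     #include <cmath>

     struct Vec3 { float x, y, z; };
     struct Quat { float w, x, y, z; };

     static Vec3 cross(Vec3 a, Vec3 b) {
         return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
     }
     static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

     static Quat mul(Quat a, Quat b) {
         return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
                  a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
                  a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
                  a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
     }

     // Minimal rotation taking unit vector 'from' to unit vector 'to'.
     // Degenerate for exactly opposite vectors; clamp or special-case in production.
     static Quat fromTwoVectors(Vec3 from, Vec3 to) {
         Vec3 axis = cross(from, to);
         float w = 1.0f + dot(from, to);
         float len = std::sqrt(w*w + dot(axis, axis));
         return { w / len, axis.x / len, axis.y / len, axis.z / len };
     }

     // Carry the previous tick's orientation along the new movement direction
     // instead of rebuilding it from a fixed world up vector.
     Quat transportFrame(Quat prevOrientation, Vec3 prevForward, Vec3 newForward) {
         return mul(fromTwoVectors(prevForward, newForward), prevOrientation);
     }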
  13. akaitora

    Gaussian based Weights

    Hi. I was studying the Bloom tutorial on learnopengl.com: https://learnopengl.com/#!Advanced-Lighting/Bloom I understand the benefits of using a Gaussian blur for achieving a bloom effect, and that the author of this tutorial uses the Gaussian values as weights which determine how much neighboring pixels should affect the current pixel when blurred. In this tutorial, the author chooses the following weights: 0.227027, 0.1945946, 0.1216216, 0.054054, 0.016216, where 0.227027 + (0.1945946*2.0) + (0.1216216*2.0) + (0.054054*2.0) + (0.016216*2.0) comes out to about 1. The author takes 4 samples to the left and right of the current pixel, for a total sample count of 9 per axis. All that makes sense, but what I don't understand is how to calculate the weights myself. For example, let's say I wanted a sample count of some arbitrary number like 5. With a sample count of 5, the formula would then need to be A + (B*2.0) + (C*2.0) + (D*2.0) + (E*2.0) = 1. What I want is to figure out how to calculate the weights A, B, C, D, and E given the sample count. Thanks!
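    One way to generate such weights yourself (a sketch; sigma is a free parameter here, and the tutorial's exact numbers may come from a different sigma or from binomial coefficients): sample the Gaussian at integer offsets and normalize so that center + 2*(sum of side taps) = 1, which is exactly the A + 2B + 2C + 2D + 2E = 1 condition above.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Returns halfSamples + 1 weights: the center tap plus one weight per
    // one-sided offset (each side tap is applied twice in the blur pass).
    std::vector<float> gaussianWeights(int halfSamples, float sigma)
    {
        std::vector<float> w(halfSamples + 1);
        float sum = 0.0f;
        for (int i = 0; i <= halfSamples; ++i) {
            w[i] = std::exp(-(float)(i * i) / (2.0f * sigma * sigma));
            sum += (i == 0) ? w[i] : 2.0f * w[i];  // side taps count twice
        }
        for (float& v : w) v /= sum;               // now A + 2B + 2C + ... = 1
        return w;
    }

    int main() {
        // Five weights (A through E), matching the structure in the question.
        for (float v : gaussianWeights(4, 1.75f)) std::printf("%f\n", v);
    }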
  14. I'm trying to implement a constraint that fixes the relative orientation and position of two rigid bodies. My implementation is based on this document: http://danielchappuis.ch/download/ConstraintsDerivationRigidBody3D.pdf The resulting velocity adjustments do not satisfy the constraint, and I don't know what I'm doing wrong.

     #include <Eigen/Eigen>

     using namespace Eigen;

     typedef double Real;
     typedef Eigen::Matrix<Real, 3, 3> Matrix3r;
     typedef Eigen::Matrix<Real, 3, 1> Vector3r;
     typedef Eigen::Quaternion<Real> Quaternionr;
     typedef Eigen::AlignedBox<Real, 3> AlignedBox3r;

     template <typename Scalar>
     inline Eigen::Matrix<Scalar, 3, 3> crossProductMatrix(const Eigen::Matrix<Scalar, 3, 1> &v)
     {
         Eigen::Matrix<Scalar, 3, 3> v_hat;
         v_hat <<     0, -v(2),  v(1),
                   v(2),     0, -v(0),
                  -v(1),  v(0),     0;
         return v_hat;
     }

     struct RigidBody
     {
         Matrix3r R;      // world-space rotation of the rigid body
         Vector3r x;      // world-space position of the rigid body
         Vector3r v;      // linear velocity
         Vector3r w;      // angular velocity
         Matrix3r Iinv;   // local-space inertia tensor
         Real invmass;
     };

     Matrix3r compute_inertia_tensor(const AlignedBox3r& box, Real m)
     {
         const Real a = box.sizes().x();
         const Real b = box.sizes().y();
         const Real c = box.sizes().z();
         return (Real(1.0 / 3.0) * m * Vector3r { b * b + c * c, a * a + c * c, a * a + b * b }).asDiagonal();
     }

     void computeMatrixK(
         const Vector3r &connector,
         const Real invMass,
         const Vector3r &x,
         const Matrix3r &inertiaInverseW,
         Matrix3r &K)
     {
         const Vector3r r = connector - x;
         K.setZero();
         K += invMass * Matrix3r::Identity();
         K += crossProductMatrix(r) * inertiaInverseW * crossProductMatrix(r).transpose();
     }

     bool init_RigidBodyFixedConstraint(
         const Real invMass0,                // inverse mass is zero if body is static
         const Vector3r &x0,                 // center of mass of body 0
         const Vector3r &v0,                 // velocity of body 0
         const Matrix3r &inertiaInverseW0,   // inverse inertia tensor (world space) of body 0
         const Vector3r &omega0,             // angular velocity of body 0
         const Real invMass1,                // inverse mass is zero if body is static
         const Vector3r &x1,                 // center of mass of body 1
         const Vector3r &v1,                 // velocity of body 1
         const Matrix3r &inertiaInverseW1,   // inverse inertia tensor (world space) of body 1
         const Vector3r &omega1,             // angular velocity of body 1
         const Vector3r &connector,          // world-space coordinates of anchor
         Eigen::Matrix<Real, 3, 7> &constraintInfo)
     {
         Matrix3r K1, K2;
         computeMatrixK(connector, invMass0, x0, inertiaInverseW0, K1);
         computeMatrixK(connector, invMass1, x1, inertiaInverseW1, K2);
         const Matrix3r inv_K_trans = (K1 + K2).inverse();
         const Matrix3r inv_K_rot = (inertiaInverseW0 + inertiaInverseW1).inverse();
         constraintInfo.block<3, 1>(0, 0) = connector;
         constraintInfo.block<3, 3>(0, 1) = inv_K_trans;
         constraintInfo.block<3, 3>(0, 4) = inv_K_rot;
         return true;
     }

     bool velocitySolve_RigidBodyFixedConstraint(
         const Real invMass0,                // inverse mass is zero if body is static
         const Vector3r &x0,                 // center of mass of body 0
         const Vector3r &v0,                 // velocity of body 0
         const Matrix3r &inertiaInverseW0,   // inverse inertia tensor (world space) of body 0
         const Vector3r &omega0,             // angular velocity of body 0
         const Real invMass1,                // inverse mass is zero if body is static
         const Vector3r &x1,                 // center of mass of body 1
         const Vector3r &v1,                 // velocity of body 1
         const Matrix3r &inertiaInverseW1,   // inverse inertia tensor (world space) of body 1
         const Vector3r &omega1,             // angular velocity of body 1
         Eigen::Matrix<Real, 3, 7> &constraintInfo,  // precomputed constraint info
         Vector3r &corr_v0, Vector3r &corr_omega0,
         Vector3r &corr_v1, Vector3r &corr_omega1)
     {
         const Vector3r connector = constraintInfo.block<3, 1>(0, 0);
         const Matrix3r invKt = constraintInfo.block<3, 3>(0, 1);
         const Matrix3r invKr = constraintInfo.block<3, 3>(0, 4);
         const Vector3r r0 = connector - x0;
         const Vector3r r1 = connector - x1;
         const Vector3r J_trans_v = (v1 + omega1.cross(r1)) - (v0 + omega0.cross(r0));
         const Vector3r J_rot_v = omega1 - omega0;
         const Vector3r lambda_trans = -invKt * J_trans_v;
         const Vector3r lambda_rot = -invKr * J_rot_v;
         corr_v0.setZero();
         corr_omega0.setZero();
         corr_v1.setZero();
         corr_omega1.setZero();
         if (invMass0 != 0.0f)
         {
             corr_v0 -= invMass0 * lambda_trans;
             corr_omega0 -= inertiaInverseW0 * r0.cross(lambda_trans);
             corr_omega0 -= inertiaInverseW0 * lambda_rot;
         }
         if (invMass1 != 0.0f)
         {
             corr_v1 += invMass1 * lambda_trans;
             corr_omega1 += inertiaInverseW1 * r1.cross(lambda_trans);
             corr_omega1 += inertiaInverseW1 * lambda_rot;
         }
         return true;
     }

     Real evalError(
         const Vector3r &x0,      // center of mass of body 0
         const Vector3r &v0,      // velocity of body 0
         const Vector3r &omega0,  // angular velocity of body 0
         const Vector3r &x1,      // center of mass of body 1
         const Vector3r &v1,      // velocity of body 1
         const Vector3r &omega1,  // angular velocity of body 1
         const Vector3r &connector)
     {
         const Vector3r r0 = connector - x0;
         const Vector3r r1 = connector - x1;
         const Vector3r J_trans_v = v1 + omega1.cross(r1) - v0 - omega0.cross(r0);
         const Vector3r J_rot_v = -omega0 + omega1;
         return J_trans_v.norm() + J_rot_v.norm();
     }

     int main(int argc, char ** argv)
     {
         const Real extents = 2.0f; // each cube is 2x2x2
         RigidBody A, B;
         A.x = Vector3r{ 0.0f, 1.0f, 0.0f };
         B.x = A.x + Vector3r::UnitY() * (1.0f * extents);
         const AlignedBox3r box{ Vector3r{ -1.0f, -1.0f, -1.0f }, Vector3r{ +1.0f, +1.0f, +1.0f } };
         const Real massA = 20.0f;
         const Real massB = 40.0f;
         A.Iinv = compute_inertia_tensor(box, massA).inverse();
         B.Iinv = compute_inertia_tensor(box, massB).inverse();
         A.invmass = massA == 0.0 ? 0.0 : 1.0 / massA;
         B.invmass = massB == 0.0 ? 0.0 : 1.0f / massB;
         B.v = Vector3r::UnitX() * +1.0f;
         A.v = Vector3r::UnitY() * -1.0f;
         //A.w.setZero();
         //B.w.setZero();
         B.w = { 0.1f, 0.2f, 0.3f };
         A.w = { 0.4f, 0.1f, 0.7f };
         A.R.setIdentity();
         B.R.setIdentity();
         // world-space inverse inertia tensors
         const Matrix3r inertiaInverseW1 = A.R * A.Iinv * A.R.transpose();
         const Matrix3r inertiaInverseW2 = B.R * B.Iinv * B.R.transpose();
         const Vector3r connector{ 0.0f, 2.0f, 0.0f };
         Eigen::Matrix<Real, 3, 7> constraintInfo;
         init_RigidBodyFixedConstraint(A.invmass, A.x, A.v, inertiaInverseW1, A.w,
                                       B.invmass, B.x, B.v, inertiaInverseW2, B.w,
                                       connector, constraintInfo);
         for (int i = 0; i < 1; ++i)
         {
             Vector3r corr_v0, corr_omega0;
             Vector3r corr_v1, corr_omega1;
             velocitySolve_RigidBodyFixedConstraint(A.invmass, A.x, A.v, inertiaInverseW1, A.w,
                                                    B.invmass, B.x, B.v, inertiaInverseW2, B.w,
                                                    constraintInfo,
                                                    corr_v0, corr_omega0, corr_v1, corr_omega1);
             A.v += corr_v0;
             A.w += corr_omega0;
             B.v += corr_v1;
             B.w += corr_omega1;
         }
         const Real error = evalError(A.x, A.v, A.w, B.x, B.v, B.w, connector);
         return 0;
     }
  15. Hi, there's a great tutorial on frustum culling, but it's impossible to compile because it uses old DirectX 11 types (D3DXPlane instead of XMVECTOR, etc). Can someone please help me update this one class, frustumClass, to the new DirectX 11 types (XMMATRIX, XMVECTOR, etc)? http://www.rastertek.com/dx11tut16.html Furthermore, can anyone please explain how he gets the minimum Z distance from the projection matrix by dividing one element by another? He leaves no explanation for this math, and it's extremely frustrating.

     // Calculate the minimum Z distance in the frustum.
     zMinimum = -projectionMatrix._43 / projectionMatrix._33;
     r = screenDepth / (screenDepth - zMinimum);
     projectionMatrix._33 = r;
     projectionMatrix._43 = -r * zMinimum;

     I'm also not sure how to use an XMVECTOR instead of the old plane class he uses. I'm confused as to how all the m12, m13, etc. correspond to the elements of an XMVECTOR; I thought you're not supposed to even access the elements of an XMVECTOR directly! So incredibly frustrating. Please help, thanks.
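     On the zMinimum question, a short derivation (assuming the standard left-handed D3D perspective projection, which is what this tutorial builds): the relevant entries are

     \[ M_{33} = \frac{z_f}{z_f - z_n}, \qquad M_{43} = \frac{-z_n z_f}{z_f - z_n}, \]

     so

     \[ -\frac{M_{43}}{M_{33}} = \frac{z_n z_f}{z_f - z_n} \cdot \frac{z_f - z_n}{z_f} = z_n . \]

     In other words, zMinimum is simply the near-plane distance recovered from the matrix, and the two lines that follow rebuild _33 and _43 for a frustum running from that near plane out to screenDepth.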
  16. Sorry for making a new thread about this, but I have a specific question which I couldn't find an answer to in any of the other threads I've looked at. I've been trying to get the method shown here to work for several days now, and I've run out of things to try. I've more or less resorted to using the barebones example shown there (with some very minor modifications, as it wouldn't run otherwise), but I still can't get it to work. Either I have misunderstood something completely, or there's a mistake somewhere. My shader code looks like this:

     Vertex shader:

     #version 330 core
     //Vertex shader
     //Half the size of the near plane {tan(fovy/2.0) * aspect, tan(fovy/2.0) }
     uniform vec2 halfSizeNearPlane;
     layout (location = 0) in vec3 clipPos;
     //UV for the depth buffer/screen access.
     //(0,0) in bottom left corner (1, 1) in top right corner
     layout (location = 1) in vec2 texCoord;
     out vec3 eyeDirection;
     out vec2 uv;

     void main()
     {
         uv = texCoord;
         eyeDirection = vec3((2.0 * halfSizeNearPlane * texCoord) - halfSizeNearPlane, -1.0);
         gl_Position = vec4(clipPos.xy, 0, 1);
     }

     Fragment shader:

     #version 330 core
     //Fragment shader
     layout (location = 0) out vec3 fragColor;
     in vec3 eyeDirection;
     in vec2 uv;
     uniform mat4 persMatrix;
     uniform vec2 depthrange;
     uniform sampler2D depth;

     vec4 CalcEyeFromWindow(in float windowZ, in vec3 eyeDirection, in vec2 depthrange)
     {
         float ndcZ = (2.0 * windowZ - depthrange.x - depthrange.y) / (depthrange.y - depthrange.x);
         float eyeZ = persMatrix[3][2] / ((persMatrix[2][3] * ndcZ) - persMatrix[2][2]);
         return vec4(eyeDirection * eyeZ, 1);
     }

     void main()
     {
         vec4 eyeSpace = CalcEyeFromWindow(texture(depth, uv).x, eyeDirection, depthrange);
         fragColor = eyeSpace.rbg;
     }

     My camera settings are:

     float fov = glm::radians(60.0f);
     float aspect = 800.0f / 600.0f;

     And my uniforms equal:

     uniform mat4 persMatrix = glm::perspective(fov, aspect, 0.1f, 100.0f)
     uniform vec2 halfSizeNearPlane = glm::vec2(glm::tan(fov/2.0) * aspect, glm::tan(fov/2.0))
     uniform vec2 depthrange = glm::vec2(0.0f, 1.0f)
     uniform sampler2D depth is a GL_DEPTH24_STENCIL8 texture which has depth values from an earlier pass (if I linearize it and set fragColor = vec3(linearizedZ), it shows up like it should, so nothing seems wrong there).

     I can confirm that it's wrong because it doesn't give me results similar to what saving the position in the G-buffer, or reconstructing with inverse matrices, does. Is there something obvious I'm missing? To me the logic seems sound, and from the description on the Khronos wiki I can't see where I go wrong. Thanks!
  17. I am doing a little physics project, with circle-circle collisions for now, and have tried to do impulse resolution for collisions between 2 circles using the following code:

     relativeVelocity = (other.doVerletVelocity()).subtract(self.doVerletVelocity())
     normDirecVel = relativeVelocity.dotProduct(collisionNormal)
     restitution = -1 - min(self.restitution, other.restitution)
     numerator = normDirecVel * restitution
     impulseScalar = numerator / float(1 / self.mass) + float(1 / other.mass)
     selfVel = self.doVerletVelocity()
     otherVel = other.doVerletVelocity()
     impulse = collisionNormal.scalarMult(impulseScalar)
     selfDV = impulse.scalarMult(1 / self.mass)
     otherDV = impulse.scalarMult(1 / other.mass)
     newSelfVel = selfVel.subtract(selfDV)
     newOtherVel = otherVel.add(otherDV)
     self.oldPos = (self.center).subtract(newSelfVel.scalarMult(dt))
     other.oldPos = (other.center).subtract(newOtherVel.scalarMult(dt))

     The problem seems to be that whatever value I give to self.mass and other.mass, the output stays exactly the same. The values I used are:

     center = Vector(0, 0)
     radius = 1
     oldPos = Vector(0, 0)
     accel = Vector(0, 0)
     mass = 100
     restitution = 0.001

     center2 = Vector(0, 3.20)
     radius2 = 1
     oldPos2 = Vector(0, 3.201)
     accel2 = Vector(0, -1)
     mass2 = 1
     restitution2 = 1

     The output was:

     0.0 0.0 0.0 2.165000000000114
     0.0 0.0 0.0 2.1360000000001174
     0.0 0.0 0.0 2.1066000000001206
     0.0 0.0 0.0 2.076800000000124
     0.0 0.0 0.0 2.046600000000127
     0.0 0.0 0.0 2.0160000000001306
     0.0 0.0 0.0 1.985000000000134
     CIRCLE INTERSECTION
     0.0 -1.985000000000134 0.0 3.938600000000271
     0.0 -3.970000000000268 0.0 5.891800000000408
     0.0 -5.9550000000004015 0.0 7.844600000000544
     0.0 -7.940000000000535 0.0 9.797000000000681

     I changed the values for the masses to make them higher, but the output still remained the same. If you could get to the bottom of this, it would be much appreciated.
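     For reference, the impulse scalar these lines are aiming for has the standard form

     \[ j = \frac{-(1 + e)\,(\vec{v}_{rel} \cdot \hat{n})}{\frac{1}{m_1} + \frac{1}{m_2}}, \]

     with the entire inverse-mass sum in the denominator. It is worth checking the operator precedence of the impulseScalar line above against this: as written, numerator / float(1 / self.mass) + float(1 / other.mass) parses as a division by the first inverse mass only, followed by an addition, rather than a division by the parenthesized sum.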
  18. How do I rotate the camera 360 degrees around the x axis without getting the strange effect shown in my video below? Mine behaves exactly the way spherical coordinates would; I'm using Euler angles. I tried googling but couldn't find a proper answer; I'm guessing I don't know exactly what to google for. Searching 'rotate 360 around x axis' got no proper answers. References: Code: https://pastebin.com/Hcshj3FQ The video shows the difference between Blender and my rotation:
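     One common remedy, sketched below with assumed conventions (column-major storage, camera-to-world orientation): accumulate the orientation instead of recomputing it from Euler/spherical angles each frame, pitching about the camera's own right axis. The accumulated matrix never passes through a spherical-coordinate pole, so a full 360-degree pitch wraps around cleanly.

     #include <cmath>

     // 3x3 column-major rotation accumulated over time.
     struct Mat3 { float m[9]; };

     static Mat3 mul(const Mat3& a, const Mat3& b) {
         Mat3 r{};
         for (int c = 0; c < 3; ++c)
             for (int i = 0; i < 3; ++i)
                 r.m[c*3 + i] = a.m[0*3 + i]*b.m[c*3 + 0]
                              + a.m[1*3 + i]*b.m[c*3 + 1]
                              + a.m[2*3 + i]*b.m[c*3 + 2];
         return r;
     }

     // Rodrigues' formula: rotation about an arbitrary unit axis.
     static Mat3 axisAngle(float ax, float ay, float az, float angle) {
         float c = std::cos(angle), s = std::sin(angle), t = 1.0f - c;
         return {{ t*ax*ax + c,    t*ax*ay + s*az, t*ax*az - s*ay,
                   t*ax*ay - s*az, t*ay*ay + c,    t*ay*az + s*ax,
                   t*ax*az + s*ay, t*ay*az - s*ax, t*az*az + c }};
     }

     // Pitch about the camera's current right axis (first column of the
     // camera-to-world matrix), not the world x axis.
     void pitchCamera(Mat3& orientation, float dAngle) {
         Mat3 r = axisAngle(orientation.m[0], orientation.m[1], orientation.m[2], dAngle);
         orientation = mul(r, orientation);
     }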
  19. Hi all, more than a decade ago, a problem came up on this forum for computing a fast transpose of a 3x3 matrix using SSE. The most sensible implementation stores the matrix internally as a 3x4 matrix (so one row stores 4 elements, aligned in a vector). A version, believed at the time to be the fastest known, was presented: I am pleased to report that I have been able to come up with a version which should be faster:

     inline void transpose(__m128& A, __m128& B, __m128& C)
     {
         // Input rows in __m128& A, B, and C. Output in same.
         __m128 T0 = _mm_unpacklo_ps(A, B);
         __m128 T1 = _mm_unpackhi_ps(A, B);
         A = _mm_movelh_ps(T0, C);
         B = _mm_shuffle_ps(T0, C, _MM_SHUFFLE(3, 1, 3, 2));
         C = _mm_shuffle_ps(T1, C, _MM_SHUFFLE(3, 2, 1, 0));
     }

     This should be 5 instructions instead of ajas95's 8 instructions. Of course, to get that level of performance with either version, you need to inline everything, or else you spend tons of time moving floating-point arguments to/from input registers. The other thing that is crucial is that the instruction set be VEX encoded. This allows generating instructions that take three arguments, like `vunpcklps`, instead of instructions like `unpcklps` that take only two. VEX is only available in AVX and higher (usually passing e.g. `-mavx` is sufficient to get the compiler to generate VEX instructions). -G
  20. Myopic Rhino

    Quaternion Powers

    Version 1.2, February 2003. Updates in 1.2: a minor correction to the formula for converting from quaternion to axis-angle; the scale was missing a square root. Thanks to Shi for pointing that out. From version 1.0 to 1.1: the norm of a quaternion should be the square root of q.q. The mistake was brought to my attention by several kind readers, and upon checking the definition of the Euclidean properties for complex numbers, I realized the norm property || u+v ||
  21. I'm trying to implement OBB vs OBB SAT collision from Randy Gaul's article. I have been playing with it for days, but I can't figure out why it doesn't work. I am storing all vertex positions of each box, and also each edge normal, in two lists. Then I use them in the algorithm. I've tried a lot of changes to make it work, but no luck. This is my C++ code:

     QVector2D Entity::getSupport(QVector2D dir)
     {
         float bestProjection = -std::numeric_limits<float>::max();
         QVector2D bestVertex;
         for (quint32 i = 0; i < verticesList.count(); ++i)
         {
             QVector2D v = verticesList.at(i);
             float projection = QVector2D::dotProduct(v, dir);
             if (projection > bestProjection)
             {
                 bestVertex = v;
                 bestProjection = projection;
             }
         }
         return bestVertex;
     }

     qreal Collision::FindAxisLeastPenetration(Entity *A, Entity *B)
     {
         float bestDistance = -std::numeric_limits<float>::max();
         for (quint32 k = 0; k < A->verticesList.count(); ++k)
         {
             QVector2D n = A->normalList.at(k);
             QVector2D s = B->getSupport(-n);
             QVector2D v = A->verticesList.at(k);
             QVector2D r = s - v;
             qreal d = QVector2D::dotProduct(n, r);
             if (d > bestDistance)
             {
                 bestDistance = d;
             }
         }
         return bestDistance;
     }

     if (coli->FindAxisLeastPenetration(player, player2) < 0 && coli->FindAxisLeastPenetration(player2, player) < 0)
     {
         qDebug() << "Collision detected ";
     }
  22. Hi guys, I'm trying to build my own 3x3 rotation matrix algorithm, for the sake of understanding it as thoroughly as I can. The code below is what I just wrote from memory in an attempt to explain it to myself and test what I understand and what I do not. I'm roughly starting to get it (I saw it last week for the first time, and it kind of crushed my soul at the time), and if I decompose it into small single pieces as I did below, it actually works: my point gets rotated to the appropriate coordinates (or almost, since the y component ends up getting zeroed out, so I think I just need to add the p_proj_to_rA vector back to the final result to have it right). That said, I've completely missed the point of building a rotation matrix out of it, even though the whole time I was convinced I was moving toward that goal. I mean, I've made my matrix class with all the relevant operations/methods, but I just don't know how to do the final step of converting what I have below into a rotation matrix. And I do not mean the formula, which I have in the book and could copy-paste and have work if I wanted to; what I need is to actually see how to derive that final matrix formula from what I have below. I need to understand how it correlates, in a way that if I were to reason about the final rotation matrix formula in the book I could say "of course, it's obvious". So everyone who deeply understands rotation matrices, and how to end up with one (without just copy-pasting the formula from a book or using a function that hands you one), is welcome to take on the challenge of making it look obvious to me, so that I can finish my program and end up with the correct matrix for whatever axis of rotation I choose. Or just link me whatever online resource helped you in the past. Thanks!

     Vector p = { 1, 1, 1 };  // point that needs to rotate
     Vector rA = { 0, 1, 0 }; // rotation axis
     p.computeMagnitude();    // sqrt( x*x + y*y + z*z )
     rA.computeMagnitude();   // sqrt( x*x + y*y + z*z )

     // Projection of our point on the rotation axis, which corresponds to the dot product since rA is of length 1
     // Formula 1: dotProduct(p,rA) = px*rAx + py*rAy + pz*rAz
     // Formula 2: dotProduct(p,rA) = |p| * |rA| * cos(angleBetween)
     // proj_rA(p) = dotProduct(p,rA) = |p| * 1 * cos(angleBetween) = |p| * cos(angleBetween)
     // Solve Formula 2 for angleBetween:
     // |p| * cos(angleBetween) = dotProduct(p,rA)
     // cos(angleBetween) = dotProduct(p,rA) / |p|
     // angleBetween = acos( dotProduct(p,rA) / |p| )
     // proj_rA(p) = |p| * cos(angleBetween)
     // proj_rA(p) = |p| * cos( acos( dotProduct(p,rA) / |p| ) )

     // Projection of p on rA:
     double proj_to_rA = p.magnitude * cos(Vector::angleBetween(p, rA));
     // Vector p projected on rA, which is basically a longer or shorter version of rA
     Vector p_proj_to_rA = rA * proj_to_rA;
     // Subtracting from p the (projection of p on rA) zeroes out the dotProduct(p,rA),
     // so the resulting vector must be orthogonal to rA
     Vector p_perp_to_rA1 = p - p_proj_to_rA;

     // The cross product of two vectors is a vector orthogonal to both of them
     // Formula 1: |crossProduct(A,B)| = |A|*|B|*sin(angleBetween)
     // Formula 2: ( v1.y*v2.z - v1.z*v2.y,   //X
     //              v1.z*v2.x - v1.x*v2.z,   //Y
     //              v1.x*v2.y - v1.y*v2.x )  //Z
     Vector p_perp_to_rA2 = Vector::crossProduct(rA, p_perp_to_rA1);

     // Since rA, p_perp_to_rA1 and p_perp_to_rA2 are all perpendicular to each other,
     // if we now consider only p_perp_to_rA1 and p_perp_to_rA2 they describe a 2D plane
     // perpendicular to rA, and we rotate stuff based on this plane.
     // The desired point of that plane is the sum (X times p_perp_to_rA1 + Y times p_perp_to_rA2)
     double desiredRotation = toRad(-30);
     double X = cos(desiredRotation);
     double Y = sin(desiredRotation);
     Vector finalPoint = Vector(p_perp_to_rA1 * X) + Vector(p_perp_to_rA2 * Y);
     cout << "x,y,z{ 1, 1, 1 } -30 degree on the x,y,z{ 0, 1, 0 } Axis = " << finalPoint << endl;
     return 0;

     output:
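     A possible bridge from the decomposition above to the book's matrix (my summary of the standard derivation, using the same vector names): with \(\hat{a}\) the unit axis rA and \(\theta\) the angle, the rotated point is

     \[ p' = (\hat{a} \cdot p)\,\hat{a} \;+\; \cos\theta \left( p - (\hat{a} \cdot p)\,\hat{a} \right) \;+\; \sin\theta \,(\hat{a} \times p), \]

     whose three terms are exactly p_proj_to_rA, p_perp_to_rA1 scaled by X, and p_perp_to_rA2 scaled by Y in the code (crossing \(\hat{a}\) with p or with p_perp_to_rA1 gives the same vector, since the axis-parallel part drops out of the cross product). The first term is also the p_proj_to_rA piece you already suspected was missing from your final result. Each term is linear in p, so each can be written as a matrix acting on p: \((\hat{a} \cdot p)\hat{a} = \hat{a}\hat{a}^{T} p\), \(p - (\hat{a} \cdot p)\hat{a} = (I - \hat{a}\hat{a}^{T}) p\), and \(\hat{a} \times p = [\hat{a}]_{\times} p\), where \([\hat{a}]_{\times}\) is the skew-symmetric cross-product matrix (Formula 2 in the comments, written as a matrix). Summing the three matrices gives

     \[ R = \hat{a}\hat{a}^{T} + \cos\theta \,(I - \hat{a}\hat{a}^{T}) + \sin\theta \,[\hat{a}]_{\times}, \]

     which is Rodrigues' rotation formula, i.e. the matrix the book states.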
  23. Has anyone tried finding isometric bounds programmatically? What I'm trying to achieve is to use the Cartesian width and height of the object to find its isometric bounds, but I think that just doesn't work once the image gets complicated. The attachments should explain clearly what I'm trying to achieve (note: the bounds drawing isn't perfect).
  24. Hello, I wrote a math library for C and, having seen a thread about this here, wanted to share it. It may help those who, like me, work in C and are looking for a 3D math library for C89/C99. It is really easy to use and really fast! Repo: https://github.com/recp/cglm All functions are inlined, but you can opt to use pre-compiled versions of them all; for instance glm_mat4_mul is inline and glmc_mat4_mul is pre-compiled. If you don't need the pre-compiled versions, you don't need to build the library. The library uses SIMD/AVX if available, and NEON may be supported in the future. All functions are optimized, and there are lots of convenience functions. The features are listed in the repo's README, so I think there is no need to repeat them here. I just wanted to share my work to help people who are looking for a new lib, or a lib for C.