
Vulkan: Can't get my orthographic projection matrix to work


Hello guys,

My math is failing me and I can't get my orthographic projection matrix to work in Vulkan 1.0 (my implementation works great in D3D11 and D3D12). Specifically, nothing is drawn on screen when I use the ortho matrix, but my perspective projection matrix works fine!

I use GLM with the defines GLM_FORCE_LEFT_HANDED and GLM_FORCE_DEPTH_ZERO_TO_ONE (to handle the 0 to 1 depth range).

This is how I define my matrices:

m_projection_matrix = glm::perspective(glm::radians(fov), aspect_ratio, 0.1f, 100.0f);
m_ortho_matrix = glm::ortho(0.0f, (float)width, (float)height, 0.0f, 0.1f, 100.0f); // I also tried 0.0f and 1.0f for near and far depth, the same values that work for me in D3D, but in Vulkan that doesn't work either.

Then I multiply both matrices by a "fix matrix" to invert the Y axis:

glm::mat4 matrix_fix =
{ 1.0f,  0.0f, 0.0f, 0.0f,
  0.0f, -1.0f, 0.0f, 0.0f,
  0.0f,  0.0f, 1.0f, 0.0f,
  0.0f,  0.0f, 0.0f, 1.0f };

m_projection_matrix = m_projection_matrix * matrix_fix;
m_ortho_matrix = m_ortho_matrix * matrix_fix;

This fix matrix works well in tandem with GLM_FORCE_DEPTH_ZERO_TO_ONE.
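For comparison, a common equivalent in Vulkan codebases is to negate the Y column of the projection directly instead of multiplying by a separate fix matrix (just a sketch, not my actual code; for the glm::perspective and glm::ortho matrices above it amounts to the same single element):

m_projection_matrix[1][1] *= -1.0f; // flips clip-space Y, same net effect as matrix_fix for these matrices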

The model/world matrix is the identity matrix:

glm::mat4 m_world_matrix(1.0f);

Then finally, this is how I set my view matrix:

// Yes, I use Euler angles (don't bring the gimbal lock topic here, lol). They work great with my cameras in D3D too!
m_view_matrix = glm::yawPitchRoll(glm::radians(m_rotation.y), glm::radians(m_rotation.x), glm::radians(m_rotation.z));
m_view_matrix = glm::translate(m_view_matrix, -m_position);

That's all, guys. In my shaders I multiply all three matrices with the position vector and, as I said, the perspective matrix works really well, but my ortho matrix displays no geometry.
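For reference, the multiplication order looks like this on the CPU side (a minimal sketch with assumed names; the same three multiplies can equally live in the shader):

glm::mat4 mvp = m_ortho_matrix * m_view_matrix * m_world_matrix; // clip = P * V * W * position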

EDIT: My vertex data is also on the right track; I use the same geometry in D3D and it works great: 256.0f units means 256 points/dots/pixels wide.

What could I possibly be doing wrong or missing?

Big thanks, guys, any help would be greatly appreciated. Keep on coding, cheers.

 

Finoli replied:
m_ortho_matrix = glm::ortho(0.0f, (float)width, (float)height, 0.0f, 0.1f, 100.0f);

width and height here should not be the screen width and height, but rather the width and height in world space. The width usually depends on how "zoomed in" you want to be, and the height will be width * (resHeight / resWidth).
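For example, a minimal sketch of what that could look like (variable names are just placeholders, not from your code):

float worldWidth  = 20.0f; // how many world units should be visible across the screen
float worldHeight = worldWidth * ((float)resHeight / (float)resWidth);
m_ortho_matrix = glm::ortho(0.0f, worldWidth, worldHeight, 0.0f, 0.1f, 100.0f);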

Hope this helps!

On 1/10/2018 at 11:27 PM, Finoli said:

m_ortho_matrix = glm::ortho(0.0f, (float)width, (float)height, 0.0f, 0.1f, 100.0f);

width and height here should not be the screen width and height, but rather the width and height in world space. The width usually depends on how "zoomed in" you want to be, and the height will be width * (resHeight / resWidth).

hope this helps

Hi, thanks for your reply. I'm not setting any "zoom level"; I want to be 1:1 with the resolution. Every other glm::ortho example I've seen sets the screen resolution, but it doesn't matter anyway, all the values I try fail.
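For reference, this is the kind of 1:1 pixel-space setup I mean (a minimal sketch, variable names assumed):

// left/right span the window width in pixels, bottom/top span the height
// (flipped, as in my original call). Note the 0.1 .. 100.0 depth range:
// vertices have to land between the near and far planes in view space,
// otherwise the clipper discards them.
m_ortho_matrix = glm::ortho(0.0f, (float)swapchain_width, (float)swapchain_height, 0.0f, 0.1f, 100.0f);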



  • Similar Content

    • By kevinyu
      Original Post: Limitless Curiosity
Out of the various phases of a physics engine, constraint resolution was personally the hardest for me to understand. I had to read a lot of different papers and articles to fully understand how it works, so I decided to write this article to help me understand it more easily in the future if, for example, I forget how it works.
This article tackles the problem by working through an example and then turning it into a general formula. So let us delve into a pretty common scenario, where two of our rigid bodies collide and penetrate each other, as depicted below.

      From the scenario above we can formulate:
\(d = ((\vec{p_1} + \vec{r_1}) - (\vec{p_2} + \vec{r_2})) \cdot \vec{n}\)
We don't want our rigid bodies to intersect each other, thus we construct a constraint where this distance must not be negative:
\(C: d \geq 0\)
This is an inequality constraint; we can transform it into a simpler equality constraint by only solving it when the two bodies are penetrating each other. If the two rigid bodies don't collide, we don't need any constraint resolution. So:
if \(d \geq 0\), do nothing; else if \(d < 0\), solve \(C: d = 0\)
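As a concrete illustration, here is a small self-contained sketch of that distance measurement (the types and names are mine for illustration, not the engine's):

struct Vec2 { float x, y; };
float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

// d = ((p1 + r1) - (p2 + r2)) . n  -- negative when the two bodies overlap
float separationAlongNormal(Vec2 p1, Vec2 r1, Vec2 p2, Vec2 r2, Vec2 n)
{
    Vec2 diff = { (p1.x + r1.x) - (p2.x + r2.x),
                  (p1.y + r1.y) - (p2.y + r2.y) };
    return dot(diff, n);
}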
Now we solve this constraint by calculating \(\Delta \vec{p_1}, \Delta \vec{p_2}, \Delta \vec{r_1}\), and \(\Delta \vec{r_2}\) such that the constraint above is satisfied. This is called the position-based method. It satisfies the constraint immediately in the current frame, but it might cause a jittery effect.
A much more modern and preferable method, used in Box2D, Chipmunk, Bullet, and my physics engine, is called the impulse-based method. In this method, we derive a velocity constraint equation from the position constraint equation above.

We are working in 2D, so angular velocities and the cross products of two vectors are scalars.
Next, we need to find \(\Delta V\), or the impulse, that satisfies the velocity constraint. This \(\Delta V\) is caused by a force we call the 'constraint force'. The constraint force only acts along the direction of illegal movement, in our case the penetration normal. We don't want this force to do any work, or to contribute to or restrict any motion in a legal direction.

\(\lambda\) is a scalar called the Lagrange multiplier. To understand why the constraint force acts along the \(J^{T}\) direction (remember that J is a 1 by 12 matrix, so \(J^{T}\) is a 12 by 1 matrix, i.e. a 12-dimensional vector), try to remember the equation of a three-dimensional plane.

Now we can draw a similarity between equation (1) and equation (2), where \(\vec{n}^{T}\) is similar to J and \(\vec{v}\) is similar to V. So we can interpret equation (1) as a 12-dimensional plane, and conclude that \(J^{T}\) is the normal of this plane. If a point is outside a plane, the shortest path from the point to the plane is along the normal direction.

After we calculate the Lagrange multiplier, we have a way to get the impulse back from equation (3). Then we can apply this impulse to each rigid body.
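For reference, the closed form this arrives at is the usual one (my own summary in symbols, since the equation images are not reproduced here; b is the bias term introduced below, and is zero for the plain velocity constraint):

\[
\lambda = \frac{-(JV + b)}{J\,M^{-1}J^{T}}, \qquad \Delta V = M^{-1} J^{T} \lambda
\]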
      Baumgarte Stabilization
Note that solving the velocity constraint doesn't mean we satisfy the position constraint. When we solve the velocity constraint, there is already a violation of the position constraint. We call this violation position drift. What we achieve is stopping the two bodies from penetrating any deeper (the penetration depth stops growing). That might be fine for a slow-moving object, where the position drift is not noticeable, but it becomes a problem as the object moves faster. The animation below demonstrates what happens when we only solve the velocity constraint.
So instead of purely solving the velocity constraint, we add a bias term to fix any violation of the position constraint.

So what is the value of the bias? As mentioned before, we need this bias to fix the positional drift, so we want the bias to be proportional to the penetration depth.

This method is called Baumgarte stabilization and \(\beta\) is the Baumgarte term. The right value for this term differs between scenarios; we need to tweak it between 0 and 1 to find the value that keeps our simulation stable.
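In symbols, the Baumgarte bias feeds a fraction of the position error back in each timestep (my own summary; sign conventions vary between engines, and the solver code below keeps the penetration depth as a positive number):

\[
b = \frac{\beta}{\Delta t}\, C, \qquad 0 \le \beta \le 1
\]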

       
      Sequential Impulse
If our world consisted of only two rigid bodies and one contact constraint, then the above method would work decently. But in most games there are more than two rigid bodies. One body can collide and penetrate two or more bodies, and we need to satisfy all the contact constraints simultaneously. For a real-time application, solving all these constraints simultaneously is not feasible. Erin Catto proposes a practical solution called sequential impulse. The idea is similar to projected Gauss-Seidel: we calculate \(\lambda\) and \(\Delta V\) for each constraint, one by one, from constraint 1 to constraint n (n = number of constraints). After we finish iterating through the constraints and calculating \(\Delta V\), we repeat the process from constraint 1 to constraint n until the specified number of iterations is reached. This algorithm converges to the actual solution: the more we repeat the process, the more accurate the result. In Box2D, Erin Catto sets ten as the default number of iterations.
Another thing to notice is that while we satisfy one constraint, we might unintentionally satisfy another one as well. Say, for example, that we have two different contact constraints on the same rigid body.

When we solve \(\dot{C_1}\), we might incidentally make \(\dot{d_2} \geq 0\). Remember that equation (5) is a formula for \(\dot{C}: \dot{d} = 0\), not \(\dot{C}: \dot{d} \geq 0\), so we don't need to apply it to \(\dot{C_2}\) anymore. We can detect this by looking at the sign of \(\lambda\). If the sign of \(\lambda\) is negative, the constraint is already satisfied; if we used this negative lambda as an impulse, we would pull the bodies closer instead of pushing them apart. It is fine for an individual \(\lambda\) to be negative, but we need to make sure the accumulation of \(\lambda\) is not negative. In each iteration, we add the current lambda to normalImpulseSum, then clamp normalImpulseSum between 0 and positive infinity. The actual Lagrange multiplier we use to calculate the impulse is the difference between the new normalImpulseSum and the previous normalImpulseSum.
      Restitution
Okay, now we have successfully resolved contact penetration in our physics engine. But what about simulating objects that bounce when a collision happens? The property of bouncing on collision is called restitution. The coefficient of restitution, denoted \(C_{r}\), is the ratio of the parting speed after the collision to the closing speed before the collision.

The coefficient of restitution only affects the velocity along the normal direction, so we need to take the dot product with the normal vector.

Notice that in this specific case \(V_{initial}\) is similar to JV. If we look back at our constraint above, we set \(\dot{d}\) to zero because we assumed the object does not bounce back (\(C_{r}=0\)). So, if \(C_{r} \neq 0\), we can modify our constraint so that the desired velocity is \(V_{final}\) instead of 0.

      We can merge our old bias term with the restitution term to get a new bias value.
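In symbols (my own summary, written to match the solver code below rather than the original equation images): restitution asks the post-collision relative normal velocity to be \(-C_r\) times the pre-collision one, and the per-contact Lagrange multiplier then works out to roughly

\[
J V_{final} = -\,C_r\,(J V_{initial}), \qquad
\lambda = \frac{-\,JV + \frac{\beta}{\Delta t}\,d_{pen} - C_r\,(J V_{initial})}{J\,M^{-1}J^{T}}
\]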

// init constraint
// Calculate J(M^-1)(J^T). This term is constant so we can calculate this first
for (int i = 0; i < constraint->numContactPoint; i++)
{
    ftContactPointConstraint *pointConstraint = &constraint->pointConstraint[i];

    pointConstraint->r1 = manifold->contactPoints[i].r1 - (bodyA->transform.center + bodyA->centerOfMass);
    pointConstraint->r2 = manifold->contactPoints[i].r2 - (bodyB->transform.center + bodyB->centerOfMass);

    real kNormal = bodyA->inverseMass + bodyB->inverseMass;

    // Calculate r X normal
    real rnA = pointConstraint->r1.cross(constraint->normal);
    real rnB = pointConstraint->r2.cross(constraint->normal);

    // Calculate J(M^-1)(J^T).
    kNormal += (bodyA->inverseMoment * rnA * rnA + bodyB->inverseMoment * rnB * rnB);

    // Save inverse of J(M^-1)(J^T).
    pointConstraint->normalMass = 1 / kNormal;

    pointConstraint->positionBias = m_option.baumgarteCoef * manifold->penetrationDepth;

    ftVector2 vA = bodyA->velocity;
    ftVector2 vB = bodyB->velocity;
    real wA = bodyA->angularVelocity;
    real wB = bodyB->angularVelocity;

    ftVector2 dv = (vB + pointConstraint->r2.invCross(wB) - vA - pointConstraint->r1.invCross(wA));

    // Calculate JV
    real jnV = dv.dot(constraint->normal);
    pointConstraint->restitutionBias = -restitution * (jnV + m_option.restitutionSlop);
}

// solve constraint
while (numIteration > 0)
{
    for (int i = 0; i < m_constraintGroup.nConstraint; ++i)
    {
        ftContactConstraint *constraint = &(m_constraintGroup.constraints[i]);

        int32 bodyIDA = constraint->bodyIDA;
        int32 bodyIDB = constraint->bodyIDB;
        ftVector2 normal = constraint->normal;
        ftVector2 tangent = normal.tangent();

        for (int j = 0; j < constraint->numContactPoint; ++j)
        {
            ftContactPointConstraint *pointConstraint = &(constraint->pointConstraint[j]);

            ftVector2 vA = m_constraintGroup.velocities[bodyIDA];
            ftVector2 vB = m_constraintGroup.velocities[bodyIDB];
            real wA = m_constraintGroup.angularVelocities[bodyIDA];
            real wB = m_constraintGroup.angularVelocities[bodyIDB];

            // Calculate JV. (jnV = JV, dv = derivative of d, JV = derivative(d) dot normal)
            ftVector2 dv = (vB + pointConstraint->r2.invCross(wB) - vA - pointConstraint->r1.invCross(wA));
            real jnV = dv.dot(normal);

            // Calculate lambda
            real nLambda = (-jnV + pointConstraint->positionBias / dt + pointConstraint->restitutionBias) * pointConstraint->normalMass;

            // Add lambda to normalImpulse and clamp
            real oldAccumI = pointConstraint->nIAcc;
            pointConstraint->nIAcc += nLambda;
            if (pointConstraint->nIAcc < 0) {
                pointConstraint->nIAcc = 0;
            }

            // Find real lambda
            real I = pointConstraint->nIAcc - oldAccumI;

            // Calculate linear impulse
            ftVector2 nLinearI = normal * I;

            // Calculate angular impulse
            real rnA = pointConstraint->r1.cross(normal);
            real rnB = pointConstraint->r2.cross(normal);
            real nAngularIA = rnA * I;
            real nAngularIB = rnB * I;

            // Apply linear impulse
            m_constraintGroup.velocities[bodyIDA] -= constraint->invMassA * nLinearI;
            m_constraintGroup.velocities[bodyIDB] += constraint->invMassB * nLinearI;

            // Apply angular impulse
            m_constraintGroup.angularVelocities[bodyIDA] -= constraint->invMomentA * nAngularIA;
            m_constraintGroup.angularVelocities[bodyIDB] += constraint->invMomentB * nAngularIB;
        }
    }
    --numIteration;
}
General Steps to Solve a Constraint
In this article, we have learned how to resolve contact penetration by defining it as a constraint and solving it. But this framework is not only for contact penetration; we can do many more cool things with constraints, like implementing hinge joints, pulleys, springs, etc.
So here is the step-by-step recipe for constraint resolution:
1. Define the constraint in the form \(\dot{C}: JV + b = 0\). V is always \(\begin{bmatrix} \vec{v_1} \\ w_1 \\ \vec{v_2} \\ w_2\end{bmatrix}\) for every constraint, so we need to find J, the Jacobian matrix, for that specific constraint.
2. Decide the number of iterations for the sequential impulse.
3. Find the Lagrange multiplier by inserting the velocity, mass, and Jacobian matrix into the lambda equation derived above.
4. Do step 3 for each constraint, and repeat the process as many times as the number of iterations.
5. Clamp the Lagrange multiplier if needed.
This marks the end of this article. Feel free to ask if something is still unclear, and please let me know if there are inaccuracies in my article. Thank you for reading.
NB: Box2D uses sequential impulse, but it no longer uses Baumgarte stabilization; it uses full NGS to resolve the position drift. Chipmunk still uses Baumgarte stabilization.
      References
Allen Chou's post on Constraint Resolution
A Unified Framework for Rigid Body Dynamics
An Introduction to Physically Based Modeling: Constrained Dynamics
Erin Catto's Box2d and presentation on constraint resolution

    • By AxeGuywithanAxe
      I wanted to see how others are currently handling descriptor heap updates and management.
I've read a few articles and there tend to be three major strategies:
1) You split up descriptor heaps per shader stage (i.e. one for vertex shader, pixel, hull, etc.)
2) You have one descriptor heap for an entire pipeline
3) You split up descriptor heaps per update frequency (i.e. EResourceSet_PerInstance, EResourceSet_PerPass, EResourceSet_PerMaterial, etc.)
The benefit of the first two approaches is that they make it easier to port existing code, and descriptor/resource management and updating tend to be easier, but they seem less efficient.
The benefit of the third approach seems to be that it is the most efficient, because you only manage and update objects when they change.
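For illustration, a minimal sketch of what the frequency-based grouping in option 3 might look like (the enum mirrors the names mentioned above but is an assumption, not a real engine API):

enum EResourceSet
{
    EResourceSet_PerFrame,     // camera, time -- written once per frame
    EResourceSet_PerPass,      // render-target info -- written once per pass
    EResourceSet_PerMaterial,  // textures, material constants -- written when the material changes
    EResourceSet_PerInstance,  // object transform -- written per draw
    EResourceSet_Count
};

// One descriptor table / set layout per frequency; only the groups whose
// contents actually changed get re-written and re-bound before a draw.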
    • By khawk
      CRYENGINE has released their latest version with support for Vulkan, Substance integration, and more. Learn more from their announcement and check out the highlights below.
      Substance Integration
CRYENGINE uses Substance internally in their workflow and has released a direct integration.
       
      Vulkan API
      A beta version of the Vulkan renderer to accompany the DX12 implementation. Vulkan is a cross-platform 3D graphics and compute API that enables developers to have high-performance real-time 3D graphics applications with balanced CPU/GPU usage. 

       
      Entity Components
      CRYENGINE has addressed a longstanding issue with game code managing entities within the level. The Entity Component System adds a modular and intuitive method to construct games.
      And More
      View the full release details at the CRYENGINE announcement here.

    • By khawk
      The AMD GPU Open website has posted a brief tutorial providing an overview of objects in the Vulkan API. From the article:
      Read more at http://gpuopen.com/understanding-vulkan-objects/.


    • By localstarlight
I've been puzzling over this for days now and have hit a wall, so I need some help!
      I'm using VR motion controllers in Unreal Engine. I need an object to follow along with the motion controller, but to maintain a certain orientation. If the motion controller / object were to move around on the flat XY plane (UE4 is Z up so the XY plane is the horizontal plane), the top of the object would be pointing upwards, and the object just turns corners following the direction of movement (turning much like a car would when driving around). At each tick, I am taking the current position of the motion controller, and making a normalised directional vector using (CurrentPositionXYZ - PreviousPositionXYZ).
      Using this direction (forward vector), I am then making a rotation from the forward and up vectors. In order to do that I am first finding out the right vector by taking the cross product of a global up vector (0,0,1) with the forward vector. Then I take the cross product of the right and forward vectors to get the actual up vector. Then I make the rotation with forward and up vectors.
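For reference, a minimal sketch of the construction I'm describing (plain vector math with assumed names, not actual UE4 API calls). Note that both cross products degenerate when the forward vector is parallel to the global up vector, which is exactly where the flipping starts:

struct Vec3 { float x, y, z; };

Vec3 cross(Vec3 a, Vec3 b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

void makeBasis(const Vec3& forward /* normalized movement direction */, Vec3& outRight, Vec3& outUp)
{
    const Vec3 worldUp = { 0.0f, 0.0f, 1.0f };   // UE4 is Z-up
    outRight = cross(worldUp, forward);          // right = globalUp x forward (near zero when parallel)
    outUp    = cross(outRight, forward);         // derived up from right and forward
}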
      Here is what this looks like if the motion controller movement were to be locked to the XY plane and move in a circle.

      All looks good, and exactly what I was hoping for. However, if the circular movement was to occur in either of the other planes (XZ or YZ) then we have a problem at two points on the circle when the forward (directional) vector lines up with the global up vector I'm using. Because these calculations have no way of knowing that the object is moving in a certain direction, when the direction changes to move from negative Y to positive Y, for example, the whole thing flips.
      Here's what this looks like with a circle drawn in the YZ plane:
       
      This is obviously no good, as it breaks the consistency of movement. As such, I wrote some logic that detects if the X or Y value crosses 0 when the direction is pointing up/down, and then reverses the global up vector. Assuming there is no movement/value in X, the movement graph for this looks like this:

      Red = X; Green = Y; Blue = Z  || Circle in YZ plane starting movement in -Y direction.
      And with the up vector flip logic added, it looks like this:

      Looking good!
Except there's a massive problem. This all works fine if the movement is constrained to a plane (in this case the YZ plane, but it also works perfectly in the XZ plane), but it breaks down if the circular movement is drawn even slightly off-axis. Here's what happens:

      There's a section where everything flips. The size of that section is determined by the magnitude of X, and relates to the part of the graph pointed to in this diagram:

      Here X is set at a constant -0.2.
As soon as the Y value passes the value of X and approaches zero, the value in X starts having greater and greater influence over the direction, and then as Y moves further away from zero, the influence drops away again. Although this makes sense, it's not what I want, and I can't figure out how to achieve what I want. I'm having trouble even defining the nature of the problem. Perhaps (probably) I'm doing this in a really stupid way, and there's a much simpler solution for what I'm trying to achieve. I could really use some help!
      Given a progression of known locations through 3D space, what is the best way to move an object along them such that its axes bank (if that's the correct term) realistically and don't do any weird flipping around? All help massively appreciated!
       
       