  • Similar Content

    • By khawk
      LunarG has released new Vulkan SDKs for Windows, Linux, and macOS based on the 1.1.73 header. The new SDK includes:
      New extensions:
      • VK_ANDROID_external_memory_android_hardware_buffer
      • VK_EXT_descriptor_indexing
      • VK_AMD_shader_core_properties
      • VK_NV_shader_subgroup_partitioned
      Many bug fixes, increased validation coverage and accuracy improvements, and feature additions.
      Developers can download the SDK from LunarXchange at https://vulkan.lunarg.com/sdk/home.

    • By mark_braga
      I have pretty good experience with multi-GPU programming in D3D12. Now I'm looking at Vulkan, and although there are a few similarities, I cannot wrap my head around a few things due to the extremely sparse documentation (typical Khronos...).
      In D3D12, you create a resource on GPU0 that is visible to GPU1 by setting the VisibleNodeMask to 00000011 (the last two bits set mean it is visible to GPU0 and GPU1).
      In Vulkan, I can see there is the VkBindImageMemoryDeviceGroupInfoKHR struct, which you add to the pNext chain of VkBindImageMemoryInfoKHR before calling vkBindImageMemory2KHR. You also set the device indices, which I assume serve the same purpose as the VisibleNodeMask, except that instead of a mask it is an array of indices. So far, so good.
      Let's look at a typical SFR scenario: render the left eye using GPU0 and the right eye using GPU1.
      You have two textures. pTextureLeft is exclusive to GPU0, and pTextureRight is created on GPU1 but is visible to GPU0 so it can be sampled from GPU0 when we want to draw it to the swapchain. This is in the D3D12 world. How do I map this to Vulkan? Do I just set the device indices for pTextureRight as { 0, 1 }?
      Now comes the command buffer submission part, which is even more confusing.
      There is the struct VkDeviceGroupCommandBufferBeginInfoKHR. It accepts a device mask, which I understand is similar to creating a command list with a certain NodeMask in D3D12.
      So for GPU1: since I am only rendering to pTextureRight, I need to set the device mask to 2 (00000010)?
      For GPU0: since I only render to pTextureLeft and finally sample both pTextureLeft and pTextureRight to render to the swapchain, I need to set the device mask to 1 (00000001)?
      Does the same apply to VkDeviceGroupSubmitInfoKHR?
      Now the fun part: it does not work. Both command buffers render to their textures correctly; I verified this by reading the textures back and storing them as PNGs. The left texture is sampled correctly in the final composite pass, but I get black in the area where the right texture should appear. Is there something I am missing? Here is a code snippet:
      void Init()
      {
          RenderTargetInfo info = {};
          info.pDeviceIndices = { 0, 0 };
          CreateRenderTarget(&info, &pTextureLeft);

          // Need to share this on both GPUs
          info.pDeviceIndices = { 0, 1 };
          CreateRenderTarget(&info, &pTextureRight);
      }

      void DrawEye(CommandBuffer* pCmd, uint32_t eye)
      {
          // Begin with a device mask depending on the eye
          pCmd->Open(1 << eye);

          // Do the draw
          // If eye is 0, we need to do some extra work to composite pTextureLeft and pTextureRight
          if (eye == 0)
          {
              DrawTexture(0, 0, width * 0.5, height, pTextureLeft);
              DrawTexture(width * 0.5, 0, width * 0.5, height, pTextureRight);
          }

          // Submit to the correct GPU
          pQueue->Submit(pCmd, 1 << eye);
      }

      void Draw()
      {
          DrawEye(pRightCmd, 1);
          DrawEye(pLeftCmd, 0);
      }
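      For reference, a minimal sketch of the bind-info chain described above, using the core Vulkan 1.1 names (the KHR-suffixed variants work identically). The device, image, and memory handles are assumed to be created elsewhere:

      #include <vulkan/vulkan.h>

      // Sketch: bind an image's memory across a 2-GPU device group.
      // pDeviceIndices[i] selects which instance of the allocation the image on
      // physical device i binds to: { 0, 0 } makes both GPUs use GPU0's memory
      // (peer access), while { 0, 1 } gives each GPU its own local instance.
      void BindImageAcrossDeviceGroup(VkDevice device, VkImage image, VkDeviceMemory memory)
      {
          uint32_t deviceIndices[2] = { 0, 1 };

          VkBindImageMemoryDeviceGroupInfo groupInfo = {};
          groupInfo.sType = VK_STRUCTURE_TYPE_BIND_IMAGE_MEMORY_DEVICE_GROUP_INFO;
          groupInfo.deviceIndexCount = 2;
          groupInfo.pDeviceIndices = deviceIndices;

          VkBindImageMemoryInfo bindInfo = {};
          bindInfo.sType = VK_STRUCTURE_TYPE_BIND_IMAGE_MEMORY_INFO;
          bindInfo.pNext = &groupInfo;
          bindInfo.image = image;
          bindInfo.memory = memory;
          bindInfo.memoryOffset = 0;

          vkBindImageMemory2(device, 1, &bindInfo);
      }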
    • By turanszkij
      Hi,
      I finally managed to get the DX11-emulating Vulkan device working, but everything is flipped vertically now because Vulkan has a different clip space. What are the best practices out there to keep these implementations consistent? I tried using a vertically flipped viewport, and while it works on an Nvidia 1050, the Vulkan debug layer throws error messages saying this is not supported by the spec, so it might not work on other hardware. There is also the possibility of flipping the clip-space Y coordinate before writing it out in the vertex shader, but that requires changing and recompiling every shader. I could also bake it into the camera projection matrices, though I want to avoid that because then I would need to track down every place in the engine where I upload matrices... Any chance of an easy extension or something? If not, I will probably go with changing the vertex shaders.
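      For what it's worth, VK_KHR_maintenance1 (promoted to core in Vulkan 1.1) makes a negative viewport height legal, which flips Y without touching any shaders or matrices; a minimal sketch, assuming the extension is enabled:

      #include <vulkan/vulkan.h>

      // Flip Y with a negative-height viewport (requires VK_KHR_maintenance1,
      // core in Vulkan 1.1). No shader or projection-matrix changes needed.
      void SetFlippedViewport(VkCommandBuffer cmd, float width, float height)
      {
          VkViewport viewport = {};
          viewport.x        = 0.0f;
          viewport.y        = height;   // origin moves to the bottom edge...
          viewport.width    = width;
          viewport.height   = -height;  // ...and a negative height flips Y upward
          viewport.minDepth = 0.0f;
          viewport.maxDepth = 1.0f;
          vkCmdSetViewport(cmd, 0, 1, &viewport);
      }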
    • By Agustin Andreucci
      Hi everyone,

      I got stuck with my approach to aiming down sights. I've set up my first-person arms mesh to be a child of the camera, and added a socket on the right hand of the skeleton to which I attach my weapon; there is also a socket on the weapon that marks the point at which the camera should sit while aiming down sights. When the player aims by holding left click, I calculate the offset from the sight socket to the root of the first-person mesh while in the aiming-down-sights pose and move the camera based on that.

      I'm using the code attached below.
      It's Unreal Engine, but the problem seemed generic enough to qualify as an algebra one. What I'm doing is obtaining the transformations from the camera to the sight bone, which gives me the sight transform relative to the camera. After that I just negate the translation vector and add it to the first child of the camera. I was expecting the sight bone to sit at exactly the same point as the camera.
      What happens is that it is almost there, but not correctly aligned. I believe my math and the general idea are right.
      Any ideas why this could be failing?
      Below is how I apply the resulting transform, followed by how I obtain it.
      Thank you very much for reading.
      FVector Translation = DeltaTransform.GetTranslation();
      Translation.Z = 165.0f;
      /*
      Translation.X += Translation.Y;
      Translation.Y = Translation.X - Translation.Y;
      Translation.X = Translation.X - Translation.Y;
      */
      DeltaTransform *= DefaultMesh1PTransform;
      DeltaTransform.SetTranslation(Translation);

      Mesh1PCamOffsetLocationDelta = DeltaTransform.GetTranslation();
      Mesh1PCamOffsetRotationDelta = DeltaTransform.GetRotation();
      bInterpMesh1PCamOffset = bInterp;

      UE_LOG(LogTemp, Warning, TEXT("DeltaInverse Location x: %f, y: %f, z: %f"),
          DeltaTransform.GetLocation().X, DeltaTransform.GetLocation().Y, DeltaTransform.GetLocation().Z);

      Mesh1P->SetRelativeLocation(DeltaTransform.GetLocation() * -1);

      UE_LOG(LogTemp, Warning, TEXT("Rotation yaw: %f, pitch: %f, roll: %f"),
          DeltaTransform.GetRotation().Rotator().Yaw,
          DeltaTransform.GetRotation().Rotator().Pitch,
          DeltaTransform.GetRotation().Rotator().Roll);

      FTransform HandToComponent;
      HandToComponent.SetIdentity();
      FTransform BoneTransform;
      USkeletalMeshComponent* Mesh1P = CarryingPlayer->Mesh1P;

      // Get transform to weapon attach socket space
      HandToComponent *= FirearmMesh->GetSocketTransform(FName("sight_point_socket"), RTS_ParentBoneSpace);
      HandToComponent *= Mesh1P->GetSocketTransform(Firearm->GetCarrySocket(), RTS_ParentBoneSpace);

      // Get transform from the right hand bone relative to component (root bone)
      FName CurrentBoneName = FName("hand_r");
      while (CurrentBoneName != NAME_None)
      {
          ArmsADSIdleAnimSequence->GetBoneTransform(BoneTransform, Mesh1P->GetBoneIndex(CurrentBoneName), 0.0f, true);
          HandToComponent *= BoneTransform;
          CurrentBoneName = Mesh1P->GetParentBone(CurrentBoneName);
      }

      return HandToComponent;
Vulkan Can't get my orthographic projection matrix to work


Hello guys,

My math is failing and I can't get my orthographic projection matrix to work in Vulkan 1.0 (my implementation works great in D3D11 and D3D12). Specifically, nothing is drawn on screen when using the ortho matrix, but my perspective projection matrix works fantastically!

I use glm with the defines GLM_FORCE_LEFT_HANDED and GLM_FORCE_DEPTH_ZERO_TO_ONE (to handle 0-to-1 depth).
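For completeness, those defines have to be visible before the first GLM include; a minimal setup looks like this:

// Must be defined before any GLM header is included.
#define GLM_FORCE_LEFT_HANDED
#define GLM_FORCE_DEPTH_ZERO_TO_ONE
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::perspective, glm::ortho, glm::translate
#include <glm/gtx/euler_angles.hpp>     // glm::yawPitchRoll (newer GLM versions also need GLM_ENABLE_EXPERIMENTAL)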

This is how I define my matrices:

m_projection_matrix = glm::perspective(glm::radians(fov), aspect_ratio, 0.1f, 100.0f);
m_ortho_matrix = glm::ortho(0.0f, (float)width, (float)height, 0.0f, 0.1f, 100.0f);
// I also tried 0.0f and 1.0f for the near and far depth (the same values that
// work in D3D), but in Vulkan that doesn't work either.

Then I multiply both matrices by a "fix matrix" to invert the Y axis:

glm::mat4 matrix_fix = {
    1.0f,  0.0f, 0.0f, 0.0f,
    0.0f, -1.0f, 0.0f, 0.0f,
    0.0f,  0.0f, 1.0f, 0.0f,
    0.0f,  0.0f, 0.0f, 1.0f
};

m_projection_matrix = m_projection_matrix * matrix_fix;
m_ortho_matrix = m_ortho_matrix * matrix_fix;

This fix matrix works well in tandem with GLM_FORCE_DEPTH_ZERO_TO_ONE.
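One detail that may matter here: with GLM's column-vector convention the two multiplication orders are not equivalent for this ortho matrix, because glm::ortho(0, w, h, 0, ...) carries a translation term in its Y row while glm::perspective does not. A sketch of the difference (worth testing, not a confirmed fix):

// The ortho Y row is y' = (-2/h) * y + 1, so the order of the flip matters:
//   m_ortho_matrix * matrix_fix  ->  y' = (2/h) * y + 1, range [1, 3]  (everything clipped)
//   matrix_fix * m_ortho_matrix  ->  y' = (2/h) * y - 1, range [-1, 1] (visible)
// The perspective matrix has no Y translation, which is why it works in either order.
m_projection_matrix = matrix_fix * m_projection_matrix;
m_ortho_matrix = matrix_fix * m_ortho_matrix;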

My model/world matrix is the identity matrix:

glm::mat4 m_world_matrix(1.0f);

Then finally, this is how I set my view matrix:

// Yes, I use Euler angles (don't bring the gimbal lock topic here, lol). They work great with my cameras in D3D too!
m_view_matrix = glm::yawPitchRoll(glm::radians(m_rotation.y), glm::radians(m_rotation.x), glm::radians(m_rotation.z));
m_view_matrix = glm::translate(m_view_matrix, -m_position);

That's all. In my shaders I correctly multiply all three matrices with the position vector, and as I said, the perspective matrix works really well, but the ortho matrix displays no geometry.
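For clarity, and assuming GLM's usual column-vector convention, the composition is equivalent to:

// C++-side equivalent of the shader multiplication described above:
glm::mat4 mvp = m_projection_matrix * m_view_matrix * m_world_matrix;
// vertex shader: gl_Position = mvp * vec4(position, 1.0);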

EDIT: My vertex data is also correct; I use the same geometry in D3D and it works great there: 256.0f units means 256 pixels wide.

What could I possibly be doing wrong or missing?

Big thanks, guys, any help would be greatly appreciated. Keep on coding, cheers.

 

Edited by HateWork

m_ortho_matrix = glm::ortho(0.0f, (float)width, (float)height, 0.0f, 0.1f, 100.0f);

width and height here should not be the screen width and height, but rather the width and height in world space. The width usually depends on how "zoomed in" you want to be, and the height will be width * (resHeight / resWidth).

Hope this helps.
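To make that concrete, a hypothetical example (the 10-unit view width is made up):

// Show 10 world units across the screen; derive the vertical extent
// from the display aspect ratio to avoid stretching.
float viewWidth = 10.0f;
float viewHeight = viewWidth * ((float)resHeight / (float)resWidth);
m_ortho_matrix = glm::ortho(0.0f, viewWidth, viewHeight, 0.0f, 0.1f, 100.0f);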

On 1/10/2018 at 11:27 PM, Finoli said:

m_ortho_matrix = glm::ortho(0.0f, (float)width, (float)height, 0.0f, 0.1f, 100.0f);

width and height here should not be the screen width and height, but rather the width and height in world space. The width usually depends on how "zoomed in" you want to be, and the height will be width * (resHeight / resWidth).

Hope this helps.

Hi, thanks for your reply. I'm not setting any "zoom level"; I'm 1:1 with the resolution. Other examples of glm::ortho also pass the screen resolution, but it doesn't matter anyway: every value I try fails.

