mark_braga

Member

  • Content count: 56
  • Joined
  • Last visited

Community Reputation: 6 Neutral

3 Followers

About mark_braga

  • Rank: Member

Personal Information

  • Role: Programmer
  • Interests: Art, Business, Design, Production, Programming, QA


  1. I am trying to use the DirectX Control Panel to change settings such as the break severity, but everything is greyed out. Is there any way I can make the DirectX Control Panel work? Here is a screenshot of the control panel.
  2. I have pretty good experience with multi-GPU programming in D3D12. Now, looking at Vulkan, although there are a few similarities, I cannot wrap my head around a few things due to the extremely sparse documentation (typical Khronos...).

     In D3D12, you create a resource on GPU0 that is visible to GPU1 by setting the VisibleNodeMask to 00000011, where the last two bits set mean it is visible to GPU0 and GPU1.

     In Vulkan, I can see there is the VkBindImageMemoryDeviceGroupInfoKHR struct, which you add to the pNext chain of VkBindImageMemoryInfoKHR before calling vkBindImageMemory2KHR. You also set the device indices, which I assume serve the same purpose as the VisibleNodeMask, except instead of a mask it is an array of indices. So far, so good.

     Let's look at a typical SFR scenario: render the left eye on GPU0 and the right eye on GPU1. You have two textures: pTextureLeft is exclusive to GPU0, and pTextureRight is created on GPU1 but is visible to GPU0 so it can be sampled from GPU0 when we want to draw it to the swapchain. That is the D3D12 world. How do I map this to Vulkan? Do I just set the device indices for pTextureRight to { 0, 1 }?

     Now comes the command buffer submission part, which is even more confusing. There is the struct VkDeviceGroupCommandBufferBeginInfoKHR. It accepts a device mask, which I understand is similar to creating a command list with a certain NodeMask in D3D12.
     - For GPU1: since I only render to pTextureRight, I need to set the device mask to 2 (00000010)?
     - For GPU0: since I render to pTextureLeft and finally sample both pTextureLeft and pTextureRight to render to the swapchain, I need to set the device mask to 1 (00000001)?
     Does the same apply to VkDeviceGroupSubmitInfoKHR?

     The fun part is that it does not work. Both command buffers render to their textures correctly; I verified this by reading the textures back and storing them as PNGs. The left texture is sampled correctly in the final composite pass, but I get black in the area where the right texture should appear. Is there something I am missing here? Here is a code snippet (see the device-group sketch after this list):

     void Init()
     {
         RenderTargetInfo info = {};
         info.pDeviceIndices = { 0, 0 };
         CreateRenderTarget(&info, &pTextureLeft);

         // Need to share this on both GPUs
         info.pDeviceIndices = { 0, 1 };
         CreateRenderTarget(&info, &pTextureRight);
     }

     void DrawEye(CommandBuffer* pCmd, uint32_t eye)
     {
         // Begin with a device mask depending on the eye
         pCmd->Open(1 << eye);

         // Do the draw for this eye

         // If eye is 0, we need to do some extra work to composite pTextureLeft and pTextureRight
         if (eye == 0)
         {
             DrawTexture(0, 0, width * 0.5, height, pTextureLeft);
             DrawTexture(width * 0.5, 0, width * 0.5, height, pTextureRight);
         }

         // Submit to the correct GPU
         pQueue->Submit(pCmd, 1 << eye);
     }

     void Draw()
     {
         DrawEye(pRightCmd, 1);
         DrawEye(pLeftCmd, 0);
     }
  3. I do the map and unmap operations outside the command buffer recording so I can call them every frame.
  4. I have solved the problem. Seems like I was overthinking. Thanks
  5. That is a nice feature of Vulkan which DirectX 12 does not have. It's not often you get a chance to say that. Thanks
  6. I am working on reusing as many command buffers as I can by pre-recording them at load time. This gives a significant boost on the CPU, although now I cannot get the GPU timestamps, since there is no way to read them back. I map the readback buffer before reading and unmap it after reading is done. Does this mean I need a persistently mapped readback buffer? (A sketch of that approach follows this list.)

     void Init()
     {
         beginCmd(cmd);
         cmdBeginQuery(cmd);
         // Do a bunch of stuff
         cmdEndQuery(cmd);
         endCmd(cmd);
     }

     void Draw()
     {
         CommandBuffer* cmd = commands[frameIdx];
         submit(cmd);
     }

     The begin and end query functions do exactly what the names say.
  7. Is there any way to know the active solution configuration using a build macro? I know $(Configuration) gives the active project configuration, but I can't find one for the active solution configuration.
  8. I am working on a rendering framework. We have adopted the DX12 style where you create all pipelines for all permutations at load time. I am just wondering whether there is a limit to the number of pipelines you can create, or whether there is some hidden cost to having pipelines just lying around until you actually use them (for example, choosing the MSAA x2 pipeline only after the user picks it in the settings menu). Or should I only create the pipelines I need at load time and then re-create them whenever necessary? (A pipeline-cache sketch relevant to the second option follows this list.)
  9. I am working on a compute shader in Vulkan which does some image processing and runs 5 * 1024 = 5120 loop iterations (5 outer and 1024 inner). With that count, I get a device lost error on the queueSubmit call that follows the image-processing queueSubmit:

     // Image processing dispatch
     submit();
     waitForFence();
     // All calls to submit after this will give the device lost error

     If I lower the inner loop count from 1024 to 256, i.e. 5 * 256 = 1280 loop iterations, it works fine. The shader does some pretty heavy arithmetic operations, but only three resources are bound (one SRV, one UAV, and one sampler), and the thread group size is x = 16, y = 16, z = 1.

     So my question: is there a hardware limit on the number of loop iterations or instructions per shader? (A sketch of one possible workaround follows this list.)
  10. That's a good idea. But does it crank up the verbosity only on my local machine, or does it actually change the project settings? Edit: The .tlog folder has all the custom build commands stored in the custombuild.command.1.tlog file. This makes it straightforward. Thanks for the help.
  11. Currently, our framework only supports loading binary shader code. The way we do it in Visual Studio is through a custom build tool per shader file. Now we also want to support shader recompilation when the user presses some key. This is tricky, since now we don't have access to the source files. So I was thinking: if there is some way to export all the commands from the custom build tool to a text file, we can just load that text file, execute all the commands inside, and then do the regular shader loading as we do it right now. Example custom build tool commands for a shader with vertex, pixel, and geometry stages:

      fxc %(Identity) /E VSMain /T vs_5_0 $(HLSLCompileFlags) /Fo %(Identity)\..\Binary\%(Filename).vert.bin
      fxc %(Identity) /E PSMain /T ps_5_0 $(HLSLCompileFlags) /Fo %(Identity)\..\Binary\%(Filename).frag.bin
      fxc %(Identity) /E GSMain /T gs_5_0 $(HLSLCompileFlags) /Fo %(Identity)\..\Binary\%(Filename).geom.bin

      So here is the process:
      - Run the custom build rules for the shader files
      - In the post-build event of the project, collect all these custom build tool commands and dump them to a text file
      - If the user wants to recompile, load this text file and execute all the commands inside (see the sketch after this list)

      How would I get access to these custom build tool commands in the post-build event?
  12. NonUniformResourceIndex in Vulkan GLSL

      Hey Matias, thanks for the code. But I am still getting the wavefront tiling artifacts. Is there no way to get around this problem? Since HLSL can do it, I assume it is not a hardware limitation. This makes me curious: how do cross-compilers handle this when translating HLSL to GLSL, or do they just not support it?
  13. I am working on a project which needs to share render targets between Vulkan and DirectX 12. I have enabled the external memory extension and now allocate the memory for the render targets by adding VkExportMemoryAllocateInfoKHR to the pNext chain of VkMemoryAllocateInfo. Similarly, I have added VkExternalMemoryImageCreateInfoKHR to the pNext chain of VkImageCreateInfo. After calling the get-Win32-handle function, I get a handle pointer which is not null (I assume it is valid).

      VkExternalMemoryImageCreateInfoKHR externalImageInfo = {};
      if (gExternalMemoryExtensionKHR)
      {
          externalImageInfo.sType = VK_STRUCTURE_TYPE_EXTERNAL_MEMORY_IMAGE_CREATE_INFO_KHR;
          externalImageInfo.pNext = NULL;
          externalImageInfo.handleTypes =
              VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT_KHR |
              VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_WIN32_BIT_KHR |
              VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_WIN32_KMT_BIT_KHR |
              VK_EXTERNAL_MEMORY_HANDLE_TYPE_D3D11_TEXTURE_BIT_KHR |
              VK_EXTERNAL_MEMORY_HANDLE_TYPE_D3D11_TEXTURE_KMT_BIT_KHR |
              VK_EXTERNAL_MEMORY_HANDLE_TYPE_D3D12_HEAP_BIT_KHR |
              VK_EXTERNAL_MEMORY_HANDLE_TYPE_D3D12_RESOURCE_BIT_KHR;
          imageCreateInfo.pNext = &externalImageInfo;
      }
      vkCreateImage(...);

      VkExportMemoryAllocateInfoKHR exportInfo = { VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO_KHR };
      exportInfo.handleTypes =
          VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT_KHR |
          VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_WIN32_BIT_KHR |
          VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_WIN32_KMT_BIT_KHR |
          VK_EXTERNAL_MEMORY_HANDLE_TYPE_D3D11_TEXTURE_BIT_KHR |
          VK_EXTERNAL_MEMORY_HANDLE_TYPE_D3D11_TEXTURE_KMT_BIT_KHR |
          VK_EXTERNAL_MEMORY_HANDLE_TYPE_D3D12_HEAP_BIT_KHR |
          VK_EXTERNAL_MEMORY_HANDLE_TYPE_D3D12_RESOURCE_BIT_KHR;
      memoryAllocateInfo.pNext = &exportInfo;
      vkAllocateMemory(...);

      VkMemoryGetWin32HandleInfoKHR info = { VK_STRUCTURE_TYPE_MEMORY_GET_WIN32_HANDLE_INFO_KHR, NULL };
      info.memory = pTexture->GetMemory();
      info.handleType = VK_EXTERNAL_MEMORY_HANDLE_TYPE_D3D12_RESOURCE_BIT_KHR;
      VkResult res = vkGetMemoryWin32HandleKHR(vulkanDevice, &info, &pTexture->pSharedHandle);
      ASSERT(VK_SUCCESS == res);

      Now when I try to call OpenSharedHandle from a D3D12 device, it crashes inside nvwgf2umx.dll with an integer-division-by-zero error (see the D3D12-side sketch after this list). I am now lost and have no idea what the other handle types do. For example, how do we get a D3D12 resource from a VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_WIN32_BIT_KHR handle? I also found some documentation at https://javadoc.lwjgl.org/org/lwjgl/vulkan/NVExternalMemoryWin32.html but it doesn't help much. This is all assuming the extension works as expected, since it has made it to KHR.
  14. I tried to create a resource on one device and use it on another, and it works without specifying any flags or even opening any handles. Any reason why the validation won't complain? This is an overview of what I did, and it works just fine even though the render target texture was created on pDevice0 and the render target is cleared on a command list created on pDevice1 (a sketch of the explicit sharing path follows this list):

      initDevice(pDevice0);
      addRenderTarget(pDevice0, &pRenderTarget);
      initDevice(pDevice1);
      // This works without any problems
      clearRTV(pDevice1->pCmdList, pRenderTarget, WHITE);
  15. Thanks for posting these links; they have given a lot of insight. Just to confirm: if I want to create a 2D view of face 3 of a cubemap, will the code look something like this (see the plain-D3D12 sketch after this list)?

      SRV_DESC desc = {};
      desc.dimension = TEXTURE_2D_ARRAY;
      desc.tex2DArray.firstArraySlice = 3; // Face 3 of the cubemap
      desc.tex2DArray.arraySize = 1;       // Only need one face
      addSRV(pCubeMapResource, &desc);

      Thank you
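
Device-group sketch for post 2. This is a minimal sketch in raw Vulkan, not the poster's framework: it binds an image's memory with explicit per-device indices and then records and submits a command buffer restricted to GPU1. All handles (device, queue, image, memory, cmd) and the loaded KHR entry points are assumed to exist, and the device group is assumed to contain two GPUs. Per the VK_KHR_device_group semantics, pDeviceIndices[i] selects which GPU's memory instance GPU i binds to, so making GPU1's texture readable from GPU0 means pointing both entries at GPU1 rather than listing { 0, 1 } (worth double-checking against the spec for your case).

    #include <vulkan/vulkan.h>

    // Sketch only: two-GPU device group assumed; error checking omitted.
    void BindSharedImageAndSubmitOnGpu1(VkDevice device, VkQueue queue,
                                        VkImage image, VkDeviceMemory memory,
                                        VkCommandBuffer cmd)
    {
        // Both entries reference GPU1's memory instance, so GPU0 reads GPU1's copy.
        uint32_t deviceIndices[2] = { 1, 1 };
        VkBindImageMemoryDeviceGroupInfoKHR groupBind = {};
        groupBind.sType            = VK_STRUCTURE_TYPE_BIND_IMAGE_MEMORY_DEVICE_GROUP_INFO_KHR;
        groupBind.deviceIndexCount = 2;
        groupBind.pDeviceIndices   = deviceIndices;

        VkBindImageMemoryInfoKHR bindInfo = {};
        bindInfo.sType  = VK_STRUCTURE_TYPE_BIND_IMAGE_MEMORY_INFO_KHR;
        bindInfo.pNext  = &groupBind;
        bindInfo.image  = image;
        bindInfo.memory = memory;
        vkBindImageMemory2KHR(device, 1, &bindInfo);

        // Record a command buffer that only executes on GPU1 (bit 1 of the device mask).
        VkDeviceGroupCommandBufferBeginInfoKHR groupBegin = {};
        groupBegin.sType      = VK_STRUCTURE_TYPE_DEVICE_GROUP_COMMAND_BUFFER_BEGIN_INFO_KHR;
        groupBegin.deviceMask = 0x2;

        VkCommandBufferBeginInfo beginInfo = {};
        beginInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
        beginInfo.pNext = &groupBegin;
        vkBeginCommandBuffer(cmd, &beginInfo);
        // ... record the right-eye rendering here ...
        vkEndCommandBuffer(cmd);

        // Submit with a matching device mask.
        uint32_t cmdDeviceMask = 0x2;
        VkDeviceGroupSubmitInfoKHR groupSubmit = {};
        groupSubmit.sType                     = VK_STRUCTURE_TYPE_DEVICE_GROUP_SUBMIT_INFO_KHR;
        groupSubmit.commandBufferCount        = 1;
        groupSubmit.pCommandBufferDeviceMasks = &cmdDeviceMask;

        VkSubmitInfo submitInfo = {};
        submitInfo.sType              = VK_STRUCTURE_TYPE_SUBMIT_INFO;
        submitInfo.pNext              = &groupSubmit;
        submitInfo.commandBufferCount = 1;
        submitInfo.pCommandBuffers    = &cmd;
        vkQueueSubmit(queue, 1, &submitInfo, VK_NULL_HANDLE);
    }

Whether GPU0 can actually sample that instance also depends on the peer memory features reported by vkGetDeviceGroupPeerMemoryFeatures.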
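
Persistently mapped readback sketch for post 6, assuming a Vulkan backend behind the framework calls shown there: the readback memory is mapped once at load time, the pre-recorded command buffer copies the query results into it (e.g. via vkCmdCopyQueryPoolResults), and the CPU simply reads the mapped pointer each frame once the frame's fence has signalled. The TimestampReadback name and helpers are illustrative.

    #include <vulkan/vulkan.h>
    #include <cstdint>
    #include <cstring>

    struct TimestampReadback
    {
        VkDeviceMemory memory;    // HOST_VISIBLE memory backing the readback buffer
        uint64_t*      mappedPtr; // persistently mapped; never unmapped per frame
    };

    void InitReadback(VkDevice device, TimestampReadback* rb, VkDeviceSize size)
    {
        // Map once; the pointer stays valid for the lifetime of the allocation.
        vkMapMemory(device, rb->memory, 0, size, 0, (void**)&rb->mappedPtr);
    }

    void ReadTimestamps(VkDevice device, VkFence frameFence, TimestampReadback* rb,
                        uint64_t* outTimestamps, uint32_t count)
    {
        // Wait until the GPU has finished the pre-recorded command buffer for this frame.
        vkWaitForFences(device, 1, &frameFence, VK_TRUE, UINT64_MAX);

        // If the memory is not HOST_COHERENT, invalidate before reading.
        VkMappedMemoryRange range = { VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE };
        range.memory = rb->memory;
        range.offset = 0;
        range.size   = VK_WHOLE_SIZE;
        vkInvalidateMappedMemoryRanges(device, 1, &range);

        memcpy(outTimestamps, rb->mappedPtr, count * sizeof(uint64_t));
    }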
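
Pipeline-cache sketch for post 8. If the lazy-creation route is taken, a VkPipelineCache is the usual way to make on-demand pipeline creation cheaper between runs; this is a sketch with illustrative helper names, assuming a Vulkan backend.

    #include <vulkan/vulkan.h>
    #include <cstdint>
    #include <vector>

    VkPipelineCache CreateCache(VkDevice device, const std::vector<uint8_t>& diskData)
    {
        VkPipelineCacheCreateInfo info = { VK_STRUCTURE_TYPE_PIPELINE_CACHE_CREATE_INFO };
        info.initialDataSize = diskData.size();
        info.pInitialData    = diskData.empty() ? nullptr : diskData.data();

        VkPipelineCache cache = VK_NULL_HANDLE;
        vkCreatePipelineCache(device, &info, nullptr, &cache);
        return cache;
    }

    std::vector<uint8_t> SerializeCache(VkDevice device, VkPipelineCache cache)
    {
        // Query the size, then fetch the blob; write it to disk and feed it back next run.
        size_t size = 0;
        vkGetPipelineCacheData(device, cache, &size, nullptr);
        std::vector<uint8_t> data(size);
        vkGetPipelineCacheData(device, cache, &size, data.data());
        return data;
    }

The cache handle is then passed as the second argument of vkCreateGraphicsPipelines / vkCreateComputePipelines so that later, on-demand creations can reuse the cached state.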
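
Workaround sketch for post 9. If the device loss is caused by the OS GPU watchdog (TDR on Windows) rather than a true hardware limit, one common workaround is to split the work into several shorter dispatches. The sketch below is illustrative, not the poster's code: it assumes the shader is changed to process one outer iteration per dispatch, selected by a push constant, that the pipeline layout declares that push-constant range, and that the fence starts unsignaled (descriptor-set binding is omitted).

    #include <vulkan/vulkan.h>
    #include <cstdint>

    void RunImageProcessing(VkDevice device, VkQueue queue, VkCommandBuffer cmd,
                            VkPipeline pipeline, VkPipelineLayout pipelineLayout,
                            VkFence fence, uint32_t groupsX, uint32_t groupsY)
    {
        for (uint32_t outer = 0; outer < 5; ++outer)
        {
            VkCommandBufferBeginInfo begin = { VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO };
            vkBeginCommandBuffer(cmd, &begin);

            vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, pipeline);
            // The shader reads this index instead of looping over the outer dimension itself.
            vkCmdPushConstants(cmd, pipelineLayout, VK_SHADER_STAGE_COMPUTE_BIT,
                               0, sizeof(outer), &outer);
            vkCmdDispatch(cmd, groupsX, groupsY, 1);

            vkEndCommandBuffer(cmd);

            VkSubmitInfo submit = { VK_STRUCTURE_TYPE_SUBMIT_INFO };
            submit.commandBufferCount = 1;
            submit.pCommandBuffers    = &cmd;
            vkQueueSubmit(queue, 1, &submit, fence);

            // Fence between submissions so no single submission runs long enough to trip the watchdog.
            vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);
            vkResetFences(device, 1, &fence);
            vkResetCommandBuffer(cmd, 0);
        }
    }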
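
Runtime recompile sketch for post 11. This is the runtime half of the proposed process, assuming the post-build step has already dumped one fully expanded fxc command per line into a text file; the file name used here is hypothetical. The .tlog files mentioned in post 10 already store the commands with MSBuild macros expanded.

    #include <cstdlib>
    #include <fstream>
    #include <string>

    // Replays every dumped fxc command; afterwards the existing binary loader
    // can reload the regenerated .bin files as usual.
    bool RecompileShaders(const std::string& commandFile = "shader_compile_commands.txt")
    {
        std::ifstream file(commandFile);
        if (!file.is_open())
            return false;

        std::string command;
        bool allSucceeded = true;
        while (std::getline(file, command))
        {
            if (command.empty())
                continue;
            // std::system returns the command's exit code; fxc returns non-zero on error.
            if (std::system(command.c_str()) != 0)
                allSucceeded = false;
        }
        return allSucceeded;
    }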
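
D3D12-side sketch for post 13, assuming the HANDLE returned by vkGetMemoryWin32HandleKHR was exported with VK_EXTERNAL_MEMORY_HANDLE_TYPE_D3D12_RESOURCE_BIT_KHR: ID3D12Device::OpenSharedHandle resolves the handle to whatever interface is requested, so a D3D12 resource handle is opened as ID3D12Resource (a D3D12_HEAP handle would instead be opened as ID3D12Heap and the resource then placed in it). Sketch only; error handling is reduced to a null return.

    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    ComPtr<ID3D12Resource> OpenVulkanSharedTexture(ID3D12Device* pDevice, HANDLE sharedHandle)
    {
        ComPtr<ID3D12Resource> resource;
        // The requested IID decides how the handle is interpreted.
        if (FAILED(pDevice->OpenSharedHandle(sharedHandle, IID_PPV_ARGS(&resource))))
            return nullptr;
        return resource;
    }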
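
Explicit sharing sketch for post 14, for comparison with the implicit behaviour observed there: the resource is created with D3D12_HEAP_FLAG_SHARED on pDevice0, exported as an NT handle with CreateSharedHandle, and opened on pDevice1. The helper name and parameters are illustrative; desc is assumed to describe a render target (ALLOW_RENDER_TARGET flag) and error handling is omitted.

    #include <windows.h>
    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    ComPtr<ID3D12Resource> ShareRenderTarget(ID3D12Device* pDevice0, ID3D12Device* pDevice1,
                                             const D3D12_RESOURCE_DESC& desc)
    {
        D3D12_HEAP_PROPERTIES heapProps = {};
        heapProps.Type = D3D12_HEAP_TYPE_DEFAULT;

        // The SHARED heap flag is what makes the resource exportable to another device.
        ComPtr<ID3D12Resource> resource0;
        pDevice0->CreateCommittedResource(&heapProps, D3D12_HEAP_FLAG_SHARED, &desc,
                                          D3D12_RESOURCE_STATE_COMMON, nullptr,
                                          IID_PPV_ARGS(&resource0));

        // Export an NT handle from the creating device...
        HANDLE sharedHandle = nullptr;
        pDevice0->CreateSharedHandle(resource0.Get(), nullptr, GENERIC_ALL, nullptr, &sharedHandle);

        // ...and open it on the second device.
        ComPtr<ID3D12Resource> resource1;
        pDevice1->OpenSharedHandle(sharedHandle, IID_PPV_ARGS(&resource1));
        CloseHandle(sharedHandle);
        return resource1;
    }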
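
Plain-D3D12 sketch for post 15 (the SRV_DESC wrapper in the post is the poster's own framework): a Texture2DArray SRV over a single cubemap face. The format and destination descriptor are assumed to come from the caller.

    #include <d3d12.h>

    void CreateCubeFaceSRV(ID3D12Device* pDevice, ID3D12Resource* pCubeMapResource,
                           DXGI_FORMAT format, D3D12_CPU_DESCRIPTOR_HANDLE destDescriptor)
    {
        D3D12_SHADER_RESOURCE_VIEW_DESC desc = {};
        desc.Format                  = format;
        desc.ViewDimension           = D3D12_SRV_DIMENSION_TEXTURE2DARRAY;
        desc.Shader4ComponentMapping = D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING;
        desc.Texture2DArray.MostDetailedMip     = 0;
        desc.Texture2DArray.MipLevels           = 1;
        desc.Texture2DArray.FirstArraySlice     = 3;   // face 3 of the cubemap
        desc.Texture2DArray.ArraySize           = 1;   // view a single face
        desc.Texture2DArray.PlaneSlice          = 0;
        desc.Texture2DArray.ResourceMinLODClamp = 0.0f;

        pDevice->CreateShaderResourceView(pCubeMapResource, &desc, destDescriptor);
    }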