mark_braga

Vulkan to DirectX12 texture sharing


I am working on a project that needs to share render targets between Vulkan and DirectX 12. I have enabled the external memory extensions and now allocate the render-target memory by adding a VkExportMemoryAllocateInfoKHR to the pNext chain of VkMemoryAllocateInfo. Similarly, I add a VkExternalMemoryImageCreateInfoKHR to the pNext chain of VkImageCreateInfo.
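For clarity, these are the extensions I mean (a sketch of my setup; VK_KHR_external_memory_win32 is the one that provides vkGetMemoryWin32HandleKHR):

// Device extensions requested at vkCreateDevice time:
const char* deviceExtensions[] = {
	VK_KHR_EXTERNAL_MEMORY_EXTENSION_NAME,        // "VK_KHR_external_memory"
	VK_KHR_EXTERNAL_MEMORY_WIN32_EXTENSION_NAME,  // "VK_KHR_external_memory_win32"
};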

After calling vkGetMemoryWin32HandleKHR, I get a non-null handle pointer, so I assume it is valid.

VkExternalMemoryImageCreateInfoKHR externalImageInfo = {};
if (gExternalMemoryExtensionKHR)
{
	externalImageInfo.sType = VK_STRUCTURE_TYPE_EXTERNAL_MEMORY_IMAGE_CREATE_INFO_KHR;
	externalImageInfo.pNext = NULL;
	externalImageInfo.handleTypes =
		VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT_KHR |
		VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_WIN32_BIT_KHR |
		VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_WIN32_KMT_BIT_KHR |
		VK_EXTERNAL_MEMORY_HANDLE_TYPE_D3D11_TEXTURE_BIT_KHR |
		VK_EXTERNAL_MEMORY_HANDLE_TYPE_D3D11_TEXTURE_KMT_BIT_KHR |
		VK_EXTERNAL_MEMORY_HANDLE_TYPE_D3D12_HEAP_BIT_KHR |
		VK_EXTERNAL_MEMORY_HANDLE_TYPE_D3D12_RESOURCE_BIT_KHR;
	imageCreateInfo.pNext = &externalImageInfo;
}
vkCreateImage(...);

VkExportMemoryAllocateInfoKHR exportInfo = { VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO_KHR };
exportInfo.handleTypes =
	VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT_KHR |
	VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_WIN32_BIT_KHR |
	VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_WIN32_KMT_BIT_KHR |
	VK_EXTERNAL_MEMORY_HANDLE_TYPE_D3D11_TEXTURE_BIT_KHR |
	VK_EXTERNAL_MEMORY_HANDLE_TYPE_D3D11_TEXTURE_KMT_BIT_KHR |
	VK_EXTERNAL_MEMORY_HANDLE_TYPE_D3D12_HEAP_BIT_KHR |
	VK_EXTERNAL_MEMORY_HANDLE_TYPE_D3D12_RESOURCE_BIT_KHR;
memoryAllocateInfo.pNext = &exportInfo;
vkAllocateMemory(...);

VkMemoryGetWin32HandleInfoKHR info = { VK_STRUCTURE_TYPE_MEMORY_GET_WIN32_HANDLE_INFO_KHR, NULL };
info.memory = pTexture->GetMemory();
info.handleType = VK_EXTERNAL_MEMORY_HANDLE_TYPE_D3D12_RESOURCE_BIT_KHR;
VkResult res = vkGetMemoryWin32HandleKHR(vulkanDevice, &info, &pTexture->pSharedHandle);
ASSERT(VK_SUCCESS == res);

Now, when I call OpenSharedHandle on the D3D12 device, it crashes inside nvwgf2umx.dll with an integer division by zero error.
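For reference, the D3D12 side is essentially this (a minimal sketch; pD3D12Device is a placeholder for the ID3D12Device that should receive the texture):

ID3D12Resource* pD3D12Resource = NULL;
// Dies inside nvwgf2umx.dll before ever returning an HRESULT:
HRESULT hr = pD3D12Device->OpenSharedHandle(
	pTexture->pSharedHandle, IID_PPV_ARGS(&pD3D12Resource));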

At this point I am lost, and I don't understand what the other handle types are even for.

For example: How do we get the D3D12 resource from the VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_WIN32_BIT_KHR handle?
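For the D3D12 heap type at least, my best guess from the D3D12 docs would be to open the handle as a heap and place a resource inside it, roughly like the hypothetical sketch below (I don't know how to guarantee that the D3D12_RESOURCE_DESC matches the VkImage):

// Hypothetical sketch for VK_EXTERNAL_MEMORY_HANDLE_TYPE_D3D12_HEAP_BIT_KHR:
ID3D12Heap* pHeap = NULL;
HRESULT hr = pD3D12Device->OpenSharedHandle(sharedHandle, IID_PPV_ARGS(&pHeap));

D3D12_RESOURCE_DESC desc = {}; // would have to match the VkImage exactly
desc.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
desc.Width = width;   // placeholder: render target width
desc.Height = height; // placeholder: render target height
desc.DepthOrArraySize = 1;
desc.MipLevels = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Layout = D3D12_TEXTURE_LAYOUT_UNKNOWN;
desc.Flags = D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET;

ID3D12Resource* pPlaced = NULL;
hr = pD3D12Device->CreatePlacedResource(pHeap, 0, &desc,
	D3D12_RESOURCE_STATE_COMMON, NULL, IID_PPV_ARGS(&pPlaced));

But that still doesn't explain what an opaque Win32 handle is supposed to map to on the D3D12 side.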

I also found some documentation at the link below, but it doesn't help much:

https://javadoc.lwjgl.org/org/lwjgl/vulkan/NVExternalMemoryWin32.html

This is all assuming the extension works as expected, since it has been promoted to KHR status.
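One thing I have not verified yet is whether the driver even reports these handle types as exportable and mutually compatible for my image configuration. If I read the spec right, that query would look something like this (sketch; format and usage are placeholders matching my render target):

// Ask the driver which external handle types this image configuration supports:
VkPhysicalDeviceExternalImageFormatInfoKHR externalFormatInfo = {};
externalFormatInfo.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_EXTERNAL_IMAGE_FORMAT_INFO_KHR;
externalFormatInfo.handleType = VK_EXTERNAL_MEMORY_HANDLE_TYPE_D3D12_RESOURCE_BIT_KHR;

VkPhysicalDeviceImageFormatInfo2KHR formatInfo = {};
formatInfo.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_IMAGE_FORMAT_INFO_2_KHR;
formatInfo.pNext = &externalFormatInfo;
formatInfo.format = VK_FORMAT_R8G8B8A8_UNORM;
formatInfo.type = VK_IMAGE_TYPE_2D;
formatInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
formatInfo.usage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;

VkExternalImageFormatPropertiesKHR externalProps = {};
externalProps.sType = VK_STRUCTURE_TYPE_EXTERNAL_IMAGE_FORMAT_PROPERTIES_KHR;
VkImageFormatProperties2KHR imageProps = {};
imageProps.sType = VK_STRUCTURE_TYPE_IMAGE_FORMAT_PROPERTIES_2_KHR;
imageProps.pNext = &externalProps;

VkResult r = vkGetPhysicalDeviceImageFormatProperties2KHR(physicalDevice, &formatInfo, &imageProps);
// externalProps.externalMemoryProperties.compatibleHandleTypes tells which
// handle types may legally be OR'd together when creating the image/memory.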

