mark_braga

Vulkan version of DX12 UAVBarrier


Does anyone know what Vulkan's equivalent of the DX12 UAVBarrier is?

In my situation, I have two compute shaders. The first one clears the UAV and the second one writes to it.

void ComputePass(Cmd* pCmd)
{
	cmdDispatch(pCmd, pClearBufferPipeline);

	// Barrier to make sure the clear-buffer shader and the fill-buffer shader don't execute in parallel
	cmdUavBarrier(pCmd, pUavBuffer);

	cmdDispatch(pCmd, pFillBufferPipeline);
}

My best guess was VkMemoryBarrier, but I am not very familiar with Vulkan barriers, so any info on this would really help.

Thank you.

More likely it's VkBufferMemoryBarrier you're after (assuming UAV == ShaderStorageBuffer).
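
For reference, a minimal sketch of how such a barrier could look between the two compute dispatches, assuming the UAV is backed by a storage buffer (the command buffer and buffer handles below are placeholders, not from the original code):

VkBufferMemoryBarrier barrier = {};
barrier.sType = VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER;
barrier.srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT;                              // writes done by the clear dispatch
barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT | VK_ACCESS_SHADER_WRITE_BIT;  // accesses in the fill dispatch
barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.buffer = uavBuffer;        // placeholder: the VkBuffer backing the UAV
barrier.offset = 0;
barrier.size = VK_WHOLE_SIZE;

vkCmdPipelineBarrier(
    cmdBuffer,                             // placeholder VkCommandBuffer
    VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,  // producer: the clear dispatch
    VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,  // consumer: the fill dispatch
    0,                                     // no dependency flags
    0, nullptr,                            // no global memory barriers
    1, &barrier,                           // the buffer barrier above
    0, nullptr);                           // no image barriers

A plain VkMemoryBarrier with the same access masks, passed in the global-memory-barrier slot, is arguably the closer analogue to a D3D12 UAV barrier, since it is not tied to a specific resource.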

  • Similar Content

    • By NikiTo
      Recently I read that the APIs fake some behaviors, giving the user false impressions.
      I assume Shader Model 6 issues the wave instructions to the hardware for real rather than faking them.

      Is Shader Model 6 mature enough? Can I expect the same level of optimization from Model 6 as from Model 5? Should I expect more bugs from 6 than from 5?
      Would the manufacturer's extensions provide better overall code than Shader Model 6 because, let's say, they know their own hardware better?

      What would you prefer to use for your project- Shader Model 6 or GCN Shader Extensions for DirectX?

      Which of them is easier to set up and use in Visual Studio (practically)?
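      Not an answer to the maturity question, but a hedged C++ sketch of how the D3D12 side lets you confirm that SM6 wave intrinsics are really exposed by the driver before relying on them (the device pointer is a placeholder):

      D3D12_FEATURE_DATA_D3D12_OPTIONS1 options1 = {};
      if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS1, &options1, sizeof(options1))) && options1.WaveOps)
      {
          // Wave intrinsics are supported; the lane counts describe the hardware SIMD width.
          printf("Wave ops supported, lanes %u..%u\n", options1.WaveLaneCountMin, options1.WaveLaneCountMax);
      }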
    • By LukeCassa005
      I'm writing a small 3D Vulkan game engine using C++. I'm working in a team, and the other members know almost nothing about C++. About three years ago I found this programming language called D, which seems very interesting, as it's very similar to C++. My idea was to implement core systems like rendering, math, serialization and so on in C++ and then wrap it all with a D framework that is easier to use and less complicated. Is it worth it, or should I stick to C++ only? Does it perform worse than a pure C++ application?
    • By mark_braga
      I am trying to get the DirectX Control Panel to let me do things like changing the break severity, but everything is greyed out.
      Is there any way I can make the DirectX Control Panel work?
      Here is a screenshot of the control panel.
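      As an aside, a minimal sketch of setting the break severity programmatically through ID3D12InfoQueue instead of the control panel (assuming the debug layer is enabled; the device pointer is a placeholder):

      Microsoft::WRL::ComPtr<ID3D12InfoQueue> infoQueue;
      if (SUCCEEDED(device->QueryInterface(IID_PPV_ARGS(&infoQueue))))
      {
          // Break into the debugger on corruption and error messages.
          infoQueue->SetBreakOnSeverity(D3D12_MESSAGE_SEVERITY_CORRUPTION, TRUE);
          infoQueue->SetBreakOnSeverity(D3D12_MESSAGE_SEVERITY_ERROR, TRUE);
      }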
       

    • By Keith P Parsons
      I seem to remember seeing a version of the DirectX 11 SDK that was implemented on top of DirectX 12 on the Microsoft website, but I can't seem to find it anymore. Does anyone else remember ever seeing this project, or was it some kind of fever dream I had? It would be a nice tool for slowly porting my massive amount of DirectX 11 code to 12 over time.
    • By NikiTo
      In the shader code, I need to determine which AppendStructuredBuffer the data should be appended to, and there are more than 30 AppendStructuredBuffers.
      Is declaring 30+ AppendStructuredBuffers going to be too much for the shader? Buffer descriptors should consume SGPRs.

      Is there some other way to distribute the output over multiple AppendStructuredBuffers?

      Is emulating the push/pop functionality with a single byte address buffer worth it? Wouldn't it be much slower than using AppendStructuredBuffer?
    • By tanzanite7
      I cannot get rid of z-fighting (severity varies from no errors at all to roughly 40% failing).
      * up-to-date validation layer has nothing to say.
      * pipelines are nearly identical (differences: color attachments, descriptor sets for textures, depth write, depth compare op - LESS for prepass and EQUAL later).
      * did not notice anything funny when comparing the draw commands via NSight either - except, see end of this post.
      * "invariant gl_Position" for all participating vertex shaders makes no difference ('invariant' does not show up in decompile, but is present in SPIR-V).
      * gl_Position calculations are identical for all (also using identical source data: push constants + vertex attribs)
      However, when decompiling the SPIR-V back to GLSL via NSight I noticed something rather strange:
      The depth prepass has "gl_Position.z = 2.0 * gl_Position.z - gl_Position.w;" added to it. What is this!? "gl_Position.y = -gl_Position.y;", which is always added to everything, I can understand - Vulkan's NDC is vertically flipped by default in comparison to OpenGL. That is fine. What is the muckery with z there for? And why is it only selectively added?
      Looking at my perspective projection code (the usual matrix multiplication, just simplified):
      vec4 projection(vec3 v) { return vec4(v.xy * par.proj.xy, v.z * par.proj.z + par.proj.w, -v.z); }
      All it ends up doing is doubling the w-part of 'proj' in z (proj = vec4(1.0, 1.33.., -1.0, 0.2)). How does anything show at all, given that I draw with compare op EQUAL? Decompile bug?
      I am out of ideas.
    • By ZachBethel
      I have a rather specific question. I'm trying to learn about linked multi-GPU in Vulkan 1.1; the only real source I can find (other than the spec itself) is a video presentation.
      Anyway, each node in the linked configuration gets its own internal heap pointer. You can swizzle the node mask to your liking to make one node pull from another's memory. However, the only way to perform the "swizzling" is to rebind a new VkImage / VkBuffer instance to the same VkDeviceMemory handle (but with a different node configuration). This is effectively aliasing the memory between two instances with identical properties.
      I'm curious whether this configuration requires special barriers. How do image barriers work in this case? Does a layout transition on one alias automatically affect the other? I'm coming from DX12 land, where placed resources require custom aliasing barriers and each placed resource has its own independent state. It seems like Vulkan functions a bit differently.
      Thanks.
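      For illustration only, a sketch of the rebinding described above using the Vulkan 1.1 device-group structs; the handles and the two-GPU index layout are assumptions, not from the original post:

      uint32_t deviceIndices[] = { 1, 0 };  // assumption: node 0 binds to node 1's memory instance and vice versa

      VkBindImageMemoryDeviceGroupInfo groupInfo = {};
      groupInfo.sType = VK_STRUCTURE_TYPE_BIND_IMAGE_MEMORY_DEVICE_GROUP_INFO;
      groupInfo.deviceIndexCount = 2;       // must equal the number of physical devices in the group
      groupInfo.pDeviceIndices = deviceIndices;

      VkBindImageMemoryInfo bindInfo = {};
      bindInfo.sType = VK_STRUCTURE_TYPE_BIND_IMAGE_MEMORY_INFO;
      bindInfo.pNext = &groupInfo;
      bindInfo.image = aliasImage;          // placeholder: a second VkImage created with identical parameters
      bindInfo.memory = sharedMemory;       // placeholder: the same VkDeviceMemory the first image is bound to
      bindInfo.memoryOffset = 0;

      vkBindImageMemory2(device, 1, &bindInfo);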
    • By Sobe118
      I am rendering a large number of objects for a simulation. Each object has instance data, and the size of the instance data times the number of objects is greater than 4 GB.
      CreateCommittedResource is giving me: E_OUTOFMEMORY Ran out of memory.
      My PC has 128 GB of RAM (only about 8% used prior to testing this), and I am running the DirectX app as x64. (I'm creating a CPU-side resource, so GPU RAM doesn't matter here, but I'm using Titan X cards if that's a question.)
      Simplified code test that recreates the issue (I inserted the code into Microsoft's D3D12HelloWorld):
      unsigned long long int siz = pow(2, 32) + 1024;
      D3D12_FEATURE_DATA_D3D12_OPTIONS options;  // MaxGPUVirtualAddressBitsPerResource = 40
      m_device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &options, sizeof(options));
      HRESULT oops = m_device->CreateCommittedResource(
          &CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_UPLOAD),
          D3D12_HEAP_FLAG_NONE,
          &CD3DX12_RESOURCE_DESC::Buffer(siz),
          D3D12_RESOURCE_STATE_GENERIC_READ,
          nullptr,
          IID_PPV_ARGS(&m_vertexBuffer));
      if (oops != S_OK)
      {
          printf("Uh Oh");
      }
      I tried enabling "above 4G" in the BIOS, which didn't do anything. I also tested using malloc to allocate a > 4 GB array; that worked in the app without issue.
      Are there more options or build setup that needs to be done? (Using Visual Studio 2015)
      *Other approaches to solving this are welcome too. I thought about splitting the set of items to render into a couple of sets, each smaller than 4 GB, but I would rather have one set of objects.
      Thank you.
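      For the splitting idea mentioned above, a rough sketch of allocating several upload buffers below a fixed cap instead of one resource larger than 4 GB; kChunkSize and CreateChunkedUploadBuffers are made-up names for illustration:

      #include <algorithm>
      #include <vector>
      #include <wrl/client.h>
      #include <d3d12.h>
      #include "d3dx12.h"

      using Microsoft::WRL::ComPtr;

      static const UINT64 kChunkSize = 256ull * 1024 * 1024;  // 256 MB per buffer (arbitrary cap)

      HRESULT CreateChunkedUploadBuffers(ID3D12Device* device, UINT64 totalBytes,
                                         std::vector<ComPtr<ID3D12Resource>>& outChunks)
      {
          for (UINT64 offset = 0; offset < totalBytes; offset += kChunkSize)
          {
              const UINT64 bytes = std::min<UINT64>(kChunkSize, totalBytes - offset);
              ComPtr<ID3D12Resource> chunk;
              HRESULT hr = device->CreateCommittedResource(
                  &CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_UPLOAD),
                  D3D12_HEAP_FLAG_NONE,
                  &CD3DX12_RESOURCE_DESC::Buffer(bytes),
                  D3D12_RESOURCE_STATE_GENERIC_READ,
                  nullptr,
                  IID_PPV_ARGS(&chunk));
              if (FAILED(hr))
                  return hr;  // e.g. E_OUTOFMEMORY
              outChunks.push_back(chunk);
          }
          return S_OK;
      }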
    • By _void_
      Hey guys!
      I am not sure how to specify array slice for GatherRed function on Texture2DArray in HLSL.
      According to MSDN, "location" is a single float value. Is it a 3-component float, with the 3rd component being the array slice?
      Thanks!
    • By lubbe75
      I have a winforms project that uses SharpDX (DirectX 12). The SharpDX library provides a RenderForm (based on a System.Windows.Forms.Form). 
      Now I need to convert the project to WPF instead. What is the best way to do this?
      I have seen someone pointing to a library, SharpDX.WPF on CodePlex, but according to its info it only provides support up to DX11.
      (Sorry if this has been asked before. The search function seems to be down at the moment)