Vulkan: Is OpenCL slowly dying?


Recommended Posts

I've recently noticed that support for OpenCL seems to be slowly dropped in favor of Vulkan (although that might only be the case in the game industry; I assume OpenCL is still used in places where rendering is not the goal).

In the end, both (CL and compute on Vulkan) are compiled into SPIR-V as far as I know, so supporting both is not really a big problem for vendors.

Nevertheless, what are your opinions?


I'm curious what leads you to believe that support for OpenCL is waning. From what I can tell, most graphics developers didn't care for CL much due to its clumsy interop with GL, which is why compute was added directly to GL core in 4.3. It's much more useful to game/graphics developers in that form than CL is, and CL has moved to more scientific-computing-type uses. I don't think Vulkan plays in at all.

 

Yah, I've only seen OpenCL and the like used for simulation/neural network/etc. stuff. Graphics tend to stick with whatever's native as possible.


Take WebGL into account as well. WebGL is quite popular now, and I suppose that, given the difficulty of implementing Vulkan in browsers, it will stay that way for a long time.


I've recently noticed that support for OpenCL seems to be slowly dropped in favor of Vulkan (although that might only be the case in the game industry; I assume OpenCL is still used in places where rendering is not the goal),

 

Agreed, but even for non-rendering tasks compute shaders will be preferred, because you can run graphics and compute async if you use the graphics API for everything.

Also, NVidia's lack of support makes OpenCL impractical, because 1.2 has no indirect dispatch.

 

I work on a large project in both OpenCL and Vulkan (to get some profiler data; CodeXL does not work for Vulkan yet).

Ignoring the dispatch problem and looking only at GPU time on AMD, the performance varies by about 20-30%. Sometimes VK wins, sometimes OpenCL.

Next, Vulkan will have data sharing, so personally I'm still considering OpenCL as an alternative in extreme cases.


The thing with OpenCL vs Vulkan is that the former prioritizes accuracy while the latter prioritizes performance. Although some Vulkan implementations may provide strong IEEE and double-precision support through extensions, that doesn't change the fact that there it's a fancy add-on, whereas in OpenCL it is a must-have and the core focus.

 

 

Take WebGL into account as well. WebGL is quite popular now, and I suppose that, given the difficulty of implementing Vulkan in browsers, it will stay that way for a long time.

There won't be a WebVulkan: Vulkan provides a degree of low-level access that is a massive security nightmare browsers cannot afford to allow.

Edited by Matias Goldberg

Don't think it's dying, though it's never been super popular in the games industry (it doesn't port to Xbox, PlayStation, Windows+DirectX, etc.).

Remember that OpenCL is taking some new forms, though. HSA users are increasingly interested in variants of OpenCL, like the original SPIR work, that allow a C/C++/whatever compiler to translate functions into kernels that can run on GPUs, FPGAs, and other compute-oriented coprocessors. On Linux particularly, which is the dominant platform for server and embedded work, OpenCL/SPIR is a primary part of that support.

As other compilers catch up to those language features, they'll likely use whatever their platform's native compute SDK requires (DirectCompute, Vulkan/SPIR-V, CUDA/PTX, etc.).

Games have also downplayed compute because there's not really any spare GPU cycles on gamers' machines. Graphics uses every bit of GPU power, the integrated GPUs are usually disabled automatically when the discrete GPU is running on all but the newest chipsets, and using the integrated GPU will limit the CPU speed (thermal envelope constraints).

There probably could be more compute use on the server for games with dedicated hosts, but the games industry just doesn't seem to have caught up to that idea yet. E.g. we want to experiment with it for some of our server-side processing, but our datacenters refuse to even give us the option of GPU access (possibly exacerbated by VM technology).

Games have also downplayed compute because there's not really any spare GPU cycles on gamers' machines. Graphics uses every bit of GPU power, the integrated GPUs are usually disabled automatically when the discrete GPU is running on all but the newest chipsets, and using the integrated GPU will limit the CPU speed (thermal envelope constraints).

 

I don't think that's true at all. Many games and engines now lean heavily on compute, both as part of the graphics pipeline itself and for physics. A lot of the destruction physics work gets off-loaded to the GPU nowadays, and the Battlefield/Frostbite slides lay out a very effective vision for how compute can benefit a graphics pipeline.

Edited by Promit


I don't think that's true at all. Many games and engines now are leaning heavily on compute, both as part of the graphics pipeline itself and for physics.
 

 

Graphics, yes. I guess I personally consider that a very different thing than compute, though; it's still graphics. Sorry I wasn't clearer about that distinction up front.

From a game's perspective, that basically means you get compute if you're doing a visual effect, but not if you're doing, e.g., an AI algorithm, typically anyway.

Physics compute usage (actually running on the GPU) is still quite rare, even with libraries like PhysX. The closest you really come is non-gameplay-relevant physics like particles, hair, or cloth simulation on the GPU, aka "effects physics": basically a render-quality feature that makes the game _look_ more alive while having no effect on the game _feeling_ more alive. Some of the more popular physics libraries in games don't even provide the option of GPU-assisted gameplay physics.


Thanks for the input.

What originally made me believe that support for OpenCL is waning is the fact that updates for it are not that frequent (although, on the other hand, new extensions aside, OpenGL 4.5 is a roughly two-year-old standard too; Vulkan updates are a different thing, as that is a fresh API).

Maybe it was just my feeling, as I haven't seen pretty much anyone working with it in the last year or so (although I'm still keeping OpenCL code in my code base and using those kernels). As my requirements don't aim for perfect precision, I might be better off going the compute shader way (with GLSL).
