Vulkan When is sRGB conversion being done in Vulkan?


Recommended Posts

I'm in the process of moving my old Direct3D11 graphics code to Vulkan, and figured that I might as well get gamma correctness right from the beginning, but it's not quite working.

 

The problem is that the resulting image is too bright. What's being done as of now is simply to render objects with an albedo map to a G-buffer and then blit that to the swapchain. There are many things that can influence the final result, and since the final result contradicts my intuition, there must be something I don't understand. I hope somebody can verify what is right and explain what is wrong.

 

Previously (which actually looked fine) I was using the first reported swapchain format and color space when creating the swapchain, which in my case meant a UNORM format. For lack of other options, Vulkan seems to force the color space to be sRGB, and since the specification indicates that the color space defines how the presentation engine interprets the data, I figured I might as well use an sRGB format to be correct, because I don't do any manual conversions.

 

This change made my image too bright, so my thought was that maybe the textures had been loaded wrong the whole time. For DDS textures, the library provides the format, for which I use the Vulkan equivalent. For PNG textures I have to specify the format myself, which is RGBA8_UNORM. I was surprised that the textures were not sRGB, but I double-checked by opening the images in VS and RenderDoc.

 

My G-buffer is RGBA8_UNORM.

 

All in all, I render linear textures to a linear G-buffer, followed by blitting to an sRGB swapchain with sRGB color space. If I understand correctly, hardware should do sRGB conversions when needed, most notably when blitting the G-buffer. To me, this seems like it should work. The funny thing is that changing the G-buffer to sRGB produces the same result.
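For reference, here is the standard sRGB transfer function I'm relying on, plus a quick numeric sketch (hypothetical values, not my actual pixel data) of what would happen if already-encoded data were treated as linear and re-encoded on the blit; the result is brighter midtones, which matches the symptom I'm seeing:

```python
def srgb_encode(linear):
    """Standard linear -> sRGB transfer function (IEC 61966-2-1)."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1.0 / 2.4) - 0.055

# A mid-grey value encoded once, then (incorrectly) encoded a second time.
once = srgb_encode(0.5)    # ~0.735, correct sRGB encoding of linear 0.5
twice = srgb_encode(once)  # ~0.873, visibly brighter than intended
print(f"linear 0.5 -> encoded once {once:.3f} -> encoded twice {twice:.3f}")
```

So if the PNG data were actually sRGB-encoded but declared UNORM, the blit to an sRGB swapchain would apply the encoding twice.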

 

What about shaders? If I sample an sRGB texture (which I don't right now), will the data be linear in the shader? If I write from the fragment shader, will there be any conversion depending on the render target format or does it just write the data as is, assuming that I have correctly converted it myself?


Data sampled from an sRGB image view is always converted to linear when read by a shader. Similarly, writing to an sRGB image view will convert the linear values written by the shader to sRGB automatically. Blending is also performed in linear space for sRGB render targets.

 

Compared to OpenGL, Vulkan's sRGB handling is much simpler. sRGB is just an encoding for color data, with more precision around 0.0 and less around 1.0. That's all you really need to know to use it. It's similar to how 16-bit floats, 10-bit floats, RGB9E5, etc. work: you don't have to care about the actual encoding, just read and write linear colors. The intermediate encoding only affects the precision.
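To make the precision point concrete, here is a small sketch (my own illustration, not from the Vulkan spec) comparing the smallest nonzero 8-bit step of a linear UNORM format against the same step in an sRGB format after decoding back to linear:

```python
def srgb_decode(encoded):
    """Standard sRGB -> linear transfer function (IEC 61966-2-1)."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4

# Smallest nonzero 8-bit value, interpreted both ways.
linear_step = 1 / 255              # ~0.00392 in linear UNORM
srgb_step = srgb_decode(1 / 255)   # ~0.00030 in linear terms
print(f"UNORM step: {linear_step:.5f}, sRGB step decoded: {srgb_step:.5f}")
# The sRGB encoding packs roughly 13x more resolution into the darkest values.
```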

Edited by theagentd


I would be surprised if sRGB were different in Vulkan vs. OpenGL vs. Direct3D. As the previous post mentioned, it's just an encoding. Chances are the result you are seeing is correct, but because you've been used to the incorrect behavior all along, the correct behavior now looks like an anomaly.

That said, with sRGB there are two things to consider: 1. the shader inputs (textures), and 2. the target of the shader output (render target, swapchain, etc.). So changing just your textures is only part of the puzzle. sRGB texture samples are converted to linear when read in a shader. When you write fragment output, the output is assumed to be linear; if the render target has an sRGB format, the conversion from linear to sRGB happens when the fragment is written to the render target.
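Those two conversion points can be sketched as a tiny simulation (pure Python with made-up values, just to show where the hardware conversions sit):

```python
def srgb_encode(linear):
    """Linear -> sRGB (IEC 61966-2-1)."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1.0 / 2.4) - 0.055

def srgb_decode(encoded):
    """sRGB -> linear (IEC 61966-2-1)."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4

# 1. Sampling an sRGB texture: the stored value is decoded to linear.
stored = 0.5                     # sRGB-encoded texel in the image
in_shader = srgb_decode(stored)  # what the shader actually sees (~0.214)

# 2. Shader math happens in linear space (identity here).
result = in_shader

# 3. Writing to an sRGB render target: linear output is re-encoded on store.
written = srgb_encode(result)
print(f"stored {stored} -> shader {in_shader:.3f} -> written {written:.3f}")
# The round trip reproduces the stored value; no manual conversion needed.
```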

For PNG textures I have to specify the format myself which is RGBA8_UNORM. I was surprised that the textures were not sRGB, but I double checked opening the images in VS and RenderDoc.


Why would you expect it to be sRGB when you are not uploading the texture as such? Is RGBA8_UNORM an sRGB format?

If the data in the texture is color data, chances are it's sRGB. I would suggest reading this post, http://http.developer.nvidia.com/GPUGems3/gpugems3_ch24.html, as it may clear up a few of the questions you have.


Nice to see that my assumptions seem to be correct. The question was probably a bit ambiguous: the Direct3D version was likely correct, and by "previous" I meant what I went with when first getting Vulkan up and running, not the Direct3D version. I'm aware of what sRGB implies; I just wanted to know the particular conversion rules in Vulkan.

 

I have also tested performing a fullscreen pass that simply outputs a gradient, and comparing it to reference pictures of correct gamma indicates that something is indeed wrong. At the end of the day I'm none the wiser and have probably overlooked something. I will continue searching to see what the issue can be, and will look back here in case somebody chips in with more suggestions, or perhaps sources for the conversion rules (in the spec I only found conversion rules regarding blitting, but nothing concerning shaders).

 

For PNG textures I have to specify the format myself which is RGBA8_UNORM. I was surprised that the textures were not sRGB, but I double checked opening the images in VS and RenderDoc.


Why would you expect it to be sRGB when you are not uploading the texture as such? Is RGBA8_UNORM an sRGB format?

 

That format choice is just a remnant of when I was getting things up and running. I expected the texture to be sRGB because many images are stored with that encoding, but it was not, so I could conclude that the texture was not the problem. I included that statement in the question for completeness, as to what I considered when trying to figure out what might be erroneous.


Your texture is almost certainly sRGB.

 

An image file normally contains a sequence of RGB triplets defining the colour of each pixel. However, this doesn't tell us exactly what 'shade' the colours are. A colour profile steps in here to define the spectral properties of the RGB values in the image; without a profile, the numbers are meaningless.

 

Fortunately, sRGB is the colour space of the internet. This means it's now pretty much universally accepted that any untagged image files (i.e. those with no embedded profile) can safely be assumed to be sRGB.

