GuyWithBeard

Gamma correction - sanity check


So I am finally fixing up my renderer to work in linear space after not caring (enough) about it for many years, and I wanted to do a sanity check on my plan to make sure I am doing it correctly. Up to this point I have had pretty much everything in DXGI_FORMAT_R8G8B8A8_UNORM, including textures (diffuse), swap chains and any offscreen buffers. My plan is roughly the following:

- Load diffuse textures as DXGI_FORMAT_R8G8B8A8_UNORM_SRGB. Normal maps and other non-color textures should still be non-sRGB as far as I know.

- Create the swap chain back buffers in the DXGI_FORMAT_R8G8B8A8_UNORM_SRGB format.

- Render the scene (both the main render pass + all other color render passes) into offscreen buffers in DXGI_FORMAT_R8G8B8A8_UNORM format. My plan is to have one main texture which acts as the "back buffer" during the rendering process. It is there simply to have a linear-space buffer to work with.

- After all rendering is finished, I will copy the linear space "back buffer" to the actual back buffer of the swap chain.

Does that sound right? Also, will the graphics API allow me to copy the linear buffer to the back buffer with a normal texture-to-texture copy, or do I need to handle the linear-to-gamma conversion somehow? I have heard that the APIs handle the conversion when rendering into an sRGB buffer (i.e. output from a pixel shader), but is the same done when copying from a linear texture to an sRGB texture? My targeted APIs are currently DX11, DX12 and Vulkan.
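For reference, the conversion the hardware applies when reading from or writing to an *_SRGB format follows the piecewise sRGB transfer function. This Python sketch just illustrates the math; on the GPU it is fixed-function and free:

```python
def srgb_encode(c: float) -> float:
    """Linear -> sRGB, as applied by the hardware on writes to *_SRGB targets."""
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * c ** (1.0 / 2.4) - 0.055

def srgb_decode(c: float) -> float:
    """sRGB -> linear, as applied by the hardware on reads from *_SRGB textures."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

# The round trip is (nearly) lossless at float precision.
x = 0.5
assert abs(srgb_decode(srgb_encode(x)) - x) < 1e-6
```

Note the curve is piecewise: a short linear segment near black, then a power segment, which is why it is not exactly a pow(1/2.2) gamma curve.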

Cheers!

sRGB is roughly linear in terms of perceived brightness, and the sRGB->linear conversion is non-linear. To store linear values you need at least 10 bits of precision, since after the sRGB->linear conversion you need extra precision at the bottom of the color range. This means that if you don't do HDR (or pre-expose inside your shaders), you need to use at least 8888_srgb as your main color target. If you want to apply exposure at the end of the pipeline, you also need extra range and precision, so use at least 11_11_10_float as your main color target.
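The precision argument is easy to verify: counting how many of the 256 8-bit codes land in the dark end shows that sRGB storage spends far more codes on the darks than a linear 8-bit encoding does (a small numerical check, using the standard sRGB decode):

```python
def srgb_decode(c: float) -> float:
    """sRGB -> linear transfer function."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

# How many of the 256 8-bit codes represent linear values below 0.01?
srgb_codes   = sum(1 for i in range(256) if srgb_decode(i / 255.0) < 0.01)
linear_codes = sum(1 for i in range(256) if (i / 255.0) < 0.01)

print(srgb_codes, linear_codes)  # → 26 3
assert srgb_codes > 5 * linear_codes
```

An 8-bit linear buffer has only a handful of codes for the whole dark range, which shows up as visible banding; hence the "10 bits or more for linear" rule of thumb.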


Interesting. I was reading this article: https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_ch24.html and they make a big deal about the fact that the conversion to sRGB should be the very last step before displaying on the screen. It seems to me that having a main color buffer in linear space would be good, since then you could both write to it and read from it for intermediate steps without having to care about sRGB at all. That's why I planned to have a linear-space main color buffer.

Just to be clear, if I stick to the plan but change the main color buffer to DXGI_FORMAT_R8G8B8A8_UNORM_SRGB and then read from it as I apply, say, a fullscreen effect such as FXAA, the result will be incorrect as I will be reading sRGB values when I should be reading linear values. Right?

So in this case, the only thing left to do, if I want correct behavior, is to use a floating point main render target?

3 hours ago, GuyWithBeard said:

- Create the swap chain back buffers in the DXGI_FORMAT_R8G8B8A8_UNORM_SRGB format.

Why not use a non-sRGB format for the back buffer, to support custom gamma correction (brightness adjustment)?


3 hours ago, GuyWithBeard said:

- Render the scene (both the main render pass + all other color render passes) into offscreen buffers in DXGI_FORMAT_R8G8B8A8_UNORM format. My plan is to have one main texture which acts as the "back buffer" during the rendering process. It is there simply to have a linear-space buffer to work with.

Why not use a half float per channel for HDR support?

The buffers of your GBuffer that deal with sRGB colors will benefit from UNORM_SRGB. Storing data in linear color space with only 1 byte per channel does not give you enough precision (you need 10 bits or so per channel).

11 minutes ago, matt77hias said:

Why not use a non-sRGB format for the back buffer, to support custom gamma correction (brightness adjustment)?

I was going by the GPU gems3 article I linked above where they suggested you use an sRGB format to get efficient linear-to-gamma conversion.

As for the rest, my renderer is forward (+) so there is no G-buffer and I don't plan to do HDR any time soon.

EDIT: ...but as I read your comments it seems like I should perhaps go the HDR route, so feel free to throw any good articles at an HDR-virgin like me.

Edited by GuyWithBeard

7 minutes ago, GuyWithBeard said:

I was going by the GPU gems3 article I linked above where they suggested you use an sRGB format to get efficient linear-to-gamma conversion.

If you use sRGB formats, the hardware will do the decoding and encoding between sRGB and linear space for you. But at the very last step, where you transfer the content of your off-screen buffer to your back buffer, you probably want to customize the common gamma value of 2.2. Some games provide the option to adjust the brightness: "move the slider until the image is barely visible".
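One common way to implement such a brightness slider (an assumption on my part; the post doesn't spell out the formula) is to apply an extra power curve in linear space during the final full-screen pass, then let the hardware sRGB encode of the back buffer do the rest:

```python
def apply_user_brightness(linear: float, user_gamma: float,
                          default_gamma: float = 2.2) -> float:
    """Hypothetical brightness adjustment, applied in linear space before the
    hardware sRGB encode. user_gamma above the default brightens the image,
    below it darkens."""
    return linear ** (default_gamma / user_gamma)

# With the slider at the default, mid-tones are unchanged:
assert apply_user_brightness(0.25, 2.2) == 0.25
# Raising the user gamma brightens mid-tones:
assert apply_user_brightness(0.25, 2.6) > 0.25
```

Doing the adjustment as a power in linear space, rather than replacing the sRGB encode itself, keeps the hardware conversion (and its correct piecewise curve) in the loop.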

11 minutes ago, GuyWithBeard said:

EDIT: ...but as I read your comments it seems like I should perhaps go the HDR route, so feel free to throw any good articles at an HDR-virgin like me.

I basically work with half floats (16 bits) per color channel for storing my "images".

Only at the very end of my pipeline do I transfer the HDR content to the LDR back buffer, after applying my final operations: eye adaptation, tone mapping and custom gamma correction.
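That final HDR-to-LDR step could look roughly like the sketch below. The Reinhard operator and the exposure value are stand-ins of my choosing, since the post doesn't name a specific tone mapper:

```python
def srgb_encode(c: float) -> float:
    """Linear -> sRGB transfer function (free on the GPU with an *_SRGB back buffer)."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1.0 / 2.4) - 0.055

def reinhard(c: float) -> float:
    """Simple Reinhard tone mapping: maps [0, inf) into [0, 1)."""
    return c / (1.0 + c)

def hdr_to_ldr(hdr_value: float, exposure: float = 1.0) -> float:
    """Sketch of the final pass: exposure, tone map, then gamma encode."""
    exposed = hdr_value * exposure
    tonemapped = reinhard(exposed)
    return srgb_encode(tonemapped)

# HDR values well above 1.0 still land inside the displayable [0, 1] range:
assert 0.0 <= hdr_to_ldr(10.0) < 1.0
```

Eye adaptation would normally drive the exposure value from the average scene luminance of previous frames; it is left as a plain parameter here.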

1 hour ago, GuyWithBeard said:

Just to be clear, if I stick to the plan but change the main color buffer to DXGI_FORMAT_R8G8B8A8_UNORM_SRGB and then read from it as I apply, say, a fullscreen effect such as FXAA, the result will be incorrect as I will be reading sRGB values when I should be reading linear values. Right?

So in this case, the only thing left to do, if I want correct behavior, is to use a floating point main render target?

No. When you read from R8G8B8A8_unorm_srgb, the HW automatically does the sRGB->linear conversion for you. Similarly for writes - you get linear->sRGB at the end. Basically, this means that you can use either R10G10B10A2_unorm or R8G8B8A8_unorm_srgb as your intermediate main color target without any observable difference when outputting it directly to the screen. If you don't need HDR, then you don't have to use a floating point render target - you can use R8G8B8A8_unorm_srgb or R10G10B10A2_unorm.

EDIT: BTW, custom gamma is usually done on top of HW sRGB, as sRGB is more complicated than a simple x^(1/2.2).

Edited by knarkowicz

3 hours ago, knarkowicz said:

EDIT: BTW, custom gamma is usually done on top of HW sRGB, as sRGB is more complicated than a simple x^(1/2.2).

It is an approximation, which can be good or bad depending on the region of the curve. If I do it manually, I use Frostbite's approximation.


I only convert color coefficients, which are used as multipliers for textures, from sRGB to linear color space on the application side. This does not result in a loss of precision, since you would typically use a float4 (which has more precision than a UNORM texel) for transferring such coefficients to the GPU.

The hardware performs all the encoding and decoding between sRGB and linear color space for textures. You cannot do this manually, because the encoding and decoding are non-linear operations: sampling + conversion != conversion + sampling. The hardware performs the conversion before sampling, which is the correct order. If you do the conversion yourself, you run into problems, since your conversion happens after the sampling; that would only be correct if you restrict yourself to point samplers.
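A quick numerical check of that ordering argument: averaging two sRGB-encoded texels (as a bilinear filter would) and then decoding does not give the same result as decoding first and then averaging, which is why the hardware must decode before filtering:

```python
def srgb_decode(c: float) -> float:
    """sRGB -> linear transfer function."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

a, b = 0.2, 0.9  # two texels, stored sRGB-encoded

wrong = srgb_decode((a + b) / 2)               # filter first, then decode
right = (srgb_decode(a) + srgb_decode(b)) / 2  # decode first, then filter (what HW does)

assert abs(wrong - right) > 0.05
```

The "wrong" value here is noticeably darker than the correct one, which is the classic symptom of filtering in gamma space.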

3 hours ago, knarkowicz said:

R10G10B10A2_unorm

You can also consider using this for the back buffer. Though the transfer function is different?

45 minutes ago, matt77hias said:

You can also consider using this for the back buffer.

Yeah, I guess the alpha channel is practically unused in the back buffer. On a related note, it seems that would only be possible for the "work back buffer" or whatever it should be called, as my Vulkan driver reports that the only two formats supported for the actual back buffer are VK_FORMAT_B8G8R8A8_UNORM and VK_FORMAT_B8G8R8A8_SRGB.

