Gamma correction - sanity check


So I am finally fixing up my renderer to work in linear space after not caring (enough) about it for many years, and I wanted to do a sanity check on my plan to make sure I am doing it correctly. Up to this point I have had pretty much everything in DXGI_FORMAT_R8G8B8A8_UNORM, including textures (diffuse), swap chains and any offscreen buffers. My plan is roughly the following:

- Load diffuse textures as DXGI_FORMAT_R8G8B8A8_UNORM_SRGB. Normal maps and other non-color textures should still be non-sRGB as far as I know.

- Create the swap chain back buffers in the DXGI_FORMAT_R8G8B8A8_UNORM_SRGB format.

- Render the scene (both the main render pass + all other color render passes) into offscreen buffers in DXGI_FORMAT_R8G8B8A8_UNORM format. My plan is to have one main texture which acts as the "back buffer" during the rendering process. It is there simply to have a linear-space buffer to work with.

- After all rendering is finished, I will copy the linear space "back buffer" to the actual back buffer of the swap chain.

Does that sound right? Also, will the graphics API allow me to copy the linear buffer to the back buffer with a normal texture-to-texture copy, or do I need to handle the linear-to-gamma change somehow? I have heard that the APIs handle the conversion when rendering to an sRGB buffer (i.e. output from a pixel shader), but is the same conversion performed when copying from a linear texture to an sRGB texture? My targeted APIs are currently DX11, DX12 and Vulkan.

Cheers!
 


sRGB is roughly linear in terms of perceived brightness, and the sRGB->linear conversion is obviously non-linear. To store linear values you need at least 10 bits of precision, because after the sRGB->linear conversion you need more precision at the bottom of the color range. All this means that if you don't do HDR (or pre-expose inside shaders), then you need to use at least 8888_srgb as your main color target. If you want to apply exposure at the end of the pipeline, then you also need some extra range and precision, and need to use at least 11_11_10_float as your main color target.
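The precision argument can be made concrete with a small numerical sketch (my own illustration, not from the post, using the exact sRGB transfer function):

```python
def srgb_to_linear(s):
    """Decode an sRGB-encoded value in [0, 1] to linear light (exact piecewise curve)."""
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

# Smallest nonzero value an 8-bit sRGB texture can represent, in linear light:
darkest_srgb = srgb_to_linear(1 / 255)   # ~0.0003

# Smallest nonzero value an 8-bit *linear* (UNORM) texture can represent:
darkest_linear = 1 / 255                 # ~0.0039

# The linear format's first step is ~13x coarser in the darks, which shows
# up as banding in shadows - hence >= 10 bits for linear storage.
print(darkest_linear / darkest_srgb)     # ~12.92
```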


Interesting. I was reading this article: https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_ch24.html and they make a big deal about the fact that the conversion to sRGB should be the very last step before displaying on the screen. It seems to me that having a main color buffer in linear space would be good, since then you could both write to it and read from it for intermediate steps without having to care about sRGB at all. That's why I planned to have a linear-space main color buffer.

Just to be clear, if I stick to the plan but change the main color buffer to DXGI_FORMAT_R8G8B8A8_UNORM_SRGB and then read from it as I apply, say, a fullscreen effect such as FXAA, the result will be incorrect as I will be reading sRGB values when I should be reading linear values. Right?

So in this case, the only thing left to do, if I want correct behavior, is to use a floating-point main render target?

3 hours ago, GuyWithBeard said:

- Create the swap chain back buffers in the DXGI_FORMAT_R8G8B8A8_UNORM_SRGB format.

Why not use a non-sRGB format for the back buffer, to support custom gamma correction (brightness adjustment)?

 

3 hours ago, GuyWithBeard said:

- Render the scene (both the main render pass + all other color render passes) into offscreen buffers in DXGI_FORMAT_R8G8B8A8_UNORM format. My plan is to have one main texture which acts as the "back buffer" during the rendering process. It is there simply to have a linear-space buffer to work with.

Why not use a half float per channel for HDR support?

The buffers of your G-buffer that deal with sRGB colors will benefit from UNORM_SRGB. Using only 1 byte per channel to store data in linear color space (rather than sRGB-encoded) does not give you enough precision; you will need 10 bits or so per channel.

11 minutes ago, matt77hias said:

Why not using non-sRGB for the back buffer to support custom gamma correction (brightness adjustment)?

I was going by the GPU gems3 article I linked above where they suggested you use an sRGB format to get efficient linear-to-gamma conversion.

As for the rest, my renderer is forward (+) so there is no G-buffer and I don't plan to do HDR any time soon.

EDIT: ...but as I read your comments it seems like I should perhaps go the HDR route, so feel free to throw any good articles at an HDR-virgin like me.

7 minutes ago, GuyWithBeard said:

I was going by the GPU gems3 article I linked above where they suggested you use an sRGB format to get efficient linear-to-gamma conversion.

If you use sRGB formats, then the hardware will do the decoding and encoding between sRGB and linear space for you. But at the very last step, where you transfer the content of your non-back buffer to your back buffer, you probably want to customize the common gamma value of 2.2. Some games provide the option to adjust the brightness: "move the slider till the image is barely visible".

11 minutes ago, GuyWithBeard said:

EDIT: ...but as I read your comments it seems like I should perhaps go the HDR route, so feel free to throw any good articles at an HDR-virgin like me.

I basically work with half floats (16 bits) per color channel for storing my "images".

Only at the very end of my pipeline do I transfer the HDR content to the LDR back buffer, after applying my final operations: eye adaptation, tone mapping and custom gamma correction.
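That end-of-pipe pass can be sketched as follows (my own illustration; the post does not give the actual operators, so Reinhard tone mapping stands in for the real tone mapper and a simple exposure multiplier stands in for eye adaptation):

```python
def tonemap_reinhard(hdr, exposure=1.0):
    """Map an exposed HDR linear value [0, inf) to an LDR linear value [0, 1)."""
    v = hdr * exposure
    return v / (1.0 + v)

def linear_to_srgb(l):
    """Exact piecewise sRGB encode of an LDR linear value in [0, 1]."""
    return 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

# HDR linear value -> exposed + tone-mapped LDR linear -> sRGB for the back buffer
hdr_value = 4.0
ldr_linear = tonemap_reinhard(hdr_value)   # 0.8
back_buffer = linear_to_srgb(ldr_linear)   # ~0.906

print(ldr_linear, back_buffer)
```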

1 hour ago, GuyWithBeard said:

Just to be clear, if I stick to the plan but change the main color buffer to DXGI_FORMAT_R8G8B8A8_UNORM_SRGB and then read from it as I apply, say, a fullscreen effect such as FXAA, the result will be incorrect as I will be reading sRGB values when I should be reading linear values. Right?

So in this case, the only thing left to do, if I want correct behavior, is to use a floating-point main render target?

No. When you read from R8G8B8A8_unorm_srgb, the HW automatically does the sRGB->linear conversion for you. Writes work similarly - you get the linear->sRGB conversion at the end. Basically, this means that you can use either R10G10B10A2_unorm or R8G8B8A8_unorm_srgb as your intermediate main color target without any observable difference when outputting directly to the screen. If you don't need HDR, then you don't have to use a floating-point render target - you can use R8G8B8A8_unorm_srgb or R10G10B10A2_unorm.

EDIT: BTW, custom gamma is usually done on top of HW sRGB, as sRGB is more complicated than a simple x^(1/2.2).

3 hours ago, knarkowicz said:

EDIT: BTW, custom gamma is usually done on top of HW sRGB, as sRGB is more complicated than a simple x^(1/2.2).

It is an approximation which can be good or bad for certain regions. If I do it manually, I use Frostbite's approximation.

 

Though, I only convert color coefficients (which are used as multipliers for textures) from sRGB to linear color space on the application side. This does not result in a loss of precision, since you would typically use a float4 (which has more precision than a UNORM texel) for transferring such coefficients to the GPU.

The hardware performs all the encoding and decoding between sRGB and linear color space for textures. You cannot do this manually, because the encoding and decoding are non-linear operations: sampling + conversion != conversion + sampling. The hardware performs the conversion before sampling (i.e. before filtering), which is the correct order. If you do the conversion yourself, you run into problems, since your conversion happens after the sampling, which would only be correct if you restricted yourself to point samplers.
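The ordering issue can be illustrated numerically. A sketch (my own, not from the post): bilinearly averaging a black and a white texel should give 50% linear light, but averaging the raw sRGB codes first and only then decoding gives roughly 21%:

```python
def srgb_to_linear(s):
    """Decode an sRGB-encoded value in [0, 1] to linear light (exact piecewise curve)."""
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

black, white = 0.0, 1.0  # sRGB-encoded texel values

# Correct (what the hardware does): decode each texel, then average.
correct = (srgb_to_linear(black) + srgb_to_linear(white)) / 2   # 0.5

# Wrong (manual conversion after sampling): average the codes, then decode.
wrong = srgb_to_linear((black + white) / 2)                     # ~0.214

print(correct, wrong)
```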

3 hours ago, knarkowicz said:

R10G10B10A2_unorm

You can also consider using this for the back buffer. Though the transfer function is different?

45 minutes ago, matt77hias said:

You can also consider using this for the back buffer.

Yeah, I guess the alpha channel is practically unused in the back buffer. On a related note, it seems that would only be possible for the "work back buffer", or whatever it should be called, as my Vulkan driver reports that the only two formats supported for the actual back buffer are VK_FORMAT_B8G8R8A8_UNORM and VK_FORMAT_B8G8R8A8_SRGB.


A swap chain in 8888, either unorm or srgb, still expects sRGB content for display. So you will likely keep it sRGB, unless you do a copy from a separate 8888 unorm view of your offscreen surface.

The offscreen surface needs enough precision, so use 8888_srgb or an HDR format. You will soon see that you need tone mapping, so don't assume LDR all the way.

 

 

14 minutes ago, GuyWithBeard said:

only two formats supported for the actual back buffer are VK_FORMAT_B8G8R8A8_UNORM and VK_FORMAT_B8G8R8A8_SRGB.

I don't know Vulkan, but do these map to the similarly named DXGI_FORMATs? It seems strange that you only have BGRA and no RGBA. I needed the latter to use my back buffer as a UAV (though nowadays I do not reuse the back buffer for intermediate calculations).

4 minutes ago, galop1n said:

The offscreen surface needs enough precision, so use 8888_srgb or an HDR format. You will soon see that you need tone mapping, so don't assume LDR all the way.

But I don't get the combination of LDR and Forward+. Do you use Forward+ just because you have lots of lights per view, but few per tile? If you had lots of lights per pixel, light contributions would accumulate a lot, making LDR insufficient.


To be honest I don't quite have a use for Forward+ at the moment. That's why I put it in parentheses. I just happened to stumble upon this fine article and figured it looked interesting: https://www.3dgep.com/forward-plus/

Anyway, I might get to HDR at some point but rendering is only a small part of the development I am doing so I simply haven't gotten to it yet. I am by no means a graphics programmer per se.

And speaking of, I have moved over to using sRGB textures and an sRGB back buffer now on both Vulkan and DX12 and I would like to do some more sanity checking if you don't mind.

The image output is a lot brighter, as you would expect. However, simply clearing the window to a solid color (i.e. not doing any real drawing) is also a lot brighter, and I am unsure if that is correct. I made my clear color [128, 128, 128] and it seems to come out as [188, 188, 188] (checked by doing a print-screen and looking up the color in GIMP). Since I should now be working in linear space, which is then converted to gamma space before being presented to the screen, I would expect the color to come out the same. Black comes out as black and white comes out as white, so that seems to be all right. However, I thought the whole point of working in linear space was that halfway between black and white would come out as exactly that.

Where did I go wrong? (and thanks in advance for explaining these fundamental things)

EDIT: Textures also seem to come out a lot brighter, so there's clearly something wrong. I will have to investigate.

47 minutes ago, GuyWithBeard said:

Black comes out as black and white comes out as white, so that seems to be all right.

That would always be the case. The linear and sRGB-corrected curves have the same values at the lowest (0) and highest (1, or 255) endpoints, but the curves differ in between.

 

I think the easiest thing you can verify first is a simple sprite. Create a sprite texture with an sRGB format, load the sprite as a resource with an sRGB format, and render that sprite (with no operations) to a back buffer with an sRGB format. You should see the same result as in your (sRGB) image viewer.

1 minute ago, matt77hias said:

That would always be the case. The linear and sRGB-corrected curves have the same values at the lowest (0) and highest (1, or 255) endpoints, but the curves differ in between.

Yes, I know that. But if the pipeline was configured correctly, shouldn't a clear color of 128 come out as 128?

12 minutes ago, GuyWithBeard said:

Yes, I know that. But if the pipeline was configured correctly, shouldn't a clear color of 128 come out as 128?

Assume you have sRGB textures and an sRGB back buffer:

You write: 128 (linear color space)

The hardware performs the sRGB conversion:  ~ (128/255)^(1/2.2)*255 = 186 (sRGB color space)

If you sample that texel, the hardware will give you again ~ (186/255)^(2.2)*255 = 128 (linear color space), but if you just look at your screen you will see the value 186 (sRGB color space).

 

Stated differently: you want to see the middle grey intensity between black and white, but the intensity curve of your display is non-linear, so you adapt your linear intensity value of 128 to 186, which will be halfway between black and white for your display.


Yes, but in this case I render the color instead of sampling it. Would rendering the color to the screen and doing a print-screen of that equate to your third sampling step? I.e. should I expect 128 or 186? (Actually it came out as 188.)

3 minutes ago, GuyWithBeard said:

actually it came out as 188

I do not know the exact linear to sRGB function. A gamma of 2.2 is a (cheap) approximation.
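For reference, here is a sketch (my own, not from the post) of the exact piecewise sRGB encode next to the gamma-2.2 approximation, which explains the 188-vs-186 discrepancy:

```python
def linear_to_srgb(l):
    """Exact sRGB transfer function (the piecewise IEC 61966-2-1 curve)."""
    return 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

exact  = round(255 * linear_to_srgb(0.5))   # 188 - what the print-screen showed
approx = round(255 * 0.5 ** (1 / 2.2))      # 186 - the cheap 2.2 approximation

print(exact, approx)
```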

3 minutes ago, GuyWithBeard said:

I.e. should I expect 128 or 186?

You will see 186 on your display, you will see 128 in your shader calculations. (see my edit to my previous post)


I feel like you are dodging my actual question :) Anyway, thanks thus far. I will have to read up on this a bit...

EDIT: For the record, matt77hias edited his previous posts to provide more info. Thanks dude!


Ok I'll try to rephrase:

Let's assume we have a sprite texture in a UNORM_SRGB format with a single stored color of [0.73, 0.73, 0.73, 1].

We now want to sample from that texture in a pixel shader which writes to the back buffer, which also has a UNORM_SRGB format. If we sample from the texture, the hardware knows that we are dealing with an sRGB texture and performs the conversion from sRGB to linear color space before filtering: [0.73, 0.73, 0.73, 1] -> [0.5, 0.5, 0.5, 1]. Then the sampling is performed and we get [0.5, 0.5, 0.5, 1]. Next, we write that color to the back buffer. The hardware knows that we are dealing with an sRGB back buffer and performs the conversion from linear to sRGB color space (after blending): [0.5, 0.5, 0.5, 1] -> [0.73, 0.73, 0.73, 1].

The display then applies that "signal", and we perceive it as halfway between pure black and pure white.

If you write the image (e.g. snapshot) to disk and use some image viewer, the image viewer will say that each texel has a raw value of [0.73,0.73,0.73,1].
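The round trip described above can be sketched numerically (my own illustration, using the exact sRGB transfer functions):

```python
def linear_to_srgb(l):
    """Exact piecewise linear -> sRGB encode."""
    return 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

def srgb_to_linear(s):
    """Exact piecewise sRGB -> linear decode (inverse of the above)."""
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

stored  = linear_to_srgb(0.5)    # ~0.735 - what the sRGB texture/back buffer holds
sampled = srgb_to_linear(stored) # 0.5    - what the shader sees when sampling it

print(stored, sampled)
```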


No need to apologize, this is very helpful.

Some of my confusion came from the fact that I tried to do the same sanity check in Unity, by setting the project to use linear color space and clearing the screen to [128, 128, 128, 1], and there it actually comes out as 128 when color-picked in GIMP, while my render window is cleared to 186. Although, I realize I have no idea what Unity is doing behind the scenes, so this probably is not helpful at all.

Anyway, I realized that I have been converting my textures incorrectly to sRGB which probably explains why they are showing up brighter than they should. I'll fix that and output one of them on screen and see how they look...


Well, I managed to get the textures to look correct. I incorrectly assumed that texconv (of DX SDK fame) would be able to detect if an image was already in sRGB format or not. It turns out my images lacked the necessary metadata, so now my texture converter assumes a texture is in sRGB if it is set to output sRGB, and that took care of the overly bright textures.

11 hours ago, GuyWithBeard said:

I incorrectly assumed that texconv (of DX SDK fame) would be able to detect if an image was already in sRGB format or not.

I use texconv as well. You need to manually specify whether the input and/or output represents sRGB:

-srgb, -srgbi, or -srgbo: use -srgb if both the input and the output data are in the sRGB color format (i.e. gamma ~2.2); use -srgbi if only the input is in sRGB; use -srgbo if only the output is in sRGB.

But sometimes it is really trial and error. DirectXTex even has a forceSRGB flag for loading .dds files which do not have an explicit sRGB format, but whose raw data should be treated as sRGB values. So even the .dds format, which explicitly encodes the format, can be misused. :o Nothing is certain anymore.


Yep, I missed that exact flag. However, texconv seems to do the linear-to-sRGB conversion even if you leave the flag out, as long as you specify an sRGB format as the output format. So, what I assume happened was that my input texture, already in sRGB, was treated as linear and was converted to sRGB a second time. This caused the texture to come out very bright.
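That double conversion is easy to reproduce numerically (my own sketch): encoding an already-encoded mid grey pushes it from ~0.735 up to ~0.873, i.e. noticeably brighter:

```python
def linear_to_srgb(l):
    """Exact piecewise linear -> sRGB encode."""
    return 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

once  = linear_to_srgb(0.5)   # ~0.735 - correct single sRGB encoding
twice = linear_to_srgb(once)  # ~0.873 - double-encoded, too bright

print(once, twice)
```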
