How Do You Handle Gamma Correction?


Hi guys.

I was reading a book and doing some exercises when I ran into the topic of gamma correction.

This is what I understood:

- in the old days, monitors were not able to display 'gamma correct' / realistic-looking lighting

- LCD and later/ newer monitors can do this without any problems

- the 'default' gamma correction for PCs can be achieved by raising the color to the power of 2.2 (or 2 for practical/performance reasons).

For Mac that would be 1.8.

The general approach which delivers realistic/ gamma correct lighting:

- input textures (with colors, no normals/displacement etc.) and handpicked light colors are mostly in gamma space

- before doing calculations with them in your shaders, you convert them to linear space (by raising it to the power of 2)

- do all lighting calculations in linear space (a rough sketch of this flow follows below)
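
Roughly I picture the math like this (just a sketch in plain C++, function names made up; in the shader it would be the same pow calls, with 2.2 being the more exact exponent):

#include <cmath>

// decode: gamma-space value -> linear (squaring, i.e. a power of 2, is the cheaper approximation of this)
float gammaToLinear(float c) { return std::pow(c, 2.2f); }

// encode: linear -> gamma-space, for the final output color
float linearToGamma(float c) { return std::pow(c, 1.0f / 2.2f); }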

My questions:

A - what do you think about the above / is this a good approach?

B - if so, how do you handle making the textures 'gamma-incorrect' (i.e. linear):

1. at runtime, in the shader, directly after sampling the pixel?

2. in the asset pipeline, so the source textures with colors are already 'gamma-incorrect' in the input?

Option 2 sounds better performance-wise, because no 'decoding' is done at runtime.

C - would you convert light colors to linear space before or after multiplying them with their intensity?

D - will the final output color be OK / gamma correct if all calculations are done in linear space?

Any input is appreciated.

Crealysm game & engine development: http://www.crealysm.com

Looking for a passionate, disciplined and structured producer? PM me

I would think it's best to keep all your textures and intermediate values in a sane color space (physically meaningful values, or at least values where arithmetic just works), and implement tone mapping as the last step of rendering. Isn't this what everyone does by now?

before doing calculations with them in your shaders, you convert them to linear space (by raising it to the power of 2)

strictly you need to raise it to the power of 2.2 rather than 2 (squaring is only a rough approximation); the 1/2.2 exponent is for the opposite direction, encoding linear back to gamma space
Forget gamma/2.0/2.2/1.8.
You just need sRGB and linear.

Display manufacturers decided on sRGB as the common "gamma" space.
Colour artwork created on one of these monitors will implicitly be an "sRGB file" because that's how the artist was visualising it.
You should buy a calibration tool and ensure all your artists are using sRGB-compliant monitors to ensure that your source data is reliable.

You can't do math in sRGB space because it's curved, so we convert their art to linear before lighting.
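
For reference, the real sRGB transfer functions are piecewise rather than a pure power curve (2.2 is only an approximation). A quick sketch per channel in plain C++, names made up; you normally never write this yourself because the hardware does it for you (see B below):

#include <cmath>

// decode: sRGB [0,1] -> linear [0,1]
float srgbToLinear(float s)
{
    return (s <= 0.04045f) ? s / 12.92f
                           : std::pow((s + 0.055f) / 1.055f, 2.4f);
}

// encode: linear [0,1] -> sRGB [0,1]
float linearToSrgb(float l)
{
    return (l <= 0.0031308f) ? l * 12.92f
                             : 1.055f * std::pow(l, 1.0f / 2.4f) - 0.055f;
}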

B) there's linear->sRGB and sRGB->linear hardware built into the texture filter and output merger. It's free.
You really don't want to preconvert because sRGB is basically a compression scheme allowing human perception to fit into 8 bits. Without the sRGB curve, your colour textures would have to be 10 to 16 bits per channel to achieve the same quality level.
Which is also why it's best to do lighting in a 16 bit buffer.
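
Creating such a lighting buffer is just a matter of picking the format, something like this (a sketch only; 'device', 'width' and 'height' are placeholders, and HR is the error-check macro used elsewhere in this thread):

D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = width;
desc.Height           = height;
desc.MipLevels        = 1;
desc.ArraySize        = 1;
desc.Format           = DXGI_FORMAT_R16G16B16A16_FLOAT; // enough precision and range for linear/HDR lighting
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_DEFAULT;
desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* lightingTex = nullptr;
ID3D11RenderTargetView* lightingRTV = nullptr;
HR(device->CreateTexture2D(&desc, nullptr, &lightingTex));
HR(device->CreateRenderTargetView(lightingTex, nullptr, &lightingRTV));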

C) yes, because intensity values can be higher than 1.0 (aka 255), but sRGB colours cannot.

D) if you assume the user has an sRGB display (which you should) then you need to convert your final image from linear RGB space back into sRGB space so that their display interprets it correctly.
If you want to be user friendly and assume they've got a non-compliant TV, you can instead convert into a "gamma space" of their choosing, e.g. from pow 1.8 to pow 2.6, with 2.2 as the default setting.

Thanks all.

@Hodgman: based on the theory, for C you probably mean before :)

So let's summarize:

1 * source assets/art/textures/light colors etc. should all be in sRGB space

2 * convert from sRGB to linear space (free, built into texture filter)

3 * lighting calculations are all done in linear space

4 * convert end result color from linear space to sRGB (free, built into the Output Merger)

Clear, but how do I do the conversions at step 2 and 4?

What does it mean exactly that both conversions are 'free'? Does this mean that if you 'input' sRGB textures / lighting colors etc., the end result will always be OK, or do I have to enable this somewhere?

Note 1: regarding manually picked lighting colors, since they're picked 'on screen' they probably are already sRGB :)

Note 2: I didn't get into tone mapping yet; is that something I should pick up / get into separately, or are they the same thing?

(I believe tone mapping is a post-processing effect you can do manually through shaders)


In D3D11, you enable the free conversions by selecting an sRGB texture format for your SRVs/RTVs.

When fetching from an sRGB SRV, 8bit sRGB -> float linear RGB will occur.
When writing to an sRGB RTV, the linear float RGB return values of your pixel shader will be converted to 8bit sRGB before they get written to memory.
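
For an offscreen colour target the 'enabling' really is nothing more than the format you choose, e.g. (sketch only, variable names made up):

D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = width;
desc.Height           = height;
desc.MipLevels        = 1;
desc.ArraySize        = 1;
desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB; // SRV fetches decode sRGB->linear, RTV writes encode linear->sRGB
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_DEFAULT;
desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* tex = nullptr;
ID3D11RenderTargetView* rtv = nullptr;
ID3D11ShaderResourceView* srv = nullptr;
HR(device->CreateTexture2D(&desc, nullptr, &tex));
HR(device->CreateRenderTargetView(tex, nullptr, &rtv));
HR(device->CreateShaderResourceView(tex, nullptr, &srv));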

Picking lighting colours is a bit of a black art. Try it the right way and wrong way and see what your artists like better :)

If you're doing lighting in a bigger than 8bit buffer - e.g. R16G16B16A16_FLOAT is common - then your lighting values might be beyond the [0,1] range (aka HDR). Tone mapping is the process of taking these "unbounded" lighting values and remapping them into the [0,1] range, so they can be converted to sRGB and displayed.

If you do nothing, your tonemapper is: saturate(lighting) :lol:

The next simplest is a linear scale: lighting * scale

And then you get into more complex ones such as: lighting/(1+lighting)

This would be done in a post-process shader that reads a linear lighting SRV, does the remapping, and writes to an sRGB RTV.
Alternatively, you would end the shader with return pow(x,1/gamma) and be writing into a non-sRGB RTV for manual display gamma adjustment (e.g. if you want to give the user a gamma slider).
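
Written out as plain C++ (names made up; in practice these few lines live in the HLSL post-process pixel shader, but the math is identical):

#include <algorithm>
#include <cmath>

float tonemapClamp(float x)          { return std::min(std::max(x, 0.0f), 1.0f); } // saturate(lighting)
float tonemapScale(float x, float s) { return x * s; }                             // linear exposure scale
float tonemapReinhard(float x)       { return x / (1.0f + x); }                    // lighting/(1+lighting)

// manual display-gamma variant, used when writing to a non-sRGB RTV:
float applyDisplayGamma(float x, float gamma) { return std::pow(x, 1.0f / gamma); } // return pow(x, 1/gamma)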

Thanks, that clears things up.

In my case for now I'll just focus on doing it 'right' without tone mapping and the other complex thingies :)

So that means making sure I have the right texture formats for the SRVs and rendertargets.


@Hodgman: I'm trying to implement this the correct way, but I'm not sure I'm getting it right.

- sRGB = the final result you see on screen, the base of the color textures, manually picked colors by artists etc.

- linear = the space in which lighting calculations should be done

- if you use the sRGB surface formats for both the backbuffer/rendertarget and the input SRV(s), you don't need to do any conversions in the shader

(i.e. without manual decodes in the shader like: color = float4(color.rgb * color.rgb, color.a))

Is the above a correct summary?

If so, I've tried to do it like this:


// the backbuffer/ rendertarget, in sRGB
	HR(mSwapChain->ResizeBuffers(1, mClientWidth, mClientHeight, DXGI_FORMAT_R8G8B8A8_UNORM_SRGB, 0));
	ID3D11Texture2D* backBuffer;
	HR(mSwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), reinterpret_cast<void**>(&backBuffer)));
	HR(md3dDevice->CreateRenderTargetView(backBuffer, 0, &mRenderTargetView));
	backBuffer->Release(); // release the temporary reference obtained via GetBuffer

// loading a DDS texture and creating the SRV ('true' = bool forceSRGB)
	HR(CreateDDSTextureFromFileEx(md3dDevice, L"Textures/darkbrickdxt1.dds", 0, D3D11_USAGE_DEFAULT, D3D11_BIND_SHADER_RESOURCE, 0, 0, true, NULL, &mDiffuseMapSRV));

// clearing the rendertarget to the given color (binding it is done via OMSetRenderTargets elsewhere)
	md3dImmediateContext->ClearRenderTargetView(mRenderTargetView, color);


The results:

- before: http://www.sierracosworth.nl/gamedev/gamma_incorrect.jpg

- after/ correct: http://www.sierracosworth.nl/gamedev/gamma_correct.jpg

I have a feeling it's all good like this, but just like to be sure.

From what I understand, the 'force sRGB' bool makes sure that the input DDS file is always loaded as sRGB; for example, if the source DDS file is DXGI_FORMAT_BC3_UNORM, it will be forced/converted to DXGI_FORMAT_BC3_UNORM_SRGB. Setting this to true shouldn't be necessary if I make sure the source textures are all in a xxxxx_sRGB format.
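
Conceptually I assume the forcing boils down to promoting the format to its _SRGB sibling, something like this (illustrative only, this is not DDSTextureLoader's actual code and the helper name is made up):

DXGI_FORMAT MakeSRGBFormat(DXGI_FORMAT format) // hypothetical helper
{
    switch (format)
    {
    case DXGI_FORMAT_R8G8B8A8_UNORM: return DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;
    case DXGI_FORMAT_B8G8R8A8_UNORM: return DXGI_FORMAT_B8G8R8A8_UNORM_SRGB;
    case DXGI_FORMAT_BC1_UNORM:      return DXGI_FORMAT_BC1_UNORM_SRGB;
    case DXGI_FORMAT_BC2_UNORM:      return DXGI_FORMAT_BC2_UNORM_SRGB;
    case DXGI_FORMAT_BC3_UNORM:      return DXGI_FORMAT_BC3_UNORM_SRGB;
    case DXGI_FORMAT_BC7_UNORM:      return DXGI_FORMAT_BC7_UNORM_SRGB;
    default:                         return format; // already sRGB, or no sRGB variant exists
    }
}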


I just created a simple command line/ open with / drag file to 'tool', which returns the DDS surface format.

The results were 'shocking': most example DDS files are already in sRGB, which would mess up gamma correctness with this approach.

Unless... input sRGB DDS textures are not converted to sRGB because they already are.

Not 100% sure how DDSTextureLoader does that, I'll have to do some testing to see the differences.


I've been playing around with loading and using textures with correct gamma results, based on the input above.

Assuming the following approach is the right one, where I don't do any manual gamma corrections for handpicked colors or texture samplers:

- create RTV using a sRGB surface format

- create SRV's for textures using a sRGB surface format

Assuming this is correct, input textures should already be in an sRGB format and I have to make sure their SRVs are also in an sRGB format.

I now do this:


HR(CreateDDSTextureFromFileEx(md3dDevice, L"Textures/input_bc1.dds", 0, D3D11_USAGE_DEFAULT, D3D11_BIND_SHADER_RESOURCE, 0, 0, true, NULL, &mDiffuseMapSRV)); 

Where 'true' is for forcing sRGB, which I would say is not necessary if the input DDS textures are always in an sRGB format. Correct?

The input texture/ DDS file in this case has format BC1_UNORM_SRGB.

My 3 remaining questions:

1. In the DDSTextureLoader source code, I see that forcing sRGB changes the surface format BOTH for creating the texture and for the SRV (D3D11_TEXTURE2D_DESC and D3D11_SHADER_RESOURCE_VIEW_DESC). Is this the correct way to go? (A small sketch of the typeless-resource alternative follows after these questions.)

2. I use texconv to convert input textures to output DDS textures. For example I input a TGA texture and want a BC1_UNORM_SRGB output texture.

What confuses me is that you can set sRGB for the output in 2 ways:

- with the parameter -srgbo -> meaning the output is sRGB

- by defining the output format with -f, for example -f BC1_UNORM_SRGB

I've tested 3 combinations and compared the results:

A -srgbo -f BC1_UNORM_SRGB

B -f BC1_UNORM_SRGB

C -srgbo -f BC1_UNORM

My expectation was that all 3 would give the same result, but that's only the case for A and B. C gives a visually different DDS with BC1_UNORM format, no sRGB (the texture is 'lighter' in its colors). Any idea why this is?
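
For reference, this is the typeless-resource alternative I meant at question 1 (a sketch only, variable names made up): the texture itself is created typeless and each view decides whether the data is read as sRGB.

D3D11_TEXTURE2D_DESC texDesc = {};
texDesc.Width            = width;
texDesc.Height           = height;
texDesc.MipLevels        = 1;
texDesc.ArraySize        = 1;
texDesc.Format           = DXGI_FORMAT_R8G8B8A8_TYPELESS; // the storage itself carries no sRGB-ness
texDesc.SampleDesc.Count = 1;
texDesc.Usage            = D3D11_USAGE_DEFAULT;
texDesc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* tex = nullptr;
HR(md3dDevice->CreateTexture2D(&texDesc, nullptr, &tex));

D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format                    = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB; // this view decodes sRGB -> linear on fetch
srvDesc.ViewDimension             = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MostDetailedMip = 0;
srvDesc.Texture2D.MipLevels       = 1;

ID3D11ShaderResourceView* srv = nullptr;
HR(md3dDevice->CreateShaderResourceView(tex, &srvDesc, &srv));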

My conclusion on this whole topic is that it's actually quite easy. Just make sure all assets (textures, manually picked colors) are sRGB and make sure your RTV(s) and SRVs are all in an sRGB format. That way you'll be getting gamma correct results without having to manually convert colors or texels in shaders.


