Gamma correction help

Started by
2 comments, last by _the_phantom_ 11 years, 3 months ago

I'm not exactly sure if the results I'm getting are correct. Can someone tell me if this is how it should look?

This is what I do in my deferred renderer:

- Render GBuffer (nothing happening here)

- Render light to accumulation buffer (read specular albedo as linear, meaning pow(color, 2.2f))

- Compose Pass (read albedo from gbuffer as linear and combine it with the lighting but don't convert back)

- Do post processing

- Do tonemapping and write back to the backbuffer (which is R8G8B8A8_UNORM_SRGB)

Is that the correct way of doing it ?
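The steps above can be sketched for a single pixel. This is only an illustration, assuming the gamma-2.2 approximation mentioned in the post and a simple Reinhard tonemap; all numeric values are made up:

```python
# Single-pixel sketch: decode sRGB once on read, keep everything
# linear through lighting/compose/post, and only re-encode at the
# very end (which an _SRGB backbuffer format does in hardware).

def decode_srgb(c):
    return c ** 2.2            # gamma-2.2 approximation of sRGB read

def encode_srgb(c):
    return c ** (1.0 / 2.2)    # gamma-2.2 approximation of sRGB write

albedo_srgb = 0.5              # hypothetical 8-bit GBuffer sample
light = 2.0                    # hypothetical HDR light, already linear

albedo_linear = decode_srgb(albedo_srgb)  # read as linear
lit = albedo_linear * light               # light accumulation, linear
composed = lit                            # compose pass: stay linear
tonemapped = composed / (1.0 + composed)  # simple Reinhard tonemap
backbuffer = encode_srgb(tonemapped)      # sRGB write to backbuffer
```

The important property is that nothing between the decode and the final encode ever touches gamma-encoded values.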

And here's some screens:

http://d.pr/i/mVEW

http://d.pr/i/jeyR

http://d.pr/i/LwSJ


Sorry, can't see your images.

You should do all calculations in linear space and only convert when reading from a texture and when writing back to the framebuffer. Current hardware can handle sRGB textures automatically (no need to transform them yourself); further info can be found here.
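For reference, the conversion the hardware applies for sRGB reads/writes is the piecewise sRGB transfer function; pow(c, 2.2) is only an approximation of that curve. A minimal sketch:

```python
# Reference sRGB <-> linear conversion (piecewise sRGB transfer
# function, with a linear toe near black and a 2.4 exponent above it).

def srgb_to_linear(c):
    # c in [0, 1], sRGB-encoded
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # c in [0, 1], linear
    if c <= 0.0031308:
        return c * 12.92
    return 1.055 * c ** (1.0 / 2.4) - 0.055
```

In practice you rarely write these yourself: sampling an `_SRGB` texture format and rendering to an `_SRGB` target applies them for free.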

For testing purposes I would render three bars (greyscale 0...100%):

1. bar: linear texture (e.g. tga)

2. bar: sRGB texture (use sRGB texture format)

3. bar: shader which generates a linear ramp

All bars should look the same and should be comparable to an according color ramp image seen in a browser/image processing tool.
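The idea behind the test can be sketched numerically, using the gamma-2.2 approximation in place of the exact hardware sRGB curve: a ramp stored sRGB-encoded and hardware-decoded on sample should come out identical to a shader-generated linear ramp once both are sRGB-encoded on write.

```python
# Simulated three-bar check: with correct handling, all bars match.
decode = lambda c: c ** 2.2           # hardware sRGB read (approx.)
encode = lambda c: c ** (1.0 / 2.2)   # hardware sRGB write (approx.)

linear_ramp = [i / 10.0 for i in range(11)]   # 0%..100% in linear

# bar 1: linear texture, sampled as-is, encoded on write
bar1 = [encode(v) for v in linear_ramp]

# bar 2: sRGB texture stores encoded values; hardware decodes on
# sample, the shader passes them through, hardware encodes on write
bar2 = [encode(decode(encode(v))) for v in linear_ramp]

# bar 3: shader generates the linear ramp directly, encoded on write
bar3 = [encode(v) for v in linear_ramp]
```

If one bar looks darker or washed out compared to the others, a decode or encode step is missing or doubled somewhere in that path.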

So when using a higher-precision format like R16G16B16A16 it doesn't matter? (Meaning I don't have to do anything manually during read/write?)
I've only ever seen the LDR formats offered as separate sRGB variants.
16-bit formats holding linear data have enough precision that sRGB encoding isn't required, which is why you don't see sRGB versions of those formats.

So if you are writing from your shader to a 16-bit float target you don't need to encode to sRGB and thus don't need to decode; the only time you'll want to go to sRGB is when writing to an 8-bit-per-channel format.
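The precision argument is easiest to see at the dark end of the range, where 8-bit linear storage bands badly while 8-bit sRGB spends far more of its codes. A rough sketch, again using the gamma-2.2 approximation:

```python
# Compare how a dim linear value survives 8-bit storage with and
# without sRGB encoding.

def encode_srgb(c):
    return c ** (1.0 / 2.2)    # gamma-2.2 approximation

def decode_srgb(c):
    return c ** 2.2

def quantize8(c):
    return round(c * 255) / 255   # store in an 8-bit UNORM channel

dark = 0.002                      # a dim linear value

# 8-bit linear: snaps to the nearest of only 256 evenly spaced steps,
# here landing on 1/255 (~0.0039, nearly double the true value)
linear8 = quantize8(dark)

# 8-bit sRGB: encode first, so dark values get many more codes,
# then decode back; the result stays close to 0.002
srgb8 = decode_srgb(quantize8(encode_srgb(dark)))
```

A 16-bit float channel keeps roughly constant *relative* precision across the whole range, which is why it can store linear data directly without this banding.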

This topic is closed to new replies.
