Gamma correction. Is it lossless?

3 comments, last by Matias Goldberg 8 years, 1 month ago

I'm interested in the theory of gamma correction in games. Mainly the process and how it ends up looking just right.

Firstly, the guides mention that diffuse textures should be stored gamma-corrected and converted when needed. When this occurs, how are the textures converted? Is the result kept in floating point or something, so all the detail of the original texture is preserved? When it's then time to gamma-correct again, what exactly happens? All I've read is that one simply applies it to the frame buffer, but since linear images lack detail in dark areas, won't this produce banding of some sort?

Also, do basically all modern engines use gamma-correction? What are some examples of high-profile games that don't?


The sRGB->Linear transformation for color textures will typically happen in the fixed-function texture units, before filtering. You can think of the process as going something like this:

result = 0.0
foreach(texel in filterFootPrint):
    encoded = ReadMemory(texel)
    decoded = DecodeTexel(encoded)  // Decompress from block compression and/or convert from 8-bit to intermediate precision (probably 16-bit)
    linear = sRGBToLinear(decoded)
    result += linear * FilterWeight(texel) // Apply bilinear/trilinear/anisotropic filtering
 
return ConvertToFloat(result)

It's important that the filtering and the sRGB->Linear conversion happen at some precision higher than 8-bit, otherwise you will get banding. For sRGB conversion, 16-bit fixed point or floating point is generally good enough. The same goes for writing to a render target: the blending and the linear->sRGB conversion need to happen at higher precision than the storage format, or you will get poor results. You will also get poor results if you write linear data to an 8-bit render target, since there will be insufficient precision in the darker range.
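
For reference, the sRGB <-> linear transfer functions the hardware is evaluating are just the standard piecewise sRGB curves. Here is a minimal C sketch of what sRGBToLinear in the pseudocode above boils down to, operating on floats in [0, 1] (the function names here are mine, not a real API):

#include <math.h>
#include <stdio.h>

/* Decode: sRGB-encoded value in [0, 1] -> linear value in [0, 1]. */
static float srgb_to_linear(float s)
{
    return (s <= 0.04045f) ? s / 12.92f
                           : powf((s + 0.055f) / 1.055f, 2.4f);
}

/* Encode: linear value in [0, 1] -> sRGB-encoded value in [0, 1]. */
static float linear_to_srgb(float l)
{
    return (l <= 0.0031308f) ? l * 12.92f
                             : 1.055f * powf(l, 1.0f / 2.4f) - 0.055f;
}

int main(void)
{
    /* Mid-grey in sRGB (0.5) is only about 0.214 in linear light. */
    printf("srgb 0.5 -> linear %f\n", srgb_to_linear(0.5f));
    printf("linear 0.214 -> srgb %f\n", linear_to_srgb(0.214f));
    return 0;
}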

Probably the vast majority of modern games are gamma-correct. It's a bona fide requirement for PBR, which almost everyone is adopting in some form or another. I seem to recall someone mentioning that Unity maintains a non-gamma-correct rendering path for legacy compatibility, but don't quote me on that.

Is the conversion lossless? Given enough bits, yes:

8-bit sRGB -> 10-bit linear fixed point, or 32-bit floating point (16-bit half floating point is not enough for 100% lossless)
10-bit linear fixed point -> 8-bit sRGB
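
A quick way to convince yourself of the floating-point case: push all 256 8-bit sRGB codes through a float linear value and back, and check that every code survives. A minimal, standalone C sketch of that check (the helpers are the same standard sRGB curves as in the sketch above, repeated so this compiles on its own):

#include <math.h>
#include <stdio.h>

static float srgb_to_linear(float s)
{
    return (s <= 0.04045f) ? s / 12.92f
                           : powf((s + 0.055f) / 1.055f, 2.4f);
}

static float linear_to_srgb(float l)
{
    return (l <= 0.0031308f) ? l * 12.92f
                             : 1.055f * powf(l, 1.0f / 2.4f) - 0.055f;
}

int main(void)
{
    int lost = 0;
    for (int c = 0; c < 256; ++c)
    {
        float linear = srgb_to_linear(c / 255.0f);                    // 8-bit sRGB -> float linear
        int   back   = (int)(linear_to_srgb(linear) * 255.0f + 0.5f); // float linear -> 8-bit sRGB
        if (back != c)
            ++lost;
    }
    printf("codes lost through float linear: %d\n", lost);           // expect 0
    return 0;
}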

What happens when you screw up the amount of bits? You get banding. Mass Effect 3 was particularly bad at this:

[Two Mass Effect 3 screenshots showing the banding]

That happens when you store linear RGB straight into an RGBA8888 format. Either convert linear->sRGB on write and sRGB->linear on read (i.e. do the conversion on the fly), or use more bits.
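
To put a number on how much the dark end suffers, you can repeat the round trip but squeeze the linear value through an 8-bit store in the middle, which is effectively what an RGBA8888 render target does. A rough standalone C sketch of that experiment (same helpers as above, repeated so it compiles on its own):

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

static float srgb_to_linear(float s)
{
    return (s <= 0.04045f) ? s / 12.92f
                           : powf((s + 0.055f) / 1.055f, 2.4f);
}

static float linear_to_srgb(float l)
{
    return (l <= 0.0031308f) ? l * 12.92f
                             : 1.055f * powf(l, 1.0f / 2.4f) - 0.055f;
}

int main(void)
{
    int changed = 0, worst = 0;
    for (int c = 0; c < 256; ++c)
    {
        int stored = (int)(srgb_to_linear(c / 255.0f) * 255.0f + 0.5f);      // store linear in 8 bits
        int back   = (int)(linear_to_srgb(stored / 255.0f) * 255.0f + 0.5f); // read back, re-encode for display
        int err    = abs(back - c);
        if (err > 0)     ++changed;
        if (err > worst) worst = err;
    }
    // Many of the darkest sRGB shades collapse onto the same few stored values,
    // which is the kind of banding shown in the screenshots above.
    printf("sRGB codes altered: %d, worst error: %d steps\n", changed, worst);
    return 0;
}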

Woah, I clearly remember seeing banding in that scene myself. I always thought it was a side effect of trying to run it on an Intel HD 3000 (it ran surprisingly well).

What are the performance or memory sacrifices made for using either?

Quote: "Woah, I clearly remember seeing banding in that scene myself. I always thought it was a side effect of trying to run it on an Intel HD 3000 (it ran surprisingly well)."

Nope. It was the game. However, at least AMD's drivers clearly hack the game to fix the problem: install a modern AMD driver and it will look fine, but with older drivers it will look bad. I guess NVIDIA did the same; I don't know about Intel.

Quote: "What are the performance or memory sacrifices made for using either?"

If they use the HW converter to and from sRGB instead of storing directly as linear? Then the answer is none (or barely noticeable); see the sketch below.
If they increase the number of bits? Then bigger bandwidth and memory consumption, plus other possible performance penalties (e.g. on GCN, bilinear filtering of RGBA float16 runs at half rate, so you don't just double the bandwidth and video memory required, it also samples slower).
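
To be concrete about the "free" option: on the GL side you just ask for an sRGB format and let the texture units and ROPs do the conversion instead of doing it in the shader. A rough sketch, assuming a GL 3.x context and an extension loader (glad here) already set up elsewhere; the function name is mine:

#include <glad/glad.h>   // or whichever GL loader you use

// Creates an 8-bit sRGB color target. Sampling the texture converts
// sRGB -> linear before filtering; with GL_FRAMEBUFFER_SRGB enabled,
// writes to the attachment convert linear -> sRGB.
GLuint create_srgb_target(int width, int height, GLuint *out_tex)
{
    GLuint tex, fbo;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    glEnable(GL_FRAMEBUFFER_SRGB);   // hardware does linear -> sRGB on writes to sRGB attachments

    *out_tex = tex;
    return fbo;
}

Direct3D exposes the same idea through the *_SRGB DXGI formats (e.g. DXGI_FORMAT_R8G8B8A8_UNORM_SRGB).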

My guess is that this was a mistake that made it into shipping, rather than a performance tradeoff. Likely some intermediate RTT for a postprocessing effect was being stored as linear in RGBA8888 by accident and no one fixed it.
This has happened to me once. I noticed the problem but left it for later; it went unfixed for months, and I got so used to the artifact that I stopped noticing it. When someone pointed it out to me, it was a "holy sh##, the king is naked!" moment. It's incredible what the mind can do to you. Bugs like these are better fixed early on.
