Linear color space

12 comments, last by Hodgman 11 years, 7 months ago
Thanks for the very detailed information!


If your render-target is created as an sRGB texture, then the hardware will automatically perform the linear->sRGB conversion on the values your pixel shader writes to it.


It seems easy to do this in OpenGL (which is what I'm looking at now): use a Frame Buffer Object with a target texture of format GL_SRGB8_ALPHA8 (which is a required format). The only caveat is that the transformation to sRGB color space should be done last, and I would prefer not to have a dummy draw into an FBO just to get this transformation. I don't think it is possible to associate the "SRGB" attribute with the default framebuffer?
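Roughly what that looks like in code, as a sketch only: it assumes an OpenGL 3.x context, and GLEW is just one possible way to get the prototypes. The error handling is minimal.

#include <GL/glew.h>

/* Sketch: create an FBO whose color attachment is an sRGB texture, so the
   hardware performs the linear->sRGB conversion when the pixel shader
   writes to it. Assumes an OpenGL 3.x context (GLEW used for prototypes). */
GLuint CreateSrgbFramebuffer(int width, int height, GLuint *outColorTex)
{
    GLuint fbo, colorTex;

    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        return 0; /* caller should handle the error */

    /* The conversion is only applied while GL_FRAMEBUFFER_SRGB is enabled. */
    glEnable(GL_FRAMEBUFFER_SRGB);

    *outColorTex = colorTex;
    return fbo;
}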

You can do glEnable(GL_FRAMEBUFFER_SRGB), but it only takes effect if the destination buffer's GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING is GL_SRGB.
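A small sketch of how that check could be done at runtime (for the default framebuffer the attachment to query is GL_BACK_LEFT; for an FBO it would be GL_COLOR_ATTACHMENT0):

/* Query whether the currently bound draw framebuffer's color attachment
   is sRGB-encoded before relying on the automatic conversion. */
GLint encoding = GL_LINEAR;
glGetFramebufferAttachmentParameteriv(GL_DRAW_FRAMEBUFFER, GL_BACK_LEFT,
                                      GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING,
                                      &encoding);
if (encoding == GL_SRGB)
    glEnable(GL_FRAMEBUFFER_SRGB);  /* writes will be linear->sRGB encoded */
else
    ;  /* fall back to doing the encoding manually in the shader */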

Of course, there is always the possibility of doing the transformation yourself in the shader, but the automatic built-in functionality is sometimes better optimized.
Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/
It's possible in DX, so it should be possible in GL as well. Typically, though, you want to do the transformation yourself in the very last step, so that you can allow the user to tweak the curve slightly to compensate for the gamma of their display.
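For illustration only, the final-pass encode with a user-adjustable gamma might look like this; it's written as plain C for clarity, but in practice the one-liner lives in the last pixel shader, and "userGamma" is a hypothetical setting that would default to 2.2.

#include <math.h>

/* Final-pass encode, applied per channel to the linear shading result.
   userGamma is a hypothetical user setting, typically defaulting to 2.2. */
float EncodeForDisplay(float linear, float userGamma)
{
    return powf(linear, 1.0f / userGamma);
}

Letting the player nudge userGamma up or down is what allows dark scenes to stay readable on displays that don't match the assumed response.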

CRT = pow(linear, 1/2.2);

This would be an approximation of the sRGB algorithm, which is

if (linear <= 0.0031308)
    CRT = linear * 12.92;
else
    CRT = 1.055 * pow(linear, 1/2.4) - 0.055;

For high values the two are close, but for low values they start to diverge, especially at very low values. What is the reason for using this approximation? I suppose one reason could be shader performance.
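To put rough numbers on that divergence, here is a throwaway C comparison (the sample values are arbitrary):

#include <math.h>
#include <stdio.h>

/* Exact sRGB encode (linear -> sRGB), as in the snippet above. */
static float SrgbEncode(float linear)
{
    return (linear <= 0.0031308f) ? linear * 12.92f
                                  : 1.055f * powf(linear, 1.0f / 2.4f) - 0.055f;
}

int main(void)
{
    const float samples[] = { 0.5f, 0.1f, 0.01f, 0.001f };
    for (int i = 0; i < 4; ++i) {
        float x = samples[i];
        printf("linear %.4f  approx %.4f  exact %.4f\n",
               x, powf(x, 1.0f / 2.2f), SrgbEncode(x));
    }
    return 0;
}

At 0.5 the two agree to within about 1%, but near 0.001 the gamma-2.2 approximation gives a value roughly three times larger than the exact sRGB encode.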

Possibly the lower values will simply not be noticed in games, but that would seem to contradict the purpose of having extra resolution in the low-value interval of sRGB. Actually, my game has some banding problems when drawing the outer limits of a spherical fog in a dark environment (while not yet taking sRGB color space into account).

Sampled values from 8-bit textures only have a resolution of 1/255 ≈ 0.0039, so the linear segment of the sRGB conversion (below 0.0031308) would never be reached unless some form of HDR is used.
Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/
What is the reason for using this approximation?
In that code, I was assuming that the input values were in non-linear sRGB, but the desired output was "CRT RGB" (RGB encoded with a simple gamma-2.2 curve) -- the code was decoding from sRGB to linear, and then re-encoding the linear values to "CRT RGB". i.e. it was an sRGB->CRT conversion function.
N.B. this isn't a very useful thing to do in practice, because if this operation is done with 8-bit inputs and outputs, you'll just be destroying information. If displaying 8-bit sRGB images on a CRT, it would be best to avoid doing the right thing™ (which is converting the data into the display's colour encoding) and just output the sRGB data unmodified, because "CRT RGB" and sRGB are so similar.

In a game, if you had sRGB textures and the user had a CRT monitor, the right thing to do would be to sRGB decode the textures to linear, do your shading in linear (at high precision, e.g. float/half) and then encode the result with x^(1/2.2) to compensate for the CRT's natural response of x^2.2.
However, most users don't have a CRT, so it's best to encode the final result with the standard sRGB curve instead of the CRT gamma 2.2 curve.
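As a sketch of the decode/encode pair described above, here are both directions written as plain C; the same math would go in shader code, and in practice the hardware does both steps for you when sRGB texture formats and sRGB render targets are used.

#include <math.h>

/* Decode: sRGB -> linear (applied to texel values before shading). */
float SrgbToLinear(float s)
{
    return (s <= 0.04045f) ? s / 12.92f
                           : powf((s + 0.055f) / 1.055f, 2.4f);
}

/* Encode: linear -> sRGB (applied to the final shading result). */
float LinearToSrgb(float l)
{
    return (l <= 0.0031308f) ? l * 12.92f
                             : 1.055f * powf(l, 1.0f / 2.4f) - 0.055f;
}

If the display really is a CRT, the encode side could be swapped for powf(l, 1.0f / 2.2f), as described above.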

