
# Clarifications: Gamma Correction, sRGB


### #1Quat  Members

Posted 29 January 2014 - 01:58 PM

So from my readings, images authored and stored on the hard disk are typically gamma corrected. That is, x^(1/2.2) is applied to them, so the stored values are brighter than the true values, to compensate for the monitor's x^2.2 gamma curve.

In DirectX, to convert to linear space, we specify an SRGB format like DXGI_FORMAT_R8G8B8A8_UNORM_SRGB.

When people talk about "sRGB" space, is that synonymous with "linear space"? And what do they call the space of the gamma-corrected images (with x^(1/2.2) applied)? "Gamma-corrected space"?

Second, the GPU Gems article (http://http.developer.nvidia.com/GPUGems3/gpugems3_ch24.html) says:

Alpha channels, normal maps, displacement values (and so on) are almost certainly already linear and should not be corrected further, nor should any textures that you were careful to paint or capture in a linear fashion.

I can see this for procedurally generated data maps. But artists often use Photoshop to make/edit height maps and alpha maps, and use tools like CrazyBump to make normal maps. When these maps are saved to disk, they will be gamma corrected too, right (unless care was taken to save them as sRGB (linear format))? So shouldn't they be converted to linear space as well?

-----Quat

### #2Chris_F  Members

Posted 29 January 2014 - 02:30 PM

There is a bit of confusion here that I can hopefully clear up. For starters, most 8-bit images aren't encoded with a gamma of 2.2; they are encoded using the sRGB standard, whose curve is linear near black and follows an offset 2.4-exponent power law above that. A gamma of 2.2 is only an approximation of sRGB.
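For reference, the two branches of the sRGB transfer function (constants from the sRGB standard; the Python wrapper is my own sketch, not from the original post) look like this:

```python
def srgb_encode(c_linear):
    """Linear-light value in [0, 1] -> sRGB-encoded value in [0, 1]."""
    if c_linear <= 0.0031308:
        return 12.92 * c_linear                     # linear segment near black
    return 1.055 * c_linear ** (1 / 2.4) - 0.055    # offset power-law segment

def srgb_decode(c_srgb):
    """Inverse: sRGB-encoded value in [0, 1] -> linear light."""
    if c_srgb <= 0.04045:
        return c_srgb / 12.92
    return ((c_srgb + 0.055) / 1.055) ** 2.4

# Linear mid-gray 0.5 encodes to ~0.735, close to the 0.5 ** (1/2.2) ≈ 0.730
# of the pure-gamma approximation -- hence "gamma 2.2" as a shorthand.
```

Note that despite the 2.4 exponent, the offset and the linear toe make the overall curve track a pure 2.2 gamma quite closely.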

Secondly, the reason images are encoded this way has absolutely nothing to do with your monitor. 8-bit images use a non-linear encoding because they wouldn't otherwise have enough precision to prevent noticeable posterization. sRGB and gamma encodings steal precision from the higher values and give it to the lower values, which works out because the human visual system is more sensitive to variations at low brightness. Most monitors accept an 8-bit signal, so they apply a gamma as well, for the same reason; this is, however, completely independent of any encoding you use for your images. For instance, older Macintosh systems used a gamma of 1.8 instead of 2.2, and CRT monitors have an inherent gamma curve because the response of their phosphors to the electron beam is non-linear.
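To make the precision argument concrete, here is a small sketch of my own (not from the original post) counting how many of the 256 8-bit codes each encoding spends on the darkest 1% of linear brightness:

```python
def srgb_decode(c):
    """sRGB-encoded [0, 1] value back to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

# How many of the 256 codes land in the darkest 1% of linear brightness?
linear_codes = sum(1 for i in range(256) if i / 255 < 0.01)
srgb_codes = sum(1 for i in range(256) if srgb_decode(i / 255) < 0.01)

print(linear_codes, srgb_codes)  # 3 vs 26: sRGB spends ~9x more codes on darks
```

With only 3 codes covering the darkest 1% in a linear encoding, neighboring dark pixels jump by visible brightness steps, which is exactly the posterization described above.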

sRGB space is not synonymous with linear space. In this context they are basically opposites. You work with images in linear space, ideally with a precision of at least 16 bits per component. If you want to store an image at 8-bit precision, you convert it from linear space to sRGB space; when you work with an sRGB image, you convert it from sRGB space back to linear space. You only use sRGB/gamma for values which encode luminosity information, since the whole point is to better utilize the limited precision based on how your brain and eyes perceive luminosity. So by default, sRGB texture formats only apply the automatic sRGB conversion to the RGB channels, and the alpha is assumed to be linear. There may be some circumstances when a non-linear encoding makes sense for other texture information as well.
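That decode-work-encode round trip is what an `*_SRGB` texture format automates for you. As an illustration (my own sketch, with a hypothetical `blend_correct` helper), averaging black and white in linear light versus naively in sRGB space:

```python
def srgb_decode(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_encode(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def blend_correct(a, b):
    """Average two 8-bit sRGB values in linear light -- what the GPU does
    for you when sampling an *_SRGB texture and writing an *_SRGB target."""
    lin = (srgb_decode(a / 255) + srgb_decode(b / 255)) / 2
    return round(srgb_encode(lin) * 255)

print(blend_correct(0, 255))   # 188 -- the perceptually correct mid-gray
print((0 + 255) // 2)          # 127 -- naive sRGB-space average, too dark
```

Blending, filtering, and lighting all assume their inputs are proportional to light intensity, which is why doing them directly on sRGB-encoded bytes darkens the result.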

Editing a height map in Photoshop poses no issues. A value of 127 stored in the alpha channel, for example, will come out as roughly 0.5 in your shader.
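A quick check of that claim (the decode constants are from the sRGB standard; the rest is my own illustration of how a format like DXGI_FORMAT_R8G8B8A8_UNORM_SRGB treats the channels differently):

```python
def srgb_decode(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

stored = 127
alpha = stored / 255             # alpha channel: plain UNORM, no sRGB conversion
rgb = srgb_decode(stored / 255)  # color channel: hardware decodes sRGB on sample

print(round(alpha, 3), round(rgb, 3))  # 0.498 vs 0.212
```

So the same byte yields ~0.5 when read as linear alpha but ~0.21 when read as an sRGB-encoded color channel, which is why storing linear data (heights, masks) in the alpha channel of an sRGB texture is safe.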

Edited by Chris_F, 29 January 2014 - 02:32 PM.
