
### skytiger

Posted 31 March 2013 - 04:21 AM

Actually my previous pipeline is wrong, consider:

a viewer looks at a real-world scene and perceives a shade as "50% gray"
she records this with a camera that gamma corrects the value: CORRECTED = pow(GRAY, 1 / 2.2)
now she views this sRGB picture on her monitor
her monitor performs GRAY = pow(CORRECTED, 2.2)
and again she perceives "50% gray"

(this works because humans can adjust contrast based on context)
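The round trip above can be sketched directly (using the post's pure 2.2 exponent; the real sRGB transfer function is piecewise, so this is an approximation):

```python
import math

GRAY = 0.5                             # linear scene radiance, "50% gray"

# Camera: gamma-correct (encode) the linear value for storage
corrected = math.pow(GRAY, 1 / 2.2)

# Monitor: apply display gamma (decode) on the way back out to light
displayed = math.pow(corrected, 2.2)

print(displayed)                       # ~0.5 — encode and decode cancel
```

Because the two exponents are reciprocals, the viewer receives (almost exactly) the radiance the camera saw.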

However, humans are more sensitive to differences between dark shades than between light ones.

THAT is where the human non-linear response is relevant to gamma correction:
according to the CIE, the human lightness response is approximately logarithmic,
so it can be approximated with the same gamma of 2.2
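One way to see that a 2.2 power curve roughly tracks perceptual lightness is to compare it against CIE L* (a sketch; the function names here are mine, and L* is the CIE 1976 lightness formula, not the logarithmic law mentioned above):

```python
import math

def cie_lstar(Y):
    """CIE 1976 L* lightness, normalized to 0..1, for relative luminance Y."""
    if Y > 216 / 24389:                    # ~0.008856, the L* threshold
        return (116 * Y ** (1 / 3) - 16) / 100
    return (24389 / 27) * Y / 100          # linear toe near black

def gamma_encode(Y, gamma=2.2):
    """Simple power-law encoding, the approximation used in this post."""
    return Y ** (1 / gamma)

# The two curves stay within a few percent of each other over most of the range
for Y in (0.01, 0.05, 0.18, 0.50, 1.00):
    print(f"Y={Y:.2f}  L*={cie_lstar(Y):.3f}  Y^(1/2.2)={gamma_encode(Y):.3f}")
```

At Y = 0.18 the curves give about 0.495 and 0.459 respectively, which is why a single power function can double as a rough perceptual encoding.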

Luckily, gamma correction solves 2 problems with the same solution:

1) Monitor Gamma
2) Human Sensitivity to Dark Shades

So cameras can use sRGB encoding only because of a COINCIDENCE!

This then leads to the discussion of whether a human can determine the
midpoint between "black" and "white", and how to construct a gradient
that LOOKS linear to a human being

MIDPOINT

50% Radiance        // Physically half way
18% Radiance        // Perceptually half way?
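Plugging both candidates through the 2.2 encode shows why 18% radiance (photographic "middle gray") is the perceptual midpoint: it lands near 0.5 in encoded space, while the physical midpoint lands well above it:

```python
import math

# Encode both candidate midpoints into gamma-2.2 space
mid_physical   = math.pow(0.50, 1 / 2.2)   # ~0.73: physically halfway, but encodes bright
mid_perceptual = math.pow(0.18, 1 / 2.2)   # ~0.46: 18% radiance encodes near the middle

print(mid_physical, mid_perceptual)
```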

Photoshop creates a slightly S shaped "linear" gradient in linear color
which is then gamma corrected

Maybe Adobe think that whilst a human can adjust for their eyes' own
non-linearity they still need a bit of help ...

New Pipeline:

Camera (LIGHTEN)
Output  = Gamma Corrected Radiance (sRGB Texture)

Gamma Decoding Sampler (DARKEN)
Input   = Gamma Corrected Radiance (sRGB Texture)

(which is functionally the same as Photometric Luminance)

Gamma Encoding (LIGHTEN)

Monitor (DARKEN)

Human (NOCHANGE)
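The pipeline's LIGHTEN/DARKEN stages can be sketched end to end (a toy model using a pure 2.2 power throughout; a real sRGB sampler uses the piecewise sRGB curve):

```python
import math

GAMMA = 2.2

def camera(radiance):            # LIGHTEN: encode linear radiance into an sRGB-like texture
    return math.pow(radiance, 1 / GAMMA)

def sampler_decode(texel):       # DARKEN: sRGB sampler decodes back to linear radiance
    return math.pow(texel, GAMMA)

def gamma_encode(linear):        # LIGHTEN: encode shader output for the display
    return math.pow(linear, 1 / GAMMA)

def monitor(signal):             # DARKEN: display gamma turns the signal back into light
    return math.pow(signal, GAMMA)

scene  = 0.18                    # linear scene radiance
texel  = camera(scene)           # stored lighter than linear
linear = sampler_decode(texel)   # lighting math would happen here, in linear space
light  = monitor(gamma_encode(linear))

print(light)                     # ~0.18 — the stages cancel, so the human sees NOCHANGE
```

Each DARKEN undoes the LIGHTEN before it, which is exactly why the human at the end of the chain needs no correction.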
