Gamma from sRGB Texture to Human Brain

16 comments, last by skytiger 11 years, 1 month ago

See the CIE definition of "Lightness"

http://www.poynton.com/PDFs/Rehabilitation_of_gamma.pdf

http://en.wikipedia.org/wiki/Lightness

>> It was never intended to be applied on how the human eye & brain interprets light particles/waves

Exactly my point ... but a texture sampled by DirectX (or a JPEG opened in Photoshop) does not need gamma correction - it needs lightness correction

The gamma correction is only needed later ...


>> Exactly my point ... but a texture sampled by DirectX (or a JPEG opened in Photoshop) does not need gamma correction - it needs lightness correction

Can you explain what you mean there?

We use "gamma correction" to mean going from a linear colour space to a curved one, or vice versa.

If the texture is saved in the (curved) sRGB colour space, but we want it to be in a linear RGB space so we can perform math on it, then we need "gamma correction" (meaning, we apply the sRGB->linear curve).

Later, we want to display our linear RGB values on a monitor, which likely uses a curved colour space, so we again need "gamma correction" (meaning, we apply a linear->sRGB curve, or a linear->"gamma2.2" curve, or a linear->"gamma1.8" curve, or a linear->"gamma2.4" curve, etc, depending on the monitor).
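Those two curves can be sketched in Python (the piecewise transfer functions and constants are from the sRGB standard, IEC 61966-2-1; the function names are just illustrative):

```python
def srgb_to_linear(c):
    """Decode an sRGB-encoded channel value in [0, 1] to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode a linear-light channel value in [0, 1] with the sRGB curve."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
```

The two are exact inverses, and a stored sRGB value of 0.5 decodes to only about 0.21 in linear light - which is the whole point of the curve.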

But I think that few people realise that:

"gamma correction" on a digital camera
is actually lightness (aka brightness) correction
it has NOTHING to do with monitor gamma
and is used to ensure sufficient precision for the dark shades in an 8 bpp image (to avoid banding)
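That precision argument is easy to check numerically. A sketch in Python, counting how many of the 256 codes in an 8 bpp channel land in the darkest 1% of linear light under each storage scheme (the 1% cutoff is an arbitrary illustration; the decode formula is the standard sRGB one):

```python
def srgb_to_linear(c):
    """Decode an sRGB-encoded value in [0, 1] to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

# Codes whose decoded value falls in the darkest 1% of linear light:
linear_codes = sum(1 for i in range(256) if i / 255 <= 0.01)               # linear storage
srgb_codes = sum(1 for i in range(256) if srgb_to_linear(i / 255) <= 0.01)  # sRGB storage
```

Linear storage spends only 3 codes on that range, while sRGB storage spends 26 - which is why the dark shades band so badly in linear 8 bpp images.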

You do realize that...?
  • The GPU gems 3 article (main reference for many) talks about the banding when storing in an image file
  • DX Documentation also talks about it.
  • Digital cameras store with a 2.2 gamma for compatibility; as I said, historically we stored in gamma space because:
    • It just worked (lack of knowledge)
    • Lack of processing power (it's cheaper to store in gamma space and then send directly to the monitor 'as is')
    • We NOW still do that, but consciously, because of the banding (lightness correction, as you want to call it)
    • When a digital camera wants to be picky about this stuff (preserving colours, ignoring compatibility) it uses a custom colour profile.
    • There's A LOT more involved in getting from an image captured by a CCD sensor to the final JPEG file (ISO, aperture, shutter speed, tone mapping, de-noising come to mind).
  • The main way to deal with "preserving contrast and details" (aka. directly messing with lightness) is in the tone mapping. See Reinhard's original paper.

Yes

I have read a lot of stuff about gamma correction

and my own pipeline is meant to be gamma correct all the way through

The reason for my post was to pin down once and for all the exact and unambiguous terminology and reasons for everything - which I haven't seen in one place yet

When I was thinking about gamma correction and its relationship to an HDR pipeline I am working on ... it didn't make sense

Also when considering the concept of a "linear gradient" in multiple color spaces ... things didn't make sense

Now it does ... the key is understanding the difference between "lightness correction" and "gamma correction"

and the fact that there are more relevant color spaces than I first thought:

what the human brain perceives is clearly relevant, and this has been standardized as CIELAB

where the L stands for Lightness which is a key concept missing from most gamma correction texts
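For reference, the CIE definition of L* (relative luminance y against a white point with Y_n = 1) can be sketched as:

```python
def cie_lightness(y):
    """CIE L* for relative luminance y in [0, 1], with white point Y_n = 1."""
    delta = 6 / 29
    # Cube root above the threshold, linear segment below it:
    f = y ** (1 / 3) if y > delta ** 3 else y / (3 * delta ** 2) + 4 / 29
    return 116 * f - 16
```

Note that 18% grey comes out at L* of roughly 49.5, i.e. close to the perceptual midpoint - which is why photographers meter off an 18% grey card.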

I believe that the S shaped "linear gradient" in Photoshop is related to CIELAB

Also that DCC tools shouldn't be gamma correcting content - they should be lightness correcting it

If you had asked me a month ago if I understood gamma correction I would have said "yes"

But I didn't *really*

And I still think the answer to "how to create a linear gradient (in the human brain)" is not straightforward at all ...

Color perception in the eye is in no way linear; it is not even RGB. In reality, the three types of receptors respond along something like "3 somewhat overlapping, wiggly, asymmetric bell curves", of which one is roughly some kind of blue (S), one is a kind of green (M), and one is some greenish-yellow (L).

The brain transforms this information into what you see as "colors" (including red, green, and blue), but there is no simple (linear, logarithmic, or similar) mapping corresponding to perception.

What's of practical interest is what your brain thinks is the correct color in a typical situation, and what you can represent well on a computer (preferably having the hardware do any necessary transforms). sRGB works well for "monitor in somewhat dim room", and that is why you want to use it.

>> Can you explain what you mean there?

>> We use "gamma correction" to mean going from a linear colour space to a curved one, or vice versa.

>> If the texture is saved in the (curved) sRGB colour space, but we want it to be in a linear RGB space so we can perform math on it, then we need "gamma correction" (meaning, we apply the sRGB->linear curve).

The purpose of "gamma correcting" an albedo texture in Photoshop is to preserve detail in the dark areas, to which humans are sensitive

it has NOTHING to do with monitor gamma

This human response curve is "Lightness" which is standardized by CIE as "how a standard human perceives Luminance"

and is the basis for the CIELAB color space

So DCC tools should be "Lightness correcting" to preserve information (avoid banding) according to the CIE transfer function for human perception

They don't have to concern themselves with Monitor gamma at all!

That is taken care of by sRGBWrite (or similar) at the end of the pipeline

HLSL shaders should be doing this:

float3 linearColour = LightnessDecode(tex2D(s, t).rgb);

(a sudden thought)

float3 linearColour = CIELABDecode(tex2D(s, t).rgb);

not this:

float3 linearColour = GammaDecode(tex2D(s, t).rgb);

The difference is probably so minor, nobody would notice
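One way to put a number on "probably so minor" is to compare the two decodes directly. Below, a stored value is interpreted once as sRGB and once as L*/100 (the latter is my assumption about what an "L*-encoded" texture would mean):

```python
def srgb_to_linear(c):
    """Standard sRGB decode to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def lstar_to_linear(v):
    """Treat a stored value v in [0, 1] as CIE L*/100 and invert to luminance."""
    t = (100 * v + 16) / 116
    return t ** 3 if t > 6 / 29 else 3 * (6 / 29) ** 2 * (t - 4 / 29)

# Largest disagreement between the two decodes over all 8-bit codes:
max_diff = max(abs(srgb_to_linear(i / 255) - lstar_to_linear(i / 255))
               for i in range(256))
```

The two decodes differ by up to about 0.04 in linear luminance around the upper mid-tones, so they are close but not identical - subtle in isolation, visible side by side.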

But at least gamma makes sense to me now ...

I created a real linear gradient in photoshop, then converted it to LAB color space, then saved

and compared with sRGB linear gradient ... there is a very slight difference in the darker shades

Also here is a blog post about projecting LAB onto a cylinder so you can create perceptually correct gradients:

http://vis4.net/blog/posts/avoid-equidistant-hsv-colors/

Ok, let's see if I finally got it right:

When I output a gradient which is the result of a mathematical linear interpolation between 0 and 1, then I'll perceive it to be linear, since the sRGB curve of the monitor (coincidentally) matches the human perception of lightness quite well.

As soon as I add gamma correction to this gradient I'll no longer perceive it to be linear; instead it will be photometrically linear, which leads to the effect that a value of 0.5 will be perceived as being as bright as a pattern of alternating black and white stripes viewed from some distance, since (ideally) the monitor will emit the same mean amount of photons in both cases.
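The stripe experiment can be put in numbers (using the standard sRGB encode; a real monitor's curve will differ slightly):

```python
def linear_to_srgb(c):
    """Encode a linear-light value in [0, 1] with the sRGB curve."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# The code value that emits half of white's luminance - i.e. the value that
# should match the black/white stripe pattern viewed from a distance:
half_power = linear_to_srgb(0.5)
```

The photometric midpoint sits at a code value of roughly 0.735, not 0.5 - which is exactly why a raw 0..1 ramp looks perceptually even, while a gamma-corrected one looks too bright through the middle.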

Would the CIE standard observer recognise a radiometric linear gradient as such?

Would they perceive the midpoint radiance as such?

Currently CIE have no standard for gradient perception

I believe that a human can partially compensate for their own non-linearity

And that to create a perceptually linear gradient you need a mostly radiometrically linear ramp with a slight S curve (darks darker, midpoint at 50%, lights lighter)

This document calls radiometric linearity "uniformity"

and perceptual linearity "homogeneity"

http://oled100.eu/pdf/OLED100.eu_Newsletter_February_2010.pdf

At work we have quality control people trained to "see"

They can see stuff other people can't

I can't think of a reason for a normal human being to care about a linear gradient ... not in the way we care about predicting gravity (to catch a ball) etc.

