
#ActualHodgman

Posted 31 March 2013 - 08:23 AM

  1. Author content in real world radiometric ratios (e.g. albedo).
  2. Encode content in 8-bit sRGB, because it happens to be a readily available 'compression' method for storage, which optimizes for human-perceptual difference in values.
  3. Decode content to floating-point linear RGB, so that we can do math correctly.
  4. Do a whole bunch of stuff in linear RGB, ending up with final linear radiometric RGB wavelength intensity values.
  5. Clamp these into a 0-1 range using a pleasing exposure function -- copy what cameras do so that the results feel familiar to the viewer.
  6. Encode these floating-point values using the colour-space of the display (let's say 8-bit "gamma 2.2" for a CRT -- x^(1/2.2), but it could be sRGB, "gamma 1.8", or something else too!).
  7. The display then does the opposite transform (e.g. x^2.2) upon transmitting the picture out into the world.
  8. The resulting radiometric output of the display is then equivalent to the radiometric values we had at step 5 (but with a different linear scale).
  9. These values are then perceived by the viewer, in the same way that they perceive the radiometric values presented by the real-world.
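
As a rough illustration of steps 3, 5 and 6 for a single channel, here is a minimal C sketch. The exposure function is just a scale-and-clamp stand-in for a real camera-style tone curve, the display is assumed to be the plain "gamma 2.2" device from the example above, and the sRGB decode uses the standard piecewise formula.

#include <math.h>

/* Step 3: decode an 8-bit sRGB value (0..255) to a linear value in 0..1. */
float srgb_to_linear(unsigned char srgb8)
{
    float c = srgb8 / 255.0f;
    return (c <= 0.04045f) ? c / 12.92f
                           : powf((c + 0.055f) / 1.055f, 2.4f);
}

/* Step 5: map an unbounded linear value into 0..1. A real renderer would use
 * a film-like tone curve; this just scales by an exposure and clamps. */
float expose(float linear, float exposure)
{
    float v = linear * exposure;
    return (v > 1.0f) ? 1.0f : v;
}

/* Step 6: encode for a "gamma 2.2" display, i.e. x^(1/2.2), back to 8 bits. */
unsigned char encode_for_display(float linear01)
{
    return (unsigned char)(powf(linear01, 1.0f / 2.2f) * 255.0f + 0.5f);
}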

The actual curve/linearity of their perception is irrelevant to us (us = people trying to display a radiometric image in the same way as the real world). All that matters to us is that the values output by the display are faithful to the radiometric values we calculated internally -- we care about ensuring that the display device outputs values that match our simulation. If the display does that correctly, then the image will be perceived the same as a real image.

The actual way that perception occurs is of much more interest to people trying to design optimal colour-spaces like sRGB.

 

 

luminance = radiance * (is_visible(wavelength) ? 1 : 0);

That should be:

luminance = radiance * weight(wavelength);

where weight returns a value from 0 to 1, depending on how well that particular wavelength is perceived by your eyes.

The XYZ colour space defines weighting curves for the visible wavelengths based on average human values (the exact values differ from person to person).

The weighting function peaks at the red, green and blue wavelengths (the individual peaks of the weighting curves for each type of cone cell), which is why we use them as our 3 simulated wavelengths. For low-light scenes, we should actually use a 4th colour, at the wavelength where the rod cells' responsiveness peaks. For a true simulation, we should render all the visible wavelengths and compute the weighted sum at the end for display, but that's way too complicated.
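
As a concrete (and heavily simplified) C sketch of that weighted-sum idea: the weight function below is just an illustrative Gaussian centred on the ~555 nm photopic peak, not the real tabulated CIE curve, and the 42 nm width is an arbitrary choice for illustration.

#include <math.h>

/* Very rough stand-in for a perceptual weighting curve: 1 at ~555 nm (where
 * photopic sensitivity peaks) and falling towards 0 away from it. */
float weight(float wavelength_nm)
{
    const float peak_nm  = 555.0f;
    const float sigma_nm = 42.0f;  /* illustrative width only */
    float d = (wavelength_nm - peak_nm) / sigma_nm;
    return expf(-0.5f * d * d);
}

/* Weighted sum of a sampled spectrum -> a single luminance-like value,
 * as described above (render many wavelengths, weight them at the end). */
float perceived_luminance(const float *radiance, const float *wavelength_nm, int n)
{
    float sum = 0.0f;
    for (int i = 0; i < n; ++i)
        sum += radiance[i] * weight(wavelength_nm[i]);
    return sum;
}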

 

One would first need to know where the texture comes from.
If it's from a camera, was some gamma applied to make it linear, or was it encoded for NTSC, because back then it was better to send analog signals that way?
If it's from an image editor, had the artist calibrated their computer and monitor correctly to get a linear response overall? Was the bitmap then saved with the correct settings and the correct gamma metadata, or did the image editing program just apply some reverse guess, even though it can't know the driver or monitor settings?

In a professional environment:
* When taking photos with a camera, a colour chart is included in the photo so that the colours can be correctly calibrated later, regardless of whatever processing the camera applies.
* Artists' monitors are all calibrated to correctly display sRGB inputs (i.e. they apply the sRGB-to-linear curve, overall, across the OS/driver/monitor).
* Any textures saved by these artists are then assumed to be in the sRGB curved encoding.

The best thing would be to forget about all this cruft, have the camera translate light into a linear space, and only use that in all intermediate stages.

If we did all intermediate storage/processing in linear space, we'd have to use 16 bits per channel everywhere.
sRGB / gamma 2.2 are nice because we can store images in 8 bits per channel without noticeable colour banding: they're closer to a perceptual colour space, allocating more bits to dark colours, where humans are good at differentiating values.
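
As a small sketch of that trade-off, here are the standard piecewise sRGB transfer functions in C, plus a quick comparison of the linear-light gap between adjacent 8-bit codes near black and near white, which is where the extra precision for dark values comes from.

#include <math.h>
#include <stdio.h>

/* Standard piecewise sRGB encode/decode for a value in 0..1. */
float linear_to_srgb(float c)
{
    return (c <= 0.0031308f) ? 12.92f * c
                             : 1.055f * powf(c, 1.0f / 2.4f) - 0.055f;
}

float srgb_to_linear(float c)
{
    return (c <= 0.04045f) ? c / 12.92f
                           : powf((c + 0.055f) / 1.055f, 2.4f);
}

int main(void)
{
    /* Linear-light distance between adjacent 8-bit sRGB codes, near black
     * and near white: the dark step is far smaller, i.e. more precision is
     * spent where human eyes discriminate best. */
    float dark_step  = srgb_to_linear(11.0f / 255.0f)  - srgb_to_linear(10.0f / 255.0f);
    float light_step = srgb_to_linear(251.0f / 255.0f) - srgb_to_linear(250.0f / 255.0f);
    printf("linear step near black: %g\nlinear step near white: %g\n",
           dark_step, light_step);
    return 0;
}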

 

... but yes, in an ideal world, all images would be floating point HDR, and then only converted at the last moment, as required by a particular display device.

