larspensjo

OpenGL Linear color space


I am trying to learn about when to use, and when not to use, linear and non-linear color space. This is how I understand it, please correct me where I am wrong or have an incomplete understanding:[list]
[*]Many texture manipulations need to be done in linear space, e.g. anti-aliasing and lighting calculations.
[*]Many tools, like Photoshop, save pictures in non-linear format by default.
[*]You can specify sRGB as a bitmap format (e.g. GL_SRGB in OpenGL to glTexImage2D), and the graphics driver (or the hardware?) will automatically transform the bitmap you sample from non-linear to linear.
[*]If you transform it yourself, you do that by raising each color component to the power 2.2 (c^2.2). This would be an approximation of the sRGB transform.
[*]You can transform each color channel independently of the others.
[*]As a last step, when outputting the pixels to the screen, you need to transform them back into non-linear space, using c^(1/2.2) for each channel.
[*]The value 2.2 depends on the display you use. It looks like Apple uses 1.8.
[/list]
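For what it's worth, points 4 and 6 in the list above can be sketched in a few lines (Python used purely for illustration; the function names are my own, and this is the gamma-2.2 approximation rather than the exact sRGB curve):

```python
# A minimal sketch, assuming per-channel values normalized to [0, 1].

def to_linear(c):
    # Approximate sRGB -> linear: raise the channel to the power 2.2.
    return c ** 2.2

def to_display(c):
    # Approximate linear -> sRGB: the inverse, applied as the final output step.
    return c ** (1.0 / 2.2)

# A display value of 0.5 corresponds to only ~22% linear intensity:
print(to_linear(0.5))               # ~0.218
print(to_display(to_linear(0.5)))   # round-trips back to 0.5
```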
I am not sure about the glGenerateMipmap() function. Will it take the linear/non-linear attribute (sRGB) into account when applying the filter functions?

Is the approximation above good enough for showing pixels on the screen, or does the exact sRGB encoding need to be used?

Is there any automatic support in the hardware for the final pixel transformation to show on screen?

In most examples and tutorials "out there", you don't find any gamma correction being done. So either I don't understand this, or there is a general lack of understanding elsewhere (or something in between :-).

It sounds like you understand :)
[quote name='larspensjo' timestamp='1347870871' post='4980814']In most example and tutorials "out there", you don't find any gamma correction being done. So either I don't understand this, or there is a general lack of understanding elsewhere[/quote]Yes, "gamma correct rendering" has only become popular in real-time graphics over, maybe the past 5 or so years (guessing), along with the growth in popularity of HDR and physically-based lighting. A lot of older real-time rendering literature ignores gamma issues.
[quote]I am trying to learn about when to use, and when to not use, linear and non-linear color space[/quote]You can think of sRGB as a "compression" format for the number of bits required to store an image without colour banding. Humans are good at differentiating between different colours, especially dark colours. The sRGB curve allocates "more bits" to representing darker colours, which means it allows 8-bit images to have less noticeable colour-banding in dark areas.
I've seen it stated that to get the same precision in dark areas, a linear image would need 10-16 bits.
The bad thing is that doing math in curved spaces is difficult -- we're used to flat spaces, like the [url="http://en.wikipedia.org/wiki/Number_line"]number-line[/url], or the [url="http://en.wikipedia.org/wiki/Cartesian_coordinate_system"]cartesian plane[/url], but "gamma spaces" ([i]like sRGB, or "gamma 2.2"[/i]) aren't flat, they're curved.
[quote]I am not sure about the glGenerateMipmap() function. Will it take the linear/non-linear attribute (SRGB) into account when applying the filter functions?[/quote]OpenGL and Dx10/11 should do this correctly (convert to linear, downsample, convert to sRGB), but DX9 does not.

[quote]You can specify SRGB as a bitmap format (e.g. GL_SRGB in OpenGL to glTexImage2D), and the graphic drivers (or the hardware?) will automatically transform the bitmap you sample from non-linear to linear.[/quote]When you sample from the texture, the texture-fetch hardware will apply the inverse sRGB curve to convert it from 8-bit sRGB to floating-point linear.

[quote]As a last step, outputting the pixels to the screen, you need to transform it back into non-linear space, using c^(1/2.2) for each channel.[/quote]If your render-target is created as an sRGB texture, then when you write to it, the hardware will perform the linear->sRGB conversion when writing values from your pixel shader automatically.
It's common to just assume that the user's monitor is an sRGB monitor (because, it's pretty much *the* standard), but yes, a lot of monitors aren't actually sRGB -- I've seen gamma 2.4 and gamma 1.8 monitors before. To get the correct appearance on these monitors, it would be better to manually convert to e.g. gamma 1.8 rather than to convert to sRGB.[quote]Many tools, like Photoshop, save pictures in non-linear format by default[/quote]If you're painting a picture in an application that doesn't do any "gamma correction", then the data in your file is in the same "gamma space" as your monitor.
That is to say -- the image will only look the way that I saw it (while painting it), if it's displayed on another monitor with the same gamma-curve. If I paint my artwork on a gamma-1.8 screen, and then view it on an sRGB screen, it will look different ([i]because the original data was painted in the "gamma 1.8 space"[/i]).
For this reason, it's common for games studios to buy expensive calibration equipment to make sure that [i]all[/i] of their artists' monitors are correctly calibrated to the sRGB curve. Edited by Hodgman

Just wanted to clarify this one. For some reason, they have used "sRGB" to denote "linear color space" in DirectX and OpenGL, which are just two separate things.

Indeed, you can convert from linear to non-linear color spaces and vice-versa by using [url="http://en.wikipedia.org/wiki/Gamma_correction"]Gamma correction[/url].

RGB color space by itself lacks any standard or definition, so sRGB was proposed as a standard, which is defined by specifying [url="http://en.wikipedia.org/wiki/White_point"]white point[/url] and three [url="http://en.wikipedia.org/wiki/Chromaticities"]chromaticities[/url]. For instance, there is also [url="http://en.wikipedia.org/wiki/Wide-gamut_RGB_color_space"]Wide gamut RGB[/url], [url="http://en.wikipedia.org/wiki/Adobe_RGB_color_space"]Adobe RGB[/url] and so on.

Now, to convert from one color space to another where the color gamut is different, you would need to convert your initial color space to [url="http://en.wikipedia.org/wiki/CIE_XYZ"]CIE XYZ[/url] using a linear transformation, and then to the desired color space.

This is why it is simply wrong to call sRGB "linear" and non-sRGB "non-linear" and do the conversion between both using gamma correction. In reality, both typical RGB and sRGB may or may not be linear.

In fact, typically, you can assume that your RGB color space is [b]actually linear[/b]. You don't need to voluntarily apply any gamma correction there. Since it lacks a standard definition, you can simply assume that when you work with RGB, you work in sRGB, or in Adobe RGB - whatever your choice is. In order to properly standardize your color space, you would need to convert it to one of the (supposedly) perceptually uniform color spaces such as [url="http://en.wikipedia.org/wiki/CIELAB"]CIELAB[/url], [url="http://en.wikipedia.org/wiki/CIELUV"]CIELUV[/url], DIN99, ATD95, [url="http://en.wikipedia.org/wiki/CIECAM"]CIECAM[/url], or at least CIE XYZ, which can actually represent all colors visible to the human eye, unlike RGB, which is limited to a triangle in the CIE diagram.

Now, the problem is that most LCD displays apply a huge gamma correction to the input image. Not only that, they may also pre-process images and oversaturate them too. Why? To sell better, since higher contrast and crisper images appear prettier, but in the end you receive a very distorted image. [u]This is not your problem, it is a problem of display manufacturers and vendors![/u] You simply can't make an application that will predict all of the monitors out there, so it's their responsibility to generate the final image as accurately as possible.

I don't know why they introduced "sRGB" into DirectX and OpenGL - after all, supposedly, you are already working in sRGB, and it's the display's job to properly represent the input sRGB data so that the output strictly conforms to sRGB, or any other standard. If you do gamma correction in your application - well, you still don't know how the display is going to re-transform your image data, so in the end you may actually get less accurate results.

My guess is that they introduced so-called "sRGB" in APIs just for the hype of it, e.g.: "We can now store textures and front-buffer in gamma-adjusted format! WOW!" (like we couldn't do it back in 1969).

You may check some of the following bibliography to figure out more about different color spaces (you can see by the dates that this is a very studied topic, yet it seems that people making changes in DirectX/OpenGL standards regarding sRGB have never read them):

1. Poynton, Charles. Digital Video and HDTV Algorithms and Interfaces. Morgan Kaufmann, 2003.
2. Poynton, Charles. "Frequently-Asked Questions about Color." [url="http://www.poynton.com/ColorFAQ.html"]http://www.poynton.com/ColorFAQ.html[/url]
3. Hill, Francis S. Computer Graphics using OpenGL. Prentice Hall, 2000.
4. Hearn, Donald, and Pauline M. Baker. Computer Graphics, C Version. Prentice Hall, 1996.
5. Luo, Ronnier M., Guihua Cui, and Changjun Li. "Uniform Colour Spaces Based on CIECAM02 Colour Appearance Model." Color Research & Application (Wiley InterScience) 31, no. 4 (June 2006): 320-330.
6. Lindbloom, Bruce J. "Accurate Color Reproduction for Computer Graphics Applications." Computer Graphics 23, no. 3 (July 1989): 117-126.
7. Brewer, C. A. "Color Use Guidelines for Data Representation." Proceedings of the Section on Statistical Graphics. Alexandria VA: American Statistical Association, 1999. 55-60.
8. MacAdam, David L. "Visual Sensitivities to Color Differences in Daylight." (Journal of the Optical Society of America) 32, no. 5 (May 1942): 247-273.
9. Schanda, Janos. Colorimetry: Understanding the CIE system. Wiley Interscience, 2007.
10. Pratt, William K. Digital Image Processing. 3rd Edition. Wiley-Interscience, 2001.
11. Jack, Keith. Video Demystified: A Handbook for the Digital Engineer. 5th Edition. Fremont, CA: Newnes, 2007.

[quote name='Lifepower' timestamp='1347888622' post='4980867']
Just wanted to clarify this one. For some reason, they have used "sRGB" to denote "linear color space" in DirectX and OpenGL, which are just two separate things.[/quote]No, sRGB is [b]not[/b] linear color space so your accusations levelled against DirectX/OpenGL are wildly inaccurate.

For why it was so important to add sRGB support to GPUs, read this primer: [url="http://http.developer.nvidia.com/GPUGems3/gpugems3_ch24.html"]http://http.develope...gems3_ch24.html[/url]

Yes, the [url="http://en.wikipedia.org/wiki/SRGB"]sRGB[/url] standard does define a standard linear RGB space ([i]as an intermediate step[/i]) based on a standardized red/green/blue linear transform from [url="http://en.wikipedia.org/wiki/CIE_1931_color_space"]CIE XYZ[/url] space...
...but sRGB is a curved "gamma corrected" transformation of these standard linear RGB values.
sRGB isn't even a simple "gamma correction" transform. It's similar to gamma correction of 2.2, but it's actually a piecewise transform with a linear toe at the bottom and a gamma of 2.4 at the top.
Linear RGB to sRGB = [url="http://www.wolframalpha.com/input/?i=Piecewise%5B%7B%7Bx*12.92%2C+x%3C%3D0.0031308+%26+x%3E%3D0%7D%2C%7B1.055*%28x%5E%281%2F2.4%29%29-0.055%2C+x%3E0.0031308+%26+x%3C%3D1%7D%7D%5D"]http://www.wolframal...31308 & x<=1}}][/url]
sRGB to linear RGB = [url="http://www.wolframalpha.com/input/?i=Piecewise%5B%7B%7Bx%2F12.92%2C+x%3C%3D0.04045+%26+x%3E%3D0%7D%2C%7B%28%28x%2B0.055%29%2F1.055%29%5E2.4%2C+x%3E0.04045+%26+x%3C%3D1%7D%7D%5D"]http://www.wolframal...04045 & x<=1}}][/url]
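Rendered as code, those two piecewise transforms look roughly like this (a Python sketch of the standard sRGB transfer functions; channel values assumed normalized to [0, 1], function names my own):

```python
def linear_to_srgb(x):
    # Linear toe near black; gamma-2.4 segment above the threshold.
    if x <= 0.0031308:
        return x * 12.92
    return 1.055 * (x ** (1.0 / 2.4)) - 0.055

def srgb_to_linear(x):
    # Exact inverse of the above.
    if x <= 0.04045:
        return x / 12.92
    return ((x + 0.055) / 1.055) ** 2.4

# sRGB mid-grey decodes to ~21% linear intensity, close to the 0.5^2.2 ~ 0.218
# value the "gamma 2.2" approximation gives:
print(srgb_to_linear(0.5))   # ~0.214
```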

It's the standard color space for the WWW, and it's being pushed as a standard color space for TVs, computer monitors, cameras, etc... If a display performs sRGB "gamma correction" on the signal, then that's a good thing -- they're supposed to assume that the input signal is sRGB ([i]~gamma 2.2[/i]) and adjust voltages accordingly to produce the appropriate perceptually linear luminance response.

Yes it's true that different monitors will do different, wacky things to the input signal, but the world is getting saner in this regard thanks to most manufacturers agreeing to adopt a single gamma standard. The right thing™ to do these days is to perform all of your math in a linear color space, and then output an sRGB signal, unless otherwise asked not to. Edited by Hodgman

[quote name='Hodgman' timestamp='1347889890' post='4980871']
No, sRGB is [b]not[/b] linear color space so your accusations levelled against DirectX/OpenGL are wildly inaccurate.[/quote]
No, my accusations against sRGB in DirectX/OpenGL are based on the fact that the conversion between RGB and sRGB is framed in terms of gamma correction, while in reality RGB and sRGB may actually be the same thing. In any case, you cannot convert between the two using gamma correction, so [url="http://msdn.microsoft.com/en-us/library/windows/desktop/bb173460%28v=vs.85%29.aspx"]this SDK article[/url], for instance, is misleading.

[quote name='Hodgman' timestamp='1347889890' post='4980871']For why it was so important to add sRGB support to GPUs, read this primer: [url="http://http.developer.nvidia.com/GPUGems3/gpugems3_ch24.html"]http://http.develope...gems3_ch24.html[/url]
[/quote]
Have you read the article yourself? The article you provided can be used as an exercise to find out logical fallacies. Begging the question and fallacy of composition are amongst the first ones visible.

[quote name='Hodgman' timestamp='1347889890' post='4980871']Yes, the [url="http://en.wikipedia.org/wiki/SRGB"]sRGB[/url] standard does define a standard linear RGB space ([i]as an intermediate step[/i]) based on a standardized red/green/blue linear transform from [url="http://en.wikipedia.org/wiki/CIE_1931_color_space"]CIE XYZ[/url] space...[/quote]
So, you've repeated what I've said, then added phrase "as an intermediate step" (to what, by the way?) and now you are saying that:

[quote name='Hodgman' timestamp='1347889890' post='4980871']...but sRGB is a curved "gamma corrected" transformation of these standard linear RGB values.
sRGB isn't even a simple "gamma correction" transform. It's similar to gamma correction of 2.2, but it's actually a piecewise transform with a linear toe at the bottom and a gamma of 2.4 at the top.
[/quote]
Nonsense. sRGB is just a color space, nothing more. It's not "gamma correction of 2.2", let alone a "piecewise transform with linear toe at [gibberish]". Proof by verbosity is a logical fallacy (but you already know that), please don't do that.

[quote name='Hodgman' timestamp='1347889890' post='4980871']It's the standard color space for the WWW, and it's being pushed as a standard color space for TVs, computer monitors, cameras, etc... If a display performs sRGB "gamma correction" on the signal, then that's a good thing -- they're supposed to assume that the input signal is sRGB ([i]~gamma 2.2[/i]) and adjust voltages accordingly to produce the appropriate perceptually linear luminance response.[/quote]
Two separate arguments. Yes, sRGB is a standard and popular color space. But starting from "display performs sRGB gamma correction" - just a senseless manipulation of words.

[quote name='Lifepower' timestamp='1347892637' post='4980887']
So, you've repeated what I've said, then added phrase "as an intermediate step" (to what, by the way?) and now you are saying that
...
Nonsense. sRGB is just a color space, nothing more. It's not "gamma correction of 2.2",
[/quote]No, you misread - the sRGB standard defines two colour spaces -- one is a linear RGB colour-space, which is used as an intermediate between CIE XYZ and sRGB-proper.
Once you've got colour data in this "linear RGB" space, you can perform the above transforms on it to get the values into the non-linear sRGB space.

Regarding sRGB being similar to "gamma 2.2" -- the above functions to convert to/from linear/sRGB ([i]the "piecewise gibberish"[/i]) can be approximated by x[sup]^2.2[/sup] and x[sup]^(1/2.2)[/sup] ([i]i.e. the regular "gamma correction" process with a power of 2.2[/i]).

Linear RGB colour spaces can be used to describe physical quantities of energy, not just colours. If I've got 100 "units" of photons at the "red" wavelength and send them through a half-silvered-mirror so I end up with only half of them, then I've now got "50" units of red photons. This kind of math does not work in non-linear spaces like sRGB.

Likewise if I've got a black/white checker pattern (0 & 255) and mathematically average it, I get a "50% grey" image (127). In a linear color-space, this value is exactly half as bright as the original white squares. However, in non-linear spaces this math doesn't work. For example, in sRGB 127 is ~21% as bright as 255. If you down-scale an sRGB image of a black/white checkerboard, the resulting colour should be ~187 (which corresponds to "half as bright as white").

e.g. the left half of this image is resized by performing the naive math ([i]averaging sRGB values directly resulting in 127[/i]).
The right half performs the math correctly ([i]convert sRGB values to linear, average to 127, then convert back to sRGB, resulting in 187[/i]).
If I squint at the image from a distance ([i]to manually average the black/white pattern in my eye[/i]), the right hand side all looks almost the same brightness, but the left hand side is obviously too dark.
[img]http://www.4p8.com/eric.brasseur/gamma_bad_2.2_0.5_imagemagick.jpg[/img]
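The same experiment can be reproduced numerically, a Python sketch (using the standard piecewise sRGB transforms; helper names are my own, 8-bit values assumed):

```python
def srgb_to_linear(x):
    return x / 12.92 if x <= 0.04045 else ((x + 0.055) / 1.055) ** 2.4

def linear_to_srgb(x):
    return x * 12.92 if x <= 0.0031308 else 1.055 * x ** (1.0 / 2.4) - 0.055

def average_naive(a, b):
    # Wrong: average the 8-bit sRGB codes directly.
    return (a + b) / 2

def average_correct(a, b):
    # Right: decode to linear, average, re-encode to sRGB.
    avg = (srgb_to_linear(a / 255) + srgb_to_linear(b / 255)) / 2
    return linear_to_srgb(avg) * 255

print(average_naive(0, 255))    # 127.5 -- displays far too dark
print(average_correct(0, 255))  # ~187.5 -- "half as bright as white"
```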

The linked article from nVidia shows the disastrous consequences from trying to perform math in a non-linear colour space.
[quote]while in reality RGB and sRGB may actually be the same thing[/quote]Yes, RGB is a loose term so it could mean anything.
But in rendering we deal with linear-RGB spaces, and non-linear RGB spaces ([i]such as "gamma 2.2" and sRGB[/i]).
I posted the equations to transform between "linear RGB" and sRGB above, so when we talk about them in rendering they're definitely not the same - one is mathematically linear and one is not!
[quote]Have you read the article yourself? The article you provided can be used as an exercise to find out logical fallacies. Begging the question and fallacy of composition are amongst the first ones visible.[/quote]Wow. That article describes the basics for achieving physically correct math in a renderer. What is your problem with it?[quote]you cannot convert between the two using gamma correction, so this SDK article, for instance, is misleading
[/quote]What's your problem with that article?
Maybe if you're disagreeing with nVidia, Microsoft, Khronos, and the GL ARB ([i]3Dlabs, Apple, ATI, Dell, IBM, Intel, SGI, Sun[/i]), the problem is actually that they know what they're doing and you're refusing to read wikipedia to catch up? ([i]the argumentum ad verecundiam fallacy, I know -- but while I'm here, what makes you an expert over them?[/i])
[quote]Yes, sRGB is a standard and popular color space. But starting from "display performs sRGB gamma correction" - just a senseless manipulation of words.[/quote]The display has to calibrate its internal voltages so that when it receives a value of 255 it outputs at maximum luminosity, at 187 it outputs half-maximum luminosity, and at 127 it outputs at 21% maximum luminosity. That's the sRGB correction that the monitor must perform. Edited by Hodgman

[quote name='Hodgman' timestamp='1347894192' post='4980893']
Maybe if you're disagreeing with nVidia, Microsoft, Kronos, and the GL ARB ([i]3Dlabs, Apple, ATI, Dell, IBM, Intel, SGI, Sun[/i]), the problem is actually that they know what they're doing and you're refusing to read wikipedia to catch up?[/quote]
This is [url="http://en.wikipedia.org/wiki/Argumentum_ad_populum"]Argumentum ad populum[/url].

The articles I've mentioned in my post are verified and have passed scientific review (by several council members), while you provide your opinions backed up by your own words, some stuff on the Internet and popular belief.

So you are saying that the companies you have randomly selected and mentioned have something to do in decision-making regarding misleading usage of sRGB term? With this, you automatically decide that I'm wrong and you are right?

[quote name='Hodgman' timestamp='1347894192' post='4980893']([i]Appeal to authority fallacy, I know -- but while I'm here, what does make you an expert over them?[/i])[/quote]
Yes, it's appeal to authority. You are a moderator, so you are always right, and if there is something you don't like, you attack the person (appeal to the person fallacy) rather than provide sound arguments in discussion. While we're here - I don't consider myself an authority and there are many things in the world that I don't know or understand, and I'm humble about it. Yes, my master's and doctoral theses concerned practical applications of color theory in mobile systems, and I have published 12 council-reviewed scientific works regarding different color spaces and applications, so this is why I have something to say about it. I could always be mistaken, as could the people who review and judge my work, but while I try to base my points on proven facts, you try to prove something by use of Wikipedia, popular folklore and your Moderator badge.

Please, I know your intentions in answering the OP's question were good, I just tried to clarify things, as something that made its way into the SDK is not necessarily correct. You don't have to defend something blindly just because I've pointed out a misconception in the Microsoft manual.

[quote name='Lifepower' timestamp='1347897416' post='4980915']You don't have either to defend something blindly just because I've pointed out to a misconception in Microsoft manual.[/quote]You said "[i]For some reason, they have used "sRGB" to denote "linear color space" in DirectX and OpenGL[/i]" which is absolutely 100% false, so is a statement that should be criticized. Which of your references backs up this statement of yours?

sRGB is defined as a non-linear transformation from a particular linear RGB colour space.

Also, you've said [[i]brackets mine[/i]]:
Indeed, you can convert from linear [[i]RGB[/i]] to non-linear [[i]sRGB[/i]] color spaces and vice-versa by using Gamma correction.
[i]And then:[/i]
In any case, you cannot convert between the two [[i]RGB and sRGB[/i]] using gamma correction, so this SDK article, for instance, is misleading.

The above checkerboard image and the linked nVidia article explain why you cannot perform your shading math in curved spaces such as sRGB, and thus why sRGB values have to be decoded to linear values for shading ([i]and possibly re-encoded to sRGB for display or storage[/i]).

Here's the short form of linear vs non-linear:
Math in sRGB: [font=courier new,courier,monospace](0+1)/2=0.22[/font]
Math in any linear space: [font=courier new,courier,monospace](0+1)/2=0.5[/font]
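This short form is easy to check (a Python sketch using the piecewise sRGB decode; the 0.22 figure comes from the gamma-2.2 approximation, the exact curve gives ~0.21):

```python
def srgb_to_linear(x):
    return x / 12.92 if x <= 0.04045 else ((x + 0.055) / 1.055) ** 2.4

# "Math in sRGB": averaging the encoded values 0 and 1 gives 0.5,
# but an sRGB value of 0.5 is only ~21% linear brightness:
print(srgb_to_linear((0.0 + 1.0) / 2))                  # ~0.214

# "Math in linear space": decode first, then average:
print((srgb_to_linear(0.0) + srgb_to_linear(1.0)) / 2)  # 0.5
```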

[b]What errors or misleading statements are there in the Microsoft and nVidia links that you've accused?[/b]

[quote name='Lifepower' timestamp='1347897416' post='4980915']So you are saying that the companies you have randomly selected and mentioned have something to do in decision-making regarding misleading usage of sRGB term? With this, you automatically decide that I'm wrong and you are right?[/quote]The companies I listed are responsible for the important nVidia page you've denounced and the design of D3D/GL, which you've denounced.

I decided that you're wrong because you're saying things that I know are wrong. I've worked on converting a lot of renderers from being "gamma ignorant" to performing proper gamma correction and linear-space lighting. You can't perform shading math in colour spaces like sRGB because of the non-linearity. This makes it fundamentally different from linear RGB colour spaces. This is the reason why it was so important to add them to D3D/GL.

[quote]You are moderator, so you are always right and if there is something you don't like, you rather attack the person[/quote]Moderators always being right is a ridiculous appeal to authority. We generally don't moderate threads that we've participated in either, so my ability to lock/hide abusive content is irrelevant.

I've explained where and why you were wrong, [b]which you've brushed off as nonsense, gibberish and senseless words.[/b] I think I've been quite polite regarding such condescension.
Despite 'appeal to popularity' being a fallacy, you do have to consider that perhaps you're [i]just wrong[/i] and you should go and re-read the sRGB wikipedia page -- Occam's razor and all that... but take it as a personal slight instead of reflecting on it if you must, or explain to me the flaw in the above math.

[quote name='Hodgman' timestamp='1347898336' post='4980926']
sRGB is defined as a non-linear transformation from a particular linear RGB colour space.
[/quote]
You are wrong. sRGB is an application of standardization to the RGB color space, and it is defined by three primaries in CIE XYZ color space. The transformation between linear and non-linear color spaces is an entirely different topic. I've already said this before.

[quote name='Hodgman' timestamp='1347898336' post='4980926'][b]What errors or misleading statements are there in the Microsoft and nVidia links that you've accused?[/b][/quote]
I've already said this in my earlier posts. The error is to mix gamma correction concepts together with the RGB and sRGB color spaces, trying to imply that at one point or another, when you "convert" or "transform" (or similar term) from one to the other, you need to do gamma correction, or that at some point gamma correction is applied. My suggested correction is that these are separate topics, and that the introduction of the sRGB texture format is poorly founded and the sRGB color space name is misused.

[quote name='Hodgman' timestamp='1347898336' post='4980926']
I decided that you're wrong because you're saying things that I know are wrong.[/quote]
Just because you think/decide/believe I'm wrong, it doesn't make you right. It just makes you superficial.

[quote name='Hodgman' timestamp='1347898336' post='4980926']Moderators always being right is a ridiculous appeal to authority. We generally don't moderate threads that we've participated in either, so my ability to lock/hide abusive content is irrelevant.[/quote]
I was not referring to actual thread moderation, but rather that you feel you are right because you are a moderator. Perhaps I'm wrong and maybe there are other reasons why you think you are automatically right.

[quote name='Hodgman' timestamp='1347898336' post='4980926']I've explained where and why you were wrong, [b]which you've brushed off as nonsense, gibberish and senseless words.[/b][/quote]
You have said that I'm wrong and failed to give any reasonable evidence to support your points, other than referring to popular belief, your own belief, and mixing my phrases with new words, among other things. I wouldn't mind if you only posted your own points, but copying my text and then adding stuff of your own with the purpose of misguiding the discussion is just uncool. I think you just don't like being seen as wrong on forums where you moderate. :)

[quote name='Hodgman' timestamp='1347898336' post='4980926']Despite 'appeal to popularity' being a fallacy, you do have to consider that perhaps you're [i]just wrong[/i] and you should go and re-read the sRGB wikipedia page -- Occam's razor and all that... but take it as a personal slight instead of reflecting on it if you must.
[/quote]
Why don't you follow your own advice? :) On that note, I might suggest that you don't limit your reading to [s]Facebook[/s]Wikipedia only.

P.S. you might want to read some earlier versions of Wikipedia [url="http://en.wikipedia.org/w/index.php?title=SRGB&oldid=88097522"]sRGB entry[/url]. The end result is that when sRGB is viewed on CRT, the viewed gamma appears as 2.2, but again, this is CRT/Display issue, not the space itself. Coincidence and consequence are two different things. Just because gamma is mentioned, it doesn't mean (non-S)RGB has different gamma. In fact, I think mentioning gamma in sRGB discussion is not relevant.

[quote name='Lifepower' timestamp='1347901258' post='4980945']
You are wrong. sRGB is an application of standardization to RGB color space and it is defined by three primaries in CIE XYZ color space. The transformation between linear and non-linear color spaces is entirely different topic. I've already said this before.
[/quote]If you don't believe wikipedia, check its references.
In this case, the sRGB standard is defined by IEC 61966-2-1:1999, which you can view a draft copy of for free [url="http://www.color.org/srgb.pdf"]here[/url]. Yes, it's defined by three XYZ primaries, [i]and[/i] a non-linear transformation of those primaries (which is similar to a "gamma 2.2" adjustment).

The OP was specifically asking about sRGB in OpenGL; you can read their definitions of the sRGB transform [url="http://www.opengl.org/registry/specs/EXT/texture_sRGB.txt"]here[/url] and [url="http://www.opengl.org/registry/specs/EXT/framebuffer_sRGB.txt"]here[/url].

Those three documents describe the same non-linear transforms that appear on wikipedia... but because I've linked to them on the internet instead of quoted an ISBN, you don't believe them?

So either you're saying these documents are wrong ([i]and that when i sample from an sRGB texture in my fragment shader, no non-linear transform of the texture data will take place[/i]) or that these documents are wrong to call this colour space sRGB, and they've actually misappropriated the name.
If the former, you can be refuted by experiment, if the latter, then it's irrelevant as the OP was asking about the "sRGB" space that's used by GL/D3D, which is also known as[b] IEC 61966-2-1:1999[/b].

I don't know what "sRGB" you're talking about, but what you've described is definitely [b]not[/b] IEC 61966-2-1.
Perhaps this whole time you've been describing the "linear RGB" space that's defined as an intermediate conversion between XYZ and sRGB, which sounds likely. The point is that we want to be doing our shading math in this linear RGB space ([i]at a high bit depth[/i]), but usually our input and output formats are sRGB, so we require the non-linear conversion ([i]which is approximate to "gamma 2.2" correction, as mentioned in the specification[/i]).

[quote]In fact, I think mentioning gamma in sRGB discussion is not relevant.[/quote]The fact is that in OpenGL and D3D sRGB and gamma [i]are[/i] related concepts, because as described in the above specification, sRGB is similar to a gamma 2.2 curve...

When I read a texel from an OpenGL sRGB texture, a non-linear transform described on the wikipedia page ([i]which can be approximated as x[sup]^2.2[/sup])[/i] is applied to it automatically.
When I write a pixel to an OpenGL sRGB render-target, a non-linear transform described on the wikipedia page ([i]which can be approximated as x[sup]^(1/2.2)[/sup][/i]) is applied to it automatically.

So if I assume that my input textures were authored in the sRGB space ([i]or were authored on a CRT, and I'm happy that CRTs are close enough to sRGB displays[/i]) and assume that the user's output display is an sRGB device, then by using sRGB textures and render-targets, I can perform all of my fragment-shading math in a linear colour space automatically ([i]but still have non-linear inputs and outputs[/i]), thanks to GL natively supporting these transforms.

This is the purpose of sRGB formats in OpenGL, as shown by the above OpenGL specifications.
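As a minimal sketch of that setup (not runnable on its own -- it assumes a valid GL context, and that [font=courier new,courier,monospace]width[/font]/[font=courier new,courier,monospace]height[/font]/[font=courier new,courier,monospace]pixels[/font] already exist), uploading the texture with an sRGB internal format is all it takes to get the automatic decode on sampling:

```c
/* Sketch only -- assumes a valid GL context and 8-bit RGBA pixel data.
 * GL_SRGB8_ALPHA8 tells GL that the texels are sRGB-encoded, so texture
 * fetches in the shader return linear values automatically. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8,
             width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
```

Sampling this texture in a fragment shader then yields linear values, while the bytes in memory stay sRGB-encoded.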

[quote]you feel that you are right because you are moderator[/quote]No, that's insulting. I'm quoting the sRGB standard, and you're saying I'm wrong, the standard is wrong, and nVidia, ATI and Microsoft are wrong too. That's pretty simple. Why are you so opposed to learning about sRGB?
[quote]P.S. you might want to read some earlier versions of Wikipedia [url="http://en.wikipedia.org/w/index.php?title=SRGB&oldid=88097522"]sRGB entry[/url].[/quote]That old version still describes the [b]exact same[/b] linear transformation from XYZ [url="http://i.imgur.com/GAF0P.png"]followed by a non-linear transformation[/url]!!! How can you post this stuff up, and still argue that it's a linear space? Now I think you're just trolling...
[quote]You have said that I'm wrong and failed to give any reasonable evidence to support your points[/quote]The wikipedia page that I linked to contains proper references, shown above. Where is your evidence that sRGB is just a linear transform from XYZ with no non-linear part to it?
On top of this obviously false claim, you've attacked the nVidia and Microsoft articles without stating any actual points against them or providing evidence.
You've made claims about the purpose/usefulness of sRGB resources in GL/D3D without providing any evidence to back them up too.
Read the sRGB specification already.
[quote]The end result is that when sRGB is viewed on CRT, the viewed gamma appears as 2.2,[/quote]The display has nothing to do with it -- arguing about what a signal looks like when plugged into a display of a different colour space is irrelevant.
e.g. oh I sent the HSV bytes of [0,0,100] on my RGB CRT and it came out Blue!
Yes, CRTs often work in a vague "RGB gamma 2.2" colour space -- however, this is actually a good approximation of the sRGB colour space ([i]sRGB was inspired by CRTs[/i]), so sRGB images look almost correct when viewed on these displays... In theory, though, to display an sRGB image correctly you should decode the sRGB signal and re-encode it in the monitor's colour space ([i]but in practice, with 8-bit inputs, this will do more harm than good[/i]) -- but I assume you already know this, e.g.
[code]// decode sRGB to linear (IEC 61966-2-1)
if (srgb <= 0.04045)
    linear = srgb / 12.92;
else
    linear = pow((srgb + 0.055) / 1.055, 2.4);
// re-encode for a gamma-2.2 CRT
CRT = pow(linear, 1.0/2.2);[/code]


If you still don't believe that doing math in curved spaces is invalid, or that sRGB is a curved space, despite the specification saying so, try it for yourself:
* Pick any two XYZ colours, A[sub]xyz[/sub] and B[sub]xyz[/sub].
* Convert them to "linear RGB" (as defined in the first part of sRGB specification) to get A[sub]linear[/sub] and B[sub]linear[/sub].
* Compute their average C[sub]linear[/sub].
* Convert A[sub]linear[/sub] and B[sub]linear[/sub] to sRGB following the full sRGB specification to get A[sub]srgb[/sub] and B[sub]srgb[/sub].
* Compute their average C[sub]srgb[/sub].
* Convert C[sub]srgb[/sub] back into "linear RGB" and compare against C[sub]linear[/sub] -- It will be very wrong in most cases.

The reason the GPU hardware now supports sRGB as a native colour space is so that we can use sRGB data for storage and display while performing our math in a linear colour space, without having to pay the cost of constantly transforming back and forth between the two colour spaces (the hardware makes the conversion 'free'). This is a huge deal because, as the checkerboard image from earlier shows, math in sRGB space makes no sense. Edited by Hodgman

Thanks for the very detailed information!

[quote name='Hodgman' timestamp='1347879730' post='4980838']
If your render-target is created as an sRGB texture, then when you write to it, the hardware will perform the linear->sRGB conversion when writing values from your pixel shader automatically.
[/quote]

It seems easy to do this (looking at OpenGL specifically now): use a Frame Buffer Object with a target texture of format GL_SRGB8_ALPHA8 (which is a required format). The only caveat is that the transformation to sRGB color space should be done last, and I would prefer not to have a dummy draw into an FBO just to get this transformation. I don't think it is possible to associate the attribute "sRGB" with the default frame buffer?

You can do glEnable(GL_FRAMEBUFFER_SRGB), but it only takes effect if GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING of the destination buffer is GL_SRGB.

Of course, there is always the possibility of doing the transformation yourself, in the shader. But automatic built-in functionality is sometimes more optimized.
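A minimal sketch of that check-and-enable, assuming a valid GL context with the default framebuffer bound (not runnable on its own):

```c
/* Sketch only -- queries whether the default framebuffer's back buffer
 * is sRGB-encoded, and enables the automatic linear->sRGB encode if so. */
GLint encoding = 0;
glGetFramebufferAttachmentParameteriv(GL_DRAW_FRAMEBUFFER, GL_BACK_LEFT,
        GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING, &encoding);
if (encoding == GL_SRGB)
    glEnable(GL_FRAMEBUFFER_SRGB); /* shader outputs get sRGB-encoded on write */
```

If the query doesn't report GL_SRGB (e.g. the context wasn't created with an sRGB-capable pixel format), the enable has no effect and you'd have to fall back to converting in the shader.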

It's possible in DX, so it should be possible in GL as well. Although typically in the very last step you want to do the transformation yourself, so that you allow the user to tweak the curve slightly in order to compensate for the gamma of their display.

[quote name='Hodgman' timestamp='1347940554' post='4981126']
CRT = pow(linear, 1/2.2);
[/quote]
This would be an approximation of the sRGB algorithm, which is
[code]
if (linear <= 0.0031308)
    CRT = linear * 12.92;
else
    CRT = 1.055 * pow(linear, 1.0/2.4) - 0.055;
[/code]
For high values they are close; for low values they start to diverge, especially at very low values. What is the reason for using this approximation? I suppose one reason could be shader performance.

It may simply be that the lower values will not be noticed in games. But that would seem to contradict the purpose of having extra resolution in the low-value interval of sRGB. Actually, my game has some banding problems when drawing the outer limits of a spherical fog in a dark environment (while still not taking sRGB color space into account).

Values sampled from 8-bit textures only have a resolution of 1/255 ≈ 0.0039, so the linear segment below 0.0031308 would never be reached in the sRGB conversion unless some form of HDR is used.

[quote name='larspensjo' timestamp='1348049656' post='4981620']What is the reason for using this approximation?[/quote]In that code, I was assuming that the input values were in non-linear sRGB, but the desired output was "CRT RGB" (linear RGB with gamma 2.2) -- the code was decoding from sRGB to linear, and then re-encoding the linear values to "CRT RGB". i.e. it was an [font=courier new,courier,monospace]sRGB->CRT[/font] conversion function.
N.B. this isn't a very useful thing to do in practice, because if this operation is done with 8-bits inputs and outputs, you'll just be destroying information. If displaying 8-bit sRGB images on a CRT, it would be best to avoid doing the right thing™ ([i]which is converting the data into the display's colour encoding[/i]) and just output the sRGB data unmodified, because "CRT RGB" and sRGB are so similar.

In a game, if you had sRGB textures and the user had a CRT monitor, the right thing to do would be to sRGB decode the textures to linear, do your shading in linear ([i]at high precision, e.g. float/half[/i]) and then encode the result with x[sup]1/2.2[/sup] to compensate for the CRT's natural response of x[sup]2.2[/sup].
However, most users [b]don't[/b] have a CRT, so it's best to encode the final result with the standard sRGB curve instead of the CRT gamma 2.2 curve. Edited by Hodgman
