larspensjo

OpenGL
Linear color space

13 posts in this topic

I am trying to learn about when to use, and when not to use, linear and non-linear color space. This is how I understand it; please correct me where I am wrong or my understanding is incomplete:[list]
[*]Many texture manipulations need to be done in linear space, e.g. anti-aliasing and lighting calculations.
[*]Many tools, like Photoshop, save pictures in a non-linear format by default.
[*]You can specify sRGB as a bitmap format (e.g. GL_SRGB in OpenGL to glTexImage2D), and the graphics driver (or the hardware?) will automatically transform the bitmap you sample from non-linear to linear.
[*]If you transform it yourself, you do that by setting each color component to c^2.2. This would be an approximation of the sRGB curve.
[*]You can transform each color channel independently of the others.
[*]As a last step, when outputting the pixels to the screen, you need to transform them back into non-linear space, using c^(1/2.2) for each channel (a rough sketch of both conversions follows after this list).
[*]The value 2.2 depends on the display you use. It looks like Apple uses 1.8.
[/list]
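For example, this is how I imagine the manual conversion would look in a fragment shader -- just a sketch of my current understanding, using the 2.2 approximation rather than the exact sRGB curve:
[code]
// Approximate decode: non-linear (gamma 2.2) -> linear, per channel
vec3 toLinear(vec3 c)
{
    return pow(c, vec3(2.2));
}

// Approximate encode: linear -> non-linear, as the last step before display
vec3 toGamma(vec3 c)
{
    return pow(c, vec3(1.0 / 2.2));
}
[/code]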
I am not sure about the glGenerateMipmap() function. Will it take the linear/non-linear attribute (SRGB) into account when applying the filter functions?

Is the approximation above good enough for showing pixels on the screen, or does the exact sRGB encoding need to be used?

Is there any automatic support in the hardware for the final pixel transformation to show on screen?

In most examples and tutorials "out there", you don't find any gamma correction being done. So either I don't understand this, or there is a general lack of understanding elsewhere (or something in between :-).

It sounds like you understand it correctly :)
[quote name='larspensjo' timestamp='1347870871' post='4980814']In most example and tutorials "out there", you don't find any gamma correction being done. So either I don't understand this, or there is a general lack of understanding elsewhere[/quote]Yes, "gamma correct rendering" has only become popular in real-time graphics over roughly the past 5 years (guessing), along with the growth in popularity of HDR and physically-based lighting. A lot of older real-time rendering literature ignores gamma issues.
[quote]I am trying to learn about when to use, and when to not use, linear and non-linear color space[/quote]You can think of sRGB as a "compression" format that reduces the number of bits required to store an image without colour banding. Humans are good at differentiating between colours, especially dark ones. The sRGB curve allocates "more bits" to representing darker colours, which means 8-bit images have less noticeable colour banding in dark areas.
I've seen it stated that to get the same precision in dark areas, a linear image would need 10-16 bits.
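To put rough numbers on that (my own back-of-the-envelope figures, using the standard sRGB decode): the smallest non-zero 8-bit sRGB code, 1/255 ≈ 0.0039, lies on the linear toe of the curve and decodes to about 0.0039 / 12.92 ≈ 0.0003 in linear space. A plain linear format needs roughly 12 bits (about 3300 levels) before its smallest step is that fine, which lines up with figures like 10-16 bits.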
The bad thing is that doing math in curved spaces is difficult -- we're used to flat spaces, like the [url="http://en.wikipedia.org/wiki/Number_line"]number-line[/url], or the [url="http://en.wikipedia.org/wiki/Cartesian_coordinate_system"]cartesian plane[/url], but "gamma spaces" ([i]like sRGB, or "gamma 2.2"[/i]) aren't flat, they're curved.
[quote]I am not sure about the glGenerateMipmap() function. Will it take the linear/non-linear attribute (SRGB) into account when applying the filter functions?[/quote]OpenGL and DX10/11 should do this correctly (convert to linear, downsample, convert back to sRGB), but DX9 does not.
[quote]You can specify SRGB as a bitmap format (e.g. GL_SRGB in OpenGL to glTexImage2D), and the graphic drivers (or the hardware?) will automatically transform the bitmap you sample from non-linear to linear.[/quote]When you sample from the texture, the texture-fetch hardware will apply the inverse sRGB curve to convert it from 8-bit sRGB to floating-point linear.
[quote]As a last step, outputting the pixels to the screen, you need to transform it back into non-linear space, using c^(1/2.2) for each channel.[/quote]If your render-target is created as an sRGB texture, then when you write to it, the hardware will automatically perform the linear->sRGB conversion on the values coming from your pixel shader.
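In OpenGL terms, the sampling side of this is just a matter of choosing an sRGB internal format when creating the texture -- a rough sketch (width/height/pixels are placeholders):
[code]
/* 8-bit sRGB texture: the GPU decodes texels to linear floats on every fetch */
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
[/code]
The render-target side works the same way: create the attachment with an sRGB internal format ([i]e.g. GL_SRGB8_ALPHA8[/i]), and in GL also call glEnable(GL_FRAMEBUFFER_SRGB) so the hardware performs the linear->sRGB encode on write.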
It's common to just assume that the user's monitor is an sRGB monitor (because it's pretty much *the* standard), but yes, a lot of monitors aren't actually sRGB -- I've seen gamma 2.4 and gamma 1.8 monitors before. To get the correct appearance on these monitors, it would be better to manually convert to e.g. gamma 1.8 rather than to sRGB.
[quote]Many tools, like Photoshop, save pictures in non-linear format by default[/quote]If you're painting a picture in an application that doesn't do any "gamma correction", then the data in your file is in the same "gamma space" as your monitor.
That is to say -- the image will only look the way that I saw it (while painting it) if it's displayed on another monitor with the same gamma curve. If I paint my artwork on a gamma-1.8 screen and then view it on an sRGB screen, it will look different ([i]because the original data was painted in the "gamma 1.8 space"[/i]).
For this reason, it's common for game studios to buy expensive calibration equipment to make sure that [i]all[/i] of their artists' monitors are correctly calibrated to the sRGB curve. Edited by Hodgman

Just wanted to clarify this one. For some reason, they have used "sRGB" to denote "linear color space" in DirectX and OpenGL, which are just two separate things.

Indeed, you can convert from linear to non-linear color spaces and vice-versa by using [url="http://en.wikipedia.org/wiki/Gamma_correction"]Gamma correction[/url].

RGB color space by itself lacks any standard or definition, so sRGB was proposed as a standard, which is defined by specifying [url="http://en.wikipedia.org/wiki/White_point"]white point[/url] and three [url="http://en.wikipedia.org/wiki/Chromaticities"]chromaticities[/url]. For instance, there is also [url="http://en.wikipedia.org/wiki/Wide-gamut_RGB_color_space"]Wide gamut RGB[/url], [url="http://en.wikipedia.org/wiki/Adobe_RGB_color_space"]Adobe RGB[/url] and so on.

Now, to convert from one color space to another where the color gamut differs, you would need to convert your initial color space to [url="http://en.wikipedia.org/wiki/CIE_XYZ"]CIE XYZ[/url] using a linear transformation, and then to the desired color space.

This is why it is simply wrong to call sRGB "linear" and non-sRGB "non-linear" and do the conversion between both using gamma correction. In reality, both typical RGB and sRGB may or may not be linear.

In fact, typically, you can assume that your RGB color space is [b]actually linear[/b]. You don't need to voluntarily apply any gamma correction there. Since it lacks a standard definition, you can simply assume that when you work with RGB, you work in sRGB, or in Adobe RGB - whatever your choice is. In order to properly standardize your color space, you would need to convert it to one of the (supposedly) perceptually uniform color spaces such as [url="http://en.wikipedia.org/wiki/CIELAB"]CIELAB[/url], [url="http://en.wikipedia.org/wiki/CIELUV"]CIELUV[/url], DIN99, ATD95, [url="http://en.wikipedia.org/wiki/CIECAM"]CIECAM[/url], or at least CIE XYZ, which can represent all colors visible to the human eye, unlike RGB, which is limited to a triangle in the CIE diagram.

Now, the problem is that most LCD displays apply heavy gamma correction to the input image. Not only that, they may also pre-process images and oversaturate them. Why? To sell better, since higher contrast and crisper images appear prettier, but in the end you receive a very distorted image. [u]This is not your problem, it is a problem of display manufacturers and vendors![/u] You simply can't make an application that will predict all of the monitors out there, so it's their responsibility to render the final image as accurately as possible.

I don't know why they introduced "sRGB" into DirectX and OpenGL - after all, supposedly, you are already working in sRGB and it's the display's job to properly represent the input sRGB data so that the output strictly conforms to sRGB, or any other standard. If you do gamma correction in your application - well, you still don't know how the display is going to re-transform your image data, so in the end you may actually get less accurate results.

My guess is that they introduced so-called "sRGB" in APIs just for the hype of it, e.g.: "We can now store textures and front-buffer in gamma-adjusted format! WOW!" (like we couldn't do it back in 1969).

You may check some of the following bibliography to learn more about different color spaces (you can see by the dates that this is a well-studied topic, yet it seems that the people making changes in the DirectX/OpenGL standards regarding sRGB have never read them):

1. Poynton, Charles. Digital Video and HDTV Algorithms and Interfaces. Morgan Kaufmann, 2003.
2. Poynton, Charles. "Frequently-Asked Questions about Color." [url="http://www.poynton.com/ColorFAQ.html"]http://www.poynton.com/ColorFAQ.html[/url]
3. Hill, Francis S. Computer Graphics using OpenGL. Prentice Hall, 2000.
4. Hearn, Donald, and Pauline M. Baker. Computer Graphics, C Version. Prentice Hall, 1996.
5. Luo, Ronnier M., Guihua Cui, and Changjun Li. "Uniform Colour Spaces Based on CIECAM02 Colour Appearance Model." Color Research & Application (Wiley InterScience) 31, no. 4 (June 2006): 320-330.
6. Lindbloom, Bruce J. "Accurate Color Reproduction for Computer Graphics Applications." Computer Graphics 23, no. 3 (July 1989): 117-126.
7. Brewer, C. A. "Color Use Guidelines for Data Representation." Proceedings of the Section on Statistical Graphics. Alexandria VA: American Statistical Association, 1999. 55-60.
8. MacAdam, David L. "Visual Sensitivities to Color Differences in Daylight." Journal of the Optical Society of America 32, no. 5 (May 1942): 247-273.
9. Schanda, Janos. Colorimetry: Understanding the CIE system. Wiley Interscience, 2007.
10. Pratt, William K. Digital Image Processing. 3rd Edition. Wiley-Interscience, 2001.
11. Jack, Keith. Video Demystified: A Handbook for the Digital Engineer. 5th Edition. Fremont, CA: Newnes, 2007.

[quote name='Lifepower' timestamp='1347888622' post='4980867']
Just wanted to clarify this one. For some reason, they have used "sRGB" to denote "linear color space" in DirectX and OpenGL, which are just two separate things.[/quote]No, sRGB is [b]not[/b] a linear color space, so your accusations levelled against DirectX/OpenGL are wildly inaccurate.

For why it was so important to add sRGB support to GPUs, read this primer: [url="http://http.developer.nvidia.com/GPUGems3/gpugems3_ch24.html"]http://http.develope...gems3_ch24.html[/url]

Yes, the [url="http://en.wikipedia.org/wiki/SRGB"]sRGB[/url] standard does define a standard linear RGB space ([i]as an intermediate step[/i]) based on a standardized red/green/blue linear transform from [url="http://en.wikipedia.org/wiki/CIE_1931_color_space"]CIE XYZ[/url] space...
...but sRGB is a curved "gamma corrected" transformation of these standard linear RGB values.
sRGB isn't even a simple "gamma correction" transform. It's similar to gamma correction of 2.2, but it's actually a piecewise transform with a linear toe at the bottom and a gamma of 2.4 at the top.
Linear RGB to sRGB = [url="http://www.wolframalpha.com/input/?i=Piecewise%5B%7B%7Bx*12.92%2C+x%3C%3D0.0031308+%26+x%3E%3D0%7D%2C%7B1.055*%28x%5E%281%2F2.4%29%29-0.055%2C+x%3E0.0031308+%26+x%3C%3D1%7D%7D%5D"]http://www.wolframal...31308 & x<=1}}][/url]
sRGB to linear RGB = [url="http://www.wolframalpha.com/input/?i=Piecewise%5B%7B%7Bx%2F12.92%2C+x%3C%3D0.04045+%26+x%3E%3D0%7D%2C%7B%28%28x%2B0.055%29%2F1.055%29%5E2.4%2C+x%3E0.04045+%26+x%3C%3D1%7D%7D%5D"]http://www.wolframal...04045 & x<=1}}][/url]
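Spelled out as per-channel shader code, those same piecewise curves look roughly like this (a sketch; inputs are assumed to be in [0,1]):
[code]
float linear_to_srgb(float x)
{
    return (x <= 0.0031308) ? x * 12.92
                            : 1.055 * pow(x, 1.0 / 2.4) - 0.055;
}

float srgb_to_linear(float x)
{
    return (x <= 0.04045) ? x / 12.92
                          : pow((x + 0.055) / 1.055, 2.4);
}
[/code]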

It's the standard color space for the WWW, and it's being pushed as a standard color space for TVs, computer monitors, cameras, etc... If a display performs sRGB "gamma correction" on the signal, then that's a good thing -- they're supposed to assume that the input signal is sRGB ([i]~gamma 2.2[/i]) and adjust voltages accordingly to produce the appropriate perceptually linear luminance response.

Yes, it's true that different monitors will do different, wacky things to the input signal, but the world is getting saner in this regard thanks to most manufacturers agreeing to adopt a single gamma standard. The right thing™ to do these days is to perform all of your math in a linear color space, and then output an sRGB signal, unless asked not to. Edited by Hodgman

[quote name='Hodgman' timestamp='1347889890' post='4980871']
No, sRGB is [b]not[/b] linear color space so your accusations levelled against DirectX/OpenGL are wildly inaccurate.[/quote]
No, my accusations against sRGB in DirectX/OpenGL are based on the fact that the conversion between RGB and sRGB is presented in terms of gamma correction, while in reality RGB and sRGB may actually be the same thing. In any case, you cannot convert between the two using gamma correction, so [url="http://msdn.microsoft.com/en-us/library/windows/desktop/bb173460%28v=vs.85%29.aspx"]this SDK article[/url], for instance, is misleading.

[quote name='Hodgman' timestamp='1347889890' post='4980871']For why it was so important to add sRGB support to GPUs, read this primer: [url="http://http.developer.nvidia.com/GPUGems3/gpugems3_ch24.html"]http://http.develope...gems3_ch24.html[/url]
[/quote]
Have you read the article yourself? The article you provided can be used as an exercise in spotting logical fallacies. Begging the question and the fallacy of composition are amongst the first ones visible.

[quote name='Hodgman' timestamp='1347889890' post='4980871']Yes, the [url="http://en.wikipedia.org/wiki/SRGB"]sRGB[/url] standard does define a standard linear RGB space ([i]as an intermediate step[/i]) based on a standardized red/green/blue linear transform from [url="http://en.wikipedia.org/wiki/CIE_1931_color_space"]CIE XYZ[/url] space...[/quote]
So, you've repeated what I've said, then added the phrase "as an intermediate step" (to what, by the way?), and now you are saying that:

[quote name='Hodgman' timestamp='1347889890' post='4980871']...but sRGB is a curved "gamma corrected" transformation of these standard linear RGB values.
sRGB isn't even a simple "gamma correction" transform. It's similar to gamma correction of 2.2, but it's actually a piecewise transform with a linear toe at the bottom and a gamma of 2.4 at the top.
[/quote]
Nonsense. sRGB is just a color space, nothing more. It's not a "gamma correction of 2.2", let alone a "piecewise transform with a linear toe at [gibberish]". Proof by verbosity is a logical fallacy (but you already know that), please don't do that.

[quote name='Hodgman' timestamp='1347889890' post='4980871']It's the standard color space for the WWW, and it's being pushed as a standard color space for TVs, computer monitors, cameras, etc... If a display performs sRGB "gamma correction" on the signal, then that's a good thing -- they're supposed to assume that the input signal is sRGB ([i]~gamma 2.2[/i]) and adjust voltages accordingly to produce the appropriate perceptually linear luminance response.[/quote]
Two separate arguments. Yes, sRGB is a standard and popular color space. But starting from "display performs sRGB gamma correction" - just a senseless manipulation of words.

[quote name='Lifepower' timestamp='1347892637' post='4980887']
So, you've repeated what I've said, then added phrase "as an intermediate step" (to what, by the way?) and now you are saying that
...
Nonsense. sRGB is just a color space, nothing more. It's not "gamma correction of 2.2",
[/quote]No, you misread - the sRGB standard defines two colour spaces -- one is a linear RGB colour-space, which is used as an intermediate between CIE XYZ and sRGB-proper.
Once you've got colour data in this "linear RGB" space, you can perform the above transforms on it to get the values into the non-linear sRGB space.

Regarding sRGB being similar to "gamma 2.2" -- the above functions to convert to/from linear/sRGB ([i]the "piecewise gibberish"[/i]) can be approximated by x^2.2 and x^(1/2.2) ([i]i.e. the regular "gamma correction" process with a power of 2.2[/i]).

Linear RGB colour spaces can be used to describe physical quantities of energy, not just colours. If I've got 100 "units" of photons at the "red" wavelength and send them through a half-silvered-mirror so I end up with only half of them, then I've now got "50" units of red photons. This kind of math does not work in non-linear spaces like sRGB.

Likewise if I've got a black/white checker pattern (0 & 255) and mathematically average it, I get a "50% grey" image (127). In a linear color-space, this value is exactly half as bright as the original white squares. However, in non-linear spaces this math doesn't work. For example, in sRGB 127 is ~21% as bright as 255. If you down-scale an sRGB image of a black/white checkerboard, the resulting colour should be ~187 (which corresponds to "half as bright as white").
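To make the 187 concrete (my own arithmetic, using the piecewise sRGB curve posted above, so the figures are approximate): sRGB 255 and 0 decode to linear 1.0 and 0.0; their true average is linear 0.5; encoding that back gives 1.055 * 0.5^(1/2.4) - 0.055 ≈ 0.735, i.e. about 187 out of 255. Averaging the sRGB codes directly gives 127, which decodes to only ((127/255 + 0.055)/1.055)^2.4 ≈ 0.21 linear.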

e.g. the left half of this image is resized by performing the naive math ([i]averaging sRGB values directly resulting in 127[/i]).
The right half performs the math correctly ([i]convert sRGB values to linear, average to 127, then convert back to sRGB, resulting in 187[/i]).
If I squint at the image from a distance ([i]to manually average the black/white pattern in my eye[/i]), the right-hand side looks almost uniform in brightness, but the left-hand side is obviously too dark.
[img]http://www.4p8.com/eric.brasseur/gamma_bad_2.2_0.5_imagemagick.jpg[/img]

The linked article from nVidia shows the disastrous consequences from trying to perform math in a non-linear colour space.
[quote]while in reality RGB and sRGB may actually be the same thing[/quote]Yes, RGB is a loose term so it could mean anything.
But in rendering we deal with linear-RGB spaces, and non-linear RGB spaces ([i]such as "gamma 2.2" and sRGB[/i]).
I posted the equations to transform between "linear RGB" and sRGB above, so when we talk about them in rendering they're definitely not the same - one is mathematically linear and one is not!
[quote]Have you read the article yourself? The article you provided can be used as an exercise to find out logical fallacies. Begging the question and fallacy of composition are amongst the first ones visible.[/quote]Wow. That article describes the basics for achieving physically correct math in a renderer. What is your problem with it?[quote]you cannot convert between the two using gamma correction, so this SDK article, for instance, is misleading
[/quote]What's your problem with that article?
Maybe if you're disagreeing with nVidia, Microsoft, Khronos, and the GL ARB ([i]3Dlabs, Apple, ATI, Dell, IBM, Intel, SGI, Sun[/i]), the problem is actually that they know what they're doing and you're refusing to read wikipedia to catch up? ([i]the argumentum ad verecundiam fallacy, I know -- but while I'm here, what makes you an expert over them?[/i])
[quote]Yes, sRGB is a standard and popular color space. But starting from "display performs sRGB gamma correction" - just a senseless manipulation of words.[/quote]The display has to calibrate its internal voltages so that when it receives a value of 255 it outputs at maximum luminosity, at 187 it outputs half-maximum luminosity, and at 127 it outputs at 21% of maximum luminosity. That's the sRGB correction that the monitor must perform. Edited by Hodgman

[quote name='Hodgman' timestamp='1347894192' post='4980893']
Maybe if you're disagreeing with nVidia, Microsoft, Khronos, and the GL ARB ([i]3Dlabs, Apple, ATI, Dell, IBM, Intel, SGI, Sun[/i]), the problem is actually that they know what they're doing and you're refusing to read wikipedia to catch up?[/quote]
This is [url="http://en.wikipedia.org/wiki/Argumentum_ad_populum"]Argumentum ad populum[/url].

The articles I've mentioned in my post are verified and have passed scientific review (by several council members), while you provide your opinions backed up by your own words, some stuff on the Internet, and popular belief.

So you are saying that the companies you have randomly selected and mentioned have something to do with the decision-making behind the misleading usage of the sRGB term? And with this, you automatically decide that I'm wrong and you are right?

[quote name='Hodgman' timestamp='1347894192' post='4980893']([i]Appeal to authority fallacy, I know -- but while I'm here, what does make you an expert over them?[/i])[/quote]
Yes, it's an appeal to authority. You are a moderator, so you are always right, and if there is something you don't like, you attack the person (appeal to the person fallacy) rather than providing sound arguments in the discussion. While we're here - I don't consider myself an authority and there are many things in the world that I don't know or understand, and I'm humble about it. Yes, my master's and doctoral theses were on practical applications of color theory in mobile systems, and I have published 12 council-reviewed scientific works on different color spaces and their applications, so this is why I have something to say about it. I could always be mistaken, as could the people who review and judge my work, but while I try to base my points on proven facts, you try to prove something by use of Wikipedia, popular folklore and your Moderator badge.

Please, I know your intentions in answering the OP's question were good; I just tried to clarify things, as something that made its way into the SDK is not necessarily correct. You don't have to defend something blindly just because I've pointed out a misconception in the Microsoft manual.

[quote name='Lifepower' timestamp='1347897416' post='4980915']You don't have to defend something blindly just because I've pointed out a misconception in the Microsoft manual.[/quote]You said "[i]For some reason, they have used "sRGB" to denote "linear color space" in DirectX and OpenGL[/i]", which is absolutely 100% false, so it is a statement that should be criticized. Which of your references backs up this statement of yours?

sRGB is defined as a non-linear transformation from a particular linear RGB colour space.

Also, you've said [[i]brackets mine[/i]]:
Indeed, you can convert from linear [[i]RGB[/i]] to non-linear [[i]sRGB[/i]] color spaces and vice-versa by using Gamma correction.
[i]And then:[/i]
In any case, you cannot convert between the two [[i]RGB and sRGB[/i]] using gamma correction, so this SDK article, for instance, is misleading.

The above checkerboard image and the linked nVidia article explain why you cannot perform your shading math in curved spaces such as sRGB, and thus why sRGB values have to be decoded to linear values for shading ([i]and possibly re-encoded to sRGB for display or storage[/i]).

Here's the short form of linear vs non-linear:
Math in sRGB: [font=courier new,courier,monospace](0+1)/2=0.22[/font]
Math in any linear space: [font=courier new,courier,monospace](0+1)/2=0.5[/font]

[b]What errors or misleading statements are there in the Microsoft and nVidia links that you've criticized?[/b]

[quote name='Lifepower' timestamp='1347897416' post='4980915']So you are saying that the companies you have randomly selected and mentioned have something to do with the decision-making behind the misleading usage of the sRGB term? And with this, you automatically decide that I'm wrong and you are right?[/quote]The companies I listed are responsible for the nVidia page and the design of D3D/GL, both of which you've denounced.

I decided that you're wrong because you're saying things that I know are wrong. I've worked on converting a lot of renderers from being "gamma ignorant" to performing proper gamma correction and linear-space lighting. You can't perform shading math in colour spaces like sRGB because of the non-linearity. This makes it fundamentally different from linear RGB colour spaces. This is the reason why it was so important to add them to D3D/GL.

[quote]You are moderator, so you are always right and if there is something you don't like, you rather attack the person[/quote]Moderators always being right is a ridiculous appeal to authority. We generally don't moderate threads that we've participated in either, so my ability to lock/hide abusive content is irrelevant.

I've explained where and why you were wrong, [b]which you've brushed off as nonsense, gibberish and senseless words.[/b] I think I've been quite polite regarding such condescension.
Despite 'appeal to popularity' being a fallacy, you do have to consider that perhaps you're [i]just wrong[/i] and you should go and re-read the sRGB wikipedia page -- Occam's razor and all that... but take it as a personal slight instead of reflecting on it if you must, or explain to me the flaw in the above math.

[quote name='Hodgman' timestamp='1347898336' post='4980926']
sRGB is defined as a non-linear transformation from a particular linear RGB colour space.
[/quote]
You are wrong. sRGB is an application of standardization to the RGB color space, and it is defined by three primaries in the CIE XYZ color space. The transformation between linear and non-linear color spaces is an entirely different topic. I've already said this before.

[quote name='Hodgman' timestamp='1347898336' post='4980926'][b]What errors or misleading statements are there in the Microsoft and nVidia links that you've criticized?[/b][/quote]
I've already said this in my earlier posts. The error is to mix gamma correction concepts with the RGB and sRGB color spaces, trying to imply that at one point or another, when you "convert" or "transform" (or similar term) from one to the other, you need to do gamma correction, or that at some point gamma correction is applied. My suggested correction is that these are separate topics, that the introduction of the sRGB texture format is poorly founded, and that the sRGB color space name is misused.

[quote name='Hodgman' timestamp='1347898336' post='4980926']
I decided that you're wrong because you're saying things that I know are wrong.[/quote]
Just because you think/decide/believe I'm wrong, it doesn't make you right. It just makes you superficial.

[quote name='Hodgman' timestamp='1347898336' post='4980926']Moderators always being right is a ridiculous appeal to authority. We generally don't moderate threads that we've participated in either, so my ability to lock/hide abusive content is irrelevant.[/quote]
I was not referring to actual thread moderation, but rather to the feeling that you are right because you are a moderator. Perhaps I'm wrong and maybe there are other reasons why you think you are automatically right.

[quote name='Hodgman' timestamp='1347898336' post='4980926']I've explained where and why you were wrong, [b]which you've brushed off as nonsense, gibberish and senseless words.[/b][/quote]
You have said that I'm wrong and failed to give any reasonable evidence to support your points, other than referring to popular belief, your own belief, and mixing my phrases with new words, among other things. I wouldn't mind if you only posted your own points, but copying my text and then adding stuff of your own with the purpose of misguiding the discussion is just uncool. I think you just don't like being seen as wrong on forums where you moderate. :)

[quote name='Hodgman' timestamp='1347898336' post='4980926']Despite 'appeal to popularity' being a fallacy, you do have to consider that perhaps you're [i]just wrong[/i] and you should go and re-read the sRGB wikipedia page -- Occam's razor and all that... but take it as a personal slight instead of reflecting on it if you must.
[/quote]
Why don't you follow your own advice? :) On that note, I might suggest that you don't limit your reading to [s]Facebook[/s]Wikipedia only.

P.S. You might want to read some earlier versions of the Wikipedia [url="http://en.wikipedia.org/w/index.php?title=SRGB&oldid=88097522"]sRGB entry[/url]. The end result is that when sRGB is viewed on a CRT, the viewed gamma appears as 2.2, but again, this is a CRT/display issue, not an issue of the space itself. Coincidence and consequence are two different things. Just because gamma is mentioned, it doesn't mean (non-s)RGB has a different gamma. In fact, I think mentioning gamma in an sRGB discussion is not relevant.

Thanks for the very detailed information!

[quote name='Hodgman' timestamp='1347879730' post='4980838']
If your render-target is created as an sRGB texture, then when you write to it, the hardware will perform the linear->sRGB conversion when writing values from your pixel shader automatically.
[/quote]

It seems to be easy to do this (looking at OpenGL specifically now): use a Frame Buffer Object with a target texture object of format GL_SRGB8_ALPHA8 (which is a required format). The only caveat is that the transformation to sRGB color space should be done last, and I would prefer not to have a dummy draw into an FBO just to get this transformation. I don't think it is possible to associate the attribute "SRGB" with the default frame buffer?

You can do glEnable(GL_FRAMEBUFFER_SRGB), but only if the value GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING of the destination buffer is GL_SRGB.

Of course, there is always the possibility of doing the transformation yourself, in the shader. But automatic built-in functionality is sometimes more optimized.
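For reference, the FBO route I have in mind would be roughly the following (an untested sketch; width and height are placeholders and error checking is omitted):
[code]
GLuint colourTex, fbo;
glGenTextures(1, &colourTex);
glBindTexture(GL_TEXTURE_2D, colourTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colourTex, 0);

/* The conversion on write only takes effect if the attachment's colour encoding is GL_SRGB */
GLint encoding = 0;
glGetFramebufferAttachmentParameteriv(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                      GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING, &encoding);
if (encoding == GL_SRGB)
    glEnable(GL_FRAMEBUFFER_SRGB);
[/code]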

It's possible in DX, so it should be possible in GL as well. Although typically in the very last step you want to do the transformation yourself, so that you allow the user to tweak the curve slightly in order to compensate for the gamma of their display.
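For example, the final full-screen pass could apply the curve itself, with the exponent exposed as a user setting -- just a sketch (u_sceneLinear and u_displayGamma are made-up names):
[code]
uniform sampler2D u_sceneLinear;   // scene rendered in linear space
uniform float     u_displayGamma;  // ~2.2 by default, tweakable by the user

varying vec2 v_uv;

void main()
{
    vec3 linearColour = texture2D(u_sceneLinear, v_uv).rgb;
    gl_FragColor = vec4(pow(linearColour, vec3(1.0 / u_displayGamma)), 1.0);
}
[/code]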

[quote name='Hodgman' timestamp='1347940554' post='4981126']
CRT = pow(linear, 1/2.2);
[/quote]
This would be an approximation of the sRGB algorithm, which is
[code]
if (linear <= 0.0031308)
CRT = linear * 12.92;
else
CRT = 1.055 * pow(linear, 1/2.4) - 0.055;
[/code]
For high values, they are close. For low values, they start to diverge, especially very low values. What is the reason for using this approximation? I suppose one reason could be shader performance.
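To get a feel for how far apart they are, I computed a few values (my own quick numbers, so treat them as approximate):
[code]
linear   exact sRGB   pow(linear, 1.0/2.2)
0.5      ~0.735       ~0.730
0.01     ~0.100       ~0.123
0.001    ~0.013       ~0.043
[/code]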

Possibly, it may be that the lower values will not be noticed in games. But that would seem to contradict the purpose of having extra resolution in the low-value interval of sRGB. Actually, my game has some banding problems when drawing the outer limits of a spherical fog in a dark environment (while still not taking the sRGB color space into account).

Sampled values from 8-bit textures only have a resolution of 1/255 ≈ 0.0039, so the lowest, linear part of the sRGB curve would hardly be exercised unless some form of HDR is used.

[quote name='larspensjo' timestamp='1348049656' post='4981620']What is the reason for using this approximation?[/quote]In that code, I was assuming that the input values were in non-linear sRGB, but the desired output was "CRT RGB" (RGB encoded with a gamma of 2.2) -- the code was decoding from sRGB to linear, and then re-encoding the linear values to "CRT RGB". i.e. it was an [font=courier new,courier,monospace]sRGB->CRT[/font] conversion function.
N.B. this isn't a very useful thing to do in practice, because if this operation is done with 8-bit inputs and outputs, you'll just be destroying information. If displaying 8-bit sRGB images on a CRT, it would be best to avoid doing the right thing™ ([i]which is converting the data into the display's colour encoding[/i]) and just output the sRGB data unmodified, because "CRT RGB" and sRGB are so similar.

In a game, if you had sRGB textures and the user had a CRT monitor, the right thing to do would be to sRGB-decode the textures to linear, do your shading in linear ([i]at high precision, e.g. float/half[/i]) and then encode the result with x^(1/2.2) to compensate for the CRT's natural response of x^2.2.
However, most users [b]don't[/b] have a CRT, so it's best to encode the final result with the standard sRGB curve instead of the CRT gamma 2.2 curve. Edited by Hodgman