[DX11] Why we need an sRGB back buffer


Hi,

 

after reading a couple of resources on the web about gamma correction, I still feel confused.

 

In my experiment, the pixel shader simply outputs a linear gradient to the backbuffer.

 

 - First case: the backbuffer format is not sRGB; the linear gradient value is output without any modification:

[attachment=22107:ng.jpg]

 

 - Second case: the backbuffer format is sRGB; the linear gradient value is output without any modification:

[attachment=22104:g1.jpg]

 

 - Third case: the backbuffer format is sRGB; the value is output with a correction of pow(u, 1/2.2):

[attachment=22105:g1div2.2.jpg]

 

 - Fourth case: the backbuffer format is sRGB; the value is output with a correction of pow(u, 2.2):

[attachment=22106:g2.2.jpg]

 

As you can see, the first and last results are almost the same. So my question is: why do we need an sRGB backbuffer plus a modified final output in the pixel shader, if we can simply use a non-sRGB format? The result is almost the same:

[attachment=22108:pixcmp.jpg]
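
A minimal sketch of the kind of pixel shader described above (uv.x drives the gradient; the commented-out lines are the variants for the third and fourth cases):

// Pixel shader: output a horizontal gradient to the backbuffer.
float4 main(float4 pos : SV_POSITION, float2 uv : TEXCOORD) : SV_TARGET
{
    float u = uv.x;                  // linear gradient, 0..1 across the screen

    float g = u;                     // cases 1 and 2: no modification
    //float g = pow(u, 1.0 / 2.2);   // case 3: manual gamma encode
    //float g = pow(u, 2.2);         // case 4: manual gamma decode

    return float4(g, g, g, 1.0);
}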

Hi, in this simple case it can look the same, but the whole point of linear vs. gamma is what happens in calculations, e.g. when values are multiplied or added together. Normally you would expect that 1 + 1 = 2 and 1 * 1 = 1, e.g. when you double a light's intensity or blend two lights together, the result should have doubled brightness; but gamma space is not linear, so 1 + 1 can be 3.
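
A quick sketch of that effect, approximating the display curve as a plain 2.2 gamma (the function name and numbers are only illustrative):

// "1 + 1 can be 3": doubling a light in gamma space vs. in linear space.
float3 DoubleBrightness()
{
    float lin     = 0.25;                    // linear light intensity
    float encoded = pow(lin, 1.0 / 2.2);     // ~0.533 once gamma-encoded

    float wrong = saturate(encoded * 2.0);   // ~1.066 -> clips to 1.0 in an 8-bit target
    float right = pow(lin * 2.0, 1.0 / 2.2); // ~0.730: double in linear, then encode

    return float3(encoded, wrong, right);
}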


Hello Ashaman73, as I understand it you are talking about the sRGB color space and HDR, but my question is about the advantage of an sRGB backbuffer + pow(u, 2.2) over a non-sRGB format + direct output.

 

What I can guess from the comparison image (the last one in my first post) is that the advantage lies in the precision of the gamma curve applied to the final image. With an sRGB backbuffer + pow(u, 2.2) it is more precise. Right? Are there any other advantages?


Have you read the article "The Importance of Being Linear"?  It does a pretty good job of explaining why you need gamma correction, including the situations when you should use it and when you shouldn't.  I applaud the OP's willingness to experiment, but in this case it seems like you don't quite get the high-level concept just yet, so please try to read through that article and come to a mathematical reasoning for doing this; then the correct operation will be quite clear.

 

Hi Jason,

 

I have read this article (thank you for the link anyway) and understand the mathematical reasoning behind the gamma correction process. Sampling sRGB images and correcting them for further linear calculations is all clear to me. All intermediate calculations should be output to buffers without any correction. That is also clear to me.

 

The misunderstanding is actually with the sRGB backbuffer. I thought that an sRGB backbuffer is like a JPEG in the sRGB color space, meaning that all values in the sRGB backbuffer are already gamma corrected (pow(value, 1/2.2)). If so, then final color values should be output with pow(value, 1/2.2) correction. But no, it seems the sRGB backbuffer is the opposite of what I thought. Furthermore, the final color value should be output with pow(value, 1/2.2) correction for non-sRGB backbuffers, right?



I thought that sRGB backbuffer is like JPEG in sRGB color space, meaning that all values in sRGB buffer are already Gamma Corrected (pow(value, 2.2)). If so, then final color values should be output with pow(value, 2.2) correction. But no, it seems the sRGB backbuffer is the opposite of what I thought.
No, the display/monitor does the pow(value, 2.2) itself, in the display hardware.

If you do the pow(value, 2.2) yourself, then you end up seeing pow(pow(value, 2.2), 2.2) after the display emits the picture.
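
A sketch of the full round trip, treating sRGB as a plain 2.2 gamma for simplicity (the function names are only illustrative):

// What the monitor ends up emitting for each setup.
float DisplayEmits(float stored) { return pow(stored, 2.2); }     // done by the monitor
float HardwareEncode(float lin)  { return pow(lin, 1.0 / 2.2); }  // ~what an sRGB target does on write

// sRGB backbuffer + linear shader output:
//   DisplayEmits(HardwareEncode(lin)) == lin                 -> correct
// non-sRGB backbuffer + manual pow(lin, 1/2.2) in the shader:
//   DisplayEmits(pow(lin, 1.0 / 2.2)) == lin                 -> correct
// sRGB backbuffer + manual pow(lin, 2.2) (the fourth case above):
//   the manual pow and the hardware encode roughly cancel, so the
//   monitor applies its 2.2 curve to raw linear values -- the same
//   wrong image as the first case.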


 


I thought that sRGB backbuffer is like JPEG in sRGB color space, meaning that all values in sRGB buffer are already Gamma Corrected (pow(value, 2.2)).

 

I meant 1/2.2, not 2.2. I've already corrected my previous post. Sorry for that.


As you can see, the first and last results (non-sRGB + direct output, and sRGB + pow(u, 2.2)) are almost the same. So my question is: why do we need an sRGB backbuffer plus a modified final output in the pixel shader, if we can simply use a non-sRGB format?

I'd like to point out that only your second image is "correct"; that is because you performed the "manual" conversion incorrectly. Maybe an older post of mine can clear up any lingering confusion. It's not complicated, but it often trips people up. To answer the question, the advantage of an sRGB renderbuffer is that it performs the actual linear-to-sRGB conversion, not a gamma approximation, and by using it instead of a pow() instruction you are less likely to make a mistake, as you did.

 

Erm, if the backbuffer is sRGB then you shouldn't be applying ANY changes to the values you are writing out; you should be writing linear values and allowing the hardware to do the conversion to sRGB space when it writes the data.

The correct versions are either:
linear maths in shader => sRGB buffer
or
linear maths in shader => pow(2.2) => non-sRGB buffer 8bit/channel image

Anything else is wrong.
(Also, keep in mind sRGB isn't just a pow(2.2) curve; it has a toe at the low end to 'boost' the dark colours.)

 

That should be:

 

linear maths in shader => pow(1 / 2.2) => non-sRGB buffer 8bit/channel image

 

And that is only correct insofar as it is a close-ish approximation to sRGB.
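
For reference, a sketch of the exact linear-to-sRGB encode from the sRGB specification, including the toe near black that a pure power curve misses:

// Exact linear -> sRGB encode, with the linear segment near black.
float LinearToSrgb(float lin)
{
    return (lin <= 0.0031308)
        ? lin * 12.92
        : 1.055 * pow(lin, 1.0 / 2.4) - 0.055;
}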


linear maths in shader => pow(1 / 2.2) => non-sRGB buffer 8bit/channel image


Yeah, my bad; if I were doing it by hand in shader code I'd have double-checked that, but as I'd normally leave things linear (16F or RGB10A2) or have an sRGB source/dest, I tend not to hold the number in my brain ;)

Anyway, the post is corrected now, so anyone finding it won't be confused.


Okay, the implementation of gamma correction as well as the sRGB backbuffer format is all clear to me now.

 

Great post, Chris:

 

http://www.gamedev.net/topic/652795-clarifications-gamma-correction-srgb/#entry5127278

 

Hodgman, your post was also very helpful, thanks. One point is not clear to me. It is about the "mathematically linear" vs. "perceptually linear" distinction:

The reason you think the first image is 'correct' is because "mathematically linear" is not the same as "perceptually linear". In order to perform correct lighting and shading calculations, or to be able to reproduce the same photograph that we captured earlier, we need all the data to be mathematically linear.

 

  - Why does the linear gradient that I see on screen look non-linear with gamma correction (the second case in my first post)? I compared it visually with a linear gradient that I made in Photoshop. Same width in pixels, yet it looks different on screen. Is there some Photoshop trick?


I've never used this gamma correction thing and I'd like to make sure I understand correctly.

I have to use sRGB diffuse/color textures, non-sRGB normal textures, and an sRGB backbuffer to make it gamma correct?

 

I've tried to compare results and here's what I've got.

sRGB backbuffer:

[attachment=22203:Screenshot 2014-06-18 13.01.56.png]

Non-sRGB backbuffer:

[attachment=22204:Screenshot 2014-06-18 13.02.04.png]

 

With texture:

sRGB backbuffer and texture:

[attachment=22205:Screenshot 2014-06-18 13.27.31.png]

Non-sRGB backbuffer and non-sRGB texture:

[attachment=22206:Screenshot 2014-06-18 13.27.56.png]

Texture source: http://minecraftworld.files.wordpress.com/2011/05/earth_flat_map.jpg
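
A minimal sketch of the flow that setup gives (illustrative resource names; the _SRGB texture and backbuffer formats make the hardware decode on sample and encode on write):

Texture2D    diffuseTex : register(t0); // created with an _SRGB format: decoded to linear on sample
Texture2D    normalTex  : register(t1); // created with a plain UNORM format: raw data, no decode
SamplerState samp       : register(s0);

float4 main(float4 pos : SV_POSITION, float2 uv : TEXCOORD) : SV_TARGET
{
    float3 albedo = diffuseTex.Sample(samp, uv).rgb;                      // already linear here
    float3 n = normalize(normalTex.Sample(samp, uv).xyz * 2.0 - 1.0);     // normals are data, not colours

    float3 lit = albedo * saturate(dot(n, normalize(float3(1, 1, -1)))); // lighting in linear space

    return float4(lit, 1.0); // write linear; the sRGB backbuffer encodes on write
}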

 

So the sRGB variants are the correct ones?


Assuming that the textures were painted on an sRGB monitor, and you're viewing the results on an sRGB monitor, then yep.

 

N.B. the sharper terminator line on the first sphere matches real-life physics much better than the soft gradient in the second image, too.

Why does the linear gradient that I see on screen look non-linear with gamma correction (the second case in my first post)?

The way that we humans perceive light is not linear.

 

e.g. if you're in a room lit by one light bulb and you turn on a second bulb, then physically/objectively/mathematically speaking the room is now twice as bright, but subjectively a person won't say that it looks twice as bright (their perception doesn't match the objective truth). You might have to turn on 10 light bulbs before the person says that the room looks twice as bright as it did initially.

Likewise, if you get a gradient that looks linear/smooth and then measure it using a light-meter, you'll find that it's probably logarithmically curved!

 

I'm not sure how Photoshop creates gradients, but from what I remember there are several "smoothing" options to produce results that are perceived as looking nice.

 

The point of "being gamma correct" is mostly so that we can match how physics/maths works. When we add two lights on top of each other, we need them to behave the way they work in the real world.

