Reading linearized Texture as sRGB, then doing Gamma conversion: Different result


Hi,

I just switched my Direct3D9 renderer to linear color.

 

As a test, I rendered a quad with an image. The quad itself uses no shaders. In the sampler state, I'm setting D3DSAMP_SRGBTEXTURE to 1.

Before presenting, I copy the rendertarget to the backbuffer with a gamma adjustment of 2.2. Here is the gamma adjustment shader: ("Texture" refers to the rendertarget as rendered previously).



sampler Texture = sampler_state
{
	SRGBTexture = false;    // read the render target values as-is (no sRGB decode)
};


float4 GammaAdjustOnly( float2 Tex : TEXCOORD0 ) : COLOR0
{
	// Approximate linear -> gamma conversion (2.2) before presenting.
	return pow(tex2D(Texture, Tex), 1.0/2.2);
}

technique TAdjustGamma
{
	pass P0
	{
		PixelShader = compile ps_2_0 GammaAdjustOnly();
	}
}

This should lead to the exact same result as before, when I didn't use sRGB textures. However, there is a difference: When using sRGB, the image looks more grey-ish and a bit brighter.

 

I am not using sRGB writes; instead, I'm rendering to an FP16 buffer. To confirm that I hadn't messed anything else up, I removed the gamma adjustment and the sRGB sampler state while leaving everything else the same. Sure enough, the output looks correct then.

 

Where does the difference come from? After all, "(pixel ^ 2.2) ^ (1 / 2.2)" should equal "pixel", right?

 

Here are some more details: This is on Windows 7, with Direct3D9 via SlimDX, running in windowed mode.

Edited by Tubos


pow(x, 2.2) is an approximation of the sRGB decoding process, but that's not exactly what the hardware does. The real transfer function is piecewise: a linear segment near black and a power segment with an exponent of 2.4 elsewhere, so the effective gamma varies slightly across the range. Check out http://en.wikipedia.org/wiki/SRGB for the exact definition.
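For reference, here is a small HLSL sketch (not from the original post; the helper names are just illustrative) of the exact piecewise conversions next to the common 2.2 approximation:

// Exact sRGB -> linear (decode), per the piecewise definition linked above:
// a linear segment near black, then a power segment with exponent 2.4.
float3 SRGBToLinearExact(float3 c)
{
	float3 lo = c / 12.92;
	float3 hi = pow((c + 0.055) / 1.055, 2.4);
	return lerp(hi, lo, step(c, 0.04045));   // pick lo where c <= 0.04045
}

// Exact linear -> sRGB (encode), the inverse of the above.
float3 LinearToSRGBExact(float3 c)
{
	float3 lo = c * 12.92;
	float3 hi = 1.055 * pow(c, 1.0 / 2.4) - 0.055;
	return lerp(hi, lo, step(c, 0.0031308)); // pick lo where c <= 0.0031308
}

// The common approximations, which differ slightly from the exact curves.
float3 SRGBToLinearApprox(float3 c) { return pow(c, 2.2); }
float3 LinearToSRGBApprox(float3 c) { return pow(c, 1.0 / 2.2); }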


Ah, that means DirectX does an exact sRGB conversion, while my shader uses the approximated version.

 

So a good solution would be to disable the sRGB sampler state, and do the pow(x, 1.0 / 2.2) manually in every pixel shader, right?

Then I'm working with slightly incorrect texture values in the linear space, but the final output should look identical to the input image.


Well, typically it would be better to let the GPU do the conversion, since there is dedicated hardware for it (i.e. it's free, whereas pow isn't). But if it suits your needs better, do as you wish.

 

Cheers!


So a good solution would be to disable the sRGB sampler state, and do the pow(x, 1.0 / 2.2) manually in every pixel shader, right?

 

No, don't apply gamma in the pixel shader unless you have a good reason to. That's why sRGB formats exist.

Edited by Chris_F


The best solution would be to use sRGB source images, let the hardware convert to linear and store in the offscreen buffer, do any blending etc. in the linear buffer, then send to an sRGB backbuffer, allowing the hardware to convert back to sRGB (which is what your monitor [nominally] is).
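A minimal effect-file sketch of that setup might look like the following (the texture, sampler, and technique names are illustrative, not from the original post): the hardware decodes the sRGB texture to linear on read, the shader works purely in linear space, and SRGBWriteEnable re-encodes to sRGB on write.

texture SceneTexture;

sampler SceneSampler = sampler_state
{
	Texture     = <SceneTexture>;
	SRGBTexture = true;     // hardware sRGB -> linear on sample
};

float4 LinearPassPS( float2 Tex : TEXCOORD0 ) : COLOR0
{
	// All shading/blending here operates on linear values.
	return tex2D(SceneSampler, Tex);
}

technique TLinearPipeline
{
	pass P0
	{
		SRGBWriteEnable = true;    // hardware linear -> sRGB on write
		PixelShader     = compile ps_2_0 LinearPassPS();
	}
}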

Edited by mark ds


Ah, that means DirectX does an exact sRGB conversion, while my shader uses the approximated version.

 

So a good solution would be to disable the sRGB sampler state, and do the pow(x, 1.0 / 2.2) manually in every pixel shader, right?

Then I'm working with slightly incorrect texture values in the linear space, but the final output should look identical to the input image.

 

You should use D3DRS_SRGBWRITEENABLE when you render; that way you don't need to do the pow(x, 1.0/2.2) approximation in your shader.


The best solution would be to use sRGB source images, let the hardware convert to linear and store in the offscreen buffer, do any blending etc. in the linear buffer, then send to an sRGB backbuffer, allowing the hardware to convert back to sRGB (which is what your monitor [nominally] is).

Ok! I'm now doing that, and the result looks exactly right :)

 

One thing still puzzles me. Most HDR samples I see do not use sRGBWrite, but perform the pow in their tonemapping instead. Why is that? Is there an advantage to doing it manually?


Sticking to sRGB, whilst technically the correct thing to do, is not always the final look an artist wants (they're a fussy bunch!). So, for the final output, whatever looks best is the right thing to do, meaning a manual pow before presentation (or, more likely, a series of pre-calculated lookups, often involving 3D textures).
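A hypothetical HLSL sketch of the lookup approach: the desired output curve (gamma plus any grading) is baked offline into a small volume texture and applied in the final pass. ColorGradeLUT, the sampler name, and the 16x16x16 size are assumptions for illustration only.

texture ColorGradeLUT;    // e.g. a 16x16x16 volume texture baked offline

sampler3D ColorGradeSampler = sampler_state
{
	Texture  = <ColorGradeLUT>;
	AddressU = Clamp;
	AddressV = Clamp;
	AddressW = Clamp;
};

float3 ApplyColorGrade(float3 color)
{
	// The shaded colour indexes directly into the baked 3D lookup.
	return tex3D(ColorGradeSampler, color).rgb;
}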

 

On the other hand, things like photo editing packages would normally stay strictly within sRGB (or some other specified colour space).

Edited by mark ds


Thank you!

 

According to the CardCaps.xls file included with the DirectX SDK, sRGB Writes and Reads on A8R8G8B8 surfaces are supported on almost all hardware except some released in 2002 (Geforce 4 MX 420). Is that true, or do recent adapters exist which don't support it?


One thing still puzzles me. Most HDR samples I see do not use sRGBWrite, but perform the pow in their tonemapping instead. Why is that? Is there an advantage to doing it manually?

Ideally, everyone's TV/monitor would follow the sRGB standard exactly, but unfortunately, many don't. Worse, many do, but the factory default settings use extreme contrast/gamma, and the user is unlikely to go through the settings and activate sRGB mode :(

Because of this, lots of games show you a 'calibration' image and a slider to choose a gamma value (e.g. "Move the slider until the left image is barely visible").
If you do the final gamma correction manually, you can use this custom gamma value.

On the console games I've worked on, we routinely test on many different monitors and TVs, and have found that most look ok with gamma values all the way from 1.8 to 2.4!
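A minimal HLSL sketch of applying such a user-chosen value in the final pass (SceneSampler and UserGamma are hypothetical names; the constant would be driven by the calibration slider):

sampler SceneSampler;      // the linear (e.g. FP16) scene render target
float   UserGamma = 2.2;   // set from the calibration slider at runtime

float4 FinalGammaPS( float2 Tex : TEXCOORD0 ) : COLOR0
{
	float3 linearColor = tex2D(SceneSampler, Tex).rgb;
	// Manual encode with the user's preferred gamma instead of SRGBWriteEnable.
	return float4(pow(linearColor, 1.0 / UserGamma), 1.0);
}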

According to the CardCaps.xls file included with the DirectX SDK, sRGB Writes and Reads on A8R8G8B8 surfaces are supported on almost all hardware except some released in 2002 (Geforce 4 MX 420). Is that true, or do recent adapters exist which don't support it?

As of D3D10 it became a required feature, so all modern cards will support it.
