sRGB on diffuse textures or gbuffer color texture?

Not sure I follow this one - isn't the whole point to be able to run multiple passes? For example, run several point light passes, an ambient pass and maybe a directional light pass in sequence and accumulate the output color? Isn't that why you need to work in linear space, so you can additively blend the output from the various lighting passes? And then you need a texture that is in linear space, which only gets converted to sRGB at the very end, after all lighting is done, when it's copied into the backbuffer?

I didn't say you don't do multiple passes. I was trying to say that all the lighting calculations that use the diffuse buffer are done in the pixel shader of the last pass. Yes, you do use multiple passes and separate light buffers for different light types, and then blend those light buffers together, but that is not what the article is talking about in the paragraphs you quoted. Those light buffers do not store any color information AFAIK - they just store light intensities, which are always linear, so there's no question of them having to be transformed to/from linear. You only need to do gamma correction on the diffuse buffers.

Would someone else please correct this if I am wrong again?

I think ajmiles' answer above pretty much clears up the confusion though... I suspected that the article is probably bogus (for OpenGL too, not just D3D), but I didn't know why exactly. I always set the _SRGB format on the source textures anyway.

And please ignore what I said about the formats... I've been away from D3D for a while.


First of all, you're going to want a lighting buffer format that lets you store values > 1.0, so R16G16B16A16_UNORM is not what you're after. That same format as _FLOAT would do the trick, but DXGI_FORMAT_R11G11B10_FLOAT would also work well.

Why would you want values larger than 1.0? Wouldn't the same RGB values, normalized to [0.0, 1.0], give the same result?

Also, does this not mess up lighting, since the diffuse textures are UNORM and when sampled in the shader they are still in the [0.0, 1.0] range (there is no equivalent float format for an SRV)? So you would only output color at the very low end, and everything would be very dark as a result?

Secondly, you really do want your diffuse textures sampled as sRGB. You may think that there's no point in going from sRGB -> Linear -> sRGB again, but once you factor in the fact that you're bilinearly / anisotropically filtering these textures, it no longer makes mathematical sense to perform linear filtering on non-linear data. When the hardware performs sRGB -> Linear decoding it doesn't take an average of the 4 sRGB-encoded texels and then convert the result; it converts the 4 texels from sRGB to linear *and then* averages them. Work it out on paper with a few examples and you'll see that the two orders of operation are not equivalent. In areas of contrast in the texture this can make a big difference. sRGB decoding and encoding is free on all hardware that I know of in this era, so use it.
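Here's a minimal sketch (plain C++ rather than shader code, just to make the arithmetic runnable) of the "work it out on paper" suggestion above. The two texel values and the standard IEC 61966-2-1 decode curve are illustrative; the point is that averaging the encoded values and then decoding gives a different answer than decoding first and then averaging, which is what the hardware does with an _SRGB format.

```cpp
#include <cmath>
#include <cstdio>

// Convert one sRGB-encoded channel value in [0,1] to linear light.
double srgbToLinear(double s)
{
    return (s <= 0.04045) ? s / 12.92 : std::pow((s + 0.055) / 1.055, 2.4);
}

int main()
{
    // Two neighbouring texels in a high-contrast region, stored as sRGB.
    double a = 0.1, b = 0.9;

    // Wrong order: filter the encoded values, then decode the result.
    double averageThenDecode = srgbToLinear((a + b) * 0.5);

    // Right order (what the hardware does): decode each texel, then filter.
    double decodeThenAverage = (srgbToLinear(a) + srgbToLinear(b)) * 0.5;

    std::printf("average then decode: %f\n", averageThenDecode); // ~0.214
    std::printf("decode then average: %f\n", decodeThenAverage); // ~0.399
}
```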

Completely forgot about filtering amidst all this, good point!

Those light buffers do not store any color information AFAIK - they just store light intensities, which are always linear,

Lights have colour, don't forget; the intensity of each channel isn't necessarily equal.

First of all, you're going to want a lighting buffer format that lets you store values > 1.0, so R16G16B16A16_UNORM is not what you're after. That same format as _FLOAT would do the trick, but DXGI_FORMAT_R11G11B10_FLOAT would also work well.

Why would you want values larger than 1.0? Wouldn't the same RGB values, normalized to [0.0, 1.0], give the same result?

Also, does this not mess up lighting, since the diffuse textures are UNORM and when sampled in the shader they are still in the [0.0, 1.0] range (there is no equivalent float format for an SRV)? So you would only output color at the very low end, and everything would be very dark as a result?

If you don't have the ability to store values > 1 in your lighting buffer then you're going to lose information in areas of the scene where the total light intensity exceeds 1. Light intensity isn't stored on a scale of "Black" to "White"; there's no limit to how bright something can appear with enough light (or lights) shining at it. There is no maximum light intensity that you can map to the value of 1, you need a scale where 0 is "no light" and the upper limit is (within reason) unbounded.

Imagine a light of intensity '1.0' shining directly at a white wall. Where dot(N, L) is 1, the resulting colour is "White * 1.0f", which is still 1.0f. Now add a second light shining at the same area of the wall, now you have twice the illumination because you have two lights. Logically the light intensity on this area of the wall is now 2.0 rather than 1.0, so you need a texture format that can store values that exceed 1.

Your textures are still UNORM, as they represent values between Black and White, not "no light" to some unbounded intensity (although an Emissive texture may want to be HDR). It's the calculation of texture [0->1] * light intensity [0->inf] that results in HDR values, not the diffuse textures themselves.
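A minimal sketch of the two-lights-on-a-white-wall example above, in plain C++ so the numbers can be checked directly. The albedo and light values are the invented ones from the post; the point is that the albedo stays in [0,1] while the accumulated lighting does not, and a UNORM target would clamp the result on write.

```cpp
#include <algorithm>
#include <cstdio>

int main()
{
    float albedo = 1.0f;   // white wall, sampled from a UNORM diffuse texture
    float nDotL  = 1.0f;   // light hits the surface head-on

    // Two identical lights of intensity 1.0 shining at the same spot.
    float lightIntensity = 1.0f * nDotL + 1.0f * nDotL;    // = 2.0

    float lit = albedo * lightIntensity;                    // = 2.0, an HDR value

    // A UNORM render target clamps on write, so the extra energy is lost;
    // a float format keeps it around for tonemapping later.
    float storedInUnorm = std::min(lit, 1.0f);
    std::printf("linear result %.1f, UNORM-clamped result %.1f\n",
                lit, storedInUnorm);
}
```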

Adam Miles - Principal Software Development Engineer - Microsoft Xbox Advanced Technology Group

Why would you want values larger than 1.0? Wouldn't the same RGB values, normalized to [0.0, 1.0], give the same result?

Yes of course, but you only know afterwards how big the range is. How do you normalize without knowing?

That is why you choose a format that allows values larger than 1.0 first, and then do some kind of histogram calculation (with some extra twists, so e.g. a single ultra-bright pixel doesn't screw everything up), and then you do tonemapping, which maps everything to a [0.0, 1.0] range. That way, you handle exposure in much the same way as your eyes do, too (only your eyes are a bit better at it!).
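A minimal sketch of that exposure-then-tonemap idea, assuming a simple log-average ("geometric mean") luminance and a Reinhard-style operator rather than the full histogram with outlier rejection described above. The luminance values and the 0.18 "key" are made-up illustrative numbers.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

int main()
{
    // Pretend HDR luminance values read back from the lighting buffer.
    std::vector<float> lum = { 0.02f, 0.5f, 1.3f, 4.0f, 25.0f };

    // Log-average luminance; the epsilon keeps log() away from zero.
    double sumLog = 0.0;
    for (float l : lum) sumLog += std::log(1e-4 + l);
    double avgLum = std::exp(sumLog / lum.size());

    // Scale the scene so the average maps to a chosen "key" value,
    // then apply Reinhard to squash the result into [0,1).
    const double key = 0.18;
    for (float l : lum)
    {
        double scaled = l * key / avgLum;
        double mapped = scaled / (1.0 + scaled);    // Reinhard tonemap
        std::printf("%6.2f -> %.3f\n", l, mapped);
    }
}
```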

Why would you want values larger than 1.0? Wouldn't the same RGB values, normalized to [0.0, 1.0], give the same result?

Either:
Use float (and be aware of the max float value your format can store - e.g. for 16F it's a tad over 60k).
Or:
Use UNORM, by dividing by some large number at the end of your lighting shaders to remap into the 0-1 range (and multiplying by the same large number later when reading from the lighting buffer in the tonemapping shader).

UNORM + manual scaling gives a linear precision distribution and allows you to choose your own max value.
FLOAT has a fixed max value, and 16F wastes one bit storing the sign, which you don't need, but it gives a logarithmic precision distribution, i.e. more bits are dedicated to storing dark colours than bright colours, just like sRGB, which can be a great thing as this is also how the eye works.
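A minimal sketch of the UNORM + manual scaling option described above, in plain C++. The maximum intensity of 64.0 and the function names are made-up choices; pick whatever scale suits your scene.

```cpp
#include <algorithm>
#include <cstdio>

const float kMaxIntensity = 64.0f;   // chosen max value for the scene (illustrative)

// End of the lighting shader: remap [0, kMaxIntensity] into [0,1]
// so a UNORM target can store it.
float encodeForUnorm(float linearIntensity)
{
    return std::min(linearIntensity / kMaxIntensity, 1.0f);
}

// Tonemapping shader: read the UNORM value back and undo the scale.
float decodeFromUnorm(float storedValue)
{
    return storedValue * kMaxIntensity;
}

int main()
{
    float intensity = 2.0f;                        // HDR value from the lighting pass
    float stored    = encodeForUnorm(intensity);   // 0.03125, fits in a UNORM target
    float restored  = decodeFromUnorm(stored);     // back to 2.0 for tonemapping
    std::printf("stored %.5f, restored %.1f\n", stored, restored);
}
```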

In my experience, the 10/11 bit formats (float and unorm) don't have enough precision to pull off good HDR without introducing colour banding.
n.b. 8bit sRGB is equivalent to approx 10bit UNORM linear, so that leaves you approximately no headroom for dynamic range :(

In my experience, the 10/11 bit formats (float and unorm) don't have enough precision to pull off good HDR without introducing colour banding.

n.b. 8bit sRGB is equivalent to approx 10bit UNORM linear, so that leaves you approximately no headroom for dynamic range :(

If you don't have extreme dynamic range, I have found that 11 bits is just enough with some careful dithering. At least it worked for us in Hardland.
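One way to read "careful dithering" is adding noise smaller than one quantization step before the value is quantized, so banding in slow gradients turns into fine grain. Here's a minimal sketch of that idea; the 6-bit depth, the uniform noise source and the gradient values are all illustrative, not what Hardland actually does.

```cpp
#include <cmath>
#include <cstdio>
#include <cstdlib>

// Quantize a [0,1] value to 'bits' of precision, optionally dithered.
float quantize(float v, int bits, bool dither)
{
    float levels = float((1 << bits) - 1);
    if (dither)
    {
        // Uniform noise spanning one quantization step, centred on zero.
        float noise = (std::rand() / float(RAND_MAX)) - 0.5f;
        v += noise / levels;
    }
    return std::round(v * levels) / levels;
}

int main()
{
    // A slow gradient that would band badly at low bit depth:
    // without dither every sample lands on the same level,
    // with dither the output flickers between the two nearest levels.
    for (int i = 0; i <= 10; ++i)
    {
        float v = 0.5f + i * 0.001f;
        std::printf("%.4f -> plain %.4f, dithered %.4f\n",
                    v, quantize(v, 6, false), quantize(v, 6, true));
    }
}
```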


tonemgub, on 25 Aug 2015 - 5:09 PM, said:

Those light buffers do not store any color information AFAIK - they just store light intensities, which are always linear,


Lights have colour, don't forget; the intensity of each channel isn't necessarily equal.

The per-channel intensities are still linear though. You could use an sRGB texture to store them, but what's the point if all your calculations are in linear space?


tonemgub, on 25 Aug 2015 - 5:09 PM, said:

Those light buffers do not store any color information AFAIK - they just store light intensities, which are always linear,


Lights have colour, don't forget; the intensity of each channel isn't necessarily equal.

The per-channel intensities are still linear though. You could use an sRGB texture to store them, but what's the point if all your calculations are in linear space?

Because sRGB targets can be used to store whatever you want, linear or non-linear. The curve applied to the values on reads and writes acts as a good, free/cheap means of retaining precision for smaller values while sacrificing it for bigger values. Any value that somehow contributes to the final perceptual colour (either the textures themselves or the per-pixel lighting values) is a good candidate for this. The reason is that the human eye is significantly better at perceiving small differences between closely matched dark colours than between similarly spaced bright colours.

There's probably a good evolutionary benefit to being able to see the predator in the shadows rather than being able to pick apart very bright colours that are already very visible. For that reason, if we don't have as many bits available for storage as we'd like, it's better to dedicate them to dark colours/intensities than to those at the other end of the scale.
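A minimal sketch of that precision argument: compare the size of one 8-bit quantization step near black and near white, for linear UNORM storage versus sRGB-encoded storage (using the standard IEC 61966-2-1 decode curve). The exact step sizes are just the outcome of that formula, not anything hardware-specific.

```cpp
#include <cmath>
#include <cstdio>

double srgbToLinear(double s)
{
    return (s <= 0.04045) ? s / 12.92 : std::pow((s + 0.055) / 1.055, 2.4);
}

int main()
{
    // Linear UNORM: every step is the same size in linear light.
    double linearStep = 1.0 / 255.0;

    // sRGB: the step between the two darkest codes is tiny in linear light,
    // the step between the two brightest codes is comparatively large.
    double darkStep   = srgbToLinear(1.0 / 255.0)   - srgbToLinear(0.0);
    double brightStep = srgbToLinear(255.0 / 255.0) - srgbToLinear(254.0 / 255.0);

    std::printf("linear UNORM step : %.6f\n", linearStep);   // ~0.003922
    std::printf("sRGB step (dark)  : %.6f\n", darkStep);     // ~0.000304
    std::printf("sRGB step (bright): %.6f\n", brightStep);   // ~0.0089
}
```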

Adam Miles - Principal Software Development Engineer - Microsoft Xbox Advanced Technology Group
