Apologies for something of a cross-post with the Xbox Live DX11 forum, but it's fairly dead these days.
I'm currently mulling over how exactly I want to implement brightness and contrast adjustments in my (D3D11) renderer. After perusing the commonly suggested techniques, I see 3 basic approaches:
1) The easiest, lowest-runtime-cost option is IDXGIOutput::SetGammaControl, via the DXGI_GAMMA_CONTROL::Offset and ::Scale members. The first obvious drawback is that it only takes effect in fullscreen exclusive mode. Further, it ignores any custom color profile the user has applied to their monitor's gamma ramp (there are entire forums dedicated to tracking games that use SetGammaControl, and to working around them). I think the color-profile issue could be avoided by calling the GDI GetDeviceGammaRamp function and translating those values into DXGI_GAMMA_CONTROL::GammaCurve. Lastly, this functionality may not even be supported by some drivers, although I have no idea whether that's still a concern on modern hardware.
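To sketch what I mean by combining the two: the idea would be to resample the 256-entry per-channel ramp that GetDeviceGammaRamp returns into the 1025 entries of DXGI_GAMMA_CONTROL::GammaCurve, applying the offset/scale on top of the profile's curve rather than replacing it. A minimal C++ sketch of just the ramp math (the function name and nearest-neighbor sampling are my own; a real implementation would interpolate and handle the three channels):

```cpp
#include <algorithm>
#include <array>
#include <cassert>

// Hypothetical sketch: take one channel of the 256-entry ramp returned by
// GDI's GetDeviceGammaRamp (WORD values 0..65535), resample it to the 1025
// entries DXGI_GAMMA_CONTROL::GammaCurve expects, and bake the desired
// brightness offset and contrast scale into the curve itself, so the user's
// color profile is preserved instead of being stomped by a flat Scale/Offset.
std::array<float, 1025> BakeRamp(const std::array<unsigned short, 256>& gdiRamp,
                                 float offset, float scale)
{
    std::array<float, 1025> curve{};
    for (int i = 0; i < 1025; ++i)
    {
        // Sample the GDI ramp at this position (nearest-neighbor for brevity).
        int src = (i * 255) / 1024;
        float v = gdiRamp[src] / 65535.0f;
        // Apply contrast/brightness on top of the profile's curve, then clamp
        // to the valid gamma-curve range.
        curve[i] = std::min(1.0f, std::max(0.0f, v * scale + offset));
    }
    return curve;
}
```

The resulting values would be copied into GammaCurve (one DXGI_RGB per entry) before calling SetGammaControl.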
2) The "obvious" answer is to do this in a shader, and the calculation itself is of course simplicity itself. The first way to implement this is to add another pass, which in turn requires a (potentially additional) intermediate render target; the calculation then lives in a single custom shader applied to that pass. But it's a bit hard to justify a full framebuffer-sized render target for that purpose alone (although, typically, one can be smart about sharing these intermediate render targets among passes). The second option is to embed the calculation directly into the regular shaders as a last step before output. This, however, is nasty: it pollutes every shader and requires each one to know whether the adjustment should actually apply (is this the presentation render target, or something that will be rendered into the scene, etc.).
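For reference, here's the per-channel math the shader step would compute, expressed in C++ (the HLSL would be nearly identical). This is a sketch, and the mid-gray pivot is one common convention, not necessarily what anyone's engine uses; pivoting the contrast scale around 0.5 keeps raising contrast from also brightening the whole image:

```cpp
#include <algorithm>
#include <cassert>

// Sketch of the final shader step, one color channel at a time.
// 'c' is a channel value in [0,1]; contrast scales around mid-gray,
// brightness offsets the result.
float AdjustChannel(float c, float brightness, float contrast)
{
    float v = (c - 0.5f) * contrast + 0.5f + brightness;
    // Clamp to the displayable range, as the output merger would for a
    // UNORM render target (equivalent to HLSL's saturate()).
    return std::min(1.0f, std::max(0.0f, v));
}
```

Note that because this runs in a shader, contrast values above 1.0 are no problem; the clamp happens only after the full calculation.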
3) The tried-and-true post-process pass: render a full-screen quad over the final image, with blending set up to apply brightness and contrast. While adding another full-screen alpha-blended quad to the output doesn't exactly evoke the warm fuzzies, it seems relatively low-impact performance-wise. The only hiccup I've encountered thus far is that I haven't yet found a way to increase contrast via this method, as all blending inputs (including blend factors) are clamped to the display range [0.0, 1.0]. I have this set up with two blend states, one for increasing brightness and one for decreasing it:
// Increasing brightness: result = src + dest * srcAlpha
//   (quad RGB = brightness offset, quad alpha = contrast scale)
SrcBlend = D3D11_BLEND_ONE;
DestBlend = D3D11_BLEND_SRC_ALPHA;
BlendOp = D3D11_BLEND_OP_ADD;

// Decreasing brightness: result = dest * srcAlpha - src
SrcBlend = D3D11_BLEND_ONE;
DestBlend = D3D11_BLEND_SRC_ALPHA;
BlendOp = D3D11_BLEND_OP_REV_SUBTRACT;
As implied by the setup above, the full-screen quad's vertices carry the brightness offset in their RGB components and the contrast scale in their alpha components.
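To make the clamping problem concrete, here's a C++ simulation of what the fixed-function blender does with the "increase brightness" state above on a UNORM target (helper names are my own). Every blend input, including the quad's alpha carrying the contrast scale, is saturated to [0,1] before the blend equation runs, which is exactly why a scale greater than 1.0 can't survive this path:

```cpp
#include <algorithm>
#include <cassert>

// Saturation applied by the blend unit to every input on a UNORM target.
float Saturate(float v) { return std::min(1.0f, std::max(0.0f, v)); }

// Simulates the "increasing brightness" blend state for one channel:
//   result = src * ONE + dest * SRC_ALPHA  (D3D11_BLEND_OP_ADD)
// where the quad's RGB holds the brightness offset and its alpha holds
// the contrast scale.
float BlendBrighten(float dest, float brightnessOffset, float contrastScale)
{
    float srcRgb   = Saturate(brightnessOffset); // quad vertex color
    float srcAlpha = Saturate(contrastScale);    // quad vertex alpha: >1 is lost here
    return Saturate(srcRgb + Saturate(dest) * srcAlpha);
}
```

So a contrast scale of 2.0 arrives at the blender as 1.0, and the pass can only ever scale the destination down, never up.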
#1 is tempting (and many games settle for it), but only taking effect in fullscreen exclusive mode isn't really flexible enough.
#2 might be what's necessary, but the complications of either adding another pass/render target or polluting the shaders are distasteful. I presume most folks fold brightness/contrast into their post-processing pipeline and handle it explicitly in shader code?
I'd be satisfied with #3, if not for the fact that I can't figure out how I might apply a scale larger than 1.0. Has anyone had success using that method and been able to accommodate increasing contrast?
I'm curious whether folks have tried any other methods, and would welcome additional thoughts on the above.