DX11 Brightness and Contrast

Apologies for something of a cross-post with the XBox Live DX11 forum, but it's fairly dead these days.

 

I'm currently mulling over how exactly I want to implement brightness and contrast adjustments in my (D3D11) renderer.  After perusing the commonly suggested techniques, I see 3 basic approaches:

1)  The easiest and lowest-runtime-cost option is to use IDXGIOutput::SetGammaControl, via the DXGI_GAMMA_CONTROL::Offset and ::Scale members.  The first obvious drawback to this approach is that it only takes effect in fullscreen exclusive mode.  Further, it will override any custom color profile the user has set up for their monitor's gamma ramp (there are entire forums dedicated to tracking which games use SetGammaControl and ways to work around it).  I think it may be possible to avoid the color profile issue by using the GDI GetDeviceGammaRamp function to fetch the current ramp and translate it into DXGI_GAMMA_CONTROL::GammaCurve.  Lastly, this functionality may not even be supported by some drivers, although I have no idea whether that's still a concern on modern hardware.
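For reference, this is roughly what I imagine the call looking like (the helper and the identity curve below are just my own sketch, not code from a working renderer; the Offset/Scale usage mirrors what I described above):

    #include <dxgi.h>

    // Hypothetical helper: 'output' is the IDXGIOutput the swap chain presents to.
    // The brightness/contrast values are illustrative; the call is only honoured
    // in fullscreen exclusive mode.
    void ApplyBrightnessContrast(IDXGIOutput* output, float brightness, float contrast)
    {
        DXGI_GAMMA_CONTROL gamma = {};
        gamma.Scale.Red    = contrast;
        gamma.Scale.Green  = contrast;
        gamma.Scale.Blue   = contrast;
        gamma.Offset.Red   = brightness;
        gamma.Offset.Green = brightness;
        gamma.Offset.Blue  = brightness;

        // Identity curve; to respect a user's color profile you'd instead seed
        // this from the ramp returned by the GDI GetDeviceGammaRamp function.
        for (UINT i = 0; i < 1025; ++i)
        {
            float v = i / 1024.0f;
            gamma.GammaCurve[i].Red   = v;
            gamma.GammaCurve[i].Green = v;
            gamma.GammaCurve[i].Blue  = v;
        }

        output->SetGammaControl(&gamma);
    }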

2)  The "obvious" answer is to do this via shaders, and the calculation itself is of course simplicity.  The first choice for implementing this method is to add another pass, which in effect adds the need for a (potentially additional) intermediary render target.  That allows the calculation to reside in a single custom shader applied to that pass.  But it's a bit hard to justify a full framebuffer-sized render target for that purpose (although, typically, one can be smart about having these intermediate render targets for sharing amongst passes).  The second option is to embed this calculation directly into the regular shaders as a last step before output.  This, however, is nasty, as it pollutes every shader and requires knowledge of whether or not it should actually apply (is this the presentation render target, or something that is going to be rendered into the scene, etc.).

3)  The tried-and-true post-process pass of rendering a full-screen quad over the final image, with blending set up to apply brightness and contrast.  While adding another full-screen alpha-blended quad to the rendering output doesn't exactly evoke the warm fuzzies, it seems relatively low-impact performance-wise.  The only hiccup I've encountered thus far is that I haven't yet found a way to increase contrast via this method, as all blending inputs (including blend factors) are clamped to the display range (0.0 - 1.0).  I have this set up with two blend states, one for increasing brightness and one for decreasing brightness:

    // Increasing brightness:
    // result = src * ONE + dest * srcAlpha = brightnessOffset + dest * contrast
    SrcBlend  = D3D11_BLEND_ONE;
    DestBlend = D3D11_BLEND_SRC_ALPHA;
    BlendOp   = D3D11_BLEND_OP_ADD;

    // Decreasing brightness:
    // result = dest * srcAlpha - src * ONE = dest * contrast - brightnessOffset
    SrcBlend  = D3D11_BLEND_ONE;
    DestBlend = D3D11_BLEND_SRC_ALPHA;
    BlendOp   = D3D11_BLEND_OP_REV_SUBTRACT;

As implied by the above setup, the fullscreen quad I'm rendering carries the brightness offset in the RGB color components of its vertices and the contrast scale in the alpha component.
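Concretely, the vertex data ends up looking something like this (the struct and the example values are purely illustrative):

    // Illustrative vertex layout for the brightness/contrast quad:
    // rgb carries the brightness offset, alpha carries the contrast scale.
    struct QuadVertex
    {
        float x, y, z;      // clip-space position
        float r, g, b, a;   // color output by the quad's pixel shader
    };

    float brightness = 0.1f;   // example offset
    float contrast   = 0.9f;   // example scale (can't exceed 1.0, per the clamping issue)

    // Triangle-strip order covering the whole viewport.
    QuadVertex quad[4] =
    {
        { -1.0f,  1.0f, 0.0f,  brightness, brightness, brightness, contrast },
        {  1.0f,  1.0f, 0.0f,  brightness, brightness, brightness, contrast },
        { -1.0f, -1.0f, 0.0f,  brightness, brightness, brightness, contrast },
        {  1.0f, -1.0f, 0.0f,  brightness, brightness, brightness, contrast },
    };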

#1 is tempting (and many games settle for doing it), but not having it apply when windowed is not really flexible enough.

#2 might be what's necessary, but the various complications with either adding another pass/render target or shader pollution are distasteful.  I presume most folks work the brightness/contrast into their post-processing pipeline and explicitly deal with it via shader code?

I'd be satisfied with #3, if not for the fact that I can't figure out how I might apply a scale larger than 1.0.  Has anyone had success using that method and been able to accommodate increasing contrast?

I'm curious if folks have tried any other methods, and/or had additional thoughts on the above?

Use a color correction texture: a 3D texture into which you feed your R,G,B values and which outputs new R,G,B values.  It can either be precalculated or calculated on the fly.  You can bake almost every commonly used color correction into it (hue shifts, contrast, brightness, whatever), and it is just a single texture lookup in your shader.
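A rough sketch of what building such a LUT might look like on the D3D11 side (the function name, the 16-cubed size, and the identity mapping are all placeholders; in practice you'd bake your actual corrections into the stored values):

    #include <d3d11.h>
    #include <vector>

    // Build a small identity 3D LUT that a pixel shader can sample with the
    // frame's color as the texture coordinate. Replace the identity mapping
    // with whatever color correction you want to bake in.
    ID3D11Texture3D* CreateColorLUT(ID3D11Device* device, UINT size = 16)
    {
        std::vector<UINT> texels(size * size * size);
        for (UINT b = 0; b < size; ++b)
            for (UINT g = 0; g < size; ++g)
                for (UINT r = 0; r < size; ++r)
                {
                    // Identity mapping; apply your color correction here instead.
                    const UINT cr = r * 255 / (size - 1);
                    const UINT cg = g * 255 / (size - 1);
                    const UINT cb = b * 255 / (size - 1);
                    texels[(b * size + g) * size + r] =
                        cr | (cg << 8) | (cb << 16) | 0xFF000000u; // R8G8B8A8
                }

        D3D11_TEXTURE3D_DESC desc = {};
        desc.Width     = size;
        desc.Height    = size;
        desc.Depth     = size;
        desc.MipLevels = 1;
        desc.Format    = DXGI_FORMAT_R8G8B8A8_UNORM;
        desc.Usage     = D3D11_USAGE_IMMUTABLE;
        desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

        D3D11_SUBRESOURCE_DATA data = {};
        data.pSysMem          = texels.data();
        data.SysMemPitch      = size * 4;        // bytes per row
        data.SysMemSlicePitch = size * size * 4; // bytes per depth slice

        ID3D11Texture3D* lut = nullptr;
        device->CreateTexture3D(&desc, &data, &lut);
        return lut;
    }

In the shader it is then a single sample of the 3D texture, using the frame's color as the texture coordinate.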


I wasn't aware the gaming community hated SetGammaControl, but on the basis that it does, I don't see any problem with #2.

 

You're right that adding an entirely extra pass at the end of post-processing just to do this is a little unnecessary. I'm not sure I follow your logic through to the conclusion that many shaders have to be polluted with this code. Post-processing usually finishes with a full-screen pass that uses a very specific shader (it might do things like colour grading, film grain, and applying bloom), so that should be the only shader that needs specific brightness/saturation/contrast code added to it.

 

Why do you think you need to pollute "every shader" with this code? It only needs to be put inside the last shader used in the frame which is likely the same shader every frame...


Use a color correction texture: a 3D texture into which you feed your R,G,B values and which outputs new R,G,B values.  It can either be precalculated or calculated on the fly.  You can bake almost every commonly used color correction into it (hue shifts, contrast, brightness, whatever), and it is just a single texture lookup in your shader.

 

Thanks for that suggestion, that's a great way to combine all the color corrections into a single lookup.  If/when I move towards an actual (shader) post-process pass to apply my color controls I'll definitely consider going that route.  I suppose you might even be able to utilize that with the blend method, although I'd have to do some beard-stroking to see if it was possible.

 


I wasn't aware the gaming community hated SetGammaControl

 

I wouldn't say it's an issue for most people, but folks who go to the trouble of setting up a color profile on their monitor (hardware enthusiasts, which hardcore gamers tend to be) seem to be perturbed by it.  Presumably the problem cases are when SetGammaControl is called with the default gamma ramp, which then overwrites the one set by the color profile.

 


Why do you think you need to pollute "every shader" with this code?

 

Apologies, I was a bit unclear in my wording there.  To your point, yes, absolutely: if you have an existing post-process pass that is always last, that would definitely be the place to insert it.  Where it starts to get a bit hairy is if you can't make such a guarantee (graphics options may disable certain passes, perhaps including that final one).  So, in lieu of adding a whole separate pass, you'd be in the position of trying to "wedge it in there" where appropriate.  Without a guaranteed pass to put it in (perhaps there are no post-process passes enabled at all), you'd be stuck trying to add it to your "normal" shaders when they output a color, which introduces all sorts of other complications with render targets that aren't the presentation one, etc.  Obviously, that's a bit ridiculous and extreme.

 

Currently, I don't have a guaranteed post-process pass I can place it in, which is some of the reason I was exploring other low-impact options.


 


Presumably the problem cases are when SetGammaControl is called with the default gamma ramp, which then overwrites the one set by the color profile.

I recently bought a new LG 23MP55 monitor to replace an old one that died, and its stock calibration SUCKS. White is shown as fucking yellowish! No amount of OSD tweaking would fix it. Luckily someone had uploaded an ICC profile at tftcentral.com for that model, and it was a godsend. Now it looks very close to all my other monitors.

If you were to override my ICC settings, I would be very pissed.

 

As for OP's dilemma: often we do lots of postprocessing effects, and that requires intermediate render textures. It's a good practice to reuse / recycle those textures for subsequent effects, so your worries are a little exaggerated as the higher memory consumption is often a fixed overhead price you pay once, the moment you begin to do postprocessing.

Edited by Matias Goldberg



If you were to override my ICC settings, I would be very pissed.

 

As I understand it, the override is only in effect while the app is in fullscreen exclusive mode (i.e., the proper gamma ramp is restored on exit from fullscreen), but it's still obviously a bad thing to do.  Regardless of that particular issue, I think it's a better idea to just handle the brightness/contrast natively in-game, and not limit yourself to fullscreen only.

 


It's a good practice to reuse / recycle those textures for subsequent effects

 

Yep, absolutely...I wordily alluded to that in my original post.  Once my post-process pipeline is more fleshed out I'm sure I'll be simply integrating the color controls into it via the aforementioned shader methodology.  For the time being, however, I'm somewhat limited by the fact that I actually may not have any post-process passes enabled.  So, in lieu of adding a "real" shader-based post-process pass just for brightness/contrast, I've been looking at alternate solutions.

 

For now, I'll probably stick with the blending-based brightness and contrast until the shader post-process method becomes viable for my situation.

 

Thanks to all for the input thus far.

