# DX11 HDR adaptation, avg lum isn't calculated properly


## Recommended Posts

Hello,

After starting over with an HDR implementation for my new engine, I'm experiencing an issue I had last time too: the scene's adaptation reacts almost entirely to the luminance in the center of the screen, not to the averaged value. Take a look at the two screenshots. The first one shows a fully adapted scene. The second one is almost exactly the same, just moved slightly to the right. Now the shadowed part of that object is in the middle, and that seems to drag the "average" luminance I calculate down to an extremely low value, so the scene flashes all white. If I move the camera slightly to the right, left, up or down, it becomes "normal" again. This happens everywhere: whenever there is a very dark/bright part in the middle of the screen, the adaptation goes nuts. Therefore I deduce that my average-luminance calculation is broken. I've tried multiple different setups, and all produce the same behaviour:

```hlsl
#include "../../../Base3D/Effects/Vertex.afx"

sampler InputSampler : register(s0);

Texture2D<float4> Luminance : register(t0);

float4 mainPS(VS_OUTPUT i) : SV_TARGET0
{
    float4 inTex = Luminance.Sample(InputSampler, i.vTex0);

    return inTex;
}
```


I'm using this shader with a linear sampler to downsample the scene's luminance by a factor of 2 per pass until only a 1x1 target is left.
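For what it's worth, the "center dominates" symptom is exactly what you'd see if any pass shrinks by more than 2x while taking a single bilinear tap, because one tap only ever averages a 2x2 block and the rest of the texels are simply dropped. A small Python sketch (hypothetical, just to illustrate the arithmetic — not engine code):

```python
# Why a 1-tap bilinear downsample chain can over-weight the screen center:
# a single bilinear fetch averages at most a 2x2 block, so a pass that
# shrinks by 4x with one tap ignores 12 of every 16 texels.
def bilinear_2x2_avg(img, x, y):
    # average of the 2x2 block starting at (x, y), as a centered
    # bilinear tap would return
    return sum(img[y + dy][x + dx] for dy in (0, 1) for dx in (0, 1)) / 4.0

def downsample_2x(img):
    # correct: every source texel contributes to exactly one tap
    h, w = len(img), len(img[0])
    return [[bilinear_2x2_avg(img, 2 * x, 2 * y) for x in range(w // 2)]
            for y in range(h // 2)]

def downsample_4x_one_tap(img):
    # wrong: one bilinear tap per 4x4 block only sees its central 2x2
    h, w = len(img), len(img[0])
    return [[bilinear_2x2_avg(img, 4 * x + 1, 4 * y + 1) for x in range(w // 4)]
            for y in range(h // 4)]

img = [[0.0] * 4 for _ in range(4)]
img[1][1] = img[1][2] = img[2][1] = img[2][2] = 1.0  # bright central 2x2

true_avg = sum(map(sum, img)) / 16.0            # 0.25
good = downsample_2x(downsample_2x(img))[0][0]  # 0.25 - matches true average
bad = downsample_4x_one_tap(img)[0][0]          # 1.0  - only the center counts
```

So the chain itself is fine as long as each step really is exactly 2x and the taps land on 2x2 block centers; any half-texel offset or larger reduction biases the "average" toward wherever the taps fall.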

- DirectX11 auto mipmap generation:

Since the manual downsampling first produced the issue, I decided to try auto mipmap generation instead. It produces exactly the same effect: the luminance in the middle of the screen almost entirely determines the average luminance.

```hlsl
#include "../../../Base3D/Effects/Vertex.afx"

cbuffer instance : register(b2)
{
    float2 params; // x = delta (frame time), y = mip level
}

sampler InputSampler : register(s0);

Texture2D<float> CurrentLum  : register(t0);
Texture2D<float> PreviousLum : register(t1);

float4 mainPS(VS_OUTPUT i) : SV_TARGET0
{
    float fCurrentLum  = CurrentLum.SampleLevel(InputSampler, i.vTex0, params.y);
    float fPreviousLum = PreviousLum.Sample(InputSampler, i.vTex0);

    const float fTau = 0.5f;

    // NOTE: the rest of the body was cut off in the original post; this is
    // the usual exponential eye-adaptation step it was building up to
    float fAdapted = fPreviousLum
        + (fCurrentLum - fPreviousLum) * (1.0f - exp(-params.x * fTau));

    return float4(fAdapted, 0.0f, 0.0f, 1.0f);
}
```


- DirectX9, with a shader almost identical to the DX11 one: also the same result.

Now, is there anything I'm missing? The way I used to do this was taken from an old NVIDIA sample, but that sample doesn't even get gamma correction right, so I doubt it is accurate...

##### Share on other sites

Some cards don't handle mipmap generation for float textures, and some do it on the CPU side (even for non-float formats), so you need to generate the mipmaps with your own shaders.

(Oh, and auto generation doesn't work for render targets. I haven't looked through your code.)


##### Share on other sites

> Some cards don't handle mipmap generation for float textures, and some do it on the CPU side (even for non-float formats), so you need to generate the mipmaps with your own shaders.
>
> (Oh, and auto generation doesn't work for render targets. I haven't looked through your code.)

DirectX 11 at least should be able to handle auto generation for render targets; in fact that's its only use. http://msdn.microsoft.com/en-us/library/windows/desktop/ff476426%28v=vs.85%29.aspx See the remarks about needing both RENDER_TARGET and SHADER_RESOURCE set. I did check the result, and the mip levels are being generated, at least in the sense that they exist, but it appears the sampling isn't right...

##### Share on other sites

Have you checked the mipmap visually? (Maybe there is no bilinear support either; I don't know this DirectX 11 thing, sorry.)

##### Share on other sites

When you sample the Luminance texture, you need to sample the lowest mip level, since this is the level which holds the average value.

You call Luminance.Sample(), so unless you create the SRV with only the last mip level, this will almost certainly not sample from there. You also don't need the texcoord from the VS; since the final mip level is 1 pixel, you can just use (0.5, 0.5).

The old NVIDIA sample probably did gamma correction inside the shader. Nowadays, if you create your resources as SRGB, it's done automatically in hardware.
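(For reference, the transfer function the hardware applies for *_SRGB formats is the piecewise sRGB curve, not a plain pow(x, 1/2.2). A quick Python sketch of the encode side, just to show the shape:)

```python
def srgb_encode(linear):
    # piecewise sRGB transfer function: a small linear toe near black,
    # then a 2.4-exponent power segment (overall close to gamma 2.2)
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1.0 / 2.4) - 0.055
```

So a sample with no pow() anywhere and SRGB disabled everywhere really would be operating on raw linear values, as you suspected.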

##### Share on other sites

Using texture.SampleLevel, you can just pass in an absurdly high value for the LOD (something like 666) and it will clamp to the smallest mip level, which should be your 1x1 level. I'm not too certain about sampling from a luminance render target, however; last time I did this I sampled from an RGBA one and dotted the float4 with (0.3, 0.59, 0.11, 0.0) to get the luminance.
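(Those are the Rec. 601 luma weights; they sum to 1, so a uniform grey image keeps its value through the conversion. Trivial sketch:)

```python
def luminance(r, g, b):
    # Rec. 601 luma weights, as in dot(rgb, (0.3, 0.59, 0.11)) above;
    # the weights sum to 1.0, so grey stays grey
    return 0.3 * r + 0.59 * g + 0.11 * b
```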

The other thing I found is that for certain game types - such as an FPS - you move through the world so fast that the average luminance in the scene changes rapidly, which led to a flickering/strobing effect.  To resolve that I updated the average luminance at some well-defined interval (0.1 seconds worked well for me, you can feel free to experiment) and interpolated between that and the previous average luminance based on time passed.  That worked quite well and gave a good illusion of eye-adaptation over time - not physically correct (in real life it takes a lot longer) but good enough to show off the effect in a reasonably convincing manner.
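The interpolation described above can be sketched numerically like this (Python, with an assumed time constant — the exact formula and tau value are a matter of taste):

```python
import math

def adapt(prev_lum, current_lum, dt, tau=0.5):
    # move the adapted luminance exponentially towards the measured one;
    # larger tau = faster adaptation, dt = seconds since the last update
    return prev_lum + (current_lum - prev_lum) * (1.0 - math.exp(-dt * tau))

# simulate ~1.6 s of frames at 16 ms after a sudden bright change
lum = 0.1
for _ in range(100):
    lum = adapt(lum, 1.0, 0.016)
# lum has moved smoothly part-way from 0.1 towards 1.0 (roughly 0.6 here)
```

Because the step depends on dt, a one-pixel camera move only changes the target value, and the adapted luminance still takes a perceptible fraction of a second to follow, which hides exactly the kind of flash being described.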

##### Share on other sites

> Some cards don't handle mipmap generation for float textures, and some do it on the CPU side (even for non-float formats), so you need to generate the mipmaps with your own shaders.
>
> (Oh, and auto generation doesn't work for render targets. I haven't looked through your code.)

> DirectX 11 at least should be able to handle auto generation for render targets; in fact that's its only use. http://msdn.microsoft.com/en-us/library/windows/desktop/ff476426%28v=vs.85%29.aspx See the remarks about needing both RENDER_TARGET and SHADER_RESOURCE set. I did check the result, and the mip levels are being generated, at least in the sense that they exist, but it appears the sampling isn't right...

Have you checked you create the RTs with D3D11_RESOURCE_MISC_GENERATE_MIPS?

Furthermore, check your sampling states are correct.

I wouldn't trust mipmapping for this job, though. Drivers often allow overriding mip settings, trading quality for performance or avoiding blurry textures, and that could break your HDR.

##### Share on other sites

> Have you checked the mipmap visually? (Maybe there is no bilinear support either; I don't know this DirectX 11 thing, sorry.)

> Have you checked you create the RTs with D3D11_RESOURCE_MISC_GENERATE_MIPS?

Regarding this: I did check that my render target is created with the GENERATE_MIPS flag, and the mips are indeed being generated. I don't have any screenshots, and I'm not sure I could tell just by looking whether the filtering is correct, so I'll try to get a capture of the mipmaps that some of you could look at...

> When you sample the Luminance texture, you need to sample the lowest mip level, since this is the level which holds the average value.
>
> You call Luminance.Sample(), so unless you create the SRV with only the last mip level, this will almost certainly not sample from there. You also don't need the texcoord from the VS; since the final mip level is 1 pixel, you can just use (0.5, 0.5).

No, no — I'm calling Sample on the 1x1 luminance output target from the last pass; it's on the scene's luminance texture that I use SampleLevel, passing in the correct mip level (11 for 1378x768). Thanks too for the hint with the texture coordinates; it didn't fix anything, but at least my shader is a little simpler now.
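(As an aside: for 1378x768 the last mip index of a full chain is 10, not 11 — there are 11 levels, indexed 0..10. SampleLevel clamps, so passing 11 still lands on the 1x1 level, but it's worth knowing which index is actually the last. A quick sketch of the arithmetic:)

```python
import math

def last_mip_index(width, height):
    # a full mip chain halves each level (rounding down) until 1x1,
    # so the index of the last level is floor(log2(max(w, h)))
    return int(math.floor(math.log2(max(width, height))))

last_mip_index(1378, 768)  # -> 10, i.e. 11 levels, indices 0..10
```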

> The old NVIDIA sample probably did gamma correction inside the shader. Nowadays, if you create your resources as SRGB, it's done automatically in hardware.

I'm pretty sure it didn't do it anywhere; there was no pow() anywhere, and SRGB in the effects was explicitly set to false. Maybe someone screwed up before posting the sample, or something...

> Using texture.SampleLevel, you can just pass in an absurdly high value for the LOD (something like 666) and it will clamp to the smallest mip level, which should be your 1x1 level. I'm not too certain about sampling from a luminance render target, however; last time I did this I sampled from an RGBA one and dotted the float4 with (0.3, 0.59, 0.11, 0.0) to get the luminance.

Ah, that's good to know and saves me some complexity :D Well, I'm doing the RGB-to-luminance conversion before calculating the average, so that should be fine :D

> The other thing I found is that for certain game types - such as an FPS - you move through the world so fast that the average luminance in the scene changes rapidly, which led to a flickering/strobing effect. To resolve that I updated the average luminance at some well-defined interval (0.1 seconds worked well for me, you can feel free to experiment) and interpolated between that and the previous average luminance based on time passed. That worked quite well and gave a good illusion of eye-adaptation over time - not physically correct (in real life it takes a lot longer) but good enough to show off the effect in a reasonably convincing manner.

I thought that could be a reason too, and I guess I'll need a solution for that at some point. But in my case even a subtle camera move like this one, of a single screen pixel, takes the scene from average darkness to unrealistically bright in half a second, so it's something else...

> I wouldn't trust mipmapping for this job, though. Drivers often allow overriding mip settings, trading quality for performance or avoiding blurry textures, and that could break your HDR.

Well, I only moved to auto mipmapping because my first approach, the downsampling shader, failed with the same result. Is there anything wrong with my pass-through shader, the first one I posted? I made sure the filter for the scene-luminance sampler is set to linear, so I guess I need to do something else inside the shader to get it to work?
