
# DX11 HDR adaptation, avg lum isn't calculated properly

## 7 posts in this topic

Hello,

after starting over with an HDR implementation for my new engine, I'm running into an issue I had last time as well: the scene's adaptation reacts almost entirely to the luminance in the center of the screen, not to the averaged value. Take a look at the two screenshots. The first one shows a fully adapted scene. The second one is almost exactly the same, just moved slightly to the right. Now the shadowed part of that object is in the middle, and that seems to drag the "average" luminance I calculate down to an extremely low value, so the scene flashes all white. If I move the camera slightly to the right, left, up, or down, it becomes "normal" again. This happens everywhere: whenever there is a very dark or very bright area in the middle of the screen, the adaptation goes nuts. I therefore deduce that the average-luminance calculation is broken. I've tried multiple different setups, and all of them produce the same behaviour:

- Manual downsampling shader:

```hlsl
#include "../../../Base3D/Effects/Vertex.afx"

sampler InputSampler : register(s0);

Texture2D<float4> Luminance : register(t0);

float4 mainPS(VS_OUTPUT i) : SV_TARGET0
{
    float4 inTex = Luminance.Sample(InputSampler, i.vTex0);
    return inTex;
}
```


I'm using this shader with a linear sampler to repeatedly downsample the scene's luminance by a factor of 2 until only a 1x1 target is left.
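As a sanity check, the same 2x reduction chain can be modelled on the CPU. This is only a sketch (plain C++, `reduceToAverage` is a hypothetical helper, a square power-of-two input is assumed), but if the GPU chain is correct, its 1x1 result should match this arithmetic mean:

```cpp
#include <vector>
#include <cstddef>

// Repeatedly box-filter a square power-of-two luminance image down to 1x1.
// Each pass averages 2x2 blocks, mimicking a bilinear fetch at texel centers.
float reduceToAverage(std::vector<float> lum, std::size_t size)
{
    while (size > 1)
    {
        std::size_t half = size / 2;
        std::vector<float> next(half * half);
        for (std::size_t y = 0; y < half; ++y)
            for (std::size_t x = 0; x < half; ++x)
                next[y * half + x] = 0.25f * (lum[(2*y)   * size + 2*x]   +
                                              lum[(2*y)   * size + 2*x+1] +
                                              lum[(2*y+1) * size + 2*x]   +
                                              lum[(2*y+1) * size + 2*x+1]);
        lum.swap(next);
        size = half;
    }
    return lum[0];
}
```

If the GPU result is instead biased toward the screen center, the usual suspects are texture coordinates that are off by half a texel, or a point (rather than linear) sampler during the reduction passes.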

- DirectX11 auto mipmap generation:

Since the downsampling first produced the issue, I decided to try auto mipmap generation instead. It produces exactly the same effect: the luminance in the middle of the scene almost entirely determines the average luminance.

```hlsl
#include "../../../Base3D/Effects/Vertex.afx"

cbuffer instance : register(b2)
{
    float2 params; // x = delta time, y = mip level
}

sampler InputSampler : register(s0);

Texture2D<float> CurrentLum  : register(t0);
Texture2D<float> PreviousLum : register(t1);

float4 mainPS(VS_OUTPUT i) : SV_TARGET0
{
    float fAdaptedLum = PreviousLum.Sample(InputSampler, i.vTex0);
    float fCurrentLum = CurrentLum.SampleLevel(InputSampler, i.vTex0, params.y);

    const float fTau = 0.5f;
    float fNewAdaptation = fAdaptedLum + (fCurrentLum - fAdaptedLum) * (1.0f - exp(-params.x * fTau));

    return float4(fNewAdaptation, 0.0f, 0.0f, 1.0f);
}
```
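For what it's worth, the adaptation step itself can be modelled on the CPU to rule it out as the culprit. A minimal C++ sketch of the same exponential step (`fTau = 0.5` as in the shader; `adaptLuminance` is a hypothetical name, not the engine's code):

```cpp
#include <cmath>

// One adaptation step: move the adapted luminance toward the current
// luminance with an exponential response, mirroring the shader above.
float adaptLuminance(float adapted, float current, float deltaSeconds)
{
    const float fTau = 0.5f;
    return adapted + (current - adapted) * (1.0f - std::exp(-deltaSeconds * fTau));
}
```

Each step only closes a fraction of the gap to the current value and converges monotonically, so instant white flashes point at the averaged input, not at this formula.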


- DirectX9, with a shader almost identical to the DX11 one: also the same result.

Now, is there anything I'm missing? The way I used to do this was taken from an old NVIDIA sample, but that sample doesn't even get gamma correction right, so I doubt it is accurate...


##### Share on other sites

Some cards don't handle mipmap generation for float textures, and some do it on the CPU side (even for non-float textures), so you need to generate the mipmaps with your own shaders.

(Oh, and auto generation doesn't work for render targets. I haven't looked through your code.)

Edited by skarab

##### Share on other sites

> Some cards don't handle mipmap generation for float textures, and some do it on the CPU side (even for non-float textures), so you need to generate the mipmaps with your own shaders.
>
> (Oh, and auto generation doesn't work for render targets. I haven't looked through your code.)

DirectX11 at least should be able to handle auto generation for render targets; in fact, that's its only use. See the remarks at http://msdn.microsoft.com/en-us/library/windows/desktop/ff476426%28v=vs.85%29.aspx about needing both RENDER_TARGET and SHADER_RESOURCE set. I did check the result, though, and the mip levels are generated, at least in the sense that they exist, but it appears it doesn't sample right...


##### Share on other sites

Have you checked the mipmap visually? (Maybe there is no bilinear support either; I don't know this DirectX 11 thing, sorry.)


##### Share on other sites

When you sample the Luminance texture, you need to sample the lowest mip level, since this is the level which holds the average value.

You call Luminance.Sample(), so unless you create the SRV with only the last mip level, this will almost certainly not sample from there. You also don't need the texCoord from the VS: the final mip level is one pixel, so you can just use (0.5, 0.5).

The old NVIDIA sample probably did gamma correction inside the shader. Nowadays, if you create your resources as SRGB, it's done automatically in the hardware.


##### Share on other sites

Using texture.SampleLevel, you can just pass in an absurdly high value for the LOD (something like 666) and it will clamp to the smallest mip level, which should be your 1x1 level. I'm not too certain about sampling from a luminance render target, however; the last time I did this I sampled from an RGBA one and dotted the float4 with (0.3, 0.59, 0.11, 0.0) to get the luminance.

The other thing I found is that for certain game types - such as an FPS - you move through the world so fast that the average luminance in the scene changes rapidly, which led to a flickering/strobing effect.  To resolve that I updated the average luminance at some well-defined interval (0.1 seconds worked well for me, you can feel free to experiment) and interpolated between that and the previous average luminance based on time passed.  That worked quite well and gave a good illusion of eye-adaptation over time - not physically correct (in real life it takes a lot longer) but good enough to show off the effect in a reasonably convincing manner.


##### Share on other sites

> Some cards don't handle mipmap generation for float textures, and some do it on the CPU side (even for non-float textures), so you need to generate the mipmaps with your own shaders.
>
> (Oh, and auto generation doesn't work for render targets. I haven't looked through your code.)

> DirectX11 at least should be able to handle auto generation for render targets; in fact, that's its only use. See the remarks at http://msdn.microsoft.com/en-us/library/windows/desktop/ff476426%28v=vs.85%29.aspx about needing both RENDER_TARGET and SHADER_RESOURCE set. I did check the result, though, and the mip levels are generated, at least in the sense that they exist, but it appears it doesn't sample right...

Have you checked you create the RTs with D3D11_RESOURCE_MISC_GENERATE_MIPS?

Furthermore, check that your sampler states are correct.

I wouldn't trust mipmapping to do this work, though. Drivers often allow overriding mip settings to trade quality for performance, or to avoid blurry textures, and either could break your HDR.


##### Share on other sites

> Have you checked the mipmap visually? (Maybe there is no bilinear support either; I don't know this DirectX 11 thing, sorry.)

> Have you checked you create the RTs with D3D11_RESOURCE_MISC_GENERATE_MIPS?

Regarding this: I did check that my render target is created with the GENERATE_MIPS flag, and the mips are indeed being generated. I don't have any screenshots, and I'm not sure I could tell whether the filtering is correct anyway, so I'll try to get a capture of the mipmaps so some of you can look at it...

> When you sample the Luminance texture, you need to sample the lowest mip level, since this is the level which holds the average value.
>
> You call Luminance.Sample(), so unless you create the SRV with only the last mip level, this will almost certainly not sample from there. You also don't need the texCoord from the VS: the final mip level is one pixel, so you can just use (0.5, 0.5).

No, no: I'm calling Sample() on the 1x1 luminance output target from the last pass, and I'm using SampleLevel, passing in the correct mip level (11 for 1378x768), on the scene's luminance texture. Thanks for the hint about the texture coordinates, too; it didn't fix anything, but at least now my shader is a little simpler.
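As an aside, it's easy to mix up "number of mip levels" and "last mip index" here: a full chain has 1 + floor(log2(max(width, height))) levels, so the 1x1 level's *index* is floor(log2(max(width, height))). SampleLevel clamps out-of-range LODs, so an off-by-one still lands on the smallest level, but it's worth verifying. A tiny C++ check (hypothetical helper, not engine code):

```cpp
#include <algorithm>

// Index of the smallest (1x1) mip in a full mip chain.
// A full chain has 1 + floor(log2(max(w, h))) levels, indexed from 0.
int lastMipIndex(int width, int height)
{
    int m = std::max(width, height);
    int index = 0;
    while (m > 1) { m >>= 1; ++index; } // halve (rounding down) until 1x1
    return index;
}
```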

> The old NVIDIA sample probably did gamma correction inside the shader. Nowadays, if you create your resources as SRGB, it's done automatically in the hardware.

I'm pretty sure it didn't do it anywhere; there was no pow() anywhere, and SRGB was explicitly set to false in the effects. Maybe someone screwed something up before posting the sample...

> Using texture.SampleLevel, you can just pass in an absurdly high value for the LOD (something like 666) and it will clamp to the smallest mip level, which should be your 1x1 level. I'm not too certain about sampling from a luminance render target, however; the last time I did this I sampled from an RGBA one and dotted the float4 with (0.3, 0.59, 0.11, 0.0) to get the luminance.

Ah, that's good to know and saves me some complexity :D Well, I'm doing the RGB-to-luminance conversion before calculating the average luminance, so that should be fine :D

> The other thing I found is that for certain game types - such as an FPS - you move through the world so fast that the average luminance in the scene changes rapidly, which led to a flickering/strobing effect. To resolve that I updated the average luminance at some well-defined interval (0.1 seconds worked well for me, you can feel free to experiment) and interpolated between that and the previous average luminance based on time passed. That worked quite well and gave a good illusion of eye-adaptation over time - not physically correct (in real life it takes a lot longer) but good enough to show off the effect in a reasonably convincing manner.

I thought that could be a reason too, and I guess I'll need a solution for that at some point, but in my case even a subtle camera move of one screen pixel, like in this case, takes the scene from average dark to unrealistically bright in half a second, so that's something else...

> I wouldn't trust mipmapping to do this work, though. Drivers often allow overriding mip settings to trade quality for performance, or to avoid blurry textures, and either could break your HDR.

Well, I switched to auto mipmapping because my first sample-shader approach failed with the same result. Is there anything wrong with my pass-through shader, the first one I posted? I made sure the filter for the scene luminance is set to linear, so I guess I need to do something else inside the shader to get it to work?


