Copying data from HDR non-multisampled to LDR multisampled textures.


In my deferred renderer implementation, I have an HDR light accumulator texture and a depth/stencil texture with these descriptions:

D3D11_TEXTURE2D_DESC lightAccumulatorDescr = {
	width, height,
	1, // MipLevels
	1, // ArraySize
	DXGI_FORMAT_R16G16B16A16_FLOAT,
	1, 0, // SampleDesc: Count = 1, Quality = 0 (no MSAA)
	D3D11_USAGE_DEFAULT,
	D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE,
	0, // CPUAccessFlags
	0  // MiscFlags
};
	
D3D11_TEXTURE2D_DESC depthStencilDescr = {
	width, height,
	1, // MipLevels
	1, // ArraySize
	DXGI_FORMAT_R24G8_TYPELESS, // depth-stencil texture format (typeless, so it can also be an SRV)
	1, 0, // SampleDesc: Count = 1, Quality = 0 (no MSAA)
	D3D11_USAGE_DEFAULT,
	D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE, // BindFlags
	0, // CPUAccessFlags
	0  // MiscFlags
};

Neither of them is multisampled.
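For context, because the depth texture uses a typeless format and is bound as both a depth-stencil and a shader resource, the views have to pick concrete formats. A minimal sketch of what that setup might look like (the view formats and variable names here are illustrative, not taken from the actual engine code):

// Depth-stencil view: interpret the typeless texture as 24-bit depth + 8-bit stencil.
D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
dsvDesc.Format        = DXGI_FORMAT_D24_UNORM_S8_UINT;
dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
ComPtr<ID3D11DepthStencilView> depthDsv;
device->CreateDepthStencilView(depthTexture.Get(), &dsvDesc, &depthDsv);

// Shader resource view: expose only the 24-bit depth channel for Sample/Load.
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format              = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
srvDesc.ViewDimension       = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = 1;
ComPtr<ID3D11ShaderResourceView> depthSrv;
device->CreateShaderResourceView(depthTexture.Get(), &srvDesc, &depthSrv);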

After the lighting pass, I move the rendered data from these textures into the LDR backbuffer.
I then need to render some auxiliary meshes (normals, bounding boxes, and so on) into the LDR backbuffer.
The LDR backbuffer and its depth buffer are multisampled:

DXGI_SWAP_CHAIN_DESC1 swapChainDesc{};
swapChainDesc.Width       = cx;
swapChainDesc.Height      = cy;
swapChainDesc.Format      = DXGI_FORMAT_B8G8R8A8_UNORM;
swapChainDesc.Stereo      = false;
swapChainDesc.SampleDesc  = m_sampleDesc; //MSAA = 8, Quality = 0
swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
swapChainDesc.BufferCount = 2; // Use double-buffering to minimize latency.
swapChainDesc.Scaling     = DXGI_SCALING_STRETCH;
swapChainDesc.SwapEffect  = DXGI_SWAP_EFFECT_DISCARD;
swapChainDesc.Flags       = 0; // No rotation expected on desktop


CD3D11_TEXTURE2D_DESC depthStencilDesc(DXGI_FORMAT_D24_UNORM_S8_UINT, cx, cy, 1, 1,
	D3D11_BIND_DEPTH_STENCIL, D3D11_USAGE_DEFAULT, 0,
	m_sampleDesc.Count,    // 8
	m_sampleDesc.Quality); // 0
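(Side note: the 8x sample count is only valid if the device actually supports it for this format, so something along these lines is worth running before m_sampleDesc is filled in. A sketch with assumed names:)

// Query how many quality levels the device supports for 8x MSAA on the backbuffer format.
UINT sampleCount = 8;
UINT numQuality  = 0;
device->CheckMultisampleQualityLevels(DXGI_FORMAT_B8G8R8A8_UNORM, sampleCount, &numQuality);
if (numQuality == 0)
{
	// 8x MSAA is not supported for this format: fall back to no MSAA.
	sampleCount = 1;
}
m_sampleDesc.Count   = sampleCount;
m_sampleDesc.Quality = 0; // default quality level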

So here I have 2 issues:

 

1. The LDR depth buffer is clear, so the new meshes are rendered with no depth awareness of the HDR depth buffer.
The depth data exists in the HDR depth buffer, but not yet in the LDR depth buffer, so every new mesh gets rendered.

For example, in the image below, vertex normals are rendered for all vertices, even those that are occluded by a mesh.

The bounding box is also rendered on its occluded sides.

[attachment=30526:hdr-ldr.png]

Question 1: what’s the best way to get the data from the HDR non-multisampled depth buffer into the LDR multisampled depth buffer?

 

 

2. I need a way to copy the texture data from the HDR texture to the LDR texture when I don’t want to apply tone mapping.
I wrote two shaders: the vertex shader creates a full-screen quad, and in the pixel shader I copy the image pixel by pixel.

Texture2D<float4> HDRTex : register(t0);
SamplerState PointSampler : register(s0);

struct VS_OUTPUT
{
	float4 Position : SV_Position;
	float2 UV	  : TEXCOORD0;
};

float4 main(VS_OUTPUT In) : SV_TARGET
{
	// Get the color sample
	float3 color = HDRTex.Sample(PointSampler, In.UV.xy).xyz;

	// Output the LDR value
	return float4(color, 1.0);
}
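Roughly, the C++ side of this pass is just a draw without a vertex buffer. A sketch (it assumes the vertex shader builds the full-screen triangle from SV_VertexID; the shader and resource names are placeholders):

// Bind the HDR color texture and the point sampler, then draw a full-screen triangle.
context->OMSetRenderTargets(1, ldrRtv.GetAddressOf(), nullptr);
context->VSSetShader(fullScreenVs.Get(), nullptr, 0);
context->PSSetShader(copyPs.Get(), nullptr, 0);
context->PSSetShaderResources(0, 1, hdrSrv.GetAddressOf());
context->PSSetSamplers(0, 1, pointSampler.GetAddressOf());
context->IASetInputLayout(nullptr); // no vertex buffer needed
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
context->Draw(3, 0);                // positions generated from SV_VertexID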

Question 2: is there a better way to copy all pixels from the HDR non-multisampled texture to the LDR multisampled texture?

 

Thanks in advance.

Edited by Happy SDE


Re: #2 Look up tone mapping operators. E.g. https://www.shadertoy.com/view/lslGzl 

 

Your current shader is clipping floating point values to [0, 1], making your use of a 16F source texture kind of useless. It will avoid some rounding errors, but it won't offer any higher range.

 

 

 

re: #1 going from single sample->multisample seems pretty backwards. You can do this by writing to depth using a pixel shader, but your multisampled depth buffer is going to be kind of bungled up and writing custom depth values is never a great state to be in. Imo you should try to make your pipeline go multisample->single sample and not the reverse. Your display is never going to natively work with a multisampled buffer anyway; it always has to resolve it at some point, and in d3d11 the resolve is hidden away for some dumb reason.
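For reference, in D3D11 that hidden resolve is exposed as ID3D11DeviceContext::ResolveSubresource. A minimal sketch (resource names are placeholders):

// Resolve an MSAA color texture into a single-sample texture of the same format.
// Note: this only works for color formats; depth formats have to be resolved in a shader.
context->ResolveSubresource(
	singleSampleTex.Get(), 0,   // destination (SampleDesc.Count = 1)
	msaaTex.Get(),         0,   // source (SampleDesc.Count = 8)
	DXGI_FORMAT_B8G8R8A8_UNORM);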

 

But furthermore, depth values can't be nicely combined like color values by averaging them together. If you average two depth values, you get a new depth value that doesn't exist -- you'll need to take the min(or max) of the depth going from multi->single, and single->multi is usually going to be a nearest neighbor upsample operation, meaning you'll potentially have edge artifacts as things intersect.

Edited by Dingleberry

Dingleberry, on 03 Feb 2016 - 01:10 AM, said:

Your current shader is clipping floating point values to [0, 1], making your use of a 16F source texture kind of useless. It will avoid some rounding errors, but it won't offer any higher range.

 

There are two options that I am going to switch between at runtime:

1. Use tone mapping.

2. Don't use tone mapping.

void ToneMappingPass::ToneMap(bool toneMap, const ComPtr<ID3D11ShaderResourceView>& hdrSrv, ComPtr<ID3D11RenderTargetView>& ldrRtv)
{
	if (toneMap)
	{
		realToneMap(hdrSrv, ldrRtv);
	}
	else
	{
		copyPixels(hdrSrv, ldrRtv);
	}
}

So the question is: is it possible to copy all of the color pixels from the non-multisampled texture to the multisampled texture more efficiently?

 

 

 

Dingleberry, on 03 Feb 2016 - 01:10 AM, said:

re: #1 going from single sample->multisample seems pretty backwards. You can do this by writing to depth using a pixel shader, but your multisampled depth buffer is going to be kind of bungled up and writing custom depth values is never a great state to be in. Imo you should try to make your pipeline go multisample->single sample and not the reverse. Your display is never going to natively work with a multisampled buffer anyway; it always has to resolve it at some point, and in d3d11 the resolve is hidden away for some dumb reason.

 

But furthermore, depth values can't be nicely combined like color values by averaging them together. If you average two depth values, you get a new depth value that doesn't exist -- you'll need to take the min(or max) of the depth going from multi->single, and single->multi is usually going to be a nearest neighbor upsample operation, meaning you'll potentially have edge artifacts as things intersect.

I have an MRT GBuffer.

As far as I know, a GBuffer can't be multisampled.

Am I wrong?

 

The final image should be multisampled (I would like to use line antialiasing and other drawing after the deferred rendering pass).

I don't like it, but the results of the GBuffer rendering will not be multisampled =(

Everything rendered after the GBuffer pass will be multisampled.

 

So, right now I see artifacts because the final depth buffer is not aware of the pixel depths that were rendered into the GBuffer.

I am going to try to implement something like a per-pixel depth copy, the same way I did with the NoToneMap approach.

 

And the question is: is it possible to make it more efficient?

Edited by Happy SDE


After thinking for a while about this code:

 

In the pixel shader I copy each color pixel to the LDR texture.

Is it also possible to store the depth values into the LDR depth buffer?

void ToneMappingPass::ToneMap(bool toneMap, const ComPtr<ID3D11ShaderResourceView>& hdrSrv, ComPtr<ID3D11RenderTargetView>& ldrRtv)
{
	if (toneMap)
	{
		realToneMap(hdrSrv, ldrRtv);
	}
	else
	{
		copyPixels(hdrSrv, ldrRtv);
	}
}
================================================================
Texture2D<float4> HDRTex	   : register(t0);
SamplerState PointSampler	   : register(s0);

struct VS_OUTPUT
{
	float4 Position : SV_Position; // vertex position 
	float2 UV		: TEXCOORD0;
};

float4 main(VS_OUTPUT In) : SV_TARGET
{
	// Get the color sample
	float3 color = HDRTex.Sample(PointSampler, In.UV.xy).xyz;

	// Output the LDR value
	return float4(color, 1.0);
}
Edited by Happy SDE


I did it! =)

 

Thanks everyone for your attention.

Texture2D<float4> HdrColor : register(t0);
Texture2D<float>  HdrDepth : register(t1);

struct PS_OUTPUT
{
	float4 color : SV_Target;
	float  depth : SV_DEPTH;
};

PS_OUTPUT main(float4 position : SV_Position)
{
	PS_OUTPUT ret;

	// 1:1 copy, so load the texel under this pixel directly (no filtering).
	int3 texCoord = int3(position.xy, 0);

	ret.color = HdrColor.Load(texCoord);
	ret.depth = HdrDepth.Load(texCoord); // SV_DEPTH writes this into the bound depth buffer

	return ret;
}
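The pass also needs matching pipeline state on the C++ side: depth writes enabled with an always-pass comparison (so the loaded depth overwrites whatever is in the MSAA depth buffer), the MSAA depth buffer bound as the DSV, and the HdrDepth SRV created over the typeless depth texture (for example with DXGI_FORMAT_R24_UNORM_X8_TYPELESS). Roughly like this (a sketch; the state object and resource names are illustrative):

// Depth writes on, test always passes: every pixel overwrites the MSAA depth buffer.
D3D11_DEPTH_STENCIL_DESC dsDesc = {};
dsDesc.DepthEnable    = TRUE;
dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
dsDesc.DepthFunc      = D3D11_COMPARISON_ALWAYS;
ComPtr<ID3D11DepthStencilState> copyDepthState;
device->CreateDepthStencilState(&dsDesc, &copyDepthState);

// Bind the MSAA backbuffer RTV and MSAA depth DSV as outputs,
// the HDR color and depth SRVs as inputs, then draw the full-screen pass.
context->OMSetRenderTargets(1, ldrRtv.GetAddressOf(), msaaDsv.Get());
context->OMSetDepthStencilState(copyDepthState.Get(), 0);
context->PSSetShaderResources(0, 1, hdrColorSrv.GetAddressOf());
context->PSSetShaderResources(1, 1, hdrDepthSrv.GetAddressOf());
context->Draw(3, 0);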
