Artifact when copying a texture in a shader


I've spent some time debugging and can't figure out what I'm doing wrong.

I have an HDR texture with format DXGI_FORMAT_R16G16B16A16_FLOAT.

I also have an LDR texture with format DXGI_FORMAT_R8G8B8A8_UNORM.

Right now no tone mapping algorithm is implemented, so I just copy the entire texture with a full-screen quad pass.

The pixel shader is very simple:

Texture2D<float4> HdrColor : register(t0);

float4 main(float4 position : SV_Position) : SV_Target
{
	// SV_Position arrives at pixel centers (x + 0.5, y + 0.5); the int cast
	// truncates to the integer texel coordinate, and 0 selects mip level 0
	int3 texCoord = int3(position.xy, 0);
	// TODO: bug is here: the HDR texture stores a much smoother gradient,
	// but after Load() the artifact appears
	float4 color = HdrColor.Load(texCoord);
	return color;
}
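
(For context, once a tone mapper is implemented, the plain copy above would turn into something like the sketch below. This is only an illustration using the well-known Reinhard operator, not code from the project:)

Texture2D<float4> HdrColor : register(t0);

float4 main(float4 position : SV_Position) : SV_Target
{
	int3 texCoord = int3(position.xy, 0);
	float4 hdr = HdrColor.Load(texCoord);
	// Reinhard operator: x / (1 + x) compresses [0, inf) into [0, 1)
	float3 mapped = hdr.rgb / (1.0 + hdr.rgb);
	return float4(mapped, hdr.a);
}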

After the plain copy runs, a banding artifact appears: the color I receive from Load() is quite different from the color stored in the HDR texture.

 

Pixel        HDR texture    LDR backbuffer
1015, 488    0.143188477    0.141176477
1015, 489    0.143310547    0.145098045

 

Both cursors are at position [1015, 488]; one pixel lower in the right picture, the artifact is visible.

You can see it better if you download the image and magnify it.

(The colors look slightly different due to Visual Studio's texture visualization.)

[attachment=31781:LoadArtifact.png]

 

No state objects (rasterizer, sampler, etc.) are bound to the pipeline; just the VS/PS/IL/SRV/RTV.
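
(For reference, a typical pass-through vertex shader for such a full-screen quad looks like the minimal sketch below, assuming the vertex buffer already holds clip-space positions; the actual VS is not shown in this post:)

float4 main(float4 position : POSITION) : SV_Position
{
	// Full-screen quad vertices are already in clip space, so no transform is needed
	return position;
}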

 

Does anyone have a clue what's going on? =)

Thanks in advance.


Sorry, I just figured out what is going on! :)

This is a precision error due to the LDR texture's 8-bit channels.

 

255 * (0.145098045 - 0.141176477) = 1.0, i.e. the two values are exactly one 8-bit quantization step (1/255) apart.

 

I did not notice it in the HDR texture because Visual Studio applies some color adjustments (auto-levels and the like) when visualizing textures.


You're storing a 16-bit-per-channel floating-point value in an 8-bit-per-channel fixed-point format and expecting no loss of precision?

 

0.141176477 represents '36' in an 8-bit format (36/255).

0.145098045 represents '37' in an 8-bit format (37/255).

 

Your higher-precision value has been quantized down to the nearest value representable in your 8-bit UNORM format.
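
To make that concrete, here is a minimal HLSL sketch of roughly what happens on write to an R8G8B8A8_UNORM render target. The helper name is hypothetical; the GPU performs this conversion implicitly, and its exact rounding rules may differ slightly:

// Hypothetical round-trip: float -> 8-bit UNORM -> float
float QuantizeToUnorm8(float v)
{
	return round(saturate(v) * 255.0) / 255.0;
}

// Representable values are exactly 1/255 apart, e.g. 36/255 = 0.14117647
// and 37/255 = 0.14509804; any value in between snaps to one of them.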
