Pixel shaders not correctly sampling subresources

Started by
2 comments, last by Lotan4t 8 years, 6 months ago

Hello,

I am trying to use D2D effects with custom shaders to create a depth-based pixel-shifting shader.

So the pixel shader output is something like this: return InputTexture1.Sample(InputSampler, newcoord);

As always, DirectX slices the texture into power-of-two subresources for GPU processing. At my current window size, it sliced an input 1920x1080 texture into four 1024x512 textures.

This is a sample output of the shader

67e077ed6c.jpg

The shader tries to shift the center part of the texture to the right.

What it actually does is sample the texture with an offset to the left, but when texture slice 2 passes through the pixel shader:

8f6d473e98.jpg

it does not sample the neighbouring subresource.

Why is this so? How do I provide the correct texture for it to sample?

Full code:

Pixel shader:


Texture2D InputTexture1 : register(t0);
Texture2D InputTexture2 : register(t1);

SamplerState InputSampler : register(s0);

cbuffer constants : register(b0)
{
	float zeroplane   : packoffset(c0.x);
	float depthfactor : packoffset(c0.y);
	float dispersion  : packoffset(c0.z);
	float currview    : packoffset(c0.w);
	float maxview     : packoffset(c1.x);
	float height      : packoffset(c1.y);
	float width       : packoffset(c1.z);
	float dpi         : packoffset(c1.w);
};

float4 main(
	float4 pos      : SV_POSITION,
	float4 posScene : SCENE_POSITION,
	float4 uv0      : TEXCOORD0,
	float4 uv1      : TEXCOORD1
	) : SV_Target
{
	float db_zeroplane  = 128.0f;
	float db_dispersion = dispersion;

	// Diagnostic only: dimensions of the subresource actually bound,
	// at mip 0 and at mip 1.
	float w, h, ww, hh, levels;
	InputTexture1.GetDimensions(w, h);
	InputTexture1.GetDimensions(1, ww, hh, levels);

	// Read the depth map at the current coordinate.
	float4 depth = InputTexture2.Sample(InputSampler, uv0.xy);

	// Convert the depth sample to a horizontal UV offset.
	// Note: uv0.zw is a float2, so the original "* uv0.zw" was implicitly
	// truncated to uv0.z when assigned to a scalar; made explicit here.
	float shift = depth.r * 256.0f;
	shift = ((shift - db_zeroplane) / 256.0f) * -1.0f * db_dispersion * uv0.z;
	float2 newcoord = uv0.xy + float2(shift, 0.0f);

	return InputTexture1.Sample(InputSampler, newcoord);
}
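
As an aside on sampling across Direct2D's internal tiling: a minimal sketch of the same shader written against the Direct2D effect-author helpers (d2d1effecthelpers.hlsli), which translate between the logical image space and whatever intermediate texture D2D actually allocated. This is an assumption-laden sketch, not the poster's code; it assumes the effect is authored as a D2D custom effect, and that the input rectangles are expanded accordingly on the CPU side.

```hlsl
// Sketch: same depth-shift idea using the D2D effect helpers, so that
// sampling at an offset is mapped into the real subresource for you.
#define D2D_INPUT_COUNT 2
#define D2D_INPUT0_COMPLEX   // color input, sampled at arbitrary offsets
#define D2D_INPUT1_COMPLEX   // depth input
#include "d2d1effecthelpers.hlsli"

cbuffer constants : register(b0)
{
    float zeroplane;   // assumed constants, mirroring the original cbuffer
    float dispersion;
};

D2D_PS_ENTRY(main)
{
    // Sample the depth map at the current position.
    float4 depth = D2DSampleInput(1, D2DGetInputCoordinate(1).xy);

    // Same depth-to-offset arithmetic as the original shader,
    // expressed as an offset in pixels.
    float shiftPixels = ((depth.r * 256.0f - zeroplane) / 256.0f) * -dispersion;

    // D2DSampleInputAtOffset takes a pixel offset from the current
    // position and maps it into the actual intermediate texture.
    return D2DSampleInputAtOffset(0, float2(shiftPixels, 0.0f));
}
```

For offset sampling to see pixels outside the current output region, the effect's transform also has to declare that dependency by expanding the returned rectangles in ID2D1Transform::MapOutputRectToInputRects; otherwise D2D is free to hand the shader a texture that only covers the output area.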

The "power of two" splitting: is that something you do yourself? I think you would need a fairly old GPU for it not to support non-power-of-two textures.

.:vinterberg:.

You may also consider not using minification filtering; you will then be able to create arbitrarily sized textures without mipmaps. You can keep magnification filtering of any type, and use no filter only for minification (this will create a noise artefact, visible when you move while a minified texture is being sampled). But if you do not minify your textures in animated ways, you practically do not need minification filtering.

Is it possible to get these border coordinates, say from the semantics? I don't see any of the semantics mentioning borders, but how are the slices mapped correctly onto a surface if the renderer does not know the borders?

https://msdn.microsoft.com/en-us/library/windows/desktop/bb509647%28v=vs.85%29.aspx#Semantics_All

This topic is closed to new replies.
