L16 and TextureObject.Load

4 comments, last by MJP 9 years, 10 months ago

I'm having a hell of a time getting an L16 texture (16bit heightfield data) read in my shader and was hoping someone could spot where I'm going wrong.

I'm pretty sure the logic is fine and it's just some syntax issue in my shader, but to verify I ran everything through RenderDoc. I can see the Load call being executed, but it always returns 0. I've tried a few values to no avail. Checking the pipeline, I can see the resource is bound and the values in the resource look correct; the shader just isn't reading it for some reason.

Any thoughts?

Shader:


float4x4 WorldViewProjection	: WorldViewProjection;

// Textures
Texture2D<float4> DiffuseTexture;
Texture2D<uint> HeightmapTexture;

float MaxHeight;

// Structures
struct VSInput
{
	float4 Position		: POSITION;
	float2 Uv		: TEXCOORD0;
};

struct VSOutput
{
	float4 Position		: SV_POSITION;
	float2 Uv		: TEXCOORD0;
};

sampler LinearClampSampler
{
	Filter = Min_Mag_Mip_Linear;
	AddressU = Clamp;
	AddressV = Clamp;
};

sampler PointClampSampler
{
	Filter = Min_Mag_Mip_Point;
	AddressU = Clamp;
	AddressV = Clamp;
};

// Helper Methods
float GetHeightmapValue(float2 uv)
{
	uint rawValue = HeightmapTexture.Load(uint3(uv.x, uv.y, 0)); // <- Always returns 0,0,0,0
	return ((float)rawValue/(256.0f * 256.0f)) * MaxHeight;
}

// Vertex shaders
VSOutput ForwardRenderVS(VSInput IN)
{
	VSOutput Out = (VSOutput)0;
	float3 position = IN.Position.xyz;
	position.y = GetHeightmapValue(IN.Uv);
	Out.Position = mul(float4(position, 1.0f), WorldViewProjection);
	Out.Uv = IN.Uv;
	return Out;
}


// Fragment shaders.
float4 ForwardRenderFP(VSOutput In) : FRAG_OUTPUT_COLOR0
{
	return float4(0.0f, 1.0f, 0.0f, 0.0f);
}

// Render states.
BlendState NoBlend
{
	BlendEnable[0] = FALSE;
	RenderTargetWriteMask[0] = 15;
};

BlendState LinearBlend
{
	BlendEnable[0] = TRUE;
	SrcBlend[0] = SRC_ALPHA;
	DestBlend[0] = INV_SRC_ALPHA;
	BlendOp[0] = ADD;
	SrcBlendAlpha[0] = ZERO;
	DestBlendAlpha[0] = ZERO;
	BlendOpAlpha[0] = ADD;
	BlendEnable[1] = FALSE;
	RenderTargetWriteMask[0] = 15;
};

DepthStencilState DefaultDepthState
{
	DepthEnable = TRUE;
	DepthWriteMask = All;
	DepthFunc = Less;
	StencilEnable = FALSE;
};

RasterizerState DefaultRasterState
{
	CullMode = None;
};

RasterizerState CullBackRasterState
{
	CullMode = Front;
};

// Techniques.
technique11 ForwardRender
{
	pass pass0
	{
		SetVertexShader( CompileShader( vs_4_0, ForwardRenderVS() ) );
		SetPixelShader( CompileShader( ps_4_0, ForwardRenderFP() ) );
	
		SetBlendState( NoBlend, float4( 0.0f, 0.0f, 0.0f, 0.0f ), 0xFFFFFFFF );
		SetDepthStencilState( DefaultDepthState, 0 );
		SetRasterizerState( DefaultRasterState );
	}
}

[RenderDoc image]


Are your UVs in the range [0, 1] or are they in the range [0, HeightmapDimension]? I'm pretty sure Load takes texel coordinates as integers between 0 and the width/height of the resource in texels. When you take your UVs and cast them to an int you might be getting uint3(0,0,0) every time, and simply be loading the same texture location over and over again.
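
For example, the lookup in the posted shader might become something like this (HeightmapWidth and HeightmapHeight are hypothetical constants, not part of the original shader):

	// Scale the [0, 1] UVs up to integer texel coordinates before calling Load;
	// the z component of the coordinate selects mip level 0.
	uint3 texel = uint3(uv.x * HeightmapWidth, uv.y * HeightmapHeight, 0);
	uint rawValue = HeightmapTexture.Load(texel);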

Yeah, I had that thought as well. Even when I multiply the UV coords (which are in 0-1 space) by the dimensions of the texture, I still get all 0s back.

EDIT:

Here's the ASM, notice the r0 index value:

[screenshot of the shader disassembly]

And here's what that value should be returning:

[screenshot of the expected value]

So there is no such thing as an "L16" format in DX10/DX11. Instead there are a few variations of R16 formats, with different suffixes that determine how the texture data is interpreted when the texture is sampled by a shader. I'm not sure how you're loading your texture data (perhaps as a DDS file?), but I would guess that when the texture/SRV is created it's using the R16_UNORM format, since this is the moral equivalent of D3DFMT_L16 from DX9. The UNORM suffix means that the unsigned integer texture data will be interpreted as a [0, 1] floating-point value, with an integer value of 0 mapping to 0.0 and an integer value of 65535 mapping to 1.0.

If this is the case, you will want to declare your Texture2D using the <float> return type and remove your code for converting from integer to floating point. Using the <int> or <uint> return type for a texture with a UNORM, SNORM, or FLOAT format is illegal and will cause any texture fetches to return 0. You can easily check whether this is the case by creating your device with the D3D11_CREATE_DEVICE_DEBUG flag and checking your debugger output window for error messages (you can also configure the device to break on errors). I believe you can also set up RenderDoc to use the debug device for playing back your capture.
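
A minimal sketch of that change, assuming the SRV really is R16_UNORM (HeightmapWidth and HeightmapHeight are hypothetical constants, not part of the posted shader):

Texture2D<float> HeightmapTexture;

float GetHeightmapValue(float2 uv)
{
	// With a UNORM format the hardware already returns a [0, 1] float,
	// so no manual integer-to-float conversion is needed.
	uint3 texel = uint3(uv.x * HeightmapWidth, uv.y * HeightmapHeight, 0);
	return HeightmapTexture.Load(texel) * MaxHeight;
}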

Thanks MJP, I'll give that a shot. Is using the Load method the best way to read that value? Or (since it's going to be a float value returned anyway) should I go back to using SampleLevel?

Typically you'll want to use Load if you don't want any filtering applied to the texture, and SampleLevel if you do want filtering.
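
For the filtered path, a sketch of what SampleLevel might look like here, assuming the texture is declared as Texture2D<float> and the LinearClampSampler declaration from the posted shader binds as a regular sampler state:

float GetHeightmapValueFiltered(float2 uv)
{
	// SampleLevel takes normalized UVs directly and applies the sampler's
	// filtering; the last argument selects mip level 0.
	return HeightmapTexture.SampleLevel(LinearClampSampler, uv, 0) * MaxHeight;
}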

