How to use Texture Samplers

Started by BenS1, 11 comments, last by BenS1 11 years, 5 months ago
Somehow I seem to have missed a couple of the simpler subjects despite reading every DirectX 11 book I can find. Today's subject is Texture Samplers.

I've successfully created Samplers in code in the past using code like this:


// Create the sample state
D3D11_SAMPLER_DESC sampDesc;
ZeroMemory( &sampDesc, sizeof(sampDesc) );
sampDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
sampDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
sampDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
sampDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
sampDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
sampDesc.MinLOD = 0;
sampDesc.MaxLOD = D3D11_FLOAT32_MAX;
DXCall(g_pd3dDevice->CreateSamplerState( &sampDesc, &m_pSamplerLinear));
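
I then bind the sampler before drawing with something along these lines (g_pImmediateContext here is just a placeholder name for whatever ID3D11DeviceContext I'm drawing with):

// Bind the sampler to slot 0 of the Pixel Shader stage (s0 in HLSL)
g_pImmediateContext->PSSetSamplers( 0, 1, &m_pSamplerLinear );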


However, it's my understanding that you can define samplers in your HLSL code instead of your C++ code. So for example I could have the following in my HLSL:


SamplerState permSampler2d
{
    Filter = MIN_MAG_MIP_POINT;
    AddressU = WRAP;
    AddressV = WRAP;
};


The problem is, when I define the sampler in my HLSL code I always get the following Debug messages in Visual Studio:

449 : ID3D11DeviceContext::DrawIndexed: The Pixel Shader unit expects a Sampler to be set at Slot 0, but none is bound. This is perfectly valid, as a NULL Sampler maps to default Sampler state. However, the developer may not want to rely on the defaults.
449 : ID3D11DeviceContext::DrawIndexed: The resource return type for component 0 declared in the shader code (FLOAT) is not compatible with the Shader Resource View format bound to slot 0 of the Pixel Shader unit (UINT). This mismatch is invalid if the shader actually uses the view (e.g. it is not skipped due to shader code branching).


And any attempt to use the sampler in my Pixel Shader always returns black:


float4 PS(VertexOut pin) : SV_Target
{
    float4 colour = permTexture2d.SampleLevel(permSampler2d, pin.TextureUV, 0);
    return colour;
}


I've confirmed from the Visual Studio 2012 Graphics Debugger that the texture is correct at this point, and the TextureUV co-ords are correct, so I can only guess I'm doing something wrong with the sampler.

I feel that I'm missing something obvious here, but I don't know what.

Any help would be appreciated.

Thanks
Ben
P.S. Note that I'm not using the Effects framework. I presume defining Samplers in HLSL isn't something that's only possible through Effects?
I think Samplers are part of Effects Framework. I'm not sure why that code even compiles.



OK, thanks. So I presume I have to create my samplers through C++ code as per:

// Create the sample state
D3D11_SAMPLER_DESC sampDesc;
ZeroMemory( &sampDesc, sizeof(sampDesc) );
sampDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
sampDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
sampDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
sampDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
sampDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
sampDesc.MinLOD = 0;
sampDesc.MaxLOD = D3D11_FLOAT32_MAX;
DXCall(g_pd3dDevice->CreateSamplerState( &sampDesc, &m_pSamplerLinear));


All the DirectX 11 books I've got are great, but I wish there were a DirectX 11 book out there that covers real DirectX, and not DirectX hidden behind the Effects Framework/D3DX/DXUT etc.

I guess that now that stuff has gone from DirectX in the latest Windows SDK, any future books will cover pure DirectX. Hopefully they won't all assume that we only want to write Metro-style games for Win 8.

Thanks for your help.
Ben

I think Samplers are part of Effects Framework. I'm not sure why that code even compiles.


Yes, declaring sampler states this way requires using Effects. It compiles because the shader compiler also handles compiling effects, so some of the effect syntax is part of HLSL (which is pretty unfortunate, IMO).
The effects framework is now open-source. I haven't looked at the code yet, but surely you could copy the relevant part that parses effect files?
EDIT: Oh, but the compiler is not open-source :( In that case, the best I can think of is that you write your own parser that reads the file and generates sampler states. Painful.
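
Without the effects framework, the HLSL side just declares the texture and sampler against registers, and the actual filter/address state comes from the ID3D11SamplerState you create and bind from C++. A rough sketch (the register slots here are just an example):

// Plain HLSL, no effects: declarations only, no state assignments
Texture2D permTexture2d : register(t0);
SamplerState permSampler2d : register(s0);

float4 PS(float2 uv : TEXCOORD0) : SV_Target
{
    return permTexture2d.SampleLevel(permSampler2d, uv, 0);
}
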
Thanks MJP and hupsilardee,

OK, it's not a big problem. I was just trying to save a little code on the C++ side, and having seen samplers defined in other HLSL code I just assumed it was part of HLSL. Now that I know it's actually part of the Effects framework, I'll just go back to creating the samplers from C++.

Thanks again
Ben
Ok, I now create and set the samplers from my C++ code... but now I'm getting these errors:

D3D11: ERROR: ID3D11DeviceContext::DrawIndexed: The Shader Resource View in slot 1 of the Pixel Shader unit is using the Format (R8G8B8A8_UINT). This format does not support 'Sample', 'SampleLevel', 'SampleBias' or 'SampleGrad', at least one of which may being used on the Resource by the shader. This mismatch is invalid if the shader actually uses the view (e.g. it is not skipped due to shader code branching). [ EXECUTION ERROR #371: DEVICE_DRAW_RESOURCE_FORMAT_SAMPLE_UNSUPPORTED ]

D3D11: ERROR: ID3D11DeviceContext::DrawIndexed: The resource return type for component 0 declared in the shader code (FLOAT) is not compatible with the Shader Resource View format bound to slot 1 of the Pixel Shader unit (UINT). This mismatch is invalid if the shader actually uses the view (e.g. it is not skipped due to shader code branching). [ EXECUTION ERROR #361: DEVICE_DRAW_RESOURCE_RETURN_TYPE_MISMATCH ]

So if I can't use SampleLevel to read from my textures then what do I use?

Note that I'm trying to convert the Perlin Noise code from GPU Gems 2 to DirectX 11. The original code was DirectX 9 and used "tex2Dlod" to sample the textures, but apparently this is deprecated in DirectX 11.

Thanks again for any help.
Ben
You Sample from floating point textures and you Load from integer textures.

Sample : http://msdn.microsoft.com/en-us/library/windows/desktop/bb509695%28v=vs.85%29.aspx

Load : http://msdn.microsoft.com/en-us/library/windows/desktop/bb509694%28v=vs.85%29.aspx
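
Roughly, the difference looks like this in HLSL (the names, register slots and texture size below are just for illustration):

// Illustration only - names and sizes are made up
Texture2D<float4> texFloat : register(t0); // UNORM/FLOAT view: can use Sample/SampleLevel
Texture2D<uint4>  texUint  : register(t1); // UINT view: has to use Load
SamplerState      samPoint : register(s0);

static const uint2 kTexSize = uint2(256, 256); // assumed texture dimensions

float4 PS(float2 uv : TEXCOORD0) : SV_Target
{
    // Sample/SampleLevel: normalised UVs, filtering and wrapping come from the sampler
    float4 a = texFloat.SampleLevel(samPoint, uv, 0);

    // Load: integer texel coordinates plus a mip level, no sampler involved,
    // so no filtering and any wrapping has to be done manually
    uint2 texel = uint2(uv * kTexSize) % kTexSize;
    uint4 b = texUint.Load(int3(texel, 0));

    return a + (float4)b / 255.0f;
}
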
You could try a format that supports sampling, such as DXGI_FORMAT_R8G8B8A8_UNORM (or a higher-precision FLOAT format), or possibly use the Load function to read specific texels from your UINT texture.
Thanks Gavin and Zaoshi,

Looking at the documentation for Load, it looks like the texture co-ords are ints, which is a little odd as texture co-ords are normally between 0 and 1.

Does the Load function support wrapping etc.? (The documentation doesn't say.)

Actually, just changing my texture formats from DXGI_FORMAT_R8G8B8A8_UINT to DXGI_FORMAT_R8G8B8A8_UNORM and from DXGI_FORMAT_R8_UINT to DXGI_FORMAT_R8_UNORM seems to have fixed the problem.
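
In case it helps anyone else, the only change was the Format in the texture description, something like this (other fields trimmed and the sizes are just placeholders):

// Texture description - the fix was just switching the Format to UNORM
D3D11_TEXTURE2D_DESC texDesc;
ZeroMemory( &texDesc, sizeof(texDesc) );
texDesc.Width = 256;  // placeholder size
texDesc.Height = 256; // placeholder size
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; // was DXGI_FORMAT_R8G8B8A8_UINT
texDesc.SampleDesc.Count = 1;
texDesc.Usage = D3D11_USAGE_DEFAULT;
texDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;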

Out of interest, what's the difference between DXGI_FORMAT_R8_UNORM and DXGI_FORMAT_A8_UNORM?

Thanks
Ben


