Husbjörn

SampleLevel not honouring integer texel offset



This is a curious one.

I recently had to switch out some SampleCmpLevelZero calls on a Texture2DArray for use with shader model 4. The go-to solution seemed to be SampleLevel, called as such:

Tex2DArray.SampleLevel(LegacySampler, coord, 0, texelOffset);

The MSDN article on the function states that the offset argument should indeed be supported: https://msdn.microsoft.com/en-us/library/windows/desktop/bb509699(v=vs.85).aspx

Once compiled, however, it has absolutely no effect; indeed, it doesn't even give a compilation error if the offset is set outside the valid range of (-8 .. +7).

Furthermore, the assembly instruction associated with SampleLevel, sample_l, doesn't seem to have any offset argument at all: https://msdn.microsoft.com/en-us/library/windows/desktop/hh447229(v=vs.85).aspx

It is of course possible that the offset is merged with the input texcoord through separate instructions prior to calling sample_l, but I can't seem to find any indication of this looking at my disassembled shader (granted I'm far from experienced with that though).

 

So I guess my question is whether this function indeed does not support the integer texel offset argument, with the compiler (version 43) essentially just silently removing it, or if something is going particularly wrong on my end? Perhaps this is a known bug in my particular HLSL compiler dll?

 

I know I can just go the extra mile and compute the corresponding texcoord offsets myself, and I suppose I will, but I still found this intriguing enough to post about, since I haven't been able to find any information regarding it.

Edited by Husbjörn

Based on personal experience, do not rely on the offset parameters: broken drivers, broken hardware, mismatching results across vendors. It's better to just apply the offset yourself to the UVs.
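
For reference, applying the offset to the UVs yourself could look something like the sketch below. It reuses the Tex2DArray, LegacySampler, coord and texelOffset names from the original snippet, and assumes you are sampling mip level 0 (for another mip level you would need that level's dimensions instead):

```hlsl
// Query the mip-0 dimensions of the array texture.
float width, height, elements;
Tex2DArray.GetDimensions(width, height, elements);

// Convert the integer texel offset into normalised UV space.
float2 uvOffset = float2(texelOffset) / float2(width, height);

// The offset only applies to u/v; the array slice (coord.z) is untouched.
float3 offsetCoord = float3(coord.xy + uvOffset, coord.z);
float4 result = Tex2DArray.SampleLevel(LegacySampler, offsetCoord, 0);
```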


Version 43 of the compiler is pretty old now; the latest Windows SDK contains version 47.

 

Can you compile the following shader and paste the DXBC output and compare it to what I get?

Texture2DArray<float> tex;
SamplerState samp;

float main(float2 uv : A) : SV_TARGET
{
	return tex.SampleLevel(samp, float3(uv, 2), 0, int2(-5, 5));
}
dcl_globalFlags refactoringAllowed
dcl_sampler s0, mode_default
dcl_resource_texture2darray (float,float,float,float) t0
dcl_input_ps linear v0.xy
dcl_output o0.x
dcl_temps 1
mov r0.xy, v0.xyxx
mov r0.z, l(2.000000)
sample_l_aoffimmi_indexable(-5,5,0)(texture2darray)(float,float,float,float) r0.x, r0.xyzx, t0.xyzw, s0, l(0.000000)
mov o0.x, r0.x
ret

Thanks,

 

Adam


Can you compile the following shader and paste the DXBC output and compare it to what I get?

Certainly. As you can see, there is no mention of any offset coordinates, nor does it use the "aoffimmi_indexable" postfix:

dcl_globalFlags refactoringAllowed
dcl_sampler s0, mode_default
dcl_resource_texture2darray (float,float,float,float) t3
dcl_input_ps linear v0.xy
dcl_output o0.x
dcl_temps 1

mov r0.xy, v0.xyxx
mov r0.z, l(2.000000)
sample_l(texture2darray)(float,float,float,float) r0.x, r0.xyzx, gTex.xyzw, gSampler, l(0)
mov o0.x, r0.x
ret

The array texture is named "gTex" and bound to register t3, while the sampler is named "gSampler" and bound to s0, which should account for the differences from your output.

 

 

Version 43 of the compiler is pretty old now, the latest Windows SDK contains Version 47.

Yes, I know; perhaps I should try to replace it. I've been using it since it's the version included with the DirectX SDK from February 2010, which is the one I'm using. I recall that trying to upgrade to the June 2010 version gave me trouble about two years ago, so I settled for downgrading. Is there any way to maybe just replace the HLSL compiler? It would appear it simply loads d3dx11_42.dll (yes, I was wrong, and it isn't even using version 43 :rolleyes: ) at runtime, so I guess I could take a newer version, rename it to that, and drop it in my program's working directory. I'd rather not have to rename it though.

 

 

 

Based on personal experience, do not rely on the offset parameters: broken drivers, broken hardware, mismatching results across vendors. It's better to just apply the offset yourself to the UVs.

Hm, I see. Yes, I suppose that wouldn't be too big of a hassle; I did successfully switch to that approach, and I guess I will stick with it in the future as well then. Thanks for pointing it out!


Your best bet is to get off of the long-since-deprecated DirectX SDK and move to the Windows SDK; there's no guarantee that this bug was fixed in the June 2010 DXSDK.

 

What's blocking you from making the move to the Windows SDK?

 

Take a read through my colleague Chuck's blog post on how to replace any dependencies you have in there and see if you can get to something a little more recent!

 

https://blogs.msdn.microsoft.com/chuckw/2015/08/05/where-is-the-directx-sdk-2015-edition/


Hah, yeah I suppose I really should try for that sooner or later.

 

What's blocking you from making the move to the Windows SDK?

Mainly a somewhat large pre-existing codebase and laziness, but also the need to support Windows 7 and Vista. I'm not sure whether upgrading to the Windows SDK would affect that, though I do recall reading that it was more for Windows 8+ (i.e. DirectX 11.1 and above)? I may very well be wrong there though.

 

As your linked article says, however:

Your application uses XAudio2 and supports Windows 7 systems.

This would also be a problem. Not an insurmountable one though, as I don't have that much audio functionality yet; it would probably be possible to switch to another library without too much hassle.

Edited by Husbjörn


It's possible to just use the new compiler and still use the old SDK, if for some reason you're really keen on not switching. If you're using fxc.exe it's easy: just use the new version. If you're linking to the D3DCompiler DLL it's a little trickier, since you will probably have trouble making sure that your app links to the correct import lib. One way to make sure that you use the right version is to not use an import lib at all, and instead manually call LoadLibrary/GetProcAddress to get a pointer to the function you want to use from d3dcompiler_47.dll.
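
In case it helps, a minimal sketch of that LoadLibrary/GetProcAddress approach might look like this (assuming you only need D3DCompile; the pD3DCompile typedef comes from d3dcompiler.h and matches the exported function's signature):

```cpp
#include <windows.h>
#include <d3dcompiler.h>  // provides the pD3DCompile function-pointer typedef

// Explicitly load the newer compiler DLL instead of linking against an
// import lib, so there is no ambiguity about which version gets used.
pD3DCompile LoadCompiler()
{
    HMODULE dll = LoadLibraryA("d3dcompiler_47.dll");
    if (dll == nullptr)
        return nullptr;  // DLL not found next to the exe or on the search path

    // Resolve the D3DCompile entry point from the loaded module.
    return reinterpret_cast<pD3DCompile>(GetProcAddress(dll, "D3DCompile"));
}
```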


One way to make sure that you use the right version is to not use an import lib at all, and instead manually call LoadLibrary/GetProcAddress to get a pointer to the function you want to use from d3dcompiler_47.dll.

Ah yes, that should work; there are only two functions being imported from there, so it should be easy enough. Thanks.

 

I probably should see about making the switch, and look further into what it will entail with regard to older OS support, one of these days though.

