NonUniformResourceIndex in Vulkan glsl

4 comments, last by galop1n 6 years, 2 months ago

I need to index into a texture array using indices that are not dynamically uniform. This works fine on NVIDIA chips, but on AMD you can see artifacts caused by the wavefront problem: many pixel invocations end up reading with the wrong index value. I know you can fix this in HLSL with NonUniformResourceIndex. Is there an equivalent for Vulkan GLSL?

This is the shader code for reference. As you can see, index is an arbitrary per-pixel value and is not dynamically uniform. In HLSL I fix this with NonUniformResourceIndex(index).


layout(set = 0, binding = 0) uniform sampler textureSampler;
layout(set = 0, binding = 1) uniform texture2D albedoMaps[256];

layout(location = 0) out vec4 oColor;

void main()
{
	uint index = calculate_arbitrary_texture_index();
	vec2 texCoord = calculate_texcoord();
	vec4 albedo = texture(sampler2D(albedoMaps[index], textureSampler), texCoord);

	oColor = albedo;
}

Thank you


I'm not a Vulkan guy, but I Googled it and came across this information. It should set you on the right path:

https://www.khronos.org/registry/vulkan/specs/1.0/man/html/VkPhysicalDeviceFeatures.html

Quote

shaderSampledImageArrayDynamicIndexing indicates whether arrays of samplers or sampled images can be indexed by dynamically uniform integer expressions in shader code. If this feature is not enabled, resources with a descriptor type of VK_DESCRIPTOR_TYPE_SAMPLER, VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, or VK_DESCRIPTOR_TYPE_SAMPLED_IMAGE must be indexed only by constant integral expressions when aggregated into arrays in shader code. This also indicates whether shader modules can declare the SampledImageArrayDynamicIndexing capability.

Chances are this bool is true for NVIDIA and false for AMD?

Adam Miles - Principal Software Development Engineer - Microsoft Xbox Advanced Technology Group

Vulkan doesn't have this intrinsic.

 

You'll have to do by hand what the compiler does for you when shaderSampledImageArrayDynamicIndexing is false:


// Requires GL_ARB_shader_ballot:
// #extension GL_ARB_shader_ballot : require
uint NonUniformResourceIndex( uint textureIdx )
{
	// Waterfall loop: each iteration broadcasts the index held by the
	// first active lane; lanes whose index matches peel off and return,
	// the rest loop again until every lane has returned.
	while (true)
	{
		uint currentIdx = readFirstInvocationARB(textureIdx);

		if (currentIdx == textureIdx)
			return currentIdx;
	}

	return 0; // Unreachable; execution should never land here
}
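
With that helper, the fetch in the original shader might be rewritten like this (a hypothetical usage sketch, assuming GL_ARB_shader_ballot is enabled and albedoMaps/textureSampler are the bindings from the question):

```glsl
#extension GL_ARB_shader_ballot : require

// In main(), route the per-pixel index through the waterfall helper
// so the descriptor index is uniform at the point of use:
vec4 albedo = texture(sampler2D(albedoMaps[NonUniformResourceIndex(index)], textureSampler), texCoord);
```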

 

Cheers

Matias

Hey Matias,

Thanks for the code, but I am still getting the wavefront tiling artifacts. Is there no way around this problem? Since HLSL can do it, I assume it is not a hardware limitation.

This makes me curious: how do cross-compilers handle this when translating HLSL to GLSL, or do they just not support it?

7 hours ago, mark_braga said:

Hey Matias,

Thanks for the code, but I am still getting the wavefront tiling artifacts. Is there no way around this problem? Since HLSL can do it, I assume it is not a hardware limitation.

This makes me curious: how do cross-compilers handle this when translating HLSL to GLSL, or do they just not support it?

This is indeed a hardware limitation. Texture descriptors live in scalar registers on GCN; if your index is not uniform across the lanes, you get the artifacts you see. What the NonUniformResourceIndex intrinsic does is generate a loop that masks threads per index and repeats the texture fetch until all lanes have done it. It does not do this at the DXC/DXIL bytecode level; it just tags the index in case the driver has to do it.

If you have access to ballot and readFirstInvocation in GLSL, it is easy to write two versions of the shader, one for GCN and one for NVIDIA, doing the waterfall loop or not depending on the GPU.
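
A sketch of that per-index loop in Vulkan GLSL, assuming GL_ARB_shader_ballot is available; the key difference from the earlier helper is that the texture fetch itself sits inside the loop, so the descriptor index is guaranteed scalar at the point of use (sampleNonUniform is a hypothetical helper name; albedoMaps and textureSampler are the bindings from the question):

```glsl
#extension GL_ARB_shader_ballot : require

vec4 sampleNonUniform(uint index, vec2 texCoord)
{
	vec4 result = vec4(0.0);

	// Waterfall: broadcast the first active lane's index, let the
	// matching lanes fetch with that now-uniform index and break out,
	// and loop until every lane has sampled.
	for (;;)
	{
		uint uniformIdx = readFirstInvocationARB(index);
		if (uniformIdx == index)
		{
			result = texture(sampler2D(albedoMaps[uniformIdx], textureSampler), texCoord);
			break;
		}
	}

	return result;
}
```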

This topic is closed to new replies.
