Loop fails in HLSL3


Hello,

I've got this function in my shader that performs PCF filtering on a cascaded shadow map, just like in the DirectX 11 SDK sample:


float CalculatePCFPercentLit(float4 vShadowTexCoord, float fBlurRowSize, int numCascade) 
{
	float fPercentLit = 0.0f;
	// This loop could be unrolled, and texture immediate offsets could be used if the kernel size were fixed.
	// This would be a performance improvement.
	float depthcompare = vShadowTexCoord.z;
	// A very simple solution to the depth bias problems of PCF is to use an offset.
	depthcompare -= vBlur.z; // z => bias
			
	for( int x = int(vBlur.x); x < vBlur.y; ++x ) 
	{
		for( int y = int(vBlur.x); y < vBlur.y; ++y ) 
		{
			// Compare the transformed pixel depth to the depth read from the map.
			float depth = Sample(Shadow, 
				float2( 
					vShadowTexCoord.x + (float(x) * vBorderPadding.w) , // w => native texel size 
					vShadowTexCoord.y + (float(y) * vBorderPadding.x) // x => texelsize 
					));
					
			fPercentLit += (depthcompare <= depth ? 1.0f : 0.0f);
		}
	}
	fPercentLit /= fBlurRowSize;
	return fPercentLit;
}

Now in DirectX 11 this works just fine, but in DirectX 9 it fails with these warnings/errors:


C:\Acclimate Engine\Techdemo\Game\Modules\Shadow\Effects\memory(59,19): warning X3553: can't use gradient instructions in loops with break, forcing loop to unroll
C:\Acclimate Engine\Techdemo\Game\Modules\Shadow\Effects\memory(53,5): error X3511: unable to unroll loop, loop does not appear to terminate in a timely manner (1024 iterations)
C:\Acclimate Engine\Techdemo\Game\Modules\Shadow\Effects\memory(51,5): error X3511: forced to unroll loop, but unrolling failed.

What does that mean? I'm not using any break in this loop as the warning suggests, so why does the loop try to unroll? Is it even possible to have a loop like this driven by c-register input (that's where vBlur comes from) in HLSL for DirectX 9? The target level is 3.0, and the "Sample" function translates to tex2D when this effect is parsed to HLSL3. Any ideas?


I think that the compiler can't detect the fixed integer logic behind your loops. Try something like this:


int range = int(vBlur.y - vBlur.x);
	for( int x = 0; x < range; ++x ) 
	{
		for( int y = 0; y < range; ++y ) 
		{
			// Compare the transformed pixel depth to the depth read from the map.
			float depth = Sample(Shadow, 
				float2( 
					vShadowTexCoord.x + ((float(x) + vBlur.x) * vBorderPadding.w), // w => native texel size 
					vShadowTexCoord.y + ((float(y) + vBlur.x) * vBorderPadding.x)  // x => texelsize 
					));

			fPercentLit += (depthcompare <= depth ? 1.0f : 0.0f);
		}
	}



Maybe it has issues with non-fixed loops too (the forced unrolling is an indication); it would be better to compile it with a predefined fixed range.

I'm assuming the shadow texture doesn't have mipmaps, so you don't need gradient sampling in the first place (i.e. use tex2Dlod instead of tex2D).
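
In the generated HLSL3 that boils down to replacing the tex2D call the Sample() wrapper expands to with tex2Dlod (a minimal sketch; ShadowSampler is just a hypothetical name for whatever sampler2D the wrapper resolves to):

sampler2D ShadowSampler; // hypothetical: the sampler2D that Sample(Shadow, ...) parses to

float SampleShadowDepth(float2 uv)
{
	// tex2Dlod takes a float4 coordinate whose .w selects the mip level
	// explicitly, so no screen-space gradients are needed and the fetch
	// is allowed inside dynamic loops and branches.
	return tex2Dlod(ShadowSampler, float4(uv, 0.0f, 0.0f)).r;
}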


I'm assuming the shadow texture doesn't have mipmaps, so you don't need gradient sampling in the first place (i.e. use tex2Dlod instead of tex2D).

Thanks, that fixes it, so that's what "gradient" stands for, eh? Learnt something new today :D Would you recommend using tex2Dlod with an explicit LOD of 0 everywhere a texture doesn't have mip levels, e.g. for performance reasons and to prevent this kind of issue, or would that be premature optimization?

I don't know whether there is a performance difference between the texture lookup instructions (they probably use the same hardware sampling features), but any instruction that uses gradient information can't be used in situations where neighboring pixels might take different paths of execution in their respective shaders. The gradients are computed from neighboring pixels (finite differences), which is exactly why gradient instructions are outlawed inside dynamic branches.

Can you inspect your assembly listing for that shader to see if FXC is generating dynamic branch instructions?

Dynamic loops in SM3 using uniform variables are often really ugly if you look at the generated assembly, e.g. this:
	for( int x = int(vBlur.x); x < vBlur.y; ++x ) 
often gets compiled into this:
	float x = vBlur.x;
	for( int i = 0; i != 255; ++i, x+=1.0 ) 
		if( x >= vBlur.y )
			break;
It will likely be much more efficient if all the variables used by the loop are known at compile time, and then you can put an [unroll] hint before each loop.
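
For instance, if the kernel radius were baked in through the OP's permutation system as a preprocessor define, it could look something like this (a sketch only; PCF_KERNEL_RADIUS and the function name are made up, the globals are the ones from the original shader):

// Hypothetical compile-time constant, e.g. injected per shader permutation.
#define PCF_KERNEL_RADIUS 2

float CalculatePCFPercentLitFixed(float4 vShadowTexCoord)
{
	float fPercentLit = 0.0f;
	float depthcompare = vShadowTexCoord.z - vBlur.z; // z => bias

	// Both loop bounds are now literals, so the [unroll] hints can succeed
	// and no dynamic flow control is generated around the texture fetches.
	[unroll]
	for( int x = -PCF_KERNEL_RADIUS; x <= PCF_KERNEL_RADIUS; ++x )
	{
		[unroll]
		for( int y = -PCF_KERNEL_RADIUS; y <= PCF_KERNEL_RADIUS; ++y )
		{
			float depth = Sample(Shadow,
				float2(
					vShadowTexCoord.x + (float(x) * vBorderPadding.w), // w => native texel size
					vShadowTexCoord.y + (float(y) * vBorderPadding.x)  // x => texelsize
					));
			fPercentLit += (depthcompare <= depth ? 1.0f : 0.0f);
		}
	}

	// With a fixed kernel the tap count is a compile-time constant as well.
	const float kernelTaps = (PCF_KERNEL_RADIUS * 2 + 1) * (PCF_KERNEL_RADIUS * 2 + 1);
	return fPercentLit / kernelTaps;
}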

Can you inspect your assembly listing for that shader to see if FXC is generating dynamic branch instructions?

Is there any way to output this in the application when using D3DCompile? I could save the parsed shader and compile that with FXC, but I would prefer a built-in solution, if there is one...

@Hodgeman:

So that's where the "break" originates from, I see. I guess I could use my shader permutation system to make the PCF-filter kernel known at compile time, but first I'd have to implement selective compilation. Right now I compile every possible permutation, and that shader already has 4 bits for the number of cascades, so I don't want compile time to go up too much. Especially since, of all those combinations of cascade & PCF filter, only one combination can actually be active at a time anyway...


Is there any way to output this in the application when using D3DCompile? I could save the parsed shader and compile that with FXC, but I would prefer a build-in solution, if there is one...

Not that I am aware of. It would probably be a nice feature to add to your engine though - being able to dump the parsed shader to a file for inspection seems like a reasonable thing to want when debugging your generated shader code. It doesn't need to be done all the time; you could selectively choose to save a file only while debugging...

I think D3DDisassemble() should get you that assembly output without having to mess about with CreateProcess("fxc.exe", ...)

By the way, when reading a single channel from a texture you can save on texture reads by using instructions like Gather() or GatherRed() (depending on whether the hardware is DX10 or DX11). They work best when you know what pixels you're going to read at compile time.
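
As a rough sketch of that idea for the D3D11 path (Gather()/GatherRed() don't exist in SM3, so this wouldn't help the DX9 case; ShadowMap and PointSampler are hypothetical names):

Texture2D<float> ShadowMap;    // hypothetical single-channel shadow map
SamplerState     PointSampler; // hypothetical point/clamp sampler

float FourTapCompare(float2 uv, float depthcompare)
{
	// GatherRed returns the red channel of the four texels in the 2x2
	// footprint around uv, i.e. four depth values with a single fetch.
	float4 depths = ShadowMap.GatherRed(PointSampler, uv);

	// step(a, b) is 1.0 where b >= a, so this compares all four taps at once.
	float4 lit = step(depthcompare, depths);
	return dot(lit, float4(0.25f, 0.25f, 0.25f, 0.25f));
}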
