Shader limitations and best practices


12 replies to this topic

#1 jefferytitan   Crossbones+   -  Reputation: 2118


Posted 26 July 2012 - 05:32 AM

Hi people,

Recently I've been experimenting with shaders in Unity, particularly terrain beautification with better blending and bump-mapping. It's been a haphazard self-teaching and experimentation process so far. I have a few questions which may fall under basics or advanced.
  • What's the best practice for making shaders cross-platform? At the moment I'm developing on Mac only, so hard to know.
  • So far my shaders have had no significant effect on frame-rate. What tends to have the most effect? Textures, number of shaders, number of instructions per shader, branching?
  • Do I need to offer cut-down versions for lower spec'd hardware, and if so how does it know which version to use?
  • Is it worth branching if most of the time a huge amount of work can be skipped, e.g. only do bump-mapping within distance x of the camera?
  • When blending bump-mapped textures would it pay off to waste texture space by using a completely flat normal texture rather than having a special case for textures with no bump-map?
  • What is the instruction count limit based on? Some shaders I've seen seem very "busy" but don't exceed the limits.
Any help would be appreciated. Screenshots will follow if I manage to make something look nice. ;)

JT

#2 Ashaman73   Crossbones+   -  Reputation: 7416


Posted 26 July 2012 - 05:52 AM

My 2 cents.

# What's the best practice for making shaders cross-platform? At the moment I'm developing on Mac only, so hard to know.

You are talking about OpenGL/GLSL, right? Then try to support at least the minimum requirements of your chosen OpenGL/GLSL version. You can find the supported features (e.g. the number of indirect texture accesses) in the specification.

# So far my shaders have had no significant effect on frame-rate. What tends to have the most effect? Textures, number of shaders, number of instructions per shader, branching?

textures => bandwidth
number of shaders => not that important, as long as you don't switch shaders all the time; try to batch API calls by material/shader.
number of instructions => very important for pixel shaders; fewer is always better.
branching => a certain amount of branching will not hurt; it can even improve performance if you skip expensive calculations that affect large parts of the screen (shaders are executed in groups (tiles), and the slowest shader in a group slows down the rest, but you benefit from branching when all shaders in a group behave the same way most of the time).

# Do I need to offer cut-down versions for lower spec'd hardware, and if so how does it know which version to use?

Use pre-processor statements to support multiple versions.
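For example, something like this (a minimal sketch; the macro name QUALITY_HIGH and the helper lighting() are illustrative, not part of Unity's API — in Unity you would set such defines via separate shader variants):

[source lang="plain"]
// Hypothetical example: compile two variants of one shader with a
// pre-processor switch (QUALITY_HIGH is an assumed, user-defined macro).
#ifdef QUALITY_HIGH
    // full path: per-pixel normal mapping
    float3 n = UnpackNormal(tex2D(_BumpMap, IN.uv));
    float3 col = tex2D(_MainTex, IN.uv).rgb * lighting(n);
#else
    // cut-down path for low-spec hardware: interpolated vertex normal only
    float3 col = tex2D(_MainTex, IN.uv).rgb * lighting(IN.normal);
#endif
[/source]

The engine (or your own device-capability check) then picks which compiled variant to bind at runtime.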

# Is it worth branching if most of the time a huge amount of work can be skipped, e.g. only do bump-mapping within distance x of the camera?

Yes. See above (shader groups).
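A sketch of the distance-based skip described in the question (Cg-style; the uniform name _BumpDistance is illustrative, and IN.worldPos assumes the world position is passed into the fragment stage):

[source lang="plain"]
// Skip the expensive bump-mapping path beyond a cutoff distance.
// Nearby pixels usually fall on the same side of the branch, so whole
// tiles take the cheap path together and the branch pays off.
float dist = length(IN.worldPos - _WorldSpaceCameraPos);
float3 n;
if (dist < _BumpDistance)                       // _BumpDistance: assumed uniform
{
    n = UnpackNormal(tex2D(_BumpMap, IN.uv));   // expensive path
}
else
{
    n = float3(0, 0, 1);                        // flat tangent-space normal
}
[/source]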

# When blending bump-mapped textures would it pay off to waste texture space by using a completely flat normal texture rather than having a special case for textures with no bump-map?

It depends on the overhead; texture access is not always cheap. But it could pay off if you use a small texture (cache friendly) and keep your shader code free of additional branches.

# What is the instruction count limit based on? Some shaders I've seen seem very "busy" but don't exceed the limits.

Depends on the supported OpenGL/GLSL version and hardware. Most modern hardware no longer has an instruction limit, though there are other limitations (registers), and many instructions will reduce performance.

Edited by Ashaman73, 26 July 2012 - 05:56 AM.


#3 jefferytitan   Crossbones+   -  Reputation: 2118


Posted 26 July 2012 - 04:37 PM

Thanks for your reply!

I will check when I get home, as Unity is cross-platform and wraps everything up in ShaderLab. I believe that I'm using Cg, which has similar syntax to HLSL. Apparently it gets compiled for OpenGL and DirectX (9?).

So branching isn't so bad as long as there are likely to be contiguous regions in screen-space which branch the same way? But wouldn't the tiles in the border region between branch 1 and branch 2 slow everything down, e.g. weakest link in the chain?

My main issue is that I want to do bucket-loads in a single shader. Unity's terrain engine has a lot of nice features that I don't want to give up, however there is only one shader that gets applied to all terrain. So I'm wondering whether I can squeeze bump-mapping, specular maps and nicer texture blending in together with the existing splat mapping. Does that sound overly optimistic? The instruction limit that I referred to was in the Unity shader compiler, so it may be a lowest-common-denominator thing. I've only encountered it a few times, like when doing Perlin noise with 4 harmonics.

#4 jefferytitan   Crossbones+   -  Reputation: 2118


Posted 26 July 2012 - 11:35 PM

Yeesh, I just did some research. Apparently by default Unity uses vertex shader 1.1 and pixel shader 2.0, and if you ask nicely you can have 3.0 for both:
http://docs.unity3d.com/Documentation/Components/SL-ShaderPrograms.html#renderers

According to the below, the capabilities of the former are pitiful:
http://en.wikipedia.org/wiki/Shader_Model

Apparently you can also specify a target platform and request higher limits for texture indirections etc assuming the graphics card allows it. I guess the key is prioritising features and lots of failovers. ;)

#5 Ashaman73   Crossbones+   -  Reputation: 7416


Posted 26 July 2012 - 11:40 PM

Cg is just a meta-language which compiles to GLSL or HLSL; on Mac, most likely GLSL. If you can't extract the information directly from Unity (I don't know Unity myself), choose a certain GPU generation and check its supported DirectX/OpenGL version to get a glimpse of what the shaders are capable of.

So branching isn't so bad as long as there are likely to be contiguous regions in screen-space which branch the same way?

Branches are not necessarily bad; small branches may even be optimized into non-branching code by the driver, but it is still a good idea to avoid them where possible.

But wouldn't the tiles in the border region between branch 1 and branch 2 slow everything down, e.g. weakest link in the chain?

Yes, but only a group of them; if 1 out of 10 tiles is a border tile, you still save a lot of GPU processing power. This will not work so well if your branches flicker a lot. A simple example is a soft-shadow shader, where you check 4 pixels first and only continue to check another batch of, say, 20 pixels if the first 4 are a mix of shadowed and unshadowed pixels. In that case all tiles which are completely in or out of shadow will perform much faster.
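The soft-shadow early-out could be sketched like this (the helpers sampleShadow4 and sampleShadow20 are hypothetical names for a 4-tap and a 20-tap shadow filter):

[source lang="plain"]
// Cheap test first: 4 shadow-map taps.
float4 s = sampleShadow4(uv);                       // hypothetical 4-tap helper
float avg = dot(s, float4(0.25, 0.25, 0.25, 0.25)); // average of the 4 taps
float shadow;
if (avg == 0.0 || avg == 1.0)    // (exact compare works here because each
{                                //  tap is assumed to return exactly 0 or 1)
    shadow = avg;                // tile fully in or out of shadow: done
}
else
{
    shadow = sampleShadow20(uv); // penumbra: pay for the 20 extra taps
}
[/source]

Tiles whose pixels all take the cheap path never execute the expensive sampler at all.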

Unity's terrain engine has a lot of nice features that I don't want to give up, however there is only one shader that gets applied to all terrain.

I think that one shader for the terrain isn't that uncommon. If Unity is using a deferred rendering/lighting system, you will have the chance to apply more effects in post-processing steps.

So I'm wondering whether I can squeeze bump-mapping, specular maps and nicer texture blending in together with the existing splat mapping. Does that sound overly optimistic?

This doesn't sound overly optimistic as long as you don't target too old hardware. Normal maps, specular, color and AO maps (channels) are quite common; best to choose a standard quality level and add higher-quality shader options (pre-processor in Cg/Unity?) for better GPUs.

#6 jefferytitan   Crossbones+   -  Reputation: 2118


Posted 28 July 2012 - 04:50 PM

Thanks for the info. Currently I've managed to get splat-mapping with 4 textures, relief-mapping with cone step mapping, improved blending between textures (based on height difference) and one simple specular float per texture into one Shader Model 3 terrain shader. That's pretty much where I've hit the 512 instruction limit, as the CSM takes a lot of instructions after loop unrolling. I'm not sure how much further optimisation could get me. I've heard you can do multi-pass shaders. Is it worth it, or too tricky/expensive?

I also tried doing a GLSL specific version using the feature as below:
#pragma glsl - when compiling shaders for desktop OpenGL platforms, convert Cg/HLSL into GLSL (instead of default setting which is ARB vertex/fragment programs)

Curiously the relief mapping didn't work at all (completely flat appearance), but the rest of the shader worked. Could the GLSL compiled version not be getting all the textures?

Anyway, will follow up with screenshots and maybe shader code if anyone is interested.

#7 jefferytitan   Crossbones+   -  Reputation: 2118


Posted 07 August 2012 - 06:30 AM

Hi people,

Just thought I'd show you my progress so far. I'm hitting the SM3 512 instruction limit right now with no specular map. :( Also there's a bit of aliasing which may or may not be coming from the cone step mapping.

[screenshots attached]

#8 Ashaman73   Crossbones+   -  Reputation: 7416


Posted 07 August 2012 - 07:33 AM

SM3 512 instruction

512 instructions for cone step mapping? The reason is most likely that your loops are being unrolled. Try limiting the number of steps for lower-end video cards, and try to optimize the loop body (size).

#9 jefferytitan   Crossbones+   -  Reputation: 2118


Posted 08 August 2012 - 06:13 AM

The number one problem I'm encountering now is the instruction limit (512) imposed by Unity/Cg/ShaderLab; the card itself seems to be irrelevant unless I'm missing something. Having said that, I will happily admit I'm no GPU optimisation guru, so there may be decent gains to be had. I'll give you a few representative snippets (as happens with testing, the whole thing is a huge mess), and if you have any ideas on optimisation it would be much appreciated.

General Setup:
[source lang="plain"]
half4 splat_control = tex2D(_Control, IN.uv_Control);
half3 col;
float3 p0, p1, p2, p3, v;
const int cone_steps = 15;
float db;
float dist;
float4 tex;
float height0;
float height1;
float height2;
float height3;
float cone_ratio;

v = normalize(IN.eye.xyz);
v.z = abs(v.z);
db = 1.0 - v.z;
db *= db;
db *= db;
db = 1.0 - db * db;
v.xy *= db;
v.xy *= parallaxDepth;
v /= v.z;
dist = length(v.xy);

p0 = float3(IN.uv_Control.x * (_TerrainX/_Tile0), IN.uv_Control.y * (_TerrainZ/_Tile0), 0);
p1 = float3(IN.uv_Control.x * (_TerrainX/_Tile1), IN.uv_Control.y * (_TerrainZ/_Tile1), 0);
p2 = float3(IN.uv_Control.x * (_TerrainX/_Tile2), IN.uv_Control.y * (_TerrainZ/_Tile2), 0);
p3 = float3(IN.uv_Control.x * (_TerrainX/_Tile3), IN.uv_Control.y * (_TerrainZ/_Tile3), 0);
[/source]

Cone Stepping Loop (x 4)
[source lang="plain"]
for (int i = 0; i < cone_steps; i++)
{
    tex = tex2D(_parallax0, p0.xy);
    height0 = saturate(tex.w - p0.z);
    cone_ratio = tex.z;
    p0 += v * (cone_ratio * height0 / (dist + cone_ratio));
}
[/source]

Height Weighting The Splatting:
[source lang="plain"]
height0 = 1 - max(p0.z, 0.0001);
height1 = 1 - max(p1.z, 0.0001);
height2 = 1 - max(p2.z, 0.0001);
height3 = 1 - max(p3.z, 0.0001);
height0 = height0 * height0; height0 = height0 * height0;
height1 = height1 * height1; height1 = height1 * height1;
height2 = height2 * height2; height2 = height2 * height2;
height3 = height3 * height3; height3 = height3 * height3;
splat_control *= float4(height0, height1, height2, height3);
float totalSplat = dot(splat_control, float4(1, 1, 1, 1));
splat_control /= totalSplat;
float2 pAv = splat_control.r * p0.xy + splat_control.g * p1.xy
           + splat_control.b * p2.xy + splat_control.a * p3.xy;
[/source]

Splatting to get Final Result:
[source lang="plain"]
col       = splat_control.r * tex2D(_Splat0, pAv).rgb;
o.Normal  = splat_control.r * UnpackNormal(tex2D(_BumpMap0, pAv));
o.Gloss   = splat_control.r * _Spec0;
col      += splat_control.g * tex2D(_Splat1, pAv).rgb;
o.Normal += splat_control.g * UnpackNormal(tex2D(_BumpMap1, pAv));
o.Gloss  += splat_control.g * _Spec1;
col      += splat_control.b * tex2D(_Splat2, p2.xy).rgb;
o.Normal += splat_control.b * UnpackNormal(tex2D(_BumpMap2, pAv));
o.Gloss  += splat_control.b * _Spec2;
col      += splat_control.a * tex2D(_Splat3, p3.xy).rgb;
o.Normal += splat_control.a * UnpackNormal(tex2D(_BumpMap3, pAv));
o.Gloss  += splat_control.a * _Spec3;
o.Specular = o.Gloss;
[/source]
As you can see, there's a lot of repetitiveness, in multiples of 4, which I hope means good optimisation possibilities. But I can't figure out the practicalities. I don't think most of the vector operations could be generalised to matrices. I can't figure a way to square each component of a vector more efficiently. And some of the areas actually took more instructions when I converted them to use vectors instead of floats. Frustrating.
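One vectorization that may be worth trying on the height-weighting block (a sketch; the behavior should match the scalar code, but the actual instruction count depends on the compiler, so verify against your own assembly output):

[source lang="plain"]
// Pack the four layer heights into one float4 and square component-wise.
// h * h squares every component at once, so two multiplies give h^4 for
// all four layers instead of eight scalar multiplies.
float4 h = 1 - max(float4(p0.z, p1.z, p2.z, p3.z), 0.0001);
h *= h;   // h^2, component-wise
h *= h;   // h^4, component-wise
splat_control *= h;
splat_control /= dot(splat_control, float4(1, 1, 1, 1));
[/source]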

#10 Lightness1024   Members   -  Reputation: 736


Posted 23 August 2012 - 04:02 PM

instead of:
height0 = height0 * height0;
height0 = height0 * height0;
use
height0 = pow(height0, 4);
because pow will use the Special Function Unit that is idling in your shader! This will offload the ALU a bit and increase parallelization.

#11 phantom   Moderators   -  Reputation: 7260


Posted 23 August 2012 - 04:23 PM

because pow will use the Special Function Unit that is idling in your shader


While I'm not questioning your advice, as you generally want to use intrinsics as much as you can, there is no guarantee about what it will use, as not all hardware has an SFU; AMD's latest GPU arch, for example, doesn't have a dedicated SFU; all the vector units can do SFU work as required.

#12 Scoob Droolins   Members   -  Reputation: 238


Posted 23 August 2012 - 09:42 PM

Regarding your cone stepping loop - you'd have to check the asm output to see if this bit is being inlined, but if it is, it could be caused by defining cone_steps as a const int = 15. The compiler then knows the maximum number of iterations and will unroll the whole thing if there are enough instruction slots. Try leaving this undefined and setting the value from the CPU; then the compiler doesn't know the maximum number of iterations and will leave this bit as a loop. Whether that will help performance is unknown, but it will reduce the instruction count by a ton.
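That change would look roughly like this (a sketch; _ConeSteps is an illustrative uniform name, and how you feed it from script depends on the engine):

[source lang="plain"]
// The loop bound now comes from a uniform instead of a compile-time
// constant, so the compiler cannot fully unroll the loop and emits a
// real loop, keeping the instruction count small.
int _ConeSteps;   // assumed uniform, set from the CPU (e.g. a material property)

for (int i = 0; i < _ConeSteps; i++)
{
    tex = tex2D(_parallax0, p0.xy);
    height0 = saturate(tex.w - p0.z);
    cone_ratio = tex.z;
    p0 += v * (cone_ratio * height0 / (dist + cone_ratio));
}
[/source]

Note that dynamic loop bounds in pixel shaders require hardware/profile support for dynamic flow control, so this may not compile on every SM3 target.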

#13 Lightness1024   Members   -  Reputation: 736


Posted 24 August 2012 - 04:27 AM


because pow will use the Special Function Unit that is idling in your shader


While I'm not questioning your advice, as you generally want to use intrinsics as much as you can, there is no guarantee about what it will use, as not all hardware has an SFU; AMD's latest GPU arch, for example, doesn't have a dedicated SFU; all the vector units can do SFU work as required.

Ok, I didn't know that.
It also depends on the shader model profile the compiler is set to compile for. It is possible that when targeting SM3 the pow will be expanded to a Taylor series (which was definitely the case for e.g. sin(x) in SM1; in SM2 the compiler replaces sin(x) with the sincos asm intrinsic).



