How to know whether a texture is present in HLSL?

Started by
7 comments, last by Hodgman 6 years, 10 months ago

What is considered good practice for knowing whether a texture is present or not in an HLSL shader?

I use a single-texel white texture (in case no diffuse/specular texture is present). The diffuse/specular color is then fully determined by a single diffuse/specular RGB coefficient (multiplying it with the texture color is a no-op). Unfortunately, this does not work for normal maps (unless, of course, I redefine the meaning of a normal map) or some other maps. Should one therefore have a uint in some constant buffer that is interpreted as a set of flags? Or is there some syntactic sugar available for this?
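For reference, the flags idea might look something like this. A rough sketch only; the cbuffer layout, flag names, and register assignments are all made up for illustration:

```hlsl
// Hypothetical per-material flags packed into a single uint.
#define HAS_DIFFUSE_MAP  (1u << 0)
#define HAS_SPECULAR_MAP (1u << 1)
#define HAS_NORMAL_MAP   (1u << 2)

cbuffer PerMaterial : register(b1)
{
    float3 g_DiffuseColor;
    uint   g_MaterialFlags;
};

Texture2D    g_NormalMap : register(t2);
SamplerState g_Sampler   : register(s0);

float3 GetShadingNormal(float3 vertexNormal, float3x3 tbn, float2 uv)
{
    if ((g_MaterialFlags & HAS_NORMAL_MAP) != 0)
    {
        // Decode a tangent-space normal map and bring it into world space.
        float3 n = g_NormalMap.Sample(g_Sampler, uv).xyz * 2.0f - 1.0f;
        return normalize(mul(n, tbn));
    }
    // No normal map bound: fall back to the interpolated vertex normal.
    return normalize(vertexNormal);
}
```

Note that the texture slot must still have something valid bound even when the flag is cleared, and the branch is uniform per draw, so it is cheap but not free.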

🧙


Are you using object-space normal maps rather than tangent-space normal maps?

The ability to do "this has no normal map, so I'll replace it with a 1x1 texture" works for tangent-space normal maps, but not object-space ones.

I would generally try to avoid branching on the presence or absence of textures, and instead make sure I've bound a cut-down shader without normal-map support for objects that don't provide the texture.

Adam Miles - Principal Software Development Engineer - Microsoft Xbox Advanced Technology Group

Yeah I would recommend binding the right shader - e.g. one compiled without those texture reads.

Then if you end up in a situation where performance is suffering because you've got 5000 unique shaders, then I'd start looking into hacks like the 1x1 pixel texture or dynamic branches.
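The permutation approach typically works by compiling one source file many times with different preprocessor defines. A sketch under that assumption, with illustrative names (the define would be supplied at compile time, e.g. via the macro list passed to the shader compiler):

```hlsl
// One source file, multiple compiled variants selected by HAS_NORMAL_MAP.
Texture2D    g_DiffuseMap : register(t0);
SamplerState g_Sampler    : register(s0);
#ifdef HAS_NORMAL_MAP
Texture2D    g_NormalMap  : register(t1);
#endif

float3 GetShadingNormal(float3 vertexNormal, float3x3 tbn, float2 uv)
{
#ifdef HAS_NORMAL_MAP
    // Decode a tangent-space normal map and bring it into world space.
    float3 n = g_NormalMap.Sample(g_Sampler, uv).xyz * 2.0f - 1.0f;
    return normalize(mul(n, tbn));
#else
    // This permutation has no normal map: the texture read is compiled out entirely.
    return normalize(vertexNormal);
#endif
}
```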

The ability to do "this has no normal map, so I'll replace it with a 1x1 texture" works for tangent-space normal maps, but not object-space ones.

Yeah just bind a 1x1 pixel normal map.
Note that there's one gotcha here: the common UNORM scheme for normal maps (encode t = n * 0.5 + 0.5, decode n = t * 2 - 1) cannot exactly store the number zero, so your "no normal map" surfaces always have a slight slant to them.
If you follow the SNORM encoding rules, this problem doesn't occur.
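To spell out the arithmetic behind that gotcha (format names are the likely DXGI candidates, not something stated in the thread):

```hlsl
// 8-bit UNORM: texel k decodes to k/255, so n = t*2-1 = (k/255)*2 - 1.
// n == 0 would require k = 127.5, which no integer texel can store;
// the closest texel, k = 128, decodes to 128/255*2 - 1 = +0.0039.
//
// 8-bit SNORM (e.g. DXGI_FORMAT_R8G8B8A8_SNORM): texel k decodes to
// max(k/127, -1), so k = 0 decodes to exactly 0. Sampling returns the
// signed value directly, with no *2-1 decode step:
float3 n = g_NormalMap.Sample(g_Sampler, uv).xyz; // already in [-1, 1]
```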

Are you using object-space normal maps rather than tangent-space normal maps?

Never heard of the latter. My Crytek Sponza model contains textures whose names end with _ddn; I guess these are object-space. That also means I need to pass an object-to-view inverse transpose to the pixel shader for obtaining the per-fragment view-space normal for my lighting calculations.

🧙

If you follow the SNORM encoding rules, this problem doesn't occur.

Is this related to the texture surface format (currently unsigned X8R8G8B8)?

🧙

Never heard of the latter. My Crytek Sponza model contains textures whose names end with _ddn; I guess these are object-space. That also means I need to pass an object-to-view inverse transpose to the pixel shader for obtaining the per-fragment view-space normal for my lighting calculations.

Cryengine documentation suggests otherwise:

http://docs.cryengine.com/display/SDKDOC2/Normal+Maps

CRYENGINE is using tangent space normal maps which can be reused on different models.


I do not have the alpha channel.

http://www.crytek.com/cryengine/cryengine3/downloads

🧙

Then if you end up in a situation where performance is suffering because you've got 5000 unique shaders, then I'd start looking into hacks like the 1x1 pixel texture or dynamic branches.

What about deferred rendering? Does one normally pass a mode value (no lighting, Lambertian BRDF, GGX BRDF, etc.) to branch on in the final PS invocation? If not, how do you know which shader you need to use for a specific fragment?

🧙

Then if you end up in a situation where performance is suffering because you've got 5000 unique shaders, then I'd start looking into hacks like the 1x1 pixel texture or dynamic branches.

What about deferred rendering? Does one normally pass a mode value (no lighting, Lambertian BRDF, GGX BRDF, etc.) to branch on in the final PS invocation? If not, how do you know which shader you need to use for a specific fragment?

The most common option is to simply use a single BRDF for every object in the game :D

Alternatively, yes, you can store a material-type ID in the gbuffer.
With that, you can:

  • Do all of your lighting with 2D look-up tables instead of math. You can stack multiple LUTs into a 3D texture, where the Z axis is material type. This then even lets you have a fractional material type that is a blend of two others :o
  • Do some dynamic branching. Try not to end up with a huge if-else chain! Beware that the register usage of the worst branch will determine the register usage for the entire shader...
  • Divide the screen up into tiles (e.g. 8x8 pixels) and compute a bit-mask of which shading modes are used by each tile. Create many permutations of lighting shaders -- all pixels in tile are model A, all pixels are model B, tile contains both A and B (use branching), etc... If there's three permutations, build up three lists of Tile-IDs, and then launch three compute shaders to calculate lighting for all of your tiles. This avoids branching at all in tiles where a single model is used.
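A hedged sketch of the material-type-ID option from the second bullet. The ID values, G-buffer struct layout, and the ShadeGGX helper are all assumptions for illustration:

```hlsl
// Branch on a material-type ID stored in an integer G-buffer channel.
#define MAT_UNLIT      0u
#define MAT_LAMBERTIAN 1u
#define MAT_GGX        2u

struct GBufferSample
{
    float3 albedo;
    float3 normal;     // world-space
    uint   materialID; // read from the G-buffer
};

// Assumed helper implementing a GGX specular BRDF.
float3 ShadeGGX(GBufferSample g, float3 lightDir, float3 lightColor);

float3 ShadePixel(GBufferSample g, float3 lightDir, float3 lightColor)
{
    [branch]
    switch (g.materialID)
    {
    case MAT_UNLIT:
        return g.albedo;
    case MAT_LAMBERTIAN:
        return g.albedo * lightColor * saturate(dot(g.normal, lightDir));
    default:
        return ShadeGGX(g, lightDir, lightColor);
    }
}
```

As noted above, the register pressure of the heaviest case (here, GGX) applies to every pixel the shader touches, whichever branch it takes.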

However, one of the most common solutions is to use a single BRDF for 99% of your materials, and then use forward rendering for the exceptions :)

This topic is closed to new replies.
