Ryan_001

Efficient texture blending


I've come across the need to do texture blending in a pixel shader.  Ideally I'd like to be able to handle a large number of textures (at least 8, but preferably more).  I can't store the blend weights in a vertex; they have to be looked up from a texture.  At the same time I need to keep the texture storage as small as possible, but it still needs to be fast.  Also, all the textures I have in mind are 3D.
 
The simplest way that came to mind was to store a texture 'id' at each texel.  This uses very little memory and can handle far more textures than I need.  I imagine it's the slowest option though, as it would require 8 individual texture reads (samples, loads, whatever) and then I'd have to manually lerp the results.  Even using a Gather instead of a Sample would still mean a lot of fiddling to expand that into the proper blend weights for each texture.
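 
As a rough illustration of what I mean, here's a minimal sketch of the ID-per-texel approach (2D case for brevity; the 3D case extends to the 8 corners of a trilinear cell). All names here are illustrative, and it assumes the detail textures live in a Texture2DArray so the slice index can be dynamic:

Texture2D<uint>  IdTex      : register(t0); // one material id per texel
Texture2DArray   Materials  : register(t1); // one slice per material
SamplerState     LinearWrap : register(s0);

float2 g_IdTexSize;                         // dimensions of IdTex

float4 BlendByIds(float2 uv, float2 detailUV)
{
    // Find the 2x2 cell of ids covering this fragment, plus bilinear weights
    // (border handling omitted for brevity).
    float2 texel = uv * g_IdTexSize - 0.5;
    int2   base  = (int2)floor(texel);
    float2 f     = frac(texel);

    float4 result = 0;
    [unroll]
    for (int j = 0; j < 2; ++j)
    {
        [unroll]
        for (int i = 0; i < 2; ++i)
        {
            uint  id = IdTex.Load(int3(base + int2(i, j), 0));
            float w  = (i ? f.x : 1 - f.x) * (j ? f.y : 1 - f.y);
            // One full sample per cell corner -- this is where it gets slow.
            result += w * Materials.Sample(LinearWrap, float3(detailUV, id));
        }
    }
    return result;
}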
 
The next idea was to use a format like D3DFMT_A4R4G4B4/DXGI_FORMAT_B4G4R4A4_UNORM, but D3D11 doesn't support that format unless you're on Windows 8 (which confuses me, as D3D9 supports it just fine).
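 
A possible workaround (just a sketch, and the names are made up) would be to pack two 4-bit weights into each 8-bit channel of an R8G8B8A8_UINT texture and unpack them in the shader. The obvious downside is losing hardware filtering, so the weights would have to be filtered manually:

Texture2D<uint4> PackedWeights : register(t2); // 8 x 4-bit weights per texel

void UnpackWeights(int2 texel, out float w[8])
{
    uint4 p = PackedWeights.Load(int3(texel, 0));
    [unroll]
    for (int i = 0; i < 4; ++i)
    {
        w[i * 2 + 0] = (p[i] >> 4)  / 15.0; // high nibble
        w[i * 2 + 1] = (p[i] & 0xF) / 15.0; // low nibble
    }
}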
 

The next idea came from http://developer.amd.com/wordpress/media/2012/10/S2008_Mittring_AdvVirtualTexTopics.pdf, which mentions something called combo textures.  It seemed like what I was looking for, but it was hard to guess all the details from the slides, and I couldn't find any other references about it via Google.  Anyone have any ideas where I can get more info and/or a slightly better explanation?  I'm guessing that with a hypercube having 16 corners I could in theory get blend weights for 16 textures quite efficiently?

 
And last, if you had to blend a number of 3D textures in a pixel shader, how would you go about doing it?

You store your blend weights in colour space and derive the per-material weights from that colour in your fragment shader. Read "Visual quality of the ground in 3D models: Using color-coded images to blend aerial photos with tiled detail textures" and the full paper version of "Advanced Virtual Texture Topics" for a more detailed explanation. If you can't get hold of them I can post them up for you (although I'm not sure if the first one would violate the T&C of this site).

ETA: Or, if it makes life any easier, I can break it down for you, but I think the full virtual texturing paper should be enough as it has source code.
ETA2: Alternatively, you could use a procedural approach to obtain your blend weights (see the sketch below). Typically you would use the height, slope and normal to achieve this. For example, hills could appear at low(ish) altitudes with gentler slopes, rocky mountains at higher altitudes and steeper slopes, snow on south-facing gentle slopes at high altitudes, rolling plains on low-altitude flatlands, and so on. If you still need more textures you could use a Bloom-style alpha mask for the more "special case" textures. Edited by GeneralQuery
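 
A minimal sketch of that procedural idea, assuming a heightfield-style terrain; the thresholds and material assignments are invented purely for illustration:

// Derive blend weights for four materials from height and slope.
// height is assumed normalised to [0,1]; normal is the unit surface normal.
float4 ProceduralWeights(float height, float3 normal)
{
    float slope = 1 - saturate(normal.y);   // 0 = flat, 1 = vertical

    float4 w;
    w.x = saturate(1 - height * 2) * (1 - slope);             // low plains
    w.y = saturate(1 - abs(height - 0.4) * 4) * (1 - slope);  // gentle hills
    w.z = slope;                                              // steep rock
    w.w = saturate((height - 0.7) * 4) * (1 - slope);         // high snow

    return w / max(w.x + w.y + w.z + w.w, 1e-6); // normalise to sum to 1
}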

I can only find the slides for "Advanced Virtual Texture Topics", and every link to "Visual quality of the ground in 3D models: Using color-coded images to blend aerial photos with tiled detail textures" leads to a portal/pay site. So a link to either would be greatly appreciated, or even just a general run-down of the idea.

Here's the key figure from the paper:

[figure omitted: the combo-mask diagram from the paper]

Here's the source code from the paper:
 
float3 g_ComboMask;             // RGB material combo colour
                                // (3 channels for 8 materials)
                                //   000,100,010,110,001,101,011,111
float4 g_ComboSum0,g_ComboSum1; // RGBA sum of the masks blended so far
                                // including the current
                                // (8 channels for 8 materials)
                                //   10000000, 11000000, 11100000, 11110000
                                //   11111000, 11111100, 11111110, 11111111

float ComputeComboAlpha( BETWEENVERTEXANDPIXEL_Unified InOut )
{
  float3 cCombo = tex2D(Sampler_Combo,InOut.vBaseTexPos).rgb;

  float3 fSrcAlpha3 = g_ComboMask*cCombo + (1-g_ComboMask)*(1-cCombo);
  float fSrcAlpha = fSrcAlpha3.r*fSrcAlpha3.g*fSrcAlpha3.b;

  float4 vComboRG = float4(1-cCombo.r,cCombo.r,1-cCombo.r,cCombo.r)
                  * float4(1-cCombo.g,1-cCombo.g,cCombo.g,cCombo.g);
  
  float fSum = dot(vComboRG,g_ComboSum0)*(1-cCombo.b) 
             + dot(vComboRG,g_ComboSum1)*(cCombo.b);

  // + small numbers to avoid DivByZero
  return (fSrcAlpha+0.00000001f)/(0.00000001f+fSum);
}
This figure from Roupé's paper sheds some more light on the mask values:

[figure omitted: mask-value diagram from Roupé's paper]

The idea is that you treat the RGB value of the texture as a point within a unit cube whose origin is at (0,0,0) (black) and which extends to (1,1,1) (white). The basic premise is that the closer this point lies to a given corner (material) of the cube, the more that material contributes to the blend; a sketch of the corner weighting follows. If you need a hand wrapping your head around the source code, just go ahead and ask.
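 
To make that concrete, here is a minimal sketch of the raw weight computation, matching the fSrcAlpha3 term in the paper's code; the mask argument is the material's cube corner, e.g. (1,0,1):

// Raw weight of one material: the product of per-channel closeness between
// the sampled combo colour c and that material's cube corner.
float CornerWeight(float3 c, float3 mask)
{
    float3 t = mask * c + (1 - mask) * (1 - c); // 1 at the corner, 0 opposite
    return t.r * t.g * t.b;
}

Note that the eight corner weights are exactly the trilinear interpolation weights of c inside the unit cube, so they always sum to 1; the running g_ComboSum normalisation in the paper's code then lets the materials be composited one at a time.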
