    04/23/05 10:42 AM

    Texture Splatting in Direct3D

    General and Gameplay Programming

    Myopic Rhino


    If you've been looking into terrain texturing techniques, you've probably heard about texture splatting. The term was coined by Charles Bloom, who discussed it in his article Terrain Texture Compositing by Blending in the Frame-Buffer. With no disrespect to Charles Bloom, it was not the clearest or most concise of articles, and it has left many confused in its wake. While many use and recommend the technique, few have taken the time to explain it adequately. I hope to clear up the mystery surrounding it and demonstrate how to implement it in your own terrain engine.

    The Basics

    What is texture splatting? In its simplest form, it is a way to blend textures together on a surface using alphamaps.

    I will use the term alphamap to refer to a grayscale image residing in a single channel of a texture. It can be any channel - alpha, red, green, blue, or even luminance. In texture splatting, it will be used to control how much of a texture appears at a given location. This is done by a simple multiplication, alphamap * texture. If the value of a texel in the alphamap is 1, that texture will appear there in full value; if the value of a texel in the alphamap is 0, that texture will not appear there at all.
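    Per texel, each splat contributes alphamap * texture, layered over whatever was drawn before it (the full equation reappears in the render-state section below). A minimal sketch of that arithmetic, with a function name of my choosing:

```cpp
// Blend one splat over the color already in the frame buffer:
// result = alpha * texture + (1 - alpha) * previous.
// All values are normalized channel intensities in [0, 1].
float SplatBlend(float alpha, float texture, float previous)
{
    return alpha * texture + (1.0f - alpha) * previous;
}
```

    With alpha at 1 the texture appears in full value; with alpha at 0 the previous color is untouched.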

    For terrain, the texture might be grass, dirt, rock, snow, or any other type of terrain you can think of. Bloom refers to a texture and its corresponding alphamap as a splat. An analogy would be throwing a glob of paint at a canvas. The splat is wherever you see that glob. Multiple globs of paint layered on top of each other create the final picture.

    Let's say you have a 128x128 heightmap terrain, divided into 32x32 chunks. Each chunk is then made up of 33x33 vertices. The base textures repeat several times over each chunk, but the alphamap is stretched over the entire chunk. Vertex (0, 0) of the chunk has alphamap coordinates of (0, 0) and texture coordinates of (0, 0). Vertex (32, 32) of the chunk has alphamap coordinates of (1, 1) and texture coordinates of (x, x), where x is the number of times you want the textures to repeat. x depends on the resolution of the textures: make sure they repeat enough to provide detail up close, but not so much that the repetition is obvious from far away.
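    The coordinate layout above can be sketched as follows (struct and function names are mine, not from the article):

```cpp
// Texture coordinates for vertex (i, j) of a chunk with `verts` vertices
// per side (33 for a 32x32 chunk) and `repeats` texture tiles per chunk.
struct ChunkUV { float alphaU, alphaV, texU, texV; };

ChunkUV ComputeChunkUV(int i, int j, int verts, float repeats)
{
    ChunkUV uv;
    uv.alphaU = static_cast<float>(i) / (verts - 1); // alphamap stretched once over the chunk
    uv.alphaV = static_cast<float>(j) / (verts - 1);
    uv.texU   = uv.alphaU * repeats;                 // detail texture tiled `repeats` times
    uv.texV   = uv.alphaV * repeats;
    return uv;
}
```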

    The resolution of your alphamap per chunk is something you need to decide for yourself, but I recommend powers of two. For a 32x32 chunk, you could have a 32x32 alphamap (one texel per unit), a 64x64 alphamap (two texels per unit), or even a 128x128 alphamap (four texels per unit). When deciding what resolution to use, remember that you need an alphamap for every texture that appears on a given chunk. The higher the resolution is, the more control over the blending you have, at the cost of memory.

    The size of the chunk is a little trickier to decide. Too small and you will have too many state changes and draw calls, too large and the alphamaps may contain mostly empty space. For example, if you decided to create an alphamap with 1 texel per unit with a chunk size of 128x128, but the alphamap only has non-zero values in one small 4x4 region, 124x124 of your alphamap is wasted memory. If your chunk size was 32x32, only 28x28 would be wasted. This brings up a point: if a given texture does not appear at all over a given chunk, don't give that chunk an alphamap for that texture.
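    Deciding whether a chunk needs an alphamap for a texture at all can be as simple as scanning that region of the map for non-zero texels (a hypothetical helper, assuming 8-bit alphamap data):

```cpp
#include <cstddef>

// Returns true if any texel of the alphamap region is non-zero, i.e. the
// corresponding texture actually appears somewhere on this chunk.
bool ChunkNeedsSplat(const unsigned char* alphamap, std::size_t texelCount)
{
    for (std::size_t i = 0; i < texelCount; ++i)
        if (alphamap[i] != 0)
            return true;
    return false; // fully transparent: skip this splat for this chunk
}
```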

    The reason the terrain is divided into chunks is now apparent. Firstly, and most importantly, it can save video memory, and lots of it. Secondly, it can reduce fillrate consumption. By using smaller textures, the video card has less sampling to do if the texture does not appear on every chunk. Thirdly, it fits into common level of detail techniques such as geomipmapping that require the terrain to be divided into chunks anyway.

    Creating the Blends

    The key to getting smooth blending is linear interpolation of the alphamaps. Suppose there is a 1 right next to a 0. When the alphamap is stretched out over the terrain, Direct3D creates an even blend between the two values. The stretched alphamap is then combined with the terrain texture, causing the texture itself to blend.

    Rendering then becomes the simple matter of going through each chunk and rendering the splats on it. Generally, the first splat will be completely opaque, with the following splats having varying values in their alphamaps. Let me demonstrate with some graphics. Let's say the first splat is dirt. Because it is the first that appears on that chunk, it will have an alphamap that is completely solid.

    [solid alphamap] * [dirt texture] = [opaque dirt splat]

    After the first splat is rendered, the chunk is covered with dirt. Then a grass layer is added on top of that:

    [grass alphamap] * [grass texture] = [blended grass splat]

    [opaque dirt splat] + [blended grass splat] = [grass-dirt blend]

    The process is repeated for the rest of the splats for the chunk.

    It is important that you render everything in the same order for each chunk. Splat addition is not commutative. Skipping splats won't cause any harm, but if you change the order around, another chunk could end up looking like this:

    [blended grass splat] + [opaque dirt splat] = [opaque dirt splat]

    The grass splat is covered up because the dirt splat is completely opaque and was rendered second.
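    The non-commutativity is easy to check numerically. Composing splats in order with result = alpha * texture + (1 - alpha) * previous (values below are illustrative, not from the article):

```cpp
// Compose a sequence of splats over an initially black frame buffer.
// alphas[i] and texels[i] describe splat i; order is the render order.
float Compose(const float* alphas, const float* texels, int count)
{
    float result = 0.0f;
    for (int i = 0; i < count; ++i)
        result = alphas[i] * texels[i] + (1.0f - alphas[i]) * result;
    return result;
}
```

    Rendering an opaque dirt splat first and a half-transparent grass splat second gives a blend; swapping the order makes the opaque dirt erase the grass entirely.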

    You may be wondering why the first splat should be opaque. Let's say it wasn't, and instead was only solid where the grass splat was clear. Here's what happens:

    [partial dirt splat] + [blended grass splat] = [incorrect blend with holes]

    It's obvious this does not look right when compared with the blend from before. By having the first splat be completely opaque, you prevent any "holes" from appearing like in the picture above.

    Creating the Alphamaps

    Now that we know what texture splatting is, we need to create the alphamaps to describe our canvas. But how do we decide what values to give the alphamaps?

    Some people base it on terrain height, but I recommend the ability to make the alphamaps whatever you want them to be. This gives you the flexibility to put any texture anywhere, with no limitations. It's as simple as drawing out the channels in your paint program of choice. Even better, you could create a simple world editor that lets the artist see their alphamaps and paint them in the actual world.


    Let's take a step back and look at what we have:

    • Some sort of terrain representation, such as a heightmap
    • A set of textures to be rendered on the terrain
    • An alphamap for each texture

    Look at the third item. We know that each alphamap has to reside in a texture. Does this mean that every alphamap needs its own texture? Thankfully, the answer is no. Because an alphamap occupies only a single channel, we can pack up to four alphamaps into a single texture: one in red, one in green, one in blue, and one in alpha. In order to access these individual channels we need a pixel shader, and because we need five textures (one holding the alphamaps and four to blend), pixel shader 1.4 is required. Unfortunately this is still a stiff requirement, so I will show how to do texture splatting with the fixed function pipeline as well as with a pixel shader.
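    Building the packed texture in system memory might look like this (the layout and names are my assumption; the actual upload depends on your texture format and locking code):

```cpp
#include <vector>
#include <cstddef>

// Interleave four grayscale alphamaps (each texelCount texels) into one
// RGBA8 buffer: map0 -> red, map1 -> green, map2 -> blue, map3 -> alpha.
std::vector<unsigned char> PackAlphamaps(const unsigned char* m0,
                                         const unsigned char* m1,
                                         const unsigned char* m2,
                                         const unsigned char* m3,
                                         std::size_t texelCount)
{
    std::vector<unsigned char> rgba(texelCount * 4);
    for (std::size_t i = 0; i < texelCount; ++i)
    {
        rgba[i * 4 + 0] = m0[i];
        rgba[i * 4 + 1] = m1[i];
        rgba[i * 4 + 2] = m2[i];
        rgba[i * 4 + 3] = m3[i];
    }
    return rgba;
}
```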

    Splatting with the Fixed Function Pipeline

    Using the fixed function pipeline has one benefit that the pixel shader technique lacks: it will run on virtually any video card. All it requires is one texture unit for the alphamap, one texture unit for the texture, and the correct blending states.

    I chose to put the alphamap in stage 0 and the texture in stage 1. This was to stay consistent with the pixel shader, which makes most sense with the alphamap in stage 0. The texture stage states are relatively straightforward from there. Stage 0 passes its alpha value up to stage 1. Stage 1 uses that alpha value as its own and pairs it with its own color value.

    // Alphamap: take the alpha from the alphamap, we don't care about the color
    g_Direct3DDevice->SetTextureStageState(0, D3DTSS_ALPHAOP, D3DTOP_SELECTARG1);
    g_Direct3DDevice->SetTextureStageState(0, D3DTSS_ALPHAARG1, D3DTA_TEXTURE);
    // Texture: take the color from the texture, take the alpha from the previous stage
    g_Direct3DDevice->SetTextureStageState(1, D3DTSS_COLOROP, D3DTOP_SELECTARG1);
    g_Direct3DDevice->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    g_Direct3DDevice->SetTextureStageState(1, D3DTSS_ALPHAOP, D3DTOP_SELECTARG1);
    g_Direct3DDevice->SetTextureStageState(1, D3DTSS_ALPHAARG1, D3DTA_CURRENT);

    We have to set the blending render states as well in order to get the multiple splats to combine together correctly. D3DRS_SRCBLEND is the alpha coming from the splat being rendered, so we set it to D3DBLEND_SRCALPHA. The final equation we want is FinalColor = Alpha * Texture + (1 - Alpha) * PreviousColor. This is done by setting D3DRS_DESTBLEND to D3DBLEND_INVSRCALPHA.

    g_Direct3DDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    g_Direct3DDevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
    g_Direct3DDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);

    Splatting with a Pixel Shader

    Why even bother with the pixel shader? Using all the channels available in a texture instead of only one saves memory. It also allows us to render four splats in a single pass, reducing the number of vertices that need to be transformed. Because all of the texture combining takes place in the shader, there are no texture stage states to worry about. We just load the texture with an alphamap in each channel into stage 0, the textures into stages 1 through 4, and render.

    ps.1.4
    // r0: alphamaps
    // r1 - r4: textures
    // Sample the textures
    texld r0, t0
    texld r1, t1
    texld r2, t1
    texld r3, t1
    texld r4, t1
    // Combine the textures based on their alphamaps
    mul r1, r1, r0.x
    lrp r2, r0.y, r2, r1
    lrp r3, r0.z, r3, r2
    lrp r0, r0.w, r4, r3

    The mul instruction multiplies the first texture by its alphamap, which is stored in the red channel of the texture in sampler 0. The lrp instruction does the following arithmetic: dest = src0 * src1 + (1 - src0) * src2. Let's say r0.x is the alphamap for a dirt texture stored in r1, and r0.y is the alphamap for a grass texture stored in r2. r2 contains the following after the first lrp: GrassAlpha * GrassTexture + (1 - GrassAlpha) * DirtBlended, where DirtBlended is DirtAlpha * DirtTexture. As you can see, lrp does the same thing as the render states and texture stage states we set before. The final lrp uses r0 as the destination register, which is the register used as the final pixel color. This eliminates the need for a final mov instruction.
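    To convince yourself the lrp chain matches the fixed-function blending, you can emulate both for one pixel on the CPU (an illustrative sketch; a[] holds the four alphamap channels r0.x through r0.w, t[] the four texture samples):

```cpp
// Emulate the pixel shader: one mul followed by three lrps.
float ShaderCombine(const float a[4], const float t[4])
{
    float r = t[0] * a[0];                // mul r1, r1, r0.x
    r = a[1] * t[1] + (1.0f - a[1]) * r;  // lrp r2, r0.y, r2, r1
    r = a[2] * t[2] + (1.0f - a[2]) * r;  // lrp r3, r0.z, r3, r2
    r = a[3] * t[3] + (1.0f - a[3]) * r;  // lrp r0, r0.w, r4, r3
    return r;
}

// Emulate four fixed-function passes with SRCALPHA / INVSRCALPHA blending
// over an initially black frame buffer.
float FixedFunctionCombine(const float a[4], const float t[4])
{
    float frame = 0.0f;
    for (int i = 0; i < 4; ++i)
        frame = a[i] * t[i] + (1.0f - a[i]) * frame;
    return frame;
}
```

    Both paths compute the same layered blend, which is why the two techniques are interchangeable.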

    What if you only need to render two or three splats for a chunk? If you want to reuse the pixel shader, simply have the remaining channels be filled with 0. That way they will have no influence on the final result. You could also create another pixel shader for two splats and a third for three splats, but the additional overhead of more SetPixelShader calls may be less efficient than using an extra instruction or two.

    Multiple passes are required if you need to render more than four splats for a chunk. Let's say you have to render seven splats. You first render four, leaving three. For the second pass, the alpha channel of your second alphamap texture is filled with 0, causing the fourth texture to cancel out of the equation. You simply set the alphamap texture and the three textures to be blended, then render. The D3DRS_SRCBLEND and D3DRS_DESTBLEND render states from before perform the same job as the lrp in the pixel shader, allowing the second pass to combine seamlessly with the first.
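    The multi-pass bookkeeping is a couple of lines (a sketch; the function names are mine):

```cpp
// Number of rendering passes needed when each pass handles up to four splats.
int PassCount(int splatCount)
{
    return (splatCount + 3) / 4; // ceiling division
}

// Splats handled in a given pass (0-based). The last pass may be partial;
// its unused alphamap channels are filled with zero.
int SplatsInPass(int splatCount, int pass)
{
    int remaining = splatCount - pass * 4;
    return remaining > 4 ? 4 : remaining;
}
```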

    The Demo

    The demo application uses the two techniques described here to render a texture splatted quad. I decided not to go for a full heightmap to make it as easy as possible to find the key parts in texture splatting. Because of this, the demo is completely fillrate limited. The initial overhead of the pixel shader may cause some video cards to perform worse with it than with its fixed function equivalent, so take the frame rates with a grain of salt. The pixel shader will almost always come out ahead in a more complex scene.

    You can toggle between the fixed function pipeline and the pixel shader through the option in the View menu.

    The textures used are property of nVidia(R) and are available in their full resolution at

    The Problem of Seams

    If texture splatting has a downfall, it is this: when two neighboring splats come together, an unattractive seam forms between them. The look can be recreated by tiling four of the example splats from before.


    Why does this happen? Let's look at the grass alphamap of the top two sections:


    Here space has been added between them and the problem becomes apparent. The one on the left does not know the border values of the one on the right. When the video card performs its linear blend, it has no way of knowing that there is in fact a black texel right next to a white one. It simply assumes the same color is on the border.

    This is not an easy problem to fix, and many games leave it untouched. It can be disguised somewhat by properly wrapping textures and by skilled level design, but I have not found an elegant solution. In my experience it has been more trouble than it's worth to solve, and I believe the advantages of texture splatting far outweigh the issue.


    Hopefully this article has cleared up the mystery behind texture splatting. There are, of course, enhancements to be made, but texture splatting in its basic form is a powerful and flexible technique. It creates a smooth blend between different layers of terrain while providing detail at any distance, and it avoids the patterned look a detail map can give. Its main disadvantage is that it is fillrate-intensive, but with video cards becoming ever more powerful and the abilities of pixel shaders increasing, this is becoming less of an issue. Its ease of use and flexibility make it a strong choice for texturing your terrain.

    Happy splatting!


    Terrain Texture Compositing by Blending in the Frame-Buffer by Charles Bloom

    And, of course, the helpful people at

    Feel free to send any questions or comments to email or private message @Raloth on the forums!


