
# Terrain texturing explained

Many people have been asking me how the terrain texturing is implemented, so I'll make a special dev journal about it.

**Sub-tiling**

The whole technique is based on sub-tiling. The idea is to create a texture pack that contains N images ( also called layers or tiles ), for example grass, rock, snow, etc.

Let's say that each image is 512x512. You can pack 4x4 = 16 of them into a single 2048x2048 texture. Here is an example of a pack with 13 tiles ( the top-right 3 are unused and stay black ):
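To make the layout concrete, here is a small Python sketch ( not from the article; it just uses the same 4x4 / 512 example numbers ) that maps a tile index to its pixel origin and normalized UV offset in the pack:

```python
# Map a tile index (0..15) to its position inside a 4x4 pack of
# 512x512 tiles (2048x2048 total). Example values from the article.
TILE_SIZE = 512
TILES_PER_SIDE = 4
PACK_SIZE = TILE_SIZE * TILES_PER_SIDE  # 2048

def tile_origin(index):
    """Pixel origin (x, y) of tile `index`, filling rows left to right."""
    col = index % TILES_PER_SIDE
    row = index // TILES_PER_SIDE
    return (col * TILE_SIZE, row * TILE_SIZE)

def tile_uv_offset(index):
    """Normalized UV offset of the tile (each tile covers 1/4 of the pack)."""
    x, y = tile_origin(index)
    return (x / PACK_SIZE, y / PACK_SIZE)
```

This is the same index-to-offset mapping the shader performs later with `mod(id0, nbTiles)` and `id0 / nbTiles`.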

**Mipmapping the texture pack**

Each image / tile was originally seamless: its rightmost pixel column matches its leftmost, and its top row matches its bottom. This constraint must be enforced when you're generating mipmaps. The standard way of generating mipmaps ( by downsampling and applying a box filter ) doesn't work anymore, so you must construct the mipmaps chain yourself, and copy the border columns/rows so that it stays seamless at every level.

When you're generating the mipmaps chain, you will arrive at a point where each tile is 1x1 pixel in the pack ( so the whole pack will be 4x4 pixels ). Of course, from there, there is no way to complete the mipmaps chain in a coherent way. But it doesn't matter, because in the pixel shader, you can specify a maximum LOD level when sampling the mipmap. So you can complete the chain by downsampling with a box filter, or fill it with garbage; it doesn't really matter.
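As a rough illustration of the idea ( not the article's actual tool code ), the per-tile mip chain could be built like this in Python, on a grayscale tile represented as a 2D list:

```python
# Build a mip chain for one tile without touching neighboring tiles.
def downsample_tile(tile):
    """Plain 2x2 box filter over one tile; a real tool would do this
    per RGBA channel."""
    n = len(tile) // 2
    return [[(tile[2*y][2*x] + tile[2*y][2*x+1] +
              tile[2*y+1][2*x] + tile[2*y+1][2*x+1]) / 4.0
             for x in range(n)] for y in range(n)]

def make_seamless(tile):
    """Copy the left column over the right and the top row over the
    bottom, as the article suggests, so the level still wraps."""
    n = len(tile)
    for y in range(n):
        tile[y][n-1] = tile[y][0]
    tile[n-1] = list(tile[0])
    return tile

def build_mip_chain(tile):
    """Downsample until the tile is 1x1; past that point the content no
    longer matters because the shader clamps the LOD. make_seamless()
    is kept as a separate per-level step."""
    chain = [tile]
    while len(chain[-1]) > 1:
        chain.append(downsample_tile(chain[-1]))
    return chain
```

The key point is that each tile is downsampled independently, so texels from adjacent tiles never bleed into its mip levels.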

**Texture ID lookup**

To each vertex of the terrain is associated a slope and an altitude. The slope is the dot product between the up vector and the vertex normal, remapped to [0-1]. The altitude is normalized to [0-1].

On the cpu, a lookup table is generated. Each layer / tile has a set of constraints ( for example, grass must only grow when the slope is lower than 20° and the altitude is between 50m and 3000m ). There are many ways to create this table, but that's beyond the scope of this article. For our use, it is sufficient to know that the lookup table indexes the slope on the horizontal / U axis, and the altitude on the vertical / V axis.
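As a hypothetical illustration of such a table generator ( the layer list and thresholds below are made up, not the article's ), one could write:

```python
# Hypothetical LUT builder: each layer has (max slope, altitude range)
# constraints; each (slope, altitude) cell stores the ID of the first
# layer whose constraints are satisfied. All thresholds are invented.
import math

LUT_SIZE = 16
LAYERS = [
    # (id, max slope in degrees, min altitude, max altitude in meters)
    (0, 20.0,   50.0, 3000.0),   # grass
    (1, 90.0,    0.0, 5000.0),   # rock (fallback for steep slopes)
    (2, 90.0, 5000.0, 9000.0),   # snow
]
MAX_ALTITUDE = 9000.0

def build_lut():
    lut = [[1] * LUT_SIZE for _ in range(LUT_SIZE)]  # default to rock
    for v in range(LUT_SIZE):          # vertical axis: altitude
        for u in range(LUT_SIZE):      # horizontal axis: slope
            # slope is stored as dot(up, normal) in [0,1]; 1.0 = flat
            slope_deg = math.degrees(math.acos(u / (LUT_SIZE - 1.0)))
            altitude = v / (LUT_SIZE - 1.0) * MAX_ALTITUDE
            for layer_id, max_slope, alt_min, alt_max in LAYERS:
                if slope_deg <= max_slope and alt_min <= altitude <= alt_max:
                    lut[v][u] = layer_id
                    break
    return lut
```

The resulting 2D array would then be uploaded as the red channel of the LUT texture.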

The lookup table ( LUT ) is a RGBA texture, but at the moment I'm only using the red channel. It contains the ID of the layer / tile for the corresponding slope / altitude. Here's an example:

Once the texture pack and the LUT are uploaded to the gpu, the shader is ready to do its job. The first step is easy:

```glsl
vec4 terrainType = texture2D(terrainLUT, vec2(slope, altitude));
```

.. and we get in terrainType.x the ID of the tile (0-15) we need to use for the current pixel.

Here's the result in 3D. Since the ID is a small value (0-15), I have multiplied it by 16 to see it better in grayscale:

**Getting the mipmap level**

So, for each pixel you've got a UV to sample the tile. The problem is that you can't sample the **pack** directly, as it contains many tiles. You need to sample the tile within the pack, but with mipmapping and wrapping. How to do that?

The first natural idea is to perform those two operations in the shader:

```glsl
u = fract(u);
v = fract(v);
u = tile_offset.x + u * 0.25;
v = tile_offset.y + v * 0.25;
```

( remember that there are 4x4 tiles in the pack. Since UVs are always normalized, each tile covers 1/4th of the pack, hence the 0.25 ).

This doesn't work with mipmapping, because the hardware uses the 2x2 neighboring pixels to determine the mipmap level. The *fract()* instructions kill the coherency between the tiles, and 1-pixel-wide seams appear ( which are view dependent, so extremely visible and annoying ).
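A quick numerical illustration of why this happens ( CPU-side Python approximating what the hardware does with the pixel quad; not from the article ):

```python
# The hardware estimates the LOD from uv differences between adjacent
# pixels; at the wrap seam, fract() turns a tiny step into a huge one,
# which selects a far-too-small mip level for those pixels.
import math

TILE_SIZE = 512.0

def naive_lod(uv_a, uv_b):
    """LOD from the uv difference between two neighboring pixels."""
    d = abs(uv_b - uv_a) * TILE_SIZE
    return 0.5 * math.log2(d * d) if d > 0 else 0.0

interior = naive_lod(0.500, 0.502)              # ~1 texel step -> LOD near 0
seam     = naive_lod(0.998 % 1.0, 1.002 % 1.0)  # after fract(): 0.998 vs 0.002
```

The interior pixel gets a LOD near 0, while at the seam the wrapped step looks almost a full tile wide and the LOD jumps to about 9: hence the visible seam.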

The solution is to calculate the mipmap level manually. Here is the function I'm using to do that:

```glsl
/// This function evaluates the mipmap LOD level for a 2D texture using
/// the given texture coordinates and texture size ( in pixels )
float mipmapLevel(vec2 uv, vec2 textureSize)
{
    vec2 dx = dFdx(uv * textureSize);
    vec2 dy = dFdy(uv * textureSize);
    float d = max(dot(dx, dx), dot(dy, dy));
    return 0.5 * log2(d); // equivalent to log2(sqrt(d))
}
```

Note that it makes use of the dFdx/dFdy instructions ( also called ddx/ddy ), which return the screen-space derivatives of their argument. This pretty much raises the system requirements to a shader model 3.0+ video card.

This function must be called with a texture size that matches the size of the tile. So if the pack is 2048x2048 and each tile is 512x512, you must use a textureSize of 512.

Once you have the lod level, clamp it to the max mipmap level, i.e. the level at which the whole pack is 4x4 pixels ( each tile 1x1 ).
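For reference, here is a CPU-side Python port of that function ( my own transcription ), which also shows why it must be called with the tile size ( 512 ) rather than the pack size ( 2048 ):

```python
import math

def mipmap_level(duv_dx, duv_dy, texture_size):
    """Python port of the shader's mipmapLevel(): scale the uv
    derivatives to texels, then LOD = log2 of the largest footprint."""
    dx = (duv_dx[0] * texture_size, duv_dx[1] * texture_size)
    dy = (duv_dy[0] * texture_size, duv_dy[1] * texture_size)
    d = max(dx[0] ** 2 + dx[1] ** 2, dy[0] ** 2 + dy[1] ** 2)
    return 0.5 * math.log2(d)

# A pixel whose uv advances by 4 texels of a 512-wide tile per screen pixel:
step = 4.0 / 512.0
lod_tile = mipmap_level((step, 0.0), (0.0, step), 512.0)   # correct: 2.0
lod_pack = mipmap_level((step, 0.0), (0.0, step), 2048.0)  # wrong size: 4.0
```

Using the pack size overestimates the LOD by log2(4) = 2 levels, so the tile would be sampled two mips too blurry.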

**Sampling the sub-tile with wrapping**

The next problem is that the lod level isn't an integer but a float, which means the current pixel can be in a transition between 2 mipmaps. This has to be taken into account when calculating the UVs inside the pack. There's a bit of "magic" here, but I have experimentally found an acceptable solution. The complete code for sampling a pixel of a tile within a pack is the following:

```glsl
/// This function samples a texture with tiling and mipmapping from within
/// a texture pack with the given attributes
/// - tex is the texture pack from which to sample a tile
/// - uv are the texture coordinates of the pixel *inside the tile*
/// - tile are the coordinates of the tile within the pack (ex.: 2, 1)
/// - packTexFactors are some constants to perform the mipmapping and tiling:
///   .x = inverse of the number of horizontal tiles (ex.: 4 tiles -> 0.25)
///   .y = inverse of the number of vertical tiles (ex.: 2 tiles -> 0.5)
///   .z = size of a tile in pixels (ex.: 1024)
///   .w = log2 of the size of a tile (ex.: a 1024 tile -> 10.0)
vec4 sampleTexturePackMipWrapped(const in sampler2D tex, in vec2 uv,
                                 const in vec2 tile,
                                 const in vec4 packTexFactors)
{
    /// estimate the mipmap/LOD level
    float lod = mipmapLevel(uv, vec2(packTexFactors.z));
    lod = clamp(lod, 0.0, packTexFactors.w);

    /// get width/height of the whole pack texture for the current lod level
    float size = pow(2.0, packTexFactors.w - lod); // tile size in texels
    float sizex = size / packTexFactors.x;         // pack width in texels
    float sizey = size / packTexFactors.y;         // pack height in texels

    /// perform tiling
    uv = fract(uv);

    /// tweak the coordinates for correct bilinear filtering, and add the
    /// offset of the wanted tile
    uv.x = uv.x * ((sizex * packTexFactors.x - 1.0) / sizex) + 0.5 / sizex + packTexFactors.x * tile.x;
    uv.y = uv.y * ((sizey * packTexFactors.y - 1.0) / sizey) + 0.5 / sizey + packTexFactors.y * tile.y;

    return texture2DLod(tex, uv, lod);
}
```

This function is roughly 25 arithmetic instructions.
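To check the UV arithmetic, here is a CPU-side Python re-derivation of the remapping ( my own, using the article's 4x4 / 512 / 2048 numbers; `remap` assumes `fract()` has already been applied to u ):

```python
# CPU re-derivation of the uv remapping in sampleTexturePackMipWrapped.
inv_tiles_x = 0.25   # 4 tiles horizontally
tile_bits = 9.0      # log2(512)

def remap(u, tile_x, lod):
    """u is the wrapped coordinate inside the tile, in [0, 1]."""
    size = 2.0 ** (tile_bits - lod)   # tile size in texels at this LOD
    pack = size / inv_tiles_x         # pack size in texels at this LOD
    # shrink the range by one texel and inset half a texel so bilinear
    # filtering never reads texel centers of the neighboring tile, then
    # offset to the wanted tile
    return u * ((size - 1.0) / pack) + 0.5 / pack + inv_tiles_x * tile_x

# At LOD 0, for tile column 1: u=0 lands half a texel inside the tile's
# left edge, u=1 half a texel inside its right edge.
lo = remap(0.0, 1.0, 0.0)   # 0.25 + 0.5/2048
hi = remap(1.0, 1.0, 0.0)   # 0.25 + 511.5/2048
```

So the sampled range stays strictly inside the tile at every LOD, which is what prevents bleeding from adjacent tiles.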

**Results**

The final shader code looks like this:

```glsl
const int nbTiles = int(1.0 / diffPackFactors.x);
vec3 uvw0 = calculateTexturePackMipWrapped(uv, diffPackFactors);
vec4 terrainType = texture2D(terrainLUT, vec2(slope, altitude));
int id0 = int(terrainType.x * 256.0);
vec2 offset0 = vec2(mod(float(id0), float(nbTiles)), float(id0 / nbTiles));
diffuse = texture2DLod(diffusePack, uvw0.xy + diffPackFactors.xy * offset0, uvw0.z);
```

And here is the final image:

With lighting, shadowing, and other effects:

**On the importance of noise**

The slope and altitude should be modified with many octaves of 2D noise to look more natural. I use an FbM 2D texture that I sample 10 times, with varying frequencies. 10 texture samples sounds like a lot, but remember that it's for a whole planet: it must look good at high altitudes, at low altitudes, and at ground level. 10 is the minimum I've found to get "acceptable" results.
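As a sketch of the idea ( the article samples a precomputed FbM texture in the shader; the hash-based value noise below is just a CPU-side stand-in ):

```python
# Perturb the altitude with a few octaves of noise before the LUT lookup,
# so layer transitions become irregular instead of straight iso-lines.
import math

def value_noise(x, y):
    """Cheap deterministic hash noise in [0,1] on integer lattice points,
    bilinearly interpolated in between."""
    def hash2(ix, iy):
        h = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
        h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
        return (h & 0xFFFF) / 65535.0
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    top = hash2(ix, iy) * (1 - fx) + hash2(ix + 1, iy) * fx
    bot = hash2(ix, iy + 1) * (1 - fx) + hash2(ix + 1, iy + 1) * fx
    return top * (1 - fy) + bot * fy

def perturbed_altitude(altitude, x, y, octaves=4):
    """Sum octaves with halving amplitude and doubling frequency (FbM),
    then keep the result in [0,1] so the LUT lookup stays valid."""
    n, amp, freq = 0.0, 0.5, 1.0
    for _ in range(octaves):
        n += (value_noise(x * freq, y * freq) - 0.5) * amp
        amp *= 0.5
        freq *= 2.0
    return min(max(altitude + n * 0.2, 0.0), 1.0)
```

The same altitude then maps to different layers depending on position, which breaks up the transitions.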

Without noise, transitions between layers at different altitudes or slopes look really bad:

Do you intend to also look up textures by latitude? It would break my suspension of disbelief if the poles of a planet were grassy instead of snowy.

Isn't that also known as a texture atlas? I use them for all my stuff, a very convenient way of reducing draw calls, state changes, etc. I was eventually going to come up with an HLSL function to allow wrapping within the atlas [instead of just tessellating the geometry], so what you posted is very convenient.

Great work.

This could easily be a GameDev article in itself...

Here is the full fragment shader I ended up with:

```glsl
uniform sampler2D terrainLUT;
uniform sampler2D diffusePack;

float mipmapLevel(vec2 uv, vec2 textureSize)
{
    vec2 dx = dFdx(uv * textureSize);
    vec2 dy = dFdy(uv * textureSize);
    float d = max(dot(dx, dx), dot(dy, dy));
    return 0.5 * log2(d);
}

vec3 calculateTexturePackMipWrapped(const in vec2 uv, const in vec4 packFactors, const in vec2 tile)
{
    vec3 uvw0;
    uvw0.xy = uv;

    /// estimate mipmap/LOD level
    float lod = mipmapLevel(uv, vec2(packFactors.z));
    lod = clamp(lod, 0.0, packFactors.w);
    uvw0.z = lod;

    /// get width/height of the whole pack texture for the current lod level
    float size = pow(2.0, packFactors.w - lod) / packFactors.x;
    float scale = (size * packFactors.x - 2.0) / size;
    float filterOffset = 1.0 / size;

    /// perform tiling (only on the uv part, not on the lod stored in .z)
    uvw0.xy = fract(uvw0.xy);

    /// tweak pixels for correct bilinear filtering, and add offset for the wanted tile
    uvw0.x = uvw0.x * scale + filterOffset + packFactors.x * tile.x;
    uvw0.y = uvw0.y * scale + filterOffset + packFactors.y * tile.y;

    return uvw0;
}

void main()
{
    const vec4 diffPackFactors = vec4(0.25, 0.25, 512.0, 9.0);
    vec2 uv0 = gl_TexCoord[0].st; // uv
    vec2 uv1 = gl_TexCoord[1].st; // slope and altitude

    // noise map for more random transitions
    vec4 noiseMap  = texture2D(terrainLUT, uv0 * 128.0);
    vec4 noiseMap2 = texture2D(terrainLUT, uv0 * 64.0);
    vec4 noiseMap3 = texture2D(terrainLUT, uv0 * 32.0);
    vec4 noiseMap4 = texture2D(terrainLUT, uv0 * 16.0);
    float n = noiseMap.y + noiseMap2.y + noiseMap3.y + noiseMap4.y;
    n *= 0.25;

    uv1.x = 0.0;
    uv1.y = uv1.y + ((n - 0.5) * 0.2);
    // this clamping might be wrong - because I'm scaling my LUT values by 16 below
    uv1.y = clamp(uv1.y, 0.1, 0.9);

    const int nbTiles = int(1.0 / diffPackFactors.x);
    vec4 terrainType = texture2D(terrainLUT, uv1);

    // IDs are stored in the lookup table and need to be scaled from 0.0-1.0 to 0-15 (16 textures in a pack)
    int id0 = int(terrainType.x * 16.0);
    vec2 offset0 = vec2(mod(float(id0), float(nbTiles)), float(id0 / nbTiles));

    // tile the uv 64 times across a single cube face of a planet
    vec3 uvw0 = calculateTexturePackMipWrapped(uv0 * 64.0, diffPackFactors, offset0);
    gl_FragColor = texture2DLod(diffusePack, uvw0.xy, uvw0.z);
}
```

Here's an image of my current implementation which uses a LUT that only has 5 levels based on altitude only.

http://alexcpeterson.com/node/48?size=preview

Really awesome article.

It was really easy to implement the terrain-type lookup, but I am stuck on adding noise for smooth transitions between terrain types.

Does anyone know of an article that describes how this is done in the shader?

Would be great :-)

Thanks a lot!

```csharp
public struct MappedVertex
{
    public Vector3 Position;
    public Vector2 TexCoord;
    public float Index;

    public static int SizeInBytes = 6 * sizeof(float);

    public static VertexElement[] VertexElements = new VertexElement[]
    {
        new VertexElement( 0, 0, VertexElementFormat.Vector3, VertexElementMethod.Default, VertexElementUsage.Position, 0 ),
        new VertexElement( 0, sizeof(float) * 3, VertexElementFormat.Vector2, VertexElementMethod.Default, VertexElementUsage.TextureCoordinate, 0 ),
        new VertexElement( 0, sizeof(float) * 5, VertexElementFormat.Single, VertexElementMethod.Default, VertexElementUsage.Color, 0 ),
    };
}
```

The idea is to have each vertex serve as the center of a virtual tile for a map. I want the shaders to take the indices for each triangle's vertices, sample only those textures referenced by the indices, and blend them just like ordinary colors or texture coordinates.

The roadblock I've been hitting is a limitation of HLSL, namely error X3500: array reference cannot be used as an l-value; not natively addressable.

Put simply, can this type of vertex shader work?

```hlsl
texture mapAtlas;

sampler atlasSampler = sampler_state
{
    Texture = <mapAtlas>;
    MipFilter = Linear;
    MinFilter = Linear;
    MagFilter = Linear;
    AddressU = Wrap;
    AddressV = Wrap;
};

struct VS_INPUT
{
    float4 position : POSITION;
    float4 uv : TEXCOORD0;
    float tile : COLOR0;
};

struct VS_OUTPUT
{
    float4 position : POSITION;
    float4 uv : TEXCOORD0;
    float4x4 blend : COLOR0;
};

VS_OUTPUT Transform(VS_INPUT In)
{
    VS_OUTPUT Out = (VS_OUTPUT)0;
    Out.position = mul(In.position, wvp);
    Out.uv = In.uv;

    float4x4 weights = 0;
    float row = floor(In.tile * 0.25);
    float col = fmod(In.tile, 4);
    weights[col][row] = 1; // *error*
    Out.blend = weights;
    return Out;
}
```

Then there's the matter of the pixel shader:

```hlsl
float4 PixelShader(VS_OUTPUT In) : COLOR
{
    float4 answer = 0;
    float3 vertWeights = 0;
    float3x4 vertColors = 0;

    // Texture atlas coordinates need constraints
    float2 uvin = frac(In.uv.xy * 3) * 0.25f;

    int trow = 0;
    int im = 0;
    int jm = 0;
    float sum = 0;

    // Look up those textures that have positive blends
    // and sample them. Ignore the others.
    while (trow < 3 && jm < 4)
    {
        float blend = In.blend[im][jm];
        if (blend > 0)
        {
            float2 uvtile = float2(im * 0.25f, jm * 0.25f);
            vertWeights[trow] = blend; // *error*
            sum += blend;
            vertColors[trow] = tex2D(atlasSampler, uvin + uvtile); // *error*
            trow++;
        }
        im++;
        if (im > 3)
        {
            im = 0;
            jm++;
        }
    }

    // Ensure the blends sum to 1
    vertWeights /= sum;

    // Now blend and return the answer
    answer = mul(vertWeights, vertColors);
    return answer;
}
```

To illustrate the nature of the problem, this awkward kludge deals with similar issues in the pixel shader:

```hlsl
if (blend > 0)
{
    float2 uvtile = float2(im * 0.25f, jm * 0.25f);
    if (trow == 0)
    {
        vertWeights.x = blend;
        vertColors[0] = tex2D(atlasSampler, uvin + uvtile);
    }
    if (trow == 1)
    {
        vertWeights.y = blend;
        vertColors[1] = tex2D(atlasSampler, uvin + uvtile);
    }
    if (trow == 2)
    {
        vertWeights.z = blend;
        vertColors[2] = tex2D(atlasSampler, uvin + uvtile);
    }
    sum += blend;
    trow++;
}
```

What will it take to make this shader work? Preferably in code form.
