In the past, I have tried many different approaches, namely:
1. Texture splatting: I have many texture layers (grass, rock, sand, snow, etc.) and for each vertex of the chunk, I assign weights to these textures. This work is done on the CPU asynchronously (a minimal splatting shader is sketched after the results below).
2. Unique procedural texture, on the CPU: for each chunk, a unique texture is procedurally generated using the heightfield and some noise functions.
The results:
1. Moderately fast, average quality. Dependent on tessellation. Morphing is hard to implement, since you must morph the weights too.
2. Awfully slow, better quality. It became unusable when you started to zoom in on the planet's surface. A few mipmapping problems.
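For reference, the per-pixel part of that first approach is just a weighted blend. A minimal GLSL fragment shader sketch, assuming four layers and interpolated per-vertex weights (all names here are illustrative, not the engine's actual code):

```glsl
#version 330 core

in vec2 vUV;      // terrain texture coordinates
in vec4 vWeights; // per-vertex splat weights (grass, rock, sand, snow)

uniform sampler2D uGrass;
uniform sampler2D uRock;
uniform sampler2D uSand;
uniform sampler2D uSnow;

out vec4 fragColor;

void main()
{
    // Renormalize: interpolated weights may not sum exactly to 1.
    vec4 w = vWeights / max(dot(vWeights, vec4(1.0)), 1e-5);

    fragColor = texture(uGrass, vUV) * w.x
              + texture(uRock,  vUV) * w.y
              + texture(uSand,  vUV) * w.z
              + texture(uSnow,  vUV) * w.w;
}
```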
So, for the next (and hopefully last) version of the terrain engine, I'm going to try a different technique: a unique, procedural texture, but generated on the GPU instead of the CPU. This should solve the performance problem.
At the moment I'm first trying to generate some good normals, to be used later for lighting, but also for texturing (based on the slope of the terrain, which requires normals).
The input is the heightfield for the terrain patch. It is currently 33x33, which is very low resolution. Assuming a 512x512 unique texture to cover this patch, the question is: how to upsample that heightfield, add some noise (for more details), and generate normals, all of that without any artifacts?
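The normal generation step itself is straightforward central differencing; the hard part is feeding it a smooth, detailed heightfield. A minimal sketch, assuming a single-channel height texture and a z-up convention (the uniform names are assumptions):

```glsl
#version 330 core

in vec2 vUV;

uniform sampler2D uHeight;   // the upsampled, filtered heightfield
uniform vec2  uTexelSize;    // 1.0 / resolution of uHeight
uniform float uHeightScale;  // vertical scale relative to texel spacing

out vec4 fragColor;

void main()
{
    // Central differences: any discontinuity in the height derivative
    // shows up directly in the normal, hence the grid artifacts below.
    float hL = texture(uHeight, vUV - vec2(uTexelSize.x, 0.0)).r;
    float hR = texture(uHeight, vUV + vec2(uTexelSize.x, 0.0)).r;
    float hD = texture(uHeight, vUV - vec2(0.0, uTexelSize.y)).r;
    float hU = texture(uHeight, vUV + vec2(0.0, uTexelSize.y)).r;

    // Normal of the surface z = h(x, y).
    vec3 n = normalize(vec3((hL - hR) * uHeightScale,
                            (hD - hU) * uHeightScale,
                            2.0));

    // Pack into [0, 1] for storage in a normal map.
    fragColor = vec4(n * 0.5 + 0.5, 1.0);
}
```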
I spent hours working on that. Some problems:
- upsampling using bilinear filtering: when differencing the heights for the normals, the derivatives are not continuous -> lots of "grid-based" artifacts
- 8-bit heightfield -> normals are of low quality -> lighting is very bad
- so I tried to encode the heights as f32 into an RGBA vector, and to decode them in the pixel shader (see the packing sketch below). Works better, but still some quality artifacts.
- I switched to floating-point textures. Problem: bilinear filtering is not available on them.
- so I had to implement the filtering myself in a pixel shader. I also tried to smooth-step the interpolation factors (see the filtering sketch below), which improved the quality a bit.
- but even after that, artifacts were still visible, so I had to blur the upsampled, filtered heightfield twice (with a 15x15 kernel) in a pixel shader (see the blur sketch below). This smoothed the heightfield, and hence the normals, by a lot.
- finally, noise is generated by taking N pre-generated 2D Perlin noise textures and adding them together with varying weights (see the last sketch below). The noise is then added on top of the upsampled, filtered heightfield.
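The f32-to-RGBA encoding is presumably the classic trick for packing a [0, 1) float into an RGBA8 render target; something along these lines (the constants are the standard powers of 255, not necessarily the exact ones used here):

```glsl
// Encode a height in [0, 1) into four 8-bit channels.
vec4 packFloat(float v)
{
    vec4 enc = fract(v * vec4(1.0, 255.0, 65025.0, 16581375.0));
    // Remove the part that bleeds into the next channel.
    enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
    return enc;
}

// Decode it back in the pixel shader that consumes the heights.
float unpackFloat(vec4 enc)
{
    return dot(enc, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
}
```

Note that hardware bilinear filtering of such a packed texture is meaningless, since the four channels get interpolated independently; the heights have to be decoded first and filtered manually, which is exactly what the next points deal with.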
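The manual filtering with smooth-stepped interpolation factors could look like this sketch, assuming the heightfield is bound with nearest filtering and clamp-to-edge wrapping (uTexSize and the other names are assumptions):

```glsl
#version 330 core

in vec2 vUV;

uniform sampler2D uHeight; // floating-point heightfield, nearest filtering
uniform vec2 uTexSize;     // heightfield resolution, e.g. vec2(33.0)

out vec4 fragColor;

// Fetch one texel by integer coordinate.
float fetch(vec2 texel)
{
    return texture(uHeight, (texel + 0.5) / uTexSize).r;
}

void main()
{
    // Locate the four texels surrounding this sample.
    vec2 pos  = vUV * uTexSize - 0.5;
    vec2 base = floor(pos);
    vec2 f    = pos - base;

    // Smooth-stepping the interpolation factors makes the derivative
    // continuous across texel boundaries, which plain bilinear is not.
    f = f * f * (3.0 - 2.0 * f);

    float h00 = fetch(base);
    float h10 = fetch(base + vec2(1.0, 0.0));
    float h01 = fetch(base + vec2(0.0, 1.0));
    float h11 = fetch(base + vec2(1.0, 1.0));

    fragColor = vec4(mix(mix(h00, h10, f.x),
                         mix(h01, h11, f.x), f.y), 0.0, 0.0, 1.0);
}
```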
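As for the blur: a full 15x15 kernel is 225 taps per pixel, so the cheap way to do it is a separable blur, one horizontal pass followed by one vertical pass (whether that is how it is done here is an assumption). A sketch of the horizontal half:

```glsl
#version 330 core

in vec2 vUV;

uniform sampler2D uHeight; // upsampled, filtered heightfield
uniform vec2 uTexelSize;   // 1.0 / resolution of uHeight

out vec4 fragColor;

void main()
{
    // Horizontal half of a separable 15x15 box blur; a second pass
    // offsetting along y completes the kernel.
    float sum = 0.0;
    for (int i = -7; i <= 7; ++i)
        sum += texture(uHeight, vUV + vec2(float(i) * uTexelSize.x, 0.0)).r;
    fragColor = vec4(sum / 15.0, 0.0, 0.0, 1.0);
}
```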
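And the noise accumulation, assuming N = 4 octaves with the frequency change done by scaling the UVs (whether the frequencies come from the UV scale or from the pre-generated textures themselves isn't specified; the noise textures need repeat wrapping for this to work):

```glsl
#version 330 core

in vec2 vUV;

// Four pre-generated Perlin noise textures (N = 4 is an assumption).
uniform sampler2D uNoise0;
uniform sampler2D uNoise1;
uniform sampler2D uNoise2;
uniform sampler2D uNoise3;
uniform vec4 uWeights;     // per-octave weights, e.g. (0.5, 0.25, 0.125, 0.0625)

out vec4 fragColor;

void main()
{
    // Each octave doubles the frequency; the weights control the roughness
    // of the detail added on top of the upsampled heightfield.
    float n = texture(uNoise0, vUV * 1.0).r * uWeights.x
            + texture(uNoise1, vUV * 2.0).r * uWeights.y
            + texture(uNoise2, vUV * 4.0).r * uWeights.z
            + texture(uNoise3, vUV * 8.0).r * uWeights.w;
    fragColor = vec4(n, 0.0, 0.0, 1.0);
}
```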
The result is rather nice, and it is generated 100% on the GPU, which means it runs in real time.
It's not finished though. I'm concerned about the blurring: it works well here, but when two terrain patches are adjacent, the pixel shader will not have access to the adjacent patch's texels, so an ugly seam might appear. To avoid it, I'd have to drop the blurring, but then I'd need another way to avoid the artifacts.