We have split our terrain into logical sectors, or pages, that allow us to stream in portions of the terrain based on the camera's location. Each sector is further subdivided into 256 cells in a 16x16 grid. Each cell consists of up to 4 color textures (generally 256x256), an alpha blend map for each color layer past the first, and a light map.
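To make the layout concrete, here's a small CPU-side sketch of the paging scheme: mapping a world position to a (sector, cell) pair. The world-space sizes here are made-up numbers for illustration, not values from our engine.

```python
# Sketch of the sector/cell paging layout. CELL_WORLD_SIZE is an assumed
# value; only the 16x16 cells-per-sector count comes from our setup.
CELLS_PER_SIDE = 16                    # each sector is 16x16 = 256 cells
CELL_WORLD_SIZE = 32.0                 # hypothetical world units per cell
SECTOR_WORLD_SIZE = CELLS_PER_SIDE * CELL_WORLD_SIZE

def locate(x: float, z: float):
    """Map a world-space position to (sector, cell) grid indices."""
    sector = (int(x // SECTOR_WORLD_SIZE), int(z // SECTOR_WORLD_SIZE))
    cell = (int((x % SECTOR_WORLD_SIZE) // CELL_WORLD_SIZE),
            int((z % SECTOR_WORLD_SIZE) // CELL_WORLD_SIZE))
    return sector, cell
```

Streaming then just means loading the sectors whose indices fall within some radius of the camera's sector.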
Rendering each of the smaller cells independently is quite easy: I pass the color textures along with an RGB texture that packs up to 3 alpha blend maps into a single texture, and I do the shading inside a pixel shader. While this works quite well, it is far from efficient, mainly because it pushes the batch count extremely high: I am rendering on a small cell-by-cell basis, and no material can be shared between cells because each has its own alpha blend map texture.
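For reference, the per-pixel blend the shader performs looks roughly like this CPU emulation: the R/G/B channels of the packed blend texture weight layers 2-4 over the base layer, then the lightmap modulates the result. Texels are plain floats here and the names are illustrative, not my actual shader code.

```python
# CPU sketch of the 4-layer splat blend done in the pixel shader.
# blend_rgb is one texel of the packed RGB blend texture.
def lerp(a, b, t):
    return a + (b - a) * t

def splat(base, layer2, layer3, layer4, blend_rgb, light):
    r, g, b = blend_rgb
    c = base
    c = lerp(c, layer2, r)   # R channel weights layer 2 over the base
    c = lerp(c, layer3, g)   # G channel weights layer 3
    c = lerp(c, layer4, b)   # B channel weights layer 4
    return c * light         # lightmap modulation
```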
I did consider the idea of using a quad-tree structure, driven by texture counts, to split the terrain sectors into quads containing at minimum 1 cell (but more typically 8x8 patches of cells), dynamically generate a material that references the textures used by the given set of cells, and then generate a dynamic runtime texture by combining the alpha blend maps for those cells into one larger texture. In the case where a terrain sector is within limits and thus the entire sector is 1 leaf node, the alpha texture is only 1024x1024, but generally I would expect it to be 512x512 or smaller in more detailed, texture-varying areas.
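The split rule I have in mind is roughly this: recursively merge cells into the largest square patches whose combined texture set stays under some budget. MAX_TEXTURES here is an assumed limit (e.g. 8 bound textures), and the recursion is only a sketch of the idea, not a finished implementation.

```python
# Hypothetical quad-tree split driven by texture counts.
MAX_TEXTURES = 8

def build(cells, x0, y0, size):
    """cells[y][x] is the set of texture ids used by that cell.
    Returns a list of (x, y, size, textures) leaf patches."""
    union = set()
    for y in range(y0, y0 + size):
        for x in range(x0, x0 + size):
            union |= cells[y][x]
    # Patch fits the budget (or can't be split further): make it a leaf.
    if len(union) <= MAX_TEXTURES or size == 1:
        return [(x0, y0, size, union)]
    # Otherwise split into four child quads.
    h = size // 2
    return (build(cells, x0, y0, h) + build(cells, x0 + h, y0, h) +
            build(cells, x0, y0 + h, h) + build(cells, x0 + h, y0 + h, h))
```

Each leaf would then get a dynamically generated material plus one combined alpha texture stitched from its cells' blend maps.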
The problem with this approach is that I am not sure how to control which 4 textures to sample and blend in my shader. As a simple example, suppose my algorithm determines that a 2x2 set of grid cells is within the texture limits to be combined into a single material. So I bind, let's say, 8 textures plus my blend texture. The top-left cell might need to sample textures 1, 3, 5, and 8, while the top-right cell might need to sample textures 2, 5, 6, and 7. In both cases the same alpha texture is sampled, but its red channel controls the blend weight for texture 1 in the top-left cell while it controls the blend weight for texture 2 in the top-right.
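One direction I can imagine is a small per-cell indirection table: each cell stores 4 indices into the bound texture set, and the shader uses them to pick which textures the shared blend channels drive (e.g. via a texture array). Here's a CPU emulation of that idea; the channel-to-slot convention and the `shade` helper are my own assumptions, with scalars standing in for sampled texels.

```python
# Sketch of per-cell texture indirection: cell_indices maps this cell's
# four layer slots to entries in the bound texture set, so the same
# blend texel weights different textures in different cells.
def shade(textures, cell_indices, blend_rgb, light=1.0):
    """textures: the bound texture set (index -> texel value here).
    cell_indices: this cell's 4 slots, e.g. (1, 3, 5, 8) or (2, 5, 6, 7).
    blend_rgb: shared alpha texel; R/G/B weight slots 2-4 over slot 1."""
    i0, i1, i2, i3 = cell_indices
    r, g, b = blend_rgb
    c = textures[i0]
    c = c + (textures[i1] - c) * r
    c = c + (textures[i2] - c) * g
    c = c + (textures[i3] - c) * b
    return c * light
```

With this, the same red channel value resolves to a different texture per cell, which is exactly the behavior described above; on the GPU side the indices could live in a constant buffer or a tiny per-cell index texture.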
Is this even possible, and if so, does anyone have examples or suggestions on how I could implement it? Or is there a cleaner yet still efficient way to do this without overly complicating the shader logic?