This is just an exploratory question really, but here's my situation.
I have a very large terrain, with all the textures stuffed into a texture array, addressable by their array index. Eventually there will be quite a lot of textures (I envisage around 30 or more in total), but any given 'chunk' will only use a subset of the total - no more than, say, 16, which conveniently fits into 4 bits. I can then use a base offset for each chunk to determine which run of consecutive textures in the array that chunk can access. The simplest approach would be an R8 texture, but that costs 8 bits per pixel when 4 bits would do.
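To illustrate the addressing scheme I have in mind, here's a rough GLSL sketch (the uniform/sampler names are just placeholders, and the index map would need to be point-sampled so indices don't blend):

```glsl
#version 330 core

uniform sampler2DArray uTerrainTextures; // every terrain texture lives in this array
uniform sampler2D      uIndexMap;        // R8 per-chunk index map, values 0..15
uniform float          uChunkBaseLayer;  // first array layer this chunk uses

in  vec2 vChunkUV;  // UV into the chunk's index map
in  vec2 vTileUV;   // tiling UV for the terrain texture itself
out vec4 fragColour;

void main()
{
    // Local index 0..15 stored in an 8-bit channel (so half the bits are wasted).
    float localIndex = floor(texture(uIndexMap, vChunkUV).r * 255.0 + 0.5);

    // The per-chunk base offset turns the 4-bit local index into an array layer.
    float layer = uChunkBaseLayer + localIndex;

    fragColour = texture(uTerrainTextures, vec3(vTileUV, layer));
}
```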
DXT3 textures handily provide 4 bits per pixel of explicit alpha, leaving the RGB channels free, which would be ideal for terrain vertex normals. Now, I understand that DXT is generally a crappy way to store normals, but I was wondering if anyone knows how bad it would be for terrain, bearing in mind that most normals point more or less up, rather than every which way.
Maybe there is a way of using the RG & B channels to better encode the normals' XY? Or a better compression method that allows 4 bits per pixel.
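If the DXT3 route turned out to be acceptable, I imagine the lookup would be roughly along these lines (again just a sketch with made-up names: alpha carries the 4-bit local index, RG carries the normal's XY remapped into 0..1, and the third component is reconstructed as positive, which is only safe because terrain normals never point downwards):

```glsl
#version 330 core

uniform sampler2D      uChunkData;        // DXT3/BC2: RG = normal XY, A = 4-bit index
uniform sampler2DArray uTerrainTextures;
uniform float          uChunkBaseLayer;

in  vec2 vChunkUV;
in  vec2 vTileUV;
out vec4 fragColour;

void main()
{
    vec4 data = texture(uChunkData, vChunkUV);

    // DXT3 stores 4 explicit alpha bits per pixel, i.e. 16 levels -> index 0..15.
    float layer = uChunkBaseLayer + floor(data.a * 15.0 + 0.5);

    // Normal XY from RG, remapped from [0,1] to [-1,1]; the 'up' component is
    // reconstructed, assuming it is always positive for a heightmapped terrain.
    vec2  nxy = data.rg * 2.0 - 1.0;
    float nz  = sqrt(max(1.0 - dot(nxy, nxy), 0.0));
    vec3  normal = normalize(vec3(nxy, nz));

    // Purely illustrative directional light.
    float diffuse = max(dot(normal, normalize(vec3(0.3, 0.8, 0.5))), 0.0);
    fragColour = texture(uTerrainTextures, vec3(vTileUV, layer)) * diffuse;
}
```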
I suspect, however, that I'll just have to pack the indices into bytes and bit-shift to get at the data. A 'built-in' method (like DXT) would be nicer/easier though, and would avoid "if( mod(...) )"-type statements.
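For that fallback, I think the unpacking can at least be branchless - something like this sketch, using an R8UI index map with two 4-bit local indices packed per texel (names again made up):

```glsl
#version 330 core

uniform usampler2D uPackedIndexMap;  // R8UI: two 4-bit local indices per texel

// Fetch the 4-bit index at integer grid coordinate 'coord'. The packed map is
// half as wide as the grid, with even x in the low nibble and odd x in the high.
uint fetchLocalIndex(ivec2 coord)
{
    uint pair  = texelFetch(uPackedIndexMap, ivec2(coord.x >> 1, coord.y), 0).r;
    uint shift = uint(coord.x & 1) * 4u;   // 0 for even x, 4 for odd x - no if/mod needed
    return (pair >> shift) & 0xFu;
}
```

The CPU-side packing is then just (a << 4) | b for each pair of indices.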