I'm writing a framework that makes it easier for my team to implement procedural terrain generation algorithms. The output is a randomly generated heightmap. The basic idea is to share work between the CPU and the GPU, so I've written a texture class that can be used for primitives and drawn onto (GPU), and written/read pixel by pixel (CPU). I upload/download texture data to/from the GPU as needed.
We're targeting DX9-compatible hardware.
When deciding on a texture format, I settled on GL_LUMINANCE + GL_FLOAT, since I only need one channel for the height. Later I wanted to add brush support, so I switched to GL_LUMINANCE_ALPHA. I've also started changing my framework so that it uses the highest available precision (so GL_FLOAT is more of a hint now, no longer a requirement).
I did some empirical research on GL_LUMINANCE_ALPHA's support and realized it's far from optimal. On my NVIDIA GeForce 710M it's supported, but drawing with it is ridiculously slow. My integrated Intel HD 3000 lacks support for both GL_LUMINANCE and GL_LUMINANCE_ALPHA.
Looking for a better-supported replacement, I noticed GL_RG, which also has two components, but by default it doesn't expand the way I need. I've looked at texture swizzling, but it has only been around since ~2008, which doesn't fit my compatibility target.
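For comparison, this is roughly what the swizzle route would look like if it were available (it needs GL 3.3 core or the swizzle extension, so probably not an option on the hardware I'm targeting; `width`, `height` and `data` are placeholders):

```c
/* Sketch: emulate GL_LUMINANCE_ALPHA semantics with a GL_RG texture.
 * Requires GL 3.3 core (or ARB_texture_swizzle / EXT_texture_swizzle),
 * so it won't help on older DX9-class drivers -- shown only for
 * comparison. GL_RG32F itself also needs ARB_texture_rg. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RG32F, width, height, 0,
             GL_RG, GL_FLOAT, data);

/* Replicate R into RGB (the "luminance" part) and route G to alpha. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_R, GL_RED);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_G, GL_RED);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_B, GL_RED);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_A, GL_GREEN);
```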
I could expand my data to GL_RGBA, which has the best chance of being supported, but that way two channels go unused.
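The CPU-side expansion for that option is trivial; a sketch (the function name is mine, not something from my framework):

```c
#include <stddef.h>

/* Expand interleaved luminance/alpha pairs into RGBA, replicating the
 * height value into R, G and B so existing shading/modulation keeps
 * working. Wastes memory, but GL_RGBA + GL_FLOAT is about as widely
 * supported as it gets on DX9-class hardware. */
static void expand_la_to_rgba(const float *la, float *rgba, size_t texels)
{
    for (size_t i = 0; i < texels; ++i) {
        float lum   = la[2 * i + 0];
        float alpha = la[2 * i + 1];
        rgba[4 * i + 0] = lum;
        rgba[4 * i + 1] = lum;
        rgba[4 * i + 2] = lum;
        rgba[4 * i + 3] = alpha;
    }
}
```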
Or I could use two separate GL_R textures, one for the height and one for the alpha. That seems valid too, but then I have to draw everything twice.
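Splitting the interleaved data for the two-texture route is equally simple on the CPU side; a sketch (again, the function name is mine):

```c
#include <stddef.h>

/* Split interleaved height/alpha pairs into two single-channel planes,
 * one per texture. Each plane can then be uploaded with a one-channel
 * format; the price is binding and drawing twice instead of once. */
static void split_la(const float *la, float *height_plane,
                     float *alpha_plane, size_t texels)
{
    for (size_t i = 0; i < texels; ++i) {
        height_plane[i] = la[2 * i + 0];
        alpha_plane[i]  = la[2 * i + 1];
    }
}
```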
I'd appreciate any input on my plans: which method should I use, how much compatibility can I expect, and so on.
tl;dr: GL_LUMINANCE_ALPHA isn't reliably supported, so I need an alternative that works on DX9-compatible GPUs.
Edited by TheUnnamable, 24 January 2014 - 04:52 PM.