I've found myself in an odd situation where conserving space is of the utmost priority. I currently have a large number of 2D vectors, each expressed as two floats (an x and a y value). These floats are packed into a texture and uploaded to the GPU for use in a GLSL program.

An easy way to minimize space is to compress the data down to 8 bits per component, so x and y each fit in 8 bits. This gives okay results, but there's some loss of precision that I'd like to avoid, or at least mitigate.
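Just to make the precision loss concrete, here's a rough sketch of the kind of 8-bit quantization I mean (the function names and the [-1, 1] range are my own assumptions, not anything official):

```python
def quantize8(x):
    """Map a component x in [-1, 1] to an integer in [0, 255]."""
    return round((x + 1.0) / 2.0 * 255)

def dequantize8(q):
    """Map an integer in [0, 255] back to a float in [-1, 1]."""
    return q / 255 * 2.0 - 1.0

# The quantization step is 2/255 (about 0.0078), so the round-trip
# error on any component is at most half a step (about 0.0039).
err = abs(dequantize8(quantize8(0.3)) - 0.3)
```

That half-step error applies independently to x and y, which is exactly the loss I'd like to shift toward wherever it matters least.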

It occurred to me that by expressing the vector in polar form, I can more accurately decide where I need precision the most: in the direction or in the length of the vector. Consequently, I'd like to try using 10 bits for the direction and 6 bits for the length. But I'm at a loss as to whether this is even possible on the GPU. If anyone has any ideas or feedback, I'd be very grateful. This is what I know so far:

- I'm uncertain whether the GLSL specification allows textures with components of 10 or 6 bits. Even if it does, it seems like it might be very inefficient, since it's quite non-standard...
- The GPU now supports bit-wise operations. So perhaps I could upload the two values combined into a single 16-bit texture and then use bit-shifting operations to split them into two separate variables? What do you think?
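To illustrate the second idea, here's a CPU-side sketch of the 10/6 packing I have in mind (the names and the layout, angle in the high bits, are my own choices). The unpack step is the same shift-and-mask logic I'm hoping the shader could replicate after reading the texel as an integer:

```python
def pack16(angle_q, length_q):
    """Pack a 10-bit angle (0..1023) and a 6-bit length (0..63)
    into a single 16-bit value, angle in the high bits."""
    assert 0 <= angle_q < 1024 and 0 <= length_q < 64
    return (angle_q << 6) | length_q

def unpack16(v):
    """Recover the two fields with shifts and masks, which is
    what the shader would mirror with >> and & on integers."""
    angle_q = (v >> 6) & 0x3FF   # top 10 bits
    length_q = v & 0x3F          # bottom 6 bits
    return angle_q, length_q
```

If this works, the remaining shader-side steps would just be rescaling the 10-bit value to an angle in [0, 2π) and the 6-bit value to a length, then converting back to Cartesian with sin/cos.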

Gazoo