It's been a while since I've been here!
I am building a game engine. My primary world geometry is fed to OpenGL as several arrays: positions, texture coordinates, etc.
My coordinates were simple [x,y,z] using GL_FLOAT, but to save space I decided to try GL_UNSIGNED_SHORT.
When I made that switch, my frame-rate dropped from ~150fps to ~40fps.
In retrospect, I realized I was feeding it six bytes per vertex (three unsigned shorts), which caused alignment problems, so I padded the vertex data to [x,y,z,1] and got up to 80 fps.
But this is still only just over half of what I was getting with GL_FLOAT. Is there some secret here, or is the ushort-to-float conversion on the GPU really expensive enough to cause this?
Note: I am using shaders and none of the fixed-function pipeline.