Help: switching from float to unsigned short killed performance


Hi all,

It's been a while since I've been here!

I am building a game engine. My primary world geometry is fed to OpenGL using several vertex arrays: positions, texture coordinates, and so on.

My coordinates were simple [x,y,z] using GL_FLOAT, but to save space I decided to try GL_UNSIGNED_SHORT.
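For reference, the switch is essentially this change to the attribute setup (a minimal sketch; the attribute location, normalized flag, and tightly packed stride are assumptions, not my exact code):

    // float positions: 3 x 4 = 12 bytes per vertex
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

    // unsigned short positions: 3 x 2 = 6 bytes per vertex
    glVertexAttribPointer(0, 3, GL_UNSIGNED_SHORT, GL_FALSE, 0, (void*)0);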

When I made that switch, my frame-rate dropped from ~150fps to ~40fps.

In retrospect, I realized I was feeding it six bytes per vertex, which caused alignment problems, so I padded the vertex data to [x,y,z,1] and got back up to 80 fps.
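In code, the padded layout is roughly this (again a sketch, with an assumed attribute location and an explicit 8-byte stride):

    // [x, y, z, 1] as four unsigned shorts = 8 bytes per vertex,
    // so every vertex now starts on a 4-byte-aligned boundary
    glVertexAttribPointer(0, 4, GL_UNSIGNED_SHORT, GL_FALSE, 8, (void*)0);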

But this is still only just over half of what I was getting with GL_FLOAT. Is there some trick here, or is the conversion on the GPU really expensive enough to cause this?

Note: I am using shaders and none of the fixed-function pipeline.


Retraction: I was mistaken. The frame rates are exactly the same. It turns out my frame rate flip-flops between roughly 80 and 150 fps depending on a random fluctuation in my game, and it just happened that several runs using GL_FLOAT hit the higher number while several runs using GL_UNSIGNED_SHORT hit the lower one.

<whew> I was a bit concerned about this one :-)

