HScottH

OpenGL Help: switching from float to unsigned short killed performance


Hi all,

 

It's been a while since I've been here!

 

I am building a game engine.  My primary world geometry is fed to OpenGL through several arrays: positions, texture coordinates, and so on.

 

My coordinates were simple [x,y,z] using GL_FLOAT, but to save space I decided to try GL_UNSIGNED_SHORT.

 

When I made that switch, my frame rate dropped from ~150 fps to ~40 fps.

 

In retrospect, I realized I was feeding it six bytes per vertex, which causes alignment problems, so I changed the vertex data to [x,y,z,1] and got back up to 80 fps.

 

But this is still only about half of what I was getting with GL_FLOAT.  Is there some secret here, or is the conversion on the GPU so expensive that it causes this?

 

Note: I am using shaders and none of the fixed-function pipeline.


Redaction: I was mistaken; the two formats perform the same.  It turns out my frame rate flip-flops between ~80 and ~150 fps depending on a random fluctuation in my game, and it just happened that several runs using FLOAT hit the higher number while several runs using SHORT hit the lower one.

 

<whew> I was a bit concerned on this one :-)
