Quick Question on Sprite Rendering

2 comments, last by GregMichael 13 years, 11 months ago
Hi, I'm working on a 2D renderer that draws sprites that can be animated. I'm currently implementing the system by storing each frame of an animation as a separate OpenGL texture. I did this because I was concerned that I might exceed the maximum texture size if I stored all the frames in one texture, and my intention is that the system should be scalable.

What I'm more concerned about is speed. Although OpenGL renders 2D images blazingly fast, I still don't want to do something grossly inefficient. I am currently switching the bound texture for each frame of the animation. Would it be faster to store multiple frames in one texture and translate the texture coordinates?

Also, is there a memory penalty I'm unaware of? I know that multiple high-res sprites with several frames of animation can quickly eat up video memory (especially without some sort of caching scheme). Does the video card pack textures in video memory in a way that would make one method more efficient than the other?

Thanks,
-othello
I'd recommend a single texture (within reason) containing all the sprite frames, then animate the UV coordinates to get to the frame you want. You could write a sprite-boxing tool that lets you (or an artist, designer, etc.) select a box region, and therefore the UV coordinates, for each sprite. Then it's easy to run through an animation without changing the texture each frame.
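To illustrate the UV-animation idea: if the atlas is laid out as a simple grid of equally sized frames (a common but not required layout; the grid dimensions here are an assumption, not anything from the original post), the per-frame UV rectangle is just arithmetic on the frame index:

```cpp
#include <cassert>

// UV rectangle for one frame within a sprite-sheet texture.
struct UVRect { float u0, v0, u1, v1; };

// Compute the UV sub-rectangle of frame `index` in an atlas laid out as a
// `cols` x `rows` grid of equally sized frames, in row-major order.
// (Hypothetical helper for illustration; a boxing tool as described above
// would instead store per-frame rectangles picked by hand.)
UVRect frameUV(int index, int cols, int rows) {
    int col = index % cols;
    int row = index / cols;
    float w = 1.0f / cols;   // width  of one frame in UV space
    float h = 1.0f / rows;   // height of one frame in UV space
    return { col * w, row * h, (col + 1) * w, (row + 1) * h };
}
```

Advancing the animation then means passing a different `UVRect` to the quad's texture coordinates each frame, with the atlas texture staying bound the whole time.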
Thanks for the reply. So is changing the texture each animation frame expensive as opposed to changing the texture coordinates?
I believe so, yes, but check the documentation / Google to be sure.

As a rule of thumb, any state change when rendering hurts performance: the fewer things change between draws, the faster you can render. That's not an easy task, since you do have to change textures, meshes, etc., but these changes can be batched to minimize state switches.
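A minimal sketch of the batching idea: sort the sprites by texture before submitting them, so that consecutive sprites share a binding and `glBindTexture` is only called when the texture actually changes. The `Sprite` struct and the bind-counting return value are assumptions for illustration, not part of any real API:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical per-sprite draw record; `texture` stands in for the GL
// texture name (GLuint in real code, plain unsigned here to stay self-contained).
struct Sprite {
    unsigned texture;
    float x, y;
};

// Sort sprites by texture, then walk the list, rebinding only on a change.
// Returns how many texture binds the sorted order needed.
int drawBatched(std::vector<Sprite>& sprites) {
    std::sort(sprites.begin(), sprites.end(),
              [](const Sprite& a, const Sprite& b) { return a.texture < b.texture; });
    unsigned bound = 0;   // 0 = no texture bound yet
    int binds = 0;
    for (const Sprite& s : sprites) {
        if (s.texture != bound) {
            // glBindTexture(GL_TEXTURE_2D, s.texture);  // real GL call goes here
            bound = s.texture;
            ++binds;
        }
        // ...submit the quad for s here...
    }
    return binds;
}
```

With five sprites spread over three textures, the sorted order needs only three binds instead of up to five; the saving grows with the number of sprites per texture, which is exactly what an atlas maximizes.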

This topic is closed to new replies.
