soconne

Imposter System Design Problem


I recently got my own impostering system working very well. At first I designed it so that each mesh in the scene had its own 64x64 texture to render an imposter into and display on a quad. Using this method I could have 2000 imposters in the scene, each updating its imposter texture when needed, and I got a frame rate of 90-150 fps.

I then decided to pack the imposter textures into a large atlas texture, so I could skip the huge number of texture-binding calls when rendering the imposters; for 2000 imposters, that's 2000 separate binding calls for the 64x64 textures. Well, now I have a single 1024x1024 texture packing 256 imposters, but I am getting horrible frame rates. With this new method I only get around 300 fps with 256 objects, where the old system got around 700-800 fps! And if I use 512 objects, meaning two 1024x1024 textures, rendering every object belonging to the first atlas texture and THEN rendering the objects that belong to the second one, I get as low as 10 fps!

Every time an imposter is updated, it must bind the model's textures, render the model to the screen, then copy the frame buffer into the appropriate location in the atlas texture. So am I experiencing bad frame rates because of using a larger imposter texture? Meaning the cost of binding a large 1024x1024 texture is MUCH greater than binding a 64x64 texture? Because that is what it seems to be. Any ideas?
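(For reference, the bind-render-copy update step described above typically boils down to something like the sketch below in fixed-function OpenGL. The names atlasTex, slot and drawModel(), and the tile math, are illustrative, not the poster's actual code.)

    // Sketch of the per-imposter update: render the model small, then copy
    // that region of the framebuffer into the imposter's 64x64 tile of the
    // 1024x1024 atlas. Assumes 'atlasTex' was allocated earlier with
    // glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 1024, ...).
    const int TILE = 64;
    const int TILES_PER_ROW = 1024 / TILE;        // 16 tiles per row
    int x = (slot % TILES_PER_ROW) * TILE;        // tile origin in the atlas
    int y = (slot / TILES_PER_ROW) * TILE;

    glViewport(0, 0, TILE, TILE);                 // render into a small corner
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawModel();                                  // binds the model's own textures

    glBindTexture(GL_TEXTURE_2D, atlasTex);
    // Copy the 64x64 corner of the framebuffer into the (x, y) tile.
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, x, y, 0, 0, TILE, TILE);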

The code, the OpenGL drivers, or both are doing something unexpected; you need accurate profiling data.
Have you tested the program on different combinations of card and driver? Getting the expected good performance on some platforms would be strong evidence of a bad OpenGL implementation.
Your impostor system combines different activities: try measuring separately the rendering of impostor models into the texture atlas (write only) and the rendering of impostor quads to the screen (from a fixed texture atlas), as in the sketch below.
You could also write simple, parameterized test programs without the impostor-management logic.
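One rough way to bracket those two phases separately (a sketch; updateImposterAtlas(), drawImposterQuads() and timeSeconds() are hypothetical names; glFinish() distorts total throughput but makes the per-phase numbers honest):

    // glFinish() drains the GPU queue so the elapsed CPU time
    // includes the GPU work done in each phase.
    glFinish();
    double t0 = timeSeconds();
    updateImposterAtlas();                 // phase 1: models -> atlas
    glFinish();
    double t1 = timeSeconds();
    drawImposterQuads();                   // phase 2: atlas -> screen quads
    glFinish();
    double t2 = timeSeconds();
    printf("atlas update %.2f ms, quad pass %.2f ms\n",
           (t1 - t0) * 1000.0, (t2 - t1) * 1000.0);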

I assume you're filling one texture fully before switching to the next texture (and the same when drawing: you draw all the imposters from one texture before moving on to the next one)?
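A sketch of that ordering, so each atlas is bound exactly once per frame (the Imposter struct, drawQuad() and the container are made up for illustration):

    #include <algorithm>
    #include <vector>

    struct Imposter { GLuint atlasTex; /* quad corners, uv rect, ... */ };

    bool byAtlas(const Imposter& a, const Imposter& b)
    {
        return a.atlasTex < b.atlasTex;
    }

    void drawImposters(std::vector<Imposter>& imposters)
    {
        // Group quads by atlas so the texture bind happens per atlas,
        // not per imposter.
        std::sort(imposters.begin(), imposters.end(), byAtlas);
        GLuint bound = 0;                          // no texture bound yet
        for (size_t i = 0; i < imposters.size(); ++i) {
            if (imposters[i].atlasTex != bound) {  // bind only on change
                glBindTexture(GL_TEXTURE_2D, imposters[i].atlasTex);
                bound = imposters[i].atlasTex;
            }
            drawQuad(imposters[i]);                // illustrative helper
        }
    }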

Oh, I fixed the problem already. With 2000 imposters I now have it running anywhere from 650-900 fps.

It's always nice to say WHAT you did to solve the issue, so those coming later don't have to post the same question again. Thanks!

Well, the problem I was having was that after I implemented the texture atlases to batch the imposters together, I forgot to take out the code that created a separate imposter texture for each object. So basically I was allocating WAY too much GPU memory and it was causing things to run very slowly. Once I took this out, the frame rate jumped by about 150-200 fps.
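(For anyone following along, the arithmetic makes the leak plain, assuming RGBA8 textures and ignoring mipmaps:

    64 x 64 x 4 bytes          =  16 KB per single-imposter texture
    2000 x 16 KB               = ~31 MB of leftover per-object textures
    1024 x 1024 x 4 bytes      =   4 MB per atlas (256 slots each)
    8 atlases (2048 slots)     =  32 MB
    both sets resident at once = ~63 MB, roughly double what's needed

On cards of that era that is a large slice of total video memory, so the driver plausibly started swapping textures out, which would fit the slowdown described.)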

Then I added frustum culling that culls groups of imposters instead of each one individually. And instead of updating imposter textures EVERY frame, I only update them every 30 frames or so; you cannot tell the difference. This boosted frame rates by another 200-300 fps, depending on how many imposters are in view.
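(One simple way to schedule that every-30-frames refresh; a sketch, and staggering by index is an assumption, since the post doesn't say how the interval is spread out:

    // Refresh each imposter every UPDATE_INTERVAL frames, offset by its
    // index so the updates are spread across frames instead of all landing
    // on the same one. renderImposterToAtlas() is an illustrative helper.
    const int UPDATE_INTERVAL = 30;        // value from the post
    for (size_t i = 0; i < imposters.size(); ++i)
        if ((frameCount + i) % UPDATE_INTERVAL == 0)
            renderImposterToAtlas(imposters[i]);
)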

Then I created a load-balancing system that only updates a certain number of imposters each frame, instead of EVERY single one that needs updating. This increased the frame rate by about another 100 fps, I kid you not.
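(That load balancing might look roughly like this; the dirty queue, the budget of 8, and the helper name are all illustrative:

    // Cap atlas updates at a fixed budget per frame. 'dirty' is a
    // std::deque<Imposter*> of imposters whose view angle has drifted
    // enough to need a re-render; leftovers wait for the next frame.
    const int BUDGET = 8;
    for (int updated = 0; !dirty.empty() && updated < BUDGET; ++updated) {
        Imposter* imp = dirty.front();
        dirty.pop_front();
        renderImposterToAtlas(*imp);
    }
)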

Finally, I had been rendering every imposter in immediate mode: a glBegin before each imposter, the vertices via glVertex3f, then glEnd. I thought that converting to vertex arrays would batch the geometry and send it to the GPU more efficiently, but when I did this I saw no increase in fps whatsoever, so I went back to using glBegin and glEnd.
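(For completeness, the vertex-array version of the quad pass would look roughly like the sketch below; the interleaved x,y,z,u,v layout and the names are assumptions. That it changed nothing suggests the bottleneck was fill rate or the atlas updates rather than vertex submission:

    // One interleaved array (x,y,z,u,v per vertex, 4 vertices per quad)
    // for all visible imposters, drawn with a single call. 'visible' and
    // the Corner fields are illustrative.
    std::vector<float> verts;
    verts.reserve(visible.size() * 4 * 5);
    for (size_t i = 0; i < visible.size(); ++i)
        for (int c = 0; c < 4; ++c) {
            const Corner& k = visible[i].corner[c];   // position + atlas uv
            verts.push_back(k.x); verts.push_back(k.y); verts.push_back(k.z);
            verts.push_back(k.u); verts.push_back(k.v);
        }

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(3, GL_FLOAT, 5 * sizeof(float), &verts[0]);
    glTexCoordPointer(2, GL_FLOAT, 5 * sizeof(float), &verts[0] + 3);
    glDrawArrays(GL_QUADS, 0, (GLsizei)(visible.size() * 4));
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
)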
