MarkS

Why does binding textures cause a performance drop?



I've been thinking about this for a while now, but keep forgetting to post. I don't have any code to show and am commenting on what I've read in the past.

From what I understand, it is faster to group objects by texture, bind the texture used by each group once, and draw the whole group, than to load all of the textures and bind per object. This seems odd to me. What is going on behind the scenes to cause a performance drop? If I have, say, ten objects, with five using one texture and the other five using another (to keep this discussion simple), why can't I load all ten objects and the two textures, and then bind the correct texture for each object as I draw it? Everything is residing in GPU memory, so I would suspect (obviously incorrectly) that binding is just specifying which texture to use.
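To make that concrete, this is roughly the loop I have in mind (a hypothetical OpenGL-style sketch, since I don't have real code; Object, texture, vao, and indexCount are made-up names):

// Inside a render function, assuming an OpenGL context and all textures
// already uploaded (resident in GPU memory):
for (const Object& obj : objects)                       // e.g. ten objects
{
    glBindTexture(GL_TEXTURE_2D, obj.texture);          // one of the two textures
    glBindVertexArray(obj.vao);                         // this object's geometry
    glDrawElements(GL_TRIANGLES, obj.indexCount, GL_UNSIGNED_INT, nullptr);
}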

There is obviously a big gap in my knowledge here.


It's not the texture binding itself that costs; it's starting a new batch of drawing.

The GPU is like a fast but long conveyor: it can draw a huge number of triangles, but starting and stopping that conveyor costs a lot.

If your triangles are sorted by texture, you can combine them and draw them in a single call, saving the cost of starting a new batch.

Of course, if you draw every object with its own separate draw call, then it makes no difference whether you sort them or not (provided the textures are resident).
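For example, a minimal sketch of that idea (OpenGL-style; TextureGroup, DrawGroups, sharedVao, and their fields are made-up names, assuming all triangles sharing a texture were packed contiguously into one shared vertex/index buffer ahead of time):

#include <vector>
// Assumes OpenGL headers and a current context.

struct TextureGroup {
    GLuint  texture;     // texture shared by every triangle in the group
    GLsizei indexCount;  // how many indices belong to this group
    GLsizei firstIndex;  // where the group starts in the shared index buffer
};

void DrawGroups(const std::vector<TextureGroup>& groups, GLuint sharedVao)
{
    glBindVertexArray(sharedVao);                    // one VAO for everything
    for (const TextureGroup& group : groups)         // e.g. two groups
    {
        glBindTexture(GL_TEXTURE_2D, group.texture); // bind once per group...
        glDrawElements(GL_TRIANGLES, group.indexCount, GL_UNSIGNED_INT,
                       (const void*)(group.firstIndex * sizeof(GLuint)));
                                                     // ...one draw call per group
    }
}

The per-object draw calls collapse into one per texture, which is where the batching win comes from.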

On the CPU side, every API call has a cost. Texture binding on pre-D3D12 APIs is actually really complicated on the CPU side... If you call a texture-binding function 1000 times per frame, that would likely add up to several milliseconds. Sorting drawables by state and resources lets you reduce the number of API calls required to draw each frame.

On the GPU side, state and resource changes can cause pipeline bubbles, as vstrakh mentioned above.
Modern cards are getting better at this, though: on a new AMD card you have to change shader resources ~9 times across ~9 consecutive draw calls (with each of them covering only a small number of pixels) before you cause a pipeline bubble.
On older cards, any state or resource-binding change at all can potentially cause a pipeline stall (especially if the draw call covers fewer than ~400 pixels).
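As a rough illustration of the CPU-side point, a sort-by-state submission loop might look something like this (OpenGL-style sketch; the Drawable struct, SubmitSorted, and the field names are assumptions, not any particular engine's API):

#include <algorithm>   // std::sort
#include <tuple>       // std::tie
#include <vector>
// Assumes OpenGL headers and a current context.

struct Drawable {
    GLuint  shader;
    GLuint  texture;
    GLuint  vao;
    GLsizei indexCount;
};

void SubmitSorted(std::vector<Drawable>& drawables)
{
    // Order by shader first, then texture, so identical state ends up adjacent.
    std::sort(drawables.begin(), drawables.end(),
              [](const Drawable& a, const Drawable& b) {
                  return std::tie(a.shader, a.texture) < std::tie(b.shader, b.texture);
              });

    GLuint currentShader = 0, currentTexture = 0;
    for (const Drawable& d : drawables)
    {
        // Only touch the API when the state actually changes.
        if (d.shader != currentShader)
        {
            glUseProgram(d.shader);
            currentShader = d.shader;
        }
        if (d.texture != currentTexture)
        {
            glBindTexture(GL_TEXTURE_2D, d.texture);
            currentTexture = d.texture;
        }

        glBindVertexArray(d.vao);
        glDrawElements(GL_TRIANGLES, d.indexCount, GL_UNSIGNED_INT, nullptr);
    }
}

The sort itself is cheap; the saving comes from the redundant bind calls that never get made once identical state is adjacent.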

Thank you, both of you! It just seemed silly that telling the GPU which texture to use somehow degraded performance. It is good to know why.
