fathom88

Interleaved Versus Non-Interleaved Vertex Array Question


Recommended Posts

I have a question regarding vertex arrays. Are interleaved vertex arrays that much quicker than non-interleaved arrays? I've heard the interleaved version is faster. Is it worth the trouble of converting my code to use it? Thanks.

Quote:
Original post by fathom88
I have a question regarding vertex arrays. Are interleaved vertex arrays that much quicker than non-interleaved arrays? I've heard the interleaved version is faster. Is it worth the trouble of converting my code to use it? Thanks.

I wonder where you got that information. If by interlaced and non-interlaced arrays you mean glInterleavedArrays versus glVertexPointer and friends, then AFAIK the two will perform identically. Modern implementations of glInterleavedArrays are actually just wrappers that call glVertexPointer, glNormalPointer, and so on internally.
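To illustrate the equivalence (a sketch of the idea, not verbatim driver code): a glInterleavedArrays call with the GL_T2F_N3F_V3F format can be expressed with the individual gl*Pointer calls, the shared 32-byte stride, and explicit byte offsets. This assumes an active GL context and a tightly packed float buffer named `verts` (both hypothetical names here):

```c
/* 'verts' holds tightly packed GL_T2F_N3F_V3F vertices:
 * 2 texcoord floats, 3 normal floats, 3 position floats
 * per vertex = 8 floats = 32 bytes of stride. */
GLfloat verts[NUM_VERTS * 8];  /* NUM_VERTS: your vertex count */

/* The one-call form: */
glInterleavedArrays(GL_T2F_N3F_V3F, 0, verts);

/* ...is roughly equivalent to: */
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 32, (char *)verts + 0);  /* offset  0 */
glNormalPointer(GL_FLOAT, 32, (char *)verts + 8);       /* offset  8 */
glVertexPointer(3, GL_FLOAT, 32, (char *)verts + 20);   /* offset 20 */
```

(The real glInterleavedArrays also disables the client states for attributes the chosen format doesn't use, e.g. the color array here.)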

Hope this helps,

SwiftCoder

Having the different arrays interleaved in memory is a good idea because the driver doesn't have to jump all over memory to access them, which makes the cache much more effective. You can still use the gl*Pointer calls with interleaved data (and I would recommend not using glInterleavedArrays, because it supports only a very limited selection of vertex formats). Whether it's worth converting depends entirely on what you need from your application, how it is designed, and how much trouble the conversion would cause you.

Quote:
Are interleaved vertex arrays that much quicker than non-interleaved arrays?

From memory, in my testing they're about 3% quicker in benchmarks,
but in my real-world app there's no real speed difference between the two.

Quote:
Original post by zedzeek
From memory, in my testing they're about 3% quicker in benchmarks,
but in my real-world app there's no real speed difference between the two.


I would imagine that's simply because your bottleneck is elsewhere (CPU logic, fill rate, etc.) in a real-world app. Still, the point remains that most drivers do seem to be optimized for interleaved buffers, so you may as well use them. If you do happen to run into that odd configuration where vertex processing is indeed the bottleneck, that extra 3% will pay off.

Just for kicks, you may find an older thread I started about a similar question interesting. Phantom gives a good explanation about the whole thing. See the thread here.
