
What is the point of multiple vertex buffers?


5 replies to this topic

#1 mrheisenberg   Members   -  Reputation: 356


Posted 14 September 2012 - 09:47 AM

I noticed that in the input assembler you can set an array of vertex buffers. Other than for an instance buffer, does this have any other use?


#2 Quat   Members   -  Reputation: 404


Posted 14 September 2012 - 01:31 PM

You can build your vertex elements from components that come from different VBs. For example, you could put {position, normal, tex2d} in one vertex buffer and {tangent, extra_tex2d} in another vertex buffer.
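
For concreteness, here is a minimal D3D11 sketch of that two-stream setup. The semantic names, offsets, and variables (context, mainVB, extraVB) are assumptions for illustration, not something from the post:

// Input layout pulling vertex elements from two input slots.
D3D11_INPUT_ELEMENT_DESC layout[] =
{
    // Slot 0: {position, normal, tex2d}
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,  0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 24, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    // Slot 1: {tangent, extra_tex2d} from a second vertex buffer
    { "TANGENT",  0, DXGI_FORMAT_R32G32B32_FLOAT, 1,  0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 1, DXGI_FORMAT_R32G32_FLOAT,    1, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

// Bind both buffers to consecutive input slots in a single call.
// mainVB and extraVB are assumed to be already-created ID3D11Buffer objects.
ID3D11Buffer* vbs[2] = { mainVB, extraVB };
UINT strides[2] = { 32, 20 };  // byte size of one vertex in each stream
UINT offsets[2] = { 0, 0 };
context->IASetVertexBuffers(0, 2, vbs, strides, offsets);

The input assembler reads both streams in lockstep and presents the vertex shader with a single assembled vertex.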
-----Quat

#3 mrheisenberg   Members   -  Reputation: 356


Posted 14 September 2012 - 02:28 PM

Quat, on 14 September 2012 - 01:31 PM, said:
You can build your vertex elements from components that come from different VBs. For example, you could put {position, normal, tex2d} in one vertex buffer and {tangent, extra_tex2d} in another vertex buffer.


But what benefit would that have? Won't it do the same thing as a single buffer, but with a little extra overhead for creating/setting the second buffer?

#4 lwm   Members   -  Reputation: 1419


Posted 14 September 2012 - 02:51 PM

mrheisenberg, on 14 September 2012 - 02:28 PM, said:
But what benefit would that have? Won't it do the same thing as a single buffer, but with a little extra overhead for creating/setting the second buffer?


If you only want to update certain attributes of the buffer, you can save some bandwidth by splitting the data. It's often used for instancing, where one buffer contains the actual model and another buffer holds the models' world matrices.
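
A rough sketch of that instancing arrangement in D3D11 (semantic names and variables are assumptions for illustration): the mesh lives in slot 0 and advances once per vertex, while the world matrices live in slot 1 and advance once per instance.

D3D11_INPUT_ELEMENT_DESC layout[] =
{
    // Slot 0: the model itself, stepped once per vertex
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0,  0, D3D11_INPUT_PER_VERTEX_DATA,   0 },
    { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 12, D3D11_INPUT_PER_VERTEX_DATA,   0 },
    // Slot 1: one 4x4 world matrix per instance, passed as four float4 rows;
    // a step rate of 1 tells the input assembler to advance this stream per instance
    { "WORLD",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 1,  0, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
    { "WORLD",    1, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 16, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
    { "WORLD",    2, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 32, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
    { "WORLD",    3, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 48, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
};

// The instance buffer can be created with D3D11_USAGE_DYNAMIC and rewritten
// each frame, while the model buffer stays immutable on the GPU.
context->DrawIndexedInstanced(indexCountPerModel, instanceCount, 0, 0, 0);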

current project: Roa


#5 MJP   Moderators   -  Reputation: 11347


Posted 14 September 2012 - 04:05 PM

There are all kinds of reasons. One popular reason is to split up your vertex data based on the passes used to render a mesh. So for instance you might store position in one vertex buffer and everything else in another vertex buffer, and then bind just the position buffer when rendering shadows. lwm already mentioned that it can be useful when a portion of your vertex data is dynamic; in that case it makes sense to keep the dynamic data in its own buffer, since you typically have to update the entire contents of a dynamic buffer. Another reason might be that it's simply easier in certain cases to work with multiple vertex buffers, for instance if all of your source vertex data came in as separate channels that aren't interleaved.
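
As a concrete sketch of the shadow-pass case (every name here — positionVB, attributesVB, depthOnlyLayout, fullLayout, attributeStride — is an assumption for illustration):

// Depth/shadow pass: only the position stream is bound.
UINT posStride = 12;  // float3 position
UINT zero = 0;
context->IASetInputLayout(depthOnlyLayout);  // layout declaring POSITION only
context->IASetVertexBuffers(0, 1, &positionVB, &posStride, &zero);
context->PSSetShader(nullptr, nullptr, 0);   // depth-only, no pixel shader needed
context->DrawIndexed(indexCount, 0, 0);

// Main pass: bind the position stream plus the buffer holding the other attributes.
ID3D11Buffer* vbs[2] = { positionVB, attributesVB };
UINT strides[2] = { 12, attributeStride };
UINT offsets[2] = { 0, 0 };
context->IASetInputLayout(fullLayout);
context->IASetVertexBuffers(0, 2, vbs, strides, offsets);
context->DrawIndexed(indexCount, 0, 0);

The depth pass fetches a third or less of the per-vertex data it would with a single interleaved buffer, which is where the bandwidth saving comes from.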

#6 Jason Z   Crossbones+   -  Reputation: 5062


Posted 14 September 2012 - 05:09 PM

So to summarize what MJP said, the reason is to reduce the bandwidth needed for feeding vertices into the pipeline. If you know a particular rendering pass will only use positions (e.g. a depth-only pass), then it doesn't make sense to pass in a full-featured vertex. Instead, you can selectively choose the parts of your vertex to pass in, with a negligible penalty for re-assembling the vertices at runtime.



