Advantage of multiple vertex streams?

Started by
9 comments, last by Dirge 14 years, 10 months ago
Do the data streams work in parallel? If so, is there anything that stops rendering from being much faster with several buffers, each in its own data stream, than with a single huge buffer in one stream?
I'd imagine the only cost of using multiple vertex streams would be binding them each time. I don't know whether they run in parallel, but I've heard nothing to suggest there's a considerable performance hit from switching between them frequently.
According to NVidia and AMD it is a little bit faster to use just one vertex stream containing all the data instead of multiple streams, due to better cache locality.

However, what exactly do you want to do?
Have fun BunnzPunch'n'Crunch
I'm developing a graphics engine.
Quote:Original post by Bunnz
According to NVidia and AMD it is a little bit faster to use just one vertex stream containing all the data instead of multiple streams, due to better cache locality.
Depending on the GPU, it could be much slower - if the input assembler needs to construct a vertex from two sources, then it needs to make at least two reads, which could potentially be twice as slow as using a single stream.

There are cases, however, where it's better to use multiple streams than to bloat a single stream or to issue more draw calls, but it depends on the specific case.
Quote:Original post by Evil Steve
Depending on the GPU, it could be much slower - if the input assembler needs to construct a vertex from two sources.


When does the IA do this? Is this related to dynamic vertex buffers or does it also happen for resources where only the GPU has read/write access?

*bump*
This is not related to static/dynamic buffers; the IA does it for every vertex it needs to process, every time. In theory, drivers could merge multiple vertex streams into a single contiguous one for static geometry; in practice, they don't.
It can be advantageous when you are rendering your geometry in multiple passes that require different inputs.

When you are rendering say a z-prepass or a shadow map, all you generally care about is the position of the vertices, there's no need to pass along the normals, texture coordinates, tangents, and all that.

You could create 2 vertex buffers for each mesh, one with the positions only and one with everything else, to easily combine them. I've never noticed any penalty from having every property split out into its own vertex buffer, and I've noticed speed increases in multi-pass techniques by only rendering with the minimum amount of data required. And there's no need to actually allocate video memory for, say, the normals of a mesh until you actually need to use them.
Quote:Original post by andur
It can be advantageous when you are rendering your geometry in multiple passes that require different inputs.

When you are rendering say a z-prepass or a shadow map, all you generally care about is the position of the vertices, there's no need to pass along the normals, texture coordinates, tangents, and all that.

You could create 2 vertex buffers for each mesh, one with the positions only and one with everything else, to easily combine them. I've never noticed any penalty from having every property split out into its own vertex buffer, and I've noticed speed increases in multi-pass techniques by only rendering with the minimum amount of data required. And there's no need to actually allocate video memory for, say, the normals of a mesh until you actually need to use them.

Interesting. I never thought about it that way. I guess it is not enough to just have different input layouts for the different passes?
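Input layouts and split buffers work together: the layout decides which input slots the IA reads from, and splitting the buffers means the depth pass never even binds the attribute buffer. A sketch of the two descriptor arrays, assuming D3D11 (element ordering and formats are illustrative):

```cpp
// Full pass: elements pull from two input slots (two vertex buffers).
const D3D11_INPUT_ELEMENT_DESC fullLayout[] = {
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 1, 0,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    1,
      D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

// Depth-only pass: only input slot 0 is referenced, so the attribute
// buffer need not be bound at all.
const D3D11_INPUT_ELEMENT_DESC depthLayout[] = {
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
```

With a single interleaved buffer, the depth pass's layout would still leave the unused attributes sitting inside the stride the IA walks, so the split is what actually reduces the data touched.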

