evelyn4you

skinning with VertexShader Streamout => point topology

Recommended Posts

hi,

After implementing skinning with a compute shader, I want to implement skinning with the vertex shader stream-out method to compare performance.

This approach was discussed in another thread.

Here's the recommended setup:

  • Use a pass-through geometry shader (point -> point), set up the stream-out, and set the topology to point list.
  • Draw the whole buffer with context->Draw(). This gives a 1:1 mapping of the vertices.
  • Later, bind the stream-out buffer as the vertex buffer and bind the index buffer of the original mesh.
  • Draw with DrawIndexed as you would with the original mesh (or whatever draw call you had); a rough D3D11 sketch of these steps follows below.
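
For illustration, here is a rough D3D11 sketch of those steps. The shader objects, buffers, strides and counts (g_skinVS, g_streamOutGS, streamOutBuffer, vertexCount, etc.) are placeholder names, not from the original thread; the stream-out buffer is assumed to have been created with D3D11_BIND_STREAM_OUTPUT | D3D11_BIND_VERTEX_BUFFER.

// Pass 1: run the skinning vertex shader over every vertex and capture the result via stream-out.
context->IASetInputLayout(skinnedInputLayout);
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_POINTLIST);
UINT stride = skinnedVertexStride, offset = 0;
context->IASetVertexBuffers(0, 1, &sourceVertexBuffer, &stride, &offset);
context->VSSetShader(g_skinVS, nullptr, 0);
context->GSSetShader(g_streamOutGS, nullptr, 0);   // pass-through GS created with an SO declaration
context->PSSetShader(nullptr, nullptr, 0);          // nothing needs to be rasterized in this pass

UINT soOffset = 0;
context->SOSetTargets(1, &streamOutBuffer, &soOffset);
context->Draw(vertexCount, 0);                      // 1:1 mapping: one output vertex per input vertex

// Unbind the stream-out target so the buffer can be read as a vertex buffer.
ID3D11Buffer* nullSO = nullptr;
context->SOSetTargets(1, &nullSO, &soOffset);

// Pass 2: draw the mesh normally, using the streamed-out vertices plus the original index buffer.
context->IASetInputLayout(preSkinnedInputLayout);   // layout matching the streamed-out vertex format
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
context->IASetVertexBuffers(0, 1, &streamOutBuffer, &stride, &offset);
context->IASetIndexBuffer(originalIndexBuffer, DXGI_FORMAT_R32_UINT, 0);
context->VSSetShader(g_noSkinVS, nullptr, 0);       // plain VS that just forwards the pre-skinned data
context->GSSetShader(nullptr, nullptr, 0);
context->PSSetShader(g_scenePS, nullptr, 0);
context->DrawIndexed(indexCount, 0, 0);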

I understand why a point list is used as input: with the normal triangle topology as input, the stream-out output would give every primitive its own copies of the vertices, which would blow up the vertex buffer. I assume an index buffer would then be unnecessary?

But how can you transform both position and normal in one step when feeding the pass-through vertex/geometry shader with a point list?

In my vertex shader I first compute the blended transform matrix from the four bone indices and four weights, then transform both position and normal with that same matrix.

Do I have to run two passes, one for transforming the position and one for transforming the normal?

I think this could be done better?

Thanks for any help.

 


Hi,

You can bind up to 4 simultaneous stream-out targets, so if your vertex positions and normals are in separate buffers, you can output both of them just fine.
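
For example, a stream-output declaration that writes position and normal into two separate buffers could look roughly like this; the semantic names, strides and buffer names are assumptions and must match your own shader output.

// Stream the skinned position to SO slot 0 and the skinned normal to SO slot 1.
D3D11_SO_DECLARATION_ENTRY soDecl[] = {
    // Stream, SemanticName, SemanticIndex, StartComponent, ComponentCount, OutputSlot
    { 0, "SV_Position", 0, 0, 4, 0 },   // written to the buffer bound at slot 0
    { 0, "NORMAL",      0, 0, 3, 1 },   // written to the buffer bound at slot 1
};
UINT strides[] = { sizeof(float) * 4, sizeof(float) * 3 };  // one stride per output slot

// When streaming out, bind one buffer per slot:
ID3D11Buffer* soTargets[] = { positionSOBuffer, normalSOBuffer };
UINT soOffsets[] = { 0, 0 };
context->SOSetTargets(2, soTargets, soOffsets);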

In the naïve implementation, I doubt you could get more performance out of it than the compute shader approach. But there is potentially another use case here: you can stream out at the same time as you draw. For example, you have multiple passes which want to reuse the animated vertex data. The naïve approach would first do a stream-out pass, and subsequent rendering passes would use the result of that. The improved approach is that the first rendering pass which requires animated vertex data does animation, rendering and stream-out at the same time, and the subsequent passes then only render with the animated data that is already available.
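
A sketch of that frame flow might look like the following; the pass contents and names are made up for illustration, and note that streaming out from an indexed triangle draw expands the vertices per primitive, so the later passes draw the captured buffer non-indexed.

// Pass A: the first pass that needs animated vertices (e.g. a depth prepass) also streams out.
UINT soOffset = 0;
context->SOSetTargets(1, &skinnedSOBuffer, &soOffset);
context->VSSetShader(g_skinVS, nullptr, 0);
context->GSSetShader(g_streamOutGS, nullptr, 0);    // stream-out enabled, rasterization still active
context->DrawIndexed(indexCount, 0, 0);             // skinning happens once, as a side effect of this draw

// Unbind stream-out before any later pass reads the buffer.
ID3D11Buffer* nullSO = nullptr;
context->SOSetTargets(1, &nullSO, &soOffset);

// Passes B, C, ...: shadow maps, main shading, etc. reuse the already-skinned vertices.
context->IASetInputLayout(preSkinnedInputLayout);   // layout matching the streamed-out vertex format
context->IASetVertexBuffers(0, 1, &skinnedSOBuffer, &stride, &offset);
context->VSSetShader(g_noSkinVS, nullptr, 0);       // no skinning math in these passes
context->GSSetShader(nullptr, nullptr, 0);
context->Draw(indexCount, 0);                        // expanded buffer: one vertex per original index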

And remember, you don't have to write a geometry shader for this if you are just animating. From DX10.1, I think you can provide the vertex shader's bytecode to the CreateGeometryShaderWithStreamOutput function and it will work.
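
For reference, a sketch of creating the stream-out shader object directly from the compiled vertex shader's bytecode; the output declaration and stride here are assumptions that have to match your vertex shader's output signature.

// Create a stream-output "geometry shader" object from the skinning VS bytecode (no GS source needed).
D3D11_SO_DECLARATION_ENTRY soDecl[] = {
    { 0, "SV_Position", 0, 0, 4, 0 },
    { 0, "NORMAL",      0, 0, 3, 0 },
};
UINT stride = sizeof(float) * 7;   // 4 floats position + 3 floats normal, interleaved in slot 0

ID3D11GeometryShader* soShader = nullptr;
HRESULT hr = device->CreateGeometryShaderWithStreamOutput(
    vsBlob->GetBufferPointer(), vsBlob->GetBufferSize(),   // vertex shader bytecode, not a GS
    soDecl, _countof(soDecl),
    &stride, 1,
    D3D11_SO_NO_RASTERIZED_STREAM,   // use 0 instead if this pass should also rasterize
    nullptr, &soShader);
// Bind it with GSSetShader(soShader, ...) for the stream-out pass; the VS outputs get captured.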

Edited by turanszkij

5 hours ago, evelyn4you said:

But how can you transform both position and normal in one step when feeding the pass-through vertex/geometry shader with a point list?

You can't, unfortunately. You either have to accept having a fully expanded vertex buffer (which can be MUCH bigger than your original indexed VB), or you need to do a separate pass. Neither is ideal, really.

There is another option, but only if you're running on Windows 8 or higher: UAVs from the vertex shader. All you need to do is bind a structured buffer (or any kind of buffer) as a UAV, and then write to it from your vertex shader using SV_VertexID as the index.
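
A rough sketch of the binding side, assuming D3D11.1 (Windows 8) so the UAV is visible to the vertex shader stage; SkinnedVertex, skinnedBuffer and the UAV slot are placeholders.

// Create a structured buffer the vertex shader can write skinned vertices into.
D3D11_BUFFER_DESC desc = {};
desc.ByteWidth           = vertexCount * sizeof(SkinnedVertex);
desc.Usage               = D3D11_USAGE_DEFAULT;
desc.BindFlags           = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;
desc.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
desc.StructureByteStride = sizeof(SkinnedVertex);
device->CreateBuffer(&desc, nullptr, &skinnedBuffer);
device->CreateUnorderedAccessView(skinnedBuffer, nullptr, &skinnedUAV);

// UAVs share slots with render targets, so bind them together (UAV start slot 1 assumes one RTV in slot 0).
context->OMSetRenderTargetsAndUnorderedAccessViews(
    1, &rtv, dsv,
    1, 1, &skinnedUAV,
    nullptr);
// In HLSL the vertex shader declares a RWStructuredBuffer<SkinnedVertex> and writes
// output[vertexID] = skinnedVertex; using the SV_VertexID system value as the index.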

