Ryan_001

SV_VertexID and ID3D11DeviceContext::Draw()/StartVertexLocation


I'm moving some code from the CPU to the GPU and needed to render to a 3D texture.  I figured I'd use the geometry shader to generate the triangles and just use a simple vertex shader to pass vertex IDs on to the geometry shader.  Something like this for the vertex shader:

uint vs_main(uint id : SV_VertexID) : VERTEX_ID {
    return id;
}

That way I don't actually need a vertex buffer.
 
Now I'd like to be able to specify a range of z slices to render (start at slice 5, render to slice 10).  I could pass the start value in a constant buffer, but ID3D11DeviceContext::Draw() has the StartVertexLocation parameter.  I was wondering, but couldn't find anything definitive on, how StartVertexLocation interacts with SV_VertexID.

 

Any ideas?  Comments?  Suggestions?

Ryan_001

That was my original thought, but http://www.altdevblogaday.com/2011/08/08/interesting-vertex-shader-trick/ seems to state otherwise.  I read somewhere (I forget where and can't seem to find it again) that DrawInstanced() (or perhaps DrawIndexed(), I don't remember exactly) could be used instead, but again the MSDN docs aren't clear, and they didn't elaborate.

ajmiles

So it does. I must admit I didn't try it, but I can't understand why it would be anything other than additive. That said, since he's gone to the effort of writing an article on it, I'll assume he's right until I've had a go at testing it myself.

 

EDIT: As expected, he was right in that regard. The same applies to the 'startInstanceLocation' on DrawInstanced too, so you can't hijack that either.

 

I think you're just going to have to spend a couple of minutes adding a constant buffer, I'm afraid. There's no need to use a Geometry Shader though. Depending on what you're doing in the pixel shader, you may find it easier and faster to bind your 3D texture as a UAV on the pixel shader and have it calculate/export the values for all N slices you're interested in, rather than using the geometry shader's SV_RenderTargetArrayIndex functionality, producing N quads and running the pixel shader a factor of N more times. That way, any calculations shared between the N slices only need to be done once.
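A minimal sketch of this UAV approach, assuming the volume is bound as a RWTexture3D alongside the render targets and the slice range comes from a constant buffer. The names (gVolume, gSliceStart, gSliceCount) and the noise function are illustrative placeholders, not anything from the thread:

```hlsl
// Hypothetical sketch: fill a z-range of a 3D texture from the pixel shader
// via a UAV, doing any per-column work once. Requires ps_5_0 and binding the
// UAV with OMSetRenderTargetsAndUnorderedAccessViews.
RWTexture3D<float> gVolume : register(u1);

cbuffer SliceRange : register(b0)
{
    uint gSliceStart;   // first z slice to write
    uint gSliceCount;   // number of slices to write
};

// Placeholder for whatever terrain/noise function is being ported.
float ComputeDensity(uint3 voxel)
{
    return frac(sin(dot((float3)voxel, float3(12.9898, 78.233, 37.719))) * 43758.5453);
}

void ps_main(float4 pos : SV_Position)
{
    uint2 xy = (uint2)pos.xy;
    // Any work shared by every slice of this (x, y) column goes here, once.
    for (uint i = 0; i < gSliceCount; ++i)
    {
        uint z = gSliceStart + i;
        gVolume[uint3(xy, z)] = ComputeDensity(uint3(xy, z));
    }
}
```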

Jason Z

The vertex ID is actually a running count of the vertices generated on the GPU, so it doesn't matter what you pass as the offset - it just starts at zero and counts upward.  That's a bit of a shame, since it would be nice to have more control over it...

 

I would recommend either selecting the slice with an appropriate render target view (i.e. not selecting it within the pipeline, but rather by the CPU before the draw call) or just using a constant buffer to specify the slice if you want to go that way.
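Since StartVertexLocation never reaches SV_VertexID, the constant-buffer route amounts to adding the offset yourself. A sketch, where gSliceStart is an illustrative name and the downstream geometry shader is assumed to route the value to SV_RenderTargetArrayIndex:

```hlsl
// Hypothetical sketch: apply the start-slice offset in the shader, since
// Draw()'s StartVertexLocation does not affect SV_VertexID.
cbuffer SliceRange : register(b0)
{
    uint gSliceStart;   // first z slice to render
};

uint vs_main(uint id : SV_VertexID) : VERTEX_ID
{
    // The geometry shader can route this to SV_RenderTargetArrayIndex.
    return gSliceStart + id;
}
```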

ajmiles

> The vertex ID is actually a running count of the vertices generated on the GPU, so it doesn't matter what you pass as the offset - it just starts at zero and counts upward.

 

Except in the case of indexed draw calls, where it's the index from the index buffer (more sure about this one than the last post).

Ryan_001

> So it does. I must admit I didn't try it, but I can't understand why it would be anything other than additive. That said, since he's gone to the effort of writing an article on it, I'll assume he's right until I've had a go at testing it myself.
>
> EDIT: As expected, he was right in that regard. The same applies to the 'startInstanceLocation' on DrawInstanced too, so you can't hijack that either.
>
> I think you're just going to have to spend a couple of minutes adding a constant buffer, I'm afraid. There's no need to use a Geometry Shader though. Depending on what you're doing in the pixel shader, you may find it easier and faster to bind your 3D texture as a UAV on the pixel shader and have it calculate/export the values for all N slices you're interested in, rather than using the geometry shader's SV_RenderTargetArrayIndex functionality, producing N quads and running the pixel shader a factor of N more times. That way, any calculations shared between the N slices only need to be done once.

 

The code is being moved from the CPU to the GPU, and right now it just generates 3D terrain from a function that's just a bunch of noise functions, so there's really no inter-pixel sharing needed.  Any function that needed to share data could easily be reworked as a gather operation, and I can't think of any place where I'd need an explicit scatter operation, so I don't NEED a UAV.  My thought was that a UAV would have worse cache performance, and might need synchronization primitives, compared to just using two 3D textures and 'ping-ponging' back and forth where needed.

 

Do you have experience with UAVs?  Is the performance better than I've been led to expect?
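The ping-pong scheme described above might look like this on the shader side, with the previous pass's volume read through a plain SRV (a pure gather) while the current volume's slices are the render target. Names (gPrev, gLinearClamp) are illustrative:

```hlsl
// Hypothetical sketch of one ping-pong pass: read-only access to the
// previous volume through an SRV, no UAV or synchronization involved.
Texture3D<float> gPrev        : register(t0);
SamplerState     gLinearClamp : register(s0);

float4 ps_main(float4 pos : SV_Position, float3 uvw : TEXCOORD0) : SV_Target
{
    // Gather from the previous volume; the output merger writes the
    // current slice, so reads and writes never alias within a pass.
    float c = gPrev.SampleLevel(gLinearClamp, uvw, 0);
    return c.xxxx;
}
```

The two volumes swap roles each pass, so a pass only ever reads the texture it isn't writing.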

MJP

Cache performance of UAVs is dependent on the locality of your writes. For the case of rasterization, pixel shader executions are grouped by locality in terms of pixel position, so writing to a texture at the pixel location should be fine.

As for synchronization, you only need to worry about that if different shader executions read from or write to the same memory locations during the same Draw or Dispatch call. If you don't do that, there's nothing that you need to worry about.
