schupf

VS = NULL


As you all know, the pipeline looks like this (simplified): VS, Setup, Pixel Shader, where the Setup stage performs the homogeneous divide, viewport mapping and rasterization. If I disable the vertex shader (device->SetVertexShader( NULL );), will the vertices be passed immediately to the rasterizer without any homogeneous division or viewport mapping?

Quote:
Original post by schupf
As you all know, the pipeline looks like this (simplified): VS, Setup, Pixel Shader, where the Setup stage performs the homogeneous divide, viewport mapping and rasterization.

If I disable the vertex shader (device->SetVertexShader( NULL );), will the vertices be passed immediately to the rasterizer without any homogeneous division or viewport mapping?


No, they'll be passed through the fixed function pipeline.
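For anyone following along, here is a minimal sketch of what "passed through the fixed function pipeline" means in practice (the device and matrix variables are placeholders of my own, not from the original post):

    // With no vertex shader bound, D3D9 transforms D3DFVF_XYZ-style vertices
    // itself, using whatever matrices were set with SetTransform().
    device->SetVertexShader( NULL );
    device->SetFVF( D3DFVF_XYZ | D3DFVF_DIFFUSE );
    device->SetTransform( D3DTS_WORLD,      &matWorld );
    device->SetTransform( D3DTS_VIEW,       &matView );
    device->SetTransform( D3DTS_PROJECTION, &matProj );
    // World * view * projection is applied per vertex, then the perspective
    // divide and viewport mapping happen as usual before rasterization.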

I have an example from the DX SDK where they define 4 vertices in screen space (a quad from (-0.5, -0.5) to (242.5, 242.5)).

They disable the VS and only use a PS. Since this demo works, these vertices have to be passed straight to the rasterizer. I mean, if these vertices were multiplied by the viewport mapping matrix, the screen space positions would be destroyed.
So what exactly happens with vertices if I disable the VS?

D3DFVF_XYZRHW or POSITIONT types are assumed to be fully transformed, skip the vertex shader and viewport scaling, and are in pixels for xy. D3DFVF_XYZ or POSITION types use the fixed function vertex pipeline.
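To make the two formats mentioned above concrete, a rough sketch (the struct names are my own, not from the SDK):

    // Pre-transformed vertex: x/y are already in pixels, rhw = 1/w.
    // With D3DFVF_XYZRHW the runtime skips vertex processing for these.
    struct TLVertex
    {
        float x, y, z, rhw;
        DWORD color;
    };
    const DWORD FVF_TL = D3DFVF_XYZRHW | D3DFVF_DIFFUSE;

    // Untransformed vertex: runs through the fixed function transform
    // (or a vertex shader, if one is bound).
    struct Vertex
    {
        float x, y, z;
        DWORD color;
    };
    const DWORD FVF_UNTRANSFORMED = D3DFVF_XYZ | D3DFVF_DIFFUSE;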

[Edited by - Namethatnobodyelsetook on May 25, 2008 8:59:03 PM]

Well, the code indeed uses D3DFVF_XYZRHW as the vertex format, but the code also uses a pixel shader!
    Device->SetVertexShader( NULL );
    Device->SetFVF( FVF_TLVERTEX );
    Device->SetPixelShader( g_pLumDispPS );

Does this mean the vertices sent to the pipeline are treated as direct input to the pixel shader (so ALL stages before the pixel shader stage, like viewport mapping and perspective division, are omitted)?

Oops, meant to say "skipped the VERTEX shader". I'll go edit the above to not confuse anyone else who reads this thread. I'm not sure if perspective divide is done or not. I've always seen a w of 1.0 used. I imagine it probably does do the divide, as it's a pixel operation, and enables more functionality.

From what I remember about D3DFVF_XYZRHW (nearly 7 years since I last had to use it!):

- perspective divide DOES happen.

- viewport transformation DOES NOT happen.

- backface culling DOES still happen.

- if you want your TL verts to be clipped, make sure the device has the D3DPMISCCAPS_CLIPTLVERTS device cap. If it doesn't have the cap, you must do all clipping yourself. If you don't want clipping, disable the CLIPPING render state and set any relevant buffers to D3DUSAGE_DONOTCLIP (see the sketch after this list).

- if your TL verts represent 3D and you want them to be clipped, set up a valid D3D viewport and projection matrix. My mind is patchy on this, but ISTR this was so that D3DPMISCCAPS_CLIPTLVERTS capable hardware (and the software T&L pipe) was able to transform TL verts back into clip space to do proper perspective correct clipping.
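A rough sketch of the cap check and the no-clipping path described above (hedged; device, numVerts, vb and TLVertex are placeholders, not code from this thread):

    D3DCAPS9 caps;
    device->GetDeviceCaps( &caps );

    if ( caps.PrimitiveMiscCaps & D3DPMISCCAPS_CLIPTLVERTS )
    {
        // The device can clip pre-transformed (TL) vertices itself.
    }
    else
    {
        // Either clip the TL verts yourself, or turn clipping off entirely:
        device->SetRenderState( D3DRS_CLIPPING, FALSE );
        device->CreateVertexBuffer( numVerts * sizeof(TLVertex),
                                    D3DUSAGE_DONOTCLIP | D3DUSAGE_WRITEONLY,
                                    D3DFVF_XYZRHW | D3DFVF_DIFFUSE,
                                    D3DPOOL_DEFAULT, &vb, NULL );
    }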

Ok, so vertices with format D3DFVF_XYZRHW will not be transformed by viewport mapping. Is it possible to use D3DFVF_XYZRHW AND use a vertex shader?

One last question: In D3D10 there is no FVF anymore - only shader semantics. So if I want to use screen space vertices in D3D10, is the POSITIONT semantic the way to go?

Quote:
Original post by schupf
Is it possible to use D3DFVF_XYZRHW AND use a vertex shader?


I don't think that works. Why would you want to? If you're using a vertex shader, use a vertex declaration. D3DDECLUSAGE_POSITION with D3DDECLTYPE_FLOAT4 would get you vertex positions with xyzw components.

Bear in mind that a vertex shader expects to output into clip space, so if you want to simply pass pre-transformed vertices through the shader to the rasterizer, then they should be pre-transformed into clip space.
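For reference, a declaration along those lines might look roughly like this (a sketch with made-up variable names, not code from the thread):

    // One float4 position element, bound to the POSITION semantic of the VS.
    D3DVERTEXELEMENT9 elements[] =
    {
        { 0, 0, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT,
          D3DDECLUSAGE_POSITION, 0 },
        D3DDECL_END()
    };

    IDirect3DVertexDeclaration9* decl = NULL;
    device->CreateVertexDeclaration( elements, &decl );
    device->SetVertexDeclaration( decl );
    // The vertex shader receives a float4 it can forward to the rasterizer,
    // provided the values are already in clip space.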

Quote:
One last question: In D3D10 there is no FVF anymore - only shader semantics. So if I want to use screen space vertices in D3D10, is the POSITIONT semantic the way to go?


If you don't want to perform any vertex processing at all, yes with DX9 POSITIONT is the way to go, and I think the same semantic still works under DX10. If not, a simple pass-through vertex shader will do the job.
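In DX9 declaration terms, the POSITIONT route would look roughly like this (a hedged sketch, not taken from the SDK sample):

    // POSITIONT marks the data as already transformed; the runtime bypasses
    // vertex processing for it, much like D3DFVF_XYZRHW does for FVF codes.
    D3DVERTEXELEMENT9 pretransformed[] =
    {
        { 0, 0, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT,
          D3DDECLUSAGE_POSITIONT, 0 },
        D3DDECL_END()
    };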

All that said, on any current hardware I'd be extremely surprised to see vertex processing show up as a bottleneck in any sane/balanced application. Are you hoping for any significant performance gain from disabling vertex processing?

Quote:
Original post by schupf
Ok, so vertices with format D3DFVF_XYZRHW will not be transformed by viewport mapping.
Correct, it would be quite simple to think of D3DFVF_XYZRHW as a "pass through" token - it just passes the data straight on to the next stage.

Quote:
Original post by schupf
Is it possible to use D3DFVF_XYZRHW AND use a vertex shader?
No, nor can you use POSITIONT with a vertex shader.

Quote:
Original post by schupf
One last question: In D3D10 there is no FVF anymore - only shader semantics. So if I want to use screen space vertices in D3D10, is the POSITIONT semantic the way to go?
There is no POSITIONT in D3D10, nor are there any of the other D3DDECLUSAGE values you may be familiar with from D3D9. The binding system with D3D10 is much more flexible and simply maps LPCSTR semantic names between the IA declaration and the VS declaration. From the VS onwards you need to start using the SV_** semantics so that later pipeline stages can pick things up correctly.
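As a rough illustration of that binding model (the layout, bytecode and device variables are placeholders of my own):

    // The semantic name is just a string; it has to match the vertex shader's
    // input signature, which is why the shader bytecode is passed in here.
    D3D10_INPUT_ELEMENT_DESC layout[] =
    {
        { "POSITION", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 0,
          D3D10_INPUT_PER_VERTEX_DATA, 0 },
    };

    ID3D10InputLayout* inputLayout = NULL;
    device->CreateInputLayout( layout, 1,
                               vsBytecode, vsBytecodeSize,
                               &inputLayout );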


Ultimately, for either 9 or 10, when using a vertex shader you can send down nearly arbitrary data and interpret it accordingly. FVFs and semantics are most important when something else is interpreting your data - in the case of shaders you control both sides of the interface, the application and the HLSL.

You could legitimately send a POSITION element down to a D3D9 VS that is actually the XYZ elements you'd use for a D3DFVF_XYZRHW with an implicit W=1.0f [smile]

hth
Jack
