VS = NULL

This topic is 3484 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.


As you all know, the pipeline looks like this (simplified): VS, Setup, Pixel Shader, where the Setup stage performs the homogeneous divide, viewport mapping, and rasterization. If I disable the vertex shader (device->SetVertexShader( NULL );), will the vertices be passed straight to the rasterizer without any homogeneous division or viewport mapping?
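For reference, the two setup-stage operations in question can be sketched in plain C++ (illustrative names and struct layouts, not D3D API code):

```cpp
// A minimal sketch of what the setup stage does after the vertex shader:
// the homogeneous divide from clip space to normalized device coordinates
// (NDC), then the viewport mapping from NDC to window (pixel) coordinates.
struct Vec4 { float x, y, z, w; };
struct Viewport { float topLeftX, topLeftY, width, height, minZ, maxZ; };

// Clip space -> NDC: divide by w (w is kept as 1/w, as rasterizers do).
Vec4 HomogeneousDivide(const Vec4& v) {
    return { v.x / v.w, v.y / v.w, v.z / v.w, 1.0f / v.w };
}

// NDC -> window coordinates: x in [-1,1] maps to [topLeftX, topLeftX+width],
// y is flipped so that +1 is the top of the viewport.
Vec4 ViewportMap(const Vec4& v, const Viewport& vp) {
    return { (v.x + 1.0f) * 0.5f * vp.width  + vp.topLeftX,
             (1.0f - v.y) * 0.5f * vp.height + vp.topLeftY,
             vp.minZ + v.z * (vp.maxZ - vp.minZ),
             v.w };
}
```

For a 640x480 viewport, the clip-space point (0, 0, 0.5, 1) lands at pixel (320, 240) with depth 0.5.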

Quote:
Original post by schupf
As you all know, the pipeline looks like this (simplified): VS, Setup, Pixel Shader, where the Setup stage performs the homogeneous divide, viewport mapping, and rasterization.

If I disable the vertex shader (device->SetVertexShader( NULL );), will the vertices be passed straight to the rasterizer without any homogeneous division or viewport mapping?


No - they'll be passed through the fixed-function pipeline.

I have an example from the DX SDK where they define 4 vertices in screen space (a quad from (-0.5, -0.5) to (242.5, 242.5)).

They disable the VS and only use a PS. Since this demo works, these vertices must be passed straight to the rasterizer; if they were multiplied by the viewport mapping matrix, the screen-space positions would be destroyed.
So what exactly happens to vertices if I disable the VS?

D3DFVF_XYZRHW (or POSITIONT) vertices are assumed to be fully transformed: they skip the vertex shader and viewport scaling, and their XY coordinates are in pixels. D3DFVF_XYZ (or POSITION) vertices go through the fixed-function vertex pipeline.

[Edited by - Namethatnobodyelsetook on May 25, 2008 8:59:03 PM]

Well, the code indeed uses D3DFVF_XYZRHW as the vertex format, but the code also uses a pixel shader!
   Device->SetVertexShader( NULL );
   Device->SetFVF( FVF_TLVERTEX );
   Device->SetPixelShader( g_pLumDispPS );

Does this mean the vertices sent to the pipeline are treated as input to the pixel shader (i.e. ALL stages before the pixel shader stage, like viewport mapping and perspective division, are omitted)?

Oops, meant to say "skipped the VERTEX shader". I'll go edit the above to not confuse anyone else who reads this thread. I'm not sure if perspective divide is done or not. I've always seen a w of 1.0 used. I imagine it probably does do the divide, as it's a pixel operation, and enables more functionality.

From what I remember about D3DFVF_XYZRHW (nearly 7 years since I last had to use it!):

- perspective divide DOES happen.

- viewport transformation DOES NOT happen.

- backface culling DOES still happen.

- if you want your TL verts to be clipped, make sure the device has the D3DPMISCCAPS_CLIPTLVERTS device cap. If it doesn't have the cap, you must do all clipping yourself. If you don't want clipping, disable the CLIPPING render state and create any relevant buffers with D3DUSAGE_DONOTCLIP.

- if your TL verts represent 3D and you want them to be clipped, set up a valid D3D viewport and projection matrix. My mind is patchy on this, but ISTR this was so that D3DPMISCCAPS_CLIPTLVERTS capable hardware (and the software T&L pipe) was able to transform TL verts back into clip space to do proper perspective correct clipping.
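To illustrate the "perspective divide DOES happen" point: the rhw component (1/w) that a TL vertex carries is exactly what the rasterizer needs for perspective-correct interpolation of attributes like texture coordinates. A plain C++ sketch of that arithmetic (struct layout and names are illustrative, not D3D API):

```cpp
// Plain struct mirroring what a D3DFVF_XYZRHW vertex carries: x,y already
// in pixels, z already in [0,1], and rhw = 1/w from the original projection.
struct TLVertex { float x, y, z, rhw; float u, v; };

// Perspective-correct interpolation of u at parameter t between two
// pre-transformed vertices: linearly interpolate u*rhw and rhw, then divide.
float PerspectiveLerpU(const TLVertex& a, const TLVertex& b, float t) {
    float rhw    = a.rhw + t * (b.rhw - a.rhw);
    float uOverW = a.u * a.rhw + t * (b.u * b.rhw - a.u * a.rhw);
    return uOverW / rhw;
}
```

With equal rhw on both ends this reduces to plain linear interpolation; when one vertex is twice as far away (rhw = 0.5), the midpoint's u is pulled toward the nearer vertex, which is the perspective-correct result.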

Ok, so vertices with format D3DFVF_XYZRHW will not be transformed by the viewport mapping. Is it possible to use D3DFVF_XYZRHW AND a vertex shader?

One last question: in D3D10 there is no FVF anymore - only shader semantics. So if I want to use screen-space vertices in D3D10, is the POSITIONT semantic the way to go?

Quote:
Original post by schupf
Is it possible to use D3DFVF_XYZRHW AND use a vertex shader?


I don't think that works. Why would you want to? If you're using a vertex shader, use a vertex declaration. D3DDECLUSAGE_POSITION with D3DDECLTYPE_FLOAT4 would get you vertex positions with xyzw components.

Bear in mind that a vertex shader expects to output into clip space so if you want to simply pass pre-transformed vertices through the shader to the rasterizer then they should be pre-transformed into clip space.
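For illustration, here is one way to pre-transform pixel coordinates into the clip space a vertex shader is expected to output - a plain C++ sketch with made-up helper names, assuming a viewport anchored at (0,0):

```cpp
struct Float4 { float x, y, z, w; };

// Convert a screen-space pixel position into the clip-space position a
// vertex shader should output, so "pre-transformed" vertices survive the
// later homogeneous divide and viewport mapping unchanged.
Float4 PixelToClip(float px, float py, float viewportW, float viewportH) {
    return {
        px / viewportW * 2.0f - 1.0f,   // [0,W] -> [-1,1]
        1.0f - py / viewportH * 2.0f,   // [0,H] -> [1,-1] (y flips)
        0.0f,                           // any depth in [0,1] will do
        1.0f                            // w = 1 makes the divide a no-op
    };
}
```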

Quote:
One last question: In D3D10 there is no FVF anymore - only shader semantics. So if I want to use screen space vertices in D3D10, is the semantics POSITIONT the way to go?


If you don't want to perform any vertex processing at all, then yes: with DX9, POSITIONT is the way to go, and I think the same semantic still works under DX10. If not, a simple pass-through vertex shader will do the job.

All that said, on any current hardware I'd be extremely surprised to see vertex processing show up as a bottleneck in any sane/balanced application. Are you hoping for any significant performance gain from disabling vertex processing?

Quote:
Original post by schupf
Ok, so vertices with format D3DFVF_XYZRHW will not be transformed by viewport mapping.
Correct - it is quite reasonable to think of D3DFVF_XYZRHW as a "pass through" token: it just passes the data straight on to the next stage.

Quote:
Original post by schupf
Is it possible to use D3DFVF_XYZRHW AND use a vertex shader?
No, nor can you use POSITIONT with a vertex shader.

Quote:
Original post by schupf
One last question: in D3D10 there is no FVF anymore - only shader semantics. So if I want to use screen-space vertices in D3D10, is the POSITIONT semantic the way to go?
There is no POSITIONT in D3D10, nor are there any of the other D3DDECLUSAGE values you may be familiar with from D3D9. The binding system with D3D10 is much more flexible and simply maps LPCSTR between the IA declaration and the VS declaration. From the VS onwards you need to start using the SV_** semantics so that later pipeline stages can pick things up correctly.


Ultimately, for either 9 or 10, when using a vertex shader you can send down nearly arbitrary data and interpret it accordingly. FVFs and semantics matter most when something else is interpreting your data - in the case of shaders you control both sides of the interface, the application and the HLSL.

You could legitimately send a POSITION element down to a D3D9 VS that is actually the XYZ elements you'd use for a D3DFVF_XYZRHW with an implicit W=1.0f [smile]

hth
Jack

Thanks for your answers! Things are getting clearer [smile]
Quote:
Correct, it would be quite simple to think of D3DFVF_XYZRHW as a "pass through" token - it just passes the data straight on to the next stage.
I wonder why the SDK documentation just doesn't give the user this important information?! The SDK docs just say: D3DFVF_XYZRHW = transformed vertices. No word about the fact that the perspective division WILL take place but the viewport mapping will NOT. Sometimes the SDK docs are really, really confusing (and sometimes even wrong) :(
Quote:
There is no POSITIONT in D3D10, nor are there any of the other D3DDECLUSAGE values you may be familiar with from D3D9. The binding system with D3D10 is much more flexible and simply maps LPCSTR between the IA declaration and the VS declaration. From the VS onwards you need to start using the SV_** semantics so that later pipeline stages can pick things up correctly.

According to this site: http://msdn.microsoft.com/en-us/library/bb509647(VS.85).aspx there is a POSITIONT semantic in D3D10. I think it is the only option if you need transformed vertices in screen space.
You mentioned that D3D10 just maps LPCSTRs to the shaders - does this mean I can use ANY identifier as long as it matches the shader?
So, for example, could I write this in the C++ host code:
   D3D10_INPUT_ELEMENT_DESC layout[] = {
       { "MY_FANCY_SEMANTIC", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D10_INPUT_PER_VERTEX_DATA, 0 }
   };
and in the vertex shader:
   OUT_STRUCTURE VS(float3 v : MY_FANCY_SEMANTIC) ...

?

And finally: you mentioned these SV_** semantics. To be honest, I never understood why I need them. What's the difference between SV_POSITION and POSITION? Can I use these SV_** values only in pixel shaders?

Quote:
Original post by schupf
I wonder why the SDK documentation just doesn't give the user this important information?! The SDK docs just say: D3DFVF_XYZRHW = transformed vertices. No word about the fact that the perspective division WILL take place but the viewport mapping will NOT. Sometimes the SDK docs are really, really confusing (and sometimes even wrong) :(
Yup, can't argue with that - it could definitely be improved in places!! That said, it doesn't usually try to teach graphics theory outside of the 'programming guide' branch. People versed in traditional graphics theory can probably put 1-and-1 together...

Quote:
Original post by schupf
According to this site: http://msdn.microsoft.com/en-us/library/bb509647(VS.85).aspx there is the semantic POSITIONT in D3D10.
The page you've found is for DirectX HLSL. One of the lovely confusions in the more recent SDKs - common shader material is "DirectX HLSL", with a few pages being "Direct3D 9 HLSL" or "Direct3D 10 HLSL". Untangling the mix can be difficult.

Direct3D 10 has no fixed-function pipeline, therefore XYZRHW and POSITIONT cannot have the same definition as in D3D9. So technically you could create and use the semantic, but it has no special meaning like it does in D3D9.

Quote:
Original post by schupf
You mentioned that D3D10 just maps LPCSTR to the shaders - does this mean I can use ANY identifier as long as it fits to the shader?
Yes. I made a point of writing my first D3D10 shaders a few years back in terms of fruits and fruit bowls. No particular reason, I just felt like it [grin]

Quote:
Original post by schupf
And finally: You mentioned these SV_** semantics. To be honest I never understand why I need these semantics.
In D3D9 the semantic annotations are fixed, therefore every single one has a system-defined meaning. With D3D10's more flexible system you can't easily tell the difference, so there are system-generated and system-interpreted values prefixed with SV_**.

Some you do need in the same way as D3D9 (e.g. outputting a position or colour), but most others are for advanced techniques or new features of D3D10. You can do some very cool things with the system-generated values.

Quote:
Original post by schupf
Whats the difference of SV_POSITION and POSITION? Can I use these SV_** Values only in Pixel shaders?
Look down the page you referenced and you'll see a 9-to-10 translation table. SV_POSITION is equivalent to POSITION. Current versions of FXC will treat them interchangeably to aid multi-targeting and backwards compatibility, but you'd be sensible to adopt the D3D10 notation if you're using D3D10 specifically, as we don't know how long this backward compatibility will exist in the compiler...


hth
Jack

One thing that just came to mind: in D3D10 there is no FFP, so if I disable the VS then there just is no VS. But what happens if I don't use a VS in D3D9? Will the vertices still be transformed by the FFP and all the set FFP matrices? (And what happens if I set no PS in D3D9? Will the rules of the fixed multi-texturing stages apply then?)

About the SV_* semantics: so the normal semantics like NORMAL or POSITION have NO meaning to D3D10 - they are just hints for us humans. But all the SV_* semantics DO have a meaning for the pipeline?

Some additional questions that came to mind and that aren't answered in the documentation:
1) Most of the SV_* values are filled in by the pipeline, aren't they? For example, I could write bool b : SV_IsFrontFace and this value would be filled in by the pipeline?

2) Some of these SV_* values actually only make sense in a certain shader - for example, SV_IsFrontFace to me only makes sense in a geometry shader. So are some of these SV_* values only available in certain types of shaders?

3) Is it possible that some SV_* values change their meaning depending on the context? For example, if I use SV_POSITION as input to a vertex shader, does D3D10 know that these are 3D coordinates? And when I use SV_POSITION as output of the VS, does D3D10 know that these coordinates are now in clip space?

Last question: if I want to use transformed vertices in screen space in D3D10, what would be the best way to do it? Currently I think I would do the following: define the vertices with screen-space positions and give them the POSITIONT semantic (but I could also use ANY other semantic name), and just pass the vertices through in the VS. But there is still the viewport mapping in the D3D10 pipeline. Since I can't tell D3D10 that my vertices are already in screen space, I guess I have to disable viewport mapping? If yes, how can I disable viewport mapping in D3D10?

Sorry for all my questions, but I don't just want to use DX, I really want to understand the details! :)

Quote:
Original post by schupf
One thing that just came to mind: in D3D10 there is no FFP, so if I disable the VS then there just is no VS. But what happens if I don't use a VS in D3D9? Will the vertices still be transformed by the FFP and all the set FFP matrices? (And what happens if I set no PS in D3D9? Will the rules of the fixed multi-texturing stages apply then?)
Yes. If you don't use shaders in D3D9, it assumes you want to use the fixed function pipeline, rather than not rendering anything.

Quote:
Original post by schupf
About the SV_* semantics: so the normal semantics like NORMAL or POSITION have NO meaning to D3D10 - they are just hints for us humans. But all the SV_* semantics DO have a meaning for the pipeline?
Yup.

Quote:
Original post by schupf
1) Most of the SV_* values are filled in by the pipeline, aren't they? For example, I could write bool b : SV_IsFrontFace and this value would be filled in by the pipeline?
Strikes me as a reasonable mix of both - I make it about 6:5 generated versus interpreted.

Quote:
Original post by schupf
So are some of these SV_* values only available in certain types of shaders?
Yes, look at the help page you referenced earlier [wink]

Quote:
Original post by schupf
3) Is it possible that some SV_* values change their meaning depending on the context? For example if I use SV_POSITION as input of a vertex shader, does D3D10 know that these are 3D coordinates? And when I use SV_POSITION as output of the VS, does D3D10 know that these coordinates are in clip space now?
Whilst I haven't exhaustively checked it, the real litmus test is when the information wrapped in a semantic hits the stage that intends to use it. Prior to that, I'd imagine you can stick whatever you want in! The rasterizer will assume certain things about SV_Position, and you'll get undefined behaviour if you stray outside these constraints...

Quote:
Original post by schupf
Last question: If I want to use transformed vertices in screen space in D3D10, what would be the best way to do this.
Simon's suggestion about declaring a position semantic of float4 width is pretty reasonable.

Quote:
Original post by schupf
Define the vertices with screen space positions and give them the POSITIONT semantic (but I could also use ANY other semantic name)
Are you trying to go for legacy migration here, or are you just sticking to the old consensus for fun?? Nitpick maybe, but only use previously reserved words if you expect regression testing to pass. Otherwise you're going to make unmaintainable code for yourself and your colleagues [wink].

Quote:
Original post by schupf
But there still is the viewport mapping in the D3D10 pipeline. Since I can't tell D3D10 that my vertices are already in screen space, I guess I have to disable viewport mapping? If yes, how can I disable viewport mapping in D3D10?
I'd imagine the identity viewport (0,0 to 1,1) would be sufficient, but look at the docs for the mathematical definition to be sure. Otherwise you can always output from the VS in projection space, which also nets you the nice advantage of being resolution independent!

Quote:
Original post by schupf
I really want to understand the details! :)
You're going about it the right way, keep up the good work [smile]


Cheers,
Jack

First of all, a BIG thanks to jollyjeffers! I think it's absolutely great how patiently and thoroughly you help people like me! I really appreciate it [smile]

To understand screen coordinates in DX10, I tried to draw a fullscreen quad with 2 approaches:
1) Relative screen coordinates: as you mentioned, I drew a fullscreen quad with clip coordinates from (-1,1) to (1,-1) and w=1. Since w is 1, the homogeneous division after the VS doesn't do anything, and the viewport mapping maps the NDC to the screen.

2) Absolute screen coordinates: sometimes it's better to have absolute values, so I tried to draw the quad in absolute screen coordinates. Again I defined 4 vertices, this time with screen coordinates from (0,0) to (100, 50) and w=1 (I wanted to draw a little rectangle in the top-left corner). Again the VS just passed the vertices through, and I tried to disable the viewport mapping by setting the viewport width to 2, the height to -2, topLeftX to -1 and topLeftY to 1 (I took the equation from http://msdn.microsoft.com/en-us/library/bb205126(VS.85).aspx and just chose values that make the equation X'=X, Y'=Y).
Unfortunately, with this approach I DON'T see the quad. I think the problem could be the clipping: after the VS stage the vertices are clipped against [-w,w] (which is [-1,1] for my vertices since w=1), and because ALL my vertices except (0,0) are outside the clipping volume, it could be that they are all clipped. Is this why my absolute screen coordinates don't work, or am I missing something? How can I use absolute coordinates in DX10?
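The clip test described in the post can be sketched like this (a plain C++ sketch of D3D's homogeneous clip volume, -w <= x,y <= w and 0 <= z <= w; the function name is made up):

```cpp
// A vertex survives clipping only if it lies inside the homogeneous clip
// volume. With w = 1, absolute pixel coordinates like (100, 50) are far
// outside [-1, 1] and get clipped, as the post suspects.
bool InsideClipVolume(float x, float y, float z, float w) {
    return -w <= x && x <= w &&
           -w <= y && y <= w &&
           0.0f <= z && z <= w;
}
```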

Glad to hear you found my explanation(s) useful [grin]

As for the screen-space coordinates... my interpretation would be to perform the [re-]mapping at the VS stage.

The way I'm thinking of it is that the output of the VS wants the data in a particular format, yet your application wants to express it in a different one. Passing in additional parameters via a CB to map from screen-space coords to projection-space coords shouldn't be too difficult - I think there are even D3DX math functions for creating an orthographic projection matrix with these characteristics.

hth
Jack
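The screen-to-clip mapping suggested above can be sketched in plain C++ (this is not D3DX code; the struct layouts and function names are illustrative assumptions, using the row-vector, row-major convention D3DX uses):

```cpp
struct Float4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };  // row-major, multiplied as row vector * M

// An orthographic-style matrix (the kind D3DXMatrixOrthoOffCenterLH builds)
// taking absolute screen-space positions, origin at the top-left, to
// clip-space positions; suitable for feeding to the VS via a CB.
Mat4 ScreenToClip(float screenW, float screenH) {
    Mat4 r = {};
    r.m[0][0] =  2.0f / screenW;  // x: [0,W] -> [-1,1]
    r.m[1][1] = -2.0f / screenH;  // y: [0,H] -> [1,-1] (screen y grows down)
    r.m[2][2] =  1.0f;            // pass z through
    r.m[3][0] = -1.0f;            // translation lives in the last row
    r.m[3][1] =  1.0f;
    r.m[3][3] =  1.0f;
    return r;
}

// v * M with a row vector, matching the convention above.
Float4 Transform(const Float4& v, const Mat4& M) {
    return { v.x*M.m[0][0] + v.y*M.m[1][0] + v.z*M.m[2][0] + v.w*M.m[3][0],
             v.x*M.m[0][1] + v.y*M.m[1][1] + v.z*M.m[2][1] + v.w*M.m[3][1],
             v.x*M.m[0][2] + v.y*M.m[1][2] + v.z*M.m[2][2] + v.w*M.m[3][2],
             v.x*M.m[0][3] + v.y*M.m[1][3] + v.z*M.m[2][3] + v.w*M.m[3][3] };
}
```

With a 640x480 screen, pixel (0,0) maps to clip-space (-1, 1) and pixel (640,480) maps to (1, -1), so the absolute-coordinate quad from the previous post would survive clipping.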
