# What does D3D expect as vertex position output?

## Recommended Posts

So... My entire rendering system uses shaders to modify the rendering pipeline. Right now I'm trying to make a shader that renders UI. It's very simple: you use it on a textured quad and it simply outputs the position (no transforms are performed on it) and the U,V texture coordinates. I'm testing it on a 1x1 quad at coordinates (0,0,0). What I thought I would see was a quarter of the quad in the upper left-hand corner of the screen, but what I get is a rectangle in the middle of the screen. Maybe I just don't understand projection space as well as I thought I did, but does the output of a vertex shader not correspond 1 to 1 with screen coordinates? If I pass a position of (0,0,0) into my shader, do no transformations, and output it, does that not end up in the upper left-hand corner of the screen?

##### Share on other sites
The output of the vertex shader is in "homogeneous clip space" - defined as -1 to +1 for both X and Y; thus (0,0) is actually the middle of the screen.

Quote:
 Vertex Shader Semantics: Position of a vertex in homogeneous space. Compute position in screen space by dividing (x,y,z) by w.

hth
Jack
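
The divide-by-w and viewport mapping described above can be sketched as a small helper. This is not D3D API code, just the math the hardware applies after the vertex shader, assuming a conventional viewport starting at (0,0):

```c
/* Map a clip-space position to pixel coordinates for a width x height
 * viewport. This mirrors what D3D does after the vertex shader: divide
 * by w, then map x from [-1,+1] to [0,width] left-to-right and y from
 * [-1,+1] to [height,0] (clip-space +Y is up, screen +Y is down). */
void clip_to_screen(float x, float y, float w,
                    float width, float height,
                    float *sx, float *sy)
{
    float ndc_x = x / w;              /* normalized device coords, -1..+1 */
    float ndc_y = y / w;
    *sx = (ndc_x * 0.5f + 0.5f) * width;
    *sy = (0.5f - ndc_y * 0.5f) * height;
}
```

Feeding in (0,0,0,1), as the original poster does, yields the middle of the screen, which matches what he is seeing; the upper-left corner is clip-space (-1,+1).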

##### Share on other sites
What your shader is probably doing is running the vertex through the transformation pipeline. By this I mean that in your vertex shader you are probably doing something like OUT.position = mul(IN.position, WorldViewProjection); this line transforms vertices from object space into clip space. If you want to work in screen space you will need to use pre-transformed coordinates. This will give you what you are looking for.
Note that if you are using vertex declarations, you will need the following usage for the position to get pre-transformed vertices/coordinates: D3DDECLUSAGE_POSITIONT

I hope this helps.
Take care.
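
A pre-transformed vertex carries pixel coordinates directly, in the layout D3DDECLUSAGE_POSITIONT / D3DFVF_XYZRHW expects. A hypothetical sketch (the struct and helper names are illustrative, not D3D API):

```c
/* Layout of a pre-transformed vertex with one texture coordinate set:
 * x,y are pixel coordinates, z is depth, rhw is the reciprocal of
 * homogeneous w (1.0 for plain 2D quads). */
struct ScreenVertex {
    float x, y, z, rhw;
    float u, v;
};

/* Build one vertex at pixel (px,py). The 0.5 offset compensates for
 * D3D9's pixel-center convention so texels map 1:1 to pixels. */
struct ScreenVertex make_screen_vertex(float px, float py, float u, float v)
{
    struct ScreenVertex out;
    out.x   = px - 0.5f;
    out.y   = py - 0.5f;
    out.z   = 0.0f;
    out.rhw = 1.0f;
    out.u   = u;
    out.v   = v;
    return out;
}
```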

##### Share on other sites
Ah, I see. So I would have to actually do the scaling and offsetting myself by passing in the screen height and width...

Thanks!
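
The scaling and offsetting in question is just a linear remap from pixel coordinates to the -1..+1 clip-space range, with Y flipped. A minimal sketch of that math (you would apply the same formula inside the vertex shader, passing width/height in as constants):

```c
/* Convert a pixel coordinate to homogeneous clip space, assuming the
 * vertex shader will output it with w = 1. */
void pixel_to_clip(float px, float py, float width, float height,
                   float *cx, float *cy)
{
    *cx = px / width * 2.0f - 1.0f;    /* 0..width  -> -1..+1 */
    *cy = 1.0f - py / height * 2.0f;   /* 0..height -> +1..-1 (flip Y) */
}
```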

##### Share on other sites
Wait... if I use POSITIONT, won't I actually have to move the physical position of the vertices? I don't want to have to lock and unlock the buffer whenever something moves.

##### Share on other sites
Quote:
 Original post by chippolot
 Wait... if I use POSITIONT, won't I actually have to move the physical position of the vertices? I don't want to have to lock and unlock the buffer whenever something moves.

From the aforementioned documentation page, POSITIONT instructs D3D to ignore the vertex shader, which is functionally equivalent to using D3DFVF_XYZRHW vertices in the fixed-function pipeline.

I've seen code that used a single vertex buffer containing just 4 corners with only texture coordinates (0,0), (1,0), (0,1), (1,1) - everything else was constructed in the vertex shader from constants that the application passed in (percentage top/left/height/width values). It was quite an elegant solution, but the downside was that it required a large number of draw calls.

hth
Jack
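
The shader trick described above can be mirrored on the host side as a sketch. Given a corner's UV and the percentage left/top/width/height constants (names here are illustrative), the clip-space position each corner ends up at is:

```c
/* Expand one corner of a unit quad (carrying only its UV) into clip
 * space, using fractional left/top/width/height screen-placement
 * constants - the same math the described vertex shader would run. */
void corner_to_clip(float u, float v,           /* corner UV, 0..1 */
                    float left, float top,      /* fractions of screen */
                    float width, float height,  /* fractions of screen */
                    float *cx, float *cy)
{
    float px = left + u * width;    /* fraction across the screen, 0..1 */
    float py = top  + v * height;
    *cx = px * 2.0f - 1.0f;
    *cy = 1.0f - py * 2.0f;
}
```

With left/top = 0 and width/height = 1 the quad covers the whole screen; smaller fractions place a UI element anywhere without ever locking the vertex buffer.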
