# Help me understand how DirectX treats the W component


## Recommended Posts

I have these simple shaders to draw a triangle:

```hlsl
cbuffer viewBuffer
{
    matrix view;
};

struct VS_INPUT
{
    float4 Position : POSITION;
};

struct PS_INPUT
{
    float4 Position : SV_POSITION;
};

PS_INPUT VS(VS_INPUT input)
{
    PS_INPUT output;
    output.Position = float4(input.Position.x, input.Position.y, 0.5f, 1.0f);

    return output;
}

float4 PS(PS_INPUT input) : SV_TARGET
{
    return float4(1.0f, 1.0f, 1.0f, 1.0f);
}
```



The triangle draws fine with these parameters. However, if I change the W component of the Position output by the vertex shader, the triangle doesn't draw at all. I understand that a four-component vector is useful for calculating transformations with 4x4 matrices, but I don't really understand why changing W causes my triangle not to draw here.


The x, y and z coordinates end up getting divided by the w component.

What are you changing the w component to? 0.0 would not work at all, since dividing by zero is undefined. Other values scale x, y, and z after projection, possibly moving the triangle off screen or beyond the clip planes.
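The divide described above can be sketched in a few lines. This is an illustrative example with made-up coordinate values, not code from the post:

```python
# Sketch of the perspective divide the GPU applies to the vertex
# shader's clip-space output before rasterization.
def homogenize(x, y, z, w):
    """Divide x, y, z by w to produce normalized device coordinates."""
    return (x / w, y / w, z / w)

# With w = 1.0 the position passes through unchanged and stays visible:
print(homogenize(0.5, 0.5, 0.5, 1.0))   # (0.5, 0.5, 0.5)

# With a small w the coordinates blow up and land outside the [-1, 1]
# range of the clip volume, so the vertex gets clipped away:
print(homogenize(0.5, 0.5, 0.5, 0.25))  # (2.0, 2.0, 2.0)
```

This is why an arbitrary hand-set w makes the triangle vanish: the divide moves the vertices somewhere you didn't intend.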


When transforming vectors by matrices the w component has the effect of scaling the translational component of your matrix.  Consider the vanilla transformation matrix encountered while doing graphics work:

```
| ux vx nx tx|   |x|   |(x y z) dot (ux vx nx) + tx*w|
| uy vy ny ty| * |y| = |(x y z) dot (uy vy ny) + ty*w|
| uz vz nz tz|   |z|   |(x y z) dot (uz vz nz) + tz*w|
|  0  0  0  1|   |w|   |                            w|
```


The rotation and scale of the matrix are stored in the u, v, and n vectors, and the translational component is stored in the t vector.  As you can see on the right-hand side, the (x y z) components of the original vector were translated by w*t.  For directions, w is set to 0 (or implicitly interpreted as 0) since translations are irrelevant to directions.  For points, w is set to 1 (or implicitly interpreted as 1), applying the translation as expected.
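The point-versus-direction distinction can be demonstrated numerically. A minimal sketch with made-up values, using plain Python lists (no libraries assumed):

```python
# w selects whether the matrix's translation column affects the result:
# w = 1 for points, w = 0 for directions.
def transform(m, v):
    """Multiply a 4x4 matrix (list of rows) by a 4-component column vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# Identity rotation/scale with a translation of (10, 20, 30):
M = [[1, 0, 0, 10],
     [0, 1, 0, 20],
     [0, 0, 1, 30],
     [0, 0, 0,  1]]

point     = [1, 2, 3, 1]  # w = 1: translation applies
direction = [1, 2, 3, 0]  # w = 0: translation is ignored

print(transform(M, point))      # [11, 22, 33, 1]
print(transform(M, direction))  # [1, 2, 3, 0]
```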

Now for the good stuff.  Consider multiplying a point by a projection matrix:

```
| sx  0  0  0|   |x|   |x*sx     |
|  0 sy  0  0| * |y| = |y*sy     |
|  0  0 sz tz|   |z|   |z*sz + tz|
|  0  0  1  0|   |1|   |z        |
```


The right-hand side is the data spit out by your vertex shader.  For various mathematical reasons that I won't get into, the GPU clips data against the view frustum in this space, which is thus called clip space.  To apply perspective, some magic happens: the right-hand side gets homogenized, i.e. divided by its w component:
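The key trick in the projection multiply above is that the bottom row copies the view-space z into the output w, so the later divide-by-w becomes a divide-by-depth. A sketch with placeholder projection constants (sx, sy, sz, tz are made-up values, not derived from a real field of view):

```python
# Clip-space position for a point with w = 1, per the matrix above.
def project(x, y, z, sx=1.0, sy=1.0, sz=1.5, tz=-1.0):
    """Return (x', y', z', w') where w' = view-space z."""
    return (x * sx, y * sy, z * sz + tz, z)

clip = project(2.0, 4.0, 2.0)
print(clip)  # (2.0, 4.0, 2.0, 2.0) -- note w' equals the input z

# Homogenizing (dividing by w = z) yields the perspective foreshortening:
ndc = tuple(c / clip[3] for c in clip[:3])
print(ndc)   # (1.0, 2.0, 1.0)
```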

```
H( |x*sx   | )   | (sx*x)    / z |
   |y*sy   |   = | (sy*y)    / z |
   |z*sz+tz|     | (sz*z+tz) / z |
   |z      |     | 1             |
```


From here the right-hand side is said to be in NDC, or normalized device coordinates.  The z component gets written to the depth buffer, the x and y components are scaled and biased to generate pixel coordinates, and the w component is simply discarded (it will always be 1).
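The scale-and-bias from NDC to pixel coordinates can be sketched as follows, using Direct3D's convention that NDC +y points up while pixel rows grow downward. The 800x600 viewport size is a made-up example value:

```python
# Map NDC x, y in [-1, 1] to pixel coordinates for a given viewport.
def ndc_to_pixels(x_ndc, y_ndc, width=800, height=600):
    px = (x_ndc * 0.5 + 0.5) * width
    py = (-y_ndc * 0.5 + 0.5) * height  # flip y: NDC +y is up, pixels grow down
    return (px, py)

print(ndc_to_pixels(0.0, 0.0))   # (400.0, 300.0) -> screen center
print(ndc_to_pixels(-1.0, 1.0))  # (0.0, 0.0)     -> top-left corner
```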

If you are hand-setting the w value of your vertex in the vertex shader, you must take into account that the GPU will divide (x, y, z) by w to generate NDC coordinates.


Edited by nonoptimalrobot


Thanks for the responses guys! I really appreciate it
