pixel shader input SV_Position

Started by
4 comments, last by lomateron 11 years, 7 months ago
So I have this simple program where a triangle is drawn on the screen, like Tutorial 2 of the DirectX SDK, where the world, view, and projection matrices aren't used. So the shaders are like this:

[source lang="hlsl"]//--------------------------------------------------------------------------------------
// Vertex Shader
//--------------------------------------------------------------------------------------
float4 VS( float4 Pos : POSITION ) : SV_POSITION
{
    return Pos;
}


//--------------------------------------------------------------------------------------
// Pixel Shader
//--------------------------------------------------------------------------------------
float4 PS( float4 Pos : SV_POSITION ) : SV_Target
{
    return float4( 1.0f, 0.0f, 0.0f, 1.0f );
}[/source]
I want to understand very well how POSITION is transformed to SV_POSITION.
I already know that if float4 Pos is (-1,1,0,1) it means the top-left of the screen, (1,1,0,1) the top-right of the screen, etc.
So I want to know how it is transformed and what the values will mean.

[quote]
I want to understand very well how POSITION is transformed to SV_POSITION
[/quote]


EDIT: MJP's post below made me realize I should emphasize an important point: [font=courier new,courier,monospace]SV_Position[/font] is a semantic tag that means different things at different stages. As an output from your vertex shader it is a set of homogeneous coordinates. As MJP says below, as the input to the pixel shader it is in screen coordinates!

As for the "transform", you get to do any extra transformation you want in your vertex shader! In the case of the shader you posted, there is no transformation at all: you are just returning the [font=courier new,courier,monospace]Pos[/font] value the same as you received it, so they will be identical. And then the screen-space transformation MJP describes occurs before it gets to your pixel shader.


[quote]
I already know that if float4 Pos is (-1,1,0,1) it means the top-left of the screen, (1,1,0,1) the top-right of the screen, etc.
So I want to know how it is transformed and what the values will mean.
[/quote]

So, the position at this point is in homogeneous coordinates, which means that for a given (x,y,z,w), the final value w implicitly divides all the others to obtain the final normalized device coordinates. The division operation is helpful for perspective projection, since perspective involves a divide that you can't accomplish with normal matrix algebra alone.

So, the homogeneous coordinates (-1,1,0,1) really mean (-1,1,0), and (1,0,1,5) really means (0.2,0,0.2).
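As a concrete check, the divide-by-w can be sketched in a few lines. This is plain Python rather than HLSL, purely to illustrate the arithmetic above:

```python
def homogeneous_to_ndc(x, y, z, w):
    """Perspective divide: homogeneous clip-space coords -> normalized device coords."""
    return (x / w, y / w, z / w)

# w = 1 leaves the point unchanged:
print(homogeneous_to_ndc(-1, 1, 0, 1))   # (-1.0, 1.0, 0.0)

# w = 5 scales everything down by 5:
print(homogeneous_to_ndc(1, 0, 1, 5))    # (0.2, 0.0, 0.2)
```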

In D3D the normalized device coordinates (x,y,z) form a 3D cube like so:

x: -1 (left of screen) to 1 (right)
y: -1 (bottom of screen) to 1 (top)
z: 0 (near) to 1 (far)

The z-value is used for depth buffering.

You will also hear the homogeneous coordinates or normalized device coordinates described as being in "clip space" or as "clip-space coordinates," since that is the space in which triangle clipping is performed.
The vertex shader outputs vertex positions in clip space using homogeneous coordinates. Usually this clip-space position is calculated in the vertex shader by transforming the incoming object-space position by a combined world * view * projection matrix.

Afterwards, during rasterization, the perspective divide-by-w is performed on the clip-space position, after which the position is in the coordinate space you are referring to (where -1 is the bottom left of the viewport and +1 is the top right).

After this you have the viewport transform, which basically flips Y, converts from [-1, 1] to [0, 1], and multiplies by the size of the viewport. At this point you have the pixel position that gets passed as SV_Position to the pixel shader, which is in the range [0, ViewportSize] where (0, 0) is the top left and (ViewportWidth, ViewportHeight) is the bottom right.
[quote]
After this you have the viewport transform, which basically flips Y, converts from [-1, 1] to [0, 1], and multiplies by the size of the viewport. At this point you have the pixel position that gets passed as SV_Position to the pixel shader, which is in the range [0, ViewportSize] where (0, 0) is the top left and (ViewportWidth, ViewportHeight) is the bottom right.
[/quote]

Ohh, that was it, exactly what I wanted to know. Thanks.

Yeah, if you're interested, this is the exact math that is used:

[source lang="cpp"]X = (X + 1) * Viewport.Width * 0.5 + Viewport.TopLeftX
Y = (1 - Y) * Viewport.Height * 0.5 + Viewport.TopLeftY
Z = Viewport.MinDepth + Z * (Viewport.MaxDepth - Viewport.MinDepth)[/source]
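Those three formulas can be written out as a small Python sketch for illustration (the parameter names mirror the D3D11_VIEWPORT fields; defaults assume a viewport at the origin with the standard [0, 1] depth range):

```python
def viewport_transform(x, y, z, width, height,
                       top_left_x=0.0, top_left_y=0.0,
                       min_depth=0.0, max_depth=1.0):
    """Map NDC (x, y in [-1, 1], z in [0, 1]) to pixel/depth coordinates."""
    px = (x + 1) * width * 0.5 + top_left_x        # [-1, 1] -> [0, width]
    py = (1 - y) * height * 0.5 + top_left_y       # flips Y: +1 (top) -> 0
    pz = min_depth + z * (max_depth - min_depth)   # remap depth range
    return (px, py, pz)

# NDC (-1, 1) is the top-left corner -> pixel (0, 0):
print(viewport_transform(-1, 1, 0, 800, 600))    # (0.0, 0.0, 0.0)
# NDC (1, -1) is the bottom-right corner -> (800, 600):
print(viewport_transform(1, -1, 1, 800, 600))    # (800.0, 600.0, 1.0)
```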
I am having a problem where the Position input of my pixel shader isn't transformed using the viewport, and I want to know in which cases that doesn't happen, because I don't know why it isn't happening in my case.
[source lang="hlsl"]struct VS_TX
{
    float4 Pos : POSITION;
    float2 Tex : TEXCOORD;
};

struct PS_TX
{
    float4 Pos : SV_POSITION;
    float2 Tex : TEXCOORD0;
};[/source]

I have 4 viewports, and every time I want to change between viewports I call RSSetViewports( 1, &vp3 ).



Ohh, never mind, it was being transformed, but the value wasn't directly the pixel number. I mean it wasn't, let's say, 230; it was 230.5, so it messed up my code.
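That half is expected: in D3D10/D3D11, SV_Position is sampled at the pixel center, so the pixel in column 230 reports x = 230.5. A one-line sketch (Python, just to show the recovery step) of getting the integer pixel index back:

```python
import math

def pixel_index(sv_position_x):
    """SV_Position.xy is sampled at the pixel center in D3D10/11,
    so flooring recovers the integer pixel column/row."""
    return math.floor(sv_position_x)

print(pixel_index(230.5))  # 230
```

In HLSL the equivalent would simply be casting the component to int or calling floor() on it.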

This topic is closed to new replies.
