# DX11 [hlsl] Unsure about modifying position in geometry shader for particle expansion



I'm writing a shader that takes particles in as points and expands each one into 4 vertices. I do this by offsetting xy in the geometry shader for the 4 corners. However, the offsets I'm adding don't make sense to me. My understanding is that the VS/GS output position is in the [-1, 1] range, yet I'm adding 5 to the values and getting a reasonable result, as in the attached image. The z depth also seems to affect the final size of the particle, which is weird to me because worldViewProj has already been applied.

Does the pipeline do some kind of additional transformation after the GS? Does z affect that? What positional space is the output of the GS in? Clip space (normalized)? Screen space (pixel)?

```hlsl
cbuffer data : register(b0)
{
    float4x4 worldViewProj;
    float4x4 world;
    float4x4 viewProj;
};

struct VS_INPUT
{
    float4 position : POSITION;
    float4 color    : COLOR;
    float2 texcoord : TEXCOORD;
};

struct GS_INPUT
{
    float4 position : POSITION;
    float4 color    : COLOR;
    float2 texcoord : TEXCOORD;
};

struct PS_INPUT
{
    float4 position : SV_POSITION;
    float4 color    : COLOR;
    float2 texcoord : TEXCOORD;
};

Texture2D diffuseMap;
SamplerState textureSampler
{
    Filter = MIN_MAG_MIP_LINEAR;
};

GS_INPUT VS(VS_INPUT input)
{
    GS_INPUT output = (GS_INPUT)0;
    output.position = mul(worldViewProj, input.position);
    output.color = input.color;
    output.texcoord = input.texcoord;
    return output;
}

[maxvertexcount(4)]
void GS(point GS_INPUT input[1], inout TriangleStream<PS_INPUT> outputStream)
{
    PS_INPUT vertices[4];
    float s = 5.0;
    float2 a = float2(-1, -1) * s;
    float2 b = float2(+1, -1) * s;
    float2 c = float2(-1, +1) * s;
    float2 d = float2(+1, +1) * s;

    vertices[0].color = input[0].color;
    vertices[1].color = input[0].color;
    vertices[2].color = input[0].color;
    vertices[3].color = input[0].color;

    vertices[0].texcoord = float2(0, 0);
    vertices[1].texcoord = float2(0, 1);
    vertices[2].texcoord = float2(1, 0);
    vertices[3].texcoord = float2(1, 1);

    vertices[0].position.zw = input[0].position.zw;
    vertices[1].position.zw = input[0].position.zw;
    vertices[2].position.zw = input[0].position.zw;
    vertices[3].position.zw = input[0].position.zw;

    vertices[0].position.xy = input[0].position.xy + a;
    vertices[1].position.xy = input[0].position.xy + b;
    vertices[2].position.xy = input[0].position.xy + c;
    vertices[3].position.xy = input[0].position.xy + d;

    outputStream.Append(vertices[0]);
    outputStream.Append(vertices[1]);
    outputStream.Append(vertices[2]);
    outputStream.Append(vertices[3]);
    outputStream.RestartStrip();
}

float4 PS(PS_INPUT input) : SV_TARGET
{
    return diffuseMap.Sample(textureSampler, input.texcoord) * input.color;
    //return input.color;
}
```

Edited by Axiverse


The output of the VS is not in normalized [-1, 1] space; it's in clip space, which means X and Y are in the range [-W, W] and Z is in [0, W]. The hardware divides by W after the GS to produce "normalized device coordinates," which are in the [-1, 1] range for X and Y.

This is why the Z depth affects the size: after a perspective projection, the W coordinate equals the view-space Z, so the further the point is from the camera, the smaller 5 / W becomes, and the smaller the quad appears in screen space. If you want the size to be constant in screen space, multiply the offset by W.
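A minimal sketch of that fix inside the poster's GS (the `size` value is an arbitrary half-extent expressed as a fraction of the [-1, 1] NDC range, not a value from the thread):

```hlsl
// Constant screen-space size: scale the NDC offset by this vertex's own w,
// so the rasterizer's later divide-by-w cancels it out exactly.
float size = 0.05;                          // assumed half-extent in NDC units
float2 a = float2(-1, -1) * size;           // corner offset; likewise for b, c, d

vertices[0].position.zw = input[0].position.zw;
vertices[0].position.xy = input[0].position.xy + a * input[0].position.w;
```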

If what you're after is a constant world-space size for your particles, it may be simpler to defer the projection matrix until the GS. Have your VS output positions in view space; then in the GS expand the quad in view space and apply the projection matrix to all 4 vertices.
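A sketch of that deferred-projection approach, assuming the VS now outputs view-space positions (i.e. it multiplies by a hypothetical `worldView` matrix instead of `worldViewProj`) and a separate `proj` matrix is available in the cbuffer; neither matrix exists in the original shader:

```hlsl
// GS variant: expand the quad in view space, then project each corner.
[maxvertexcount(4)]
void GS(point GS_INPUT input[1], inout TriangleStream<PS_INPUT> outputStream)
{
    const float halfSize = 0.5;             // assumed world-space half-extent
    // Same corner order as the original strip: a, b, c, d.
    const float2 corners[4] = { float2(-1, -1), float2(+1, -1),
                                float2(-1, +1), float2(+1, +1) };
    const float2 uvs[4]     = { float2(0, 0), float2(0, 1),
                                float2(1, 0), float2(1, 1) };

    [unroll]
    for (int i = 0; i < 4; ++i)
    {
        PS_INPUT v;
        float4 viewPos = input[0].position; // view-space particle centre
        viewPos.xy += corners[i] * halfSize;// offset in view space: camera-facing
        v.position = mul(proj, viewPos);    // project per corner
        v.color = input[0].color;
        v.texcoord = uvs[i];
        outputStream.Append(v);
    }
    outputStream.RestartStrip();
}
```

Because the offset happens before projection, perspective now shrinks distant particles naturally, which is usually what you want for world-space billboards.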


Thank you, that's it.
