# Having problems getting soft particles to work


## Recommended Posts

I've been trying to implement soft particles as described in the NVIDIA paper. I'm having a problem with this part: "If we want to compare consistent depth values, the fetched value Zbuf needs to be transformed into projection space". I'm using a log depth buffer, so I can't use the formula they give. They also never explain what any of the variables in the equation are, or why the comparison needs to be done in projection space rather than normalized device coordinate (NDC) space.
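
For reference, my best guess at what their transform does for a standard (non-log) z/w depth buffer is something like this (my understanding only, not their exact notation):

// Assumes a standard D3D depth buffer in [0, 1]; recovers linear view-space Z from the stored z/w value.
float LinearizeHardwareDepth(float zbuf, float nearPlane, float farPlane)
{
return (nearPlane * farPlane) / (farPlane - zbuf * (farPlane - nearPlane));
}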

I know how to reconstruct a world-space position from a log-depth value from a previous topic, so I tried several different ways to get it to work for the soft particles.

For reference, this is the equation for the reconstruction:
float depthVal = DepthMap.Sample(depthSampler, texCoord).r;
depthVal = (pow(0.001 * FarPlane + 1, depthVal) - 1) / 0.001;
depthVal /= (1 * FarPlane) / (FarPlane - 1); //The 1 is the near plane value

float2 invProjPos = mul(input.ScreenPosition.xy * depthVal, InverseProjection);
float4 position = mul(float4(invProjPos, -depthVal, 1), InverseView);

position /= position.w;

So I figured to get the projection-space value, the equation should be just the first 3 lines. Next, I figured the particle's Z value should be calculated like this (from the vertex shader):
float4 worldPosition = mul(float4(input.Position, 1), instanceTransform);
float4 viewPosition = mul(worldPosition, View);
output.Position = mul(viewPosition, Projection);

output.Position.z = log(0.001 * output.Position.z + 1) / log(0.001 * NearFar.y + 1) * output.Position.w;
output.Z = output.Position.z;

Now I know the value output to the depth buffer will be divided by the W normally, so I also tried changing the last line to:

output.Z = output.Position.z / output.Position.w;

In either case, the result is the same: no fading whatsoever. The particles still have their hard edges. I'm not really sure what I'm doing wrong here and was hoping someone could point it out.

[spoiler]
const float SoftParticleContrast = 2.0;
const float SoftParticleScale = 1;
const float zEpsilon = 0.0;

float4x4 View;
float4x4 Projection;
float4 NearFar; //x = near, y = far, z = far * near, w = far - near

Texture2D Texture;
Texture2D<float> Depth;

SamplerState Sampler
{
Filter = MIN_MAG_MIP_LINEAR;
};

// NOTE: struct and shader function names below are assumed (not present in the pasted code).
struct VertexShaderInput
{
float3 Position		: POSITION0;
float2 TextureCoordinate : TEXCOORD0;
};

struct VertexShaderOutput
{
float4 Position		: SV_POSITION;
float4 Color			: COLOR0;
float2 TextureCoordinate	: TEXCOORD0;
float  Z				: TEXCOORD1;
};

struct PixelShaderOutput
{
float4 Color	: SV_TARGET0;
};

float Contrast(float Input, float ContrastPower)
{
//piecewise contrast function
bool IsAboveHalf = Input > 0.5;
float ToRaise = saturate(2*(IsAboveHalf ? 1-Input : Input));
float Output = 0.5*pow(ToRaise, ContrastPower);

Output = IsAboveHalf ? 1-Output : Output;

return Output;
}

// Vertex shader helper function shared between the two techniques.
VertexShaderOutput VertexShaderCommon(VertexShaderInput input, float4x4 instanceTransform, float4 rect, float4 color)
{
VertexShaderOutput output;

// Apply the world and camera matrices to compute the output position.
float4 worldPosition = mul(float4(input.Position, 1), instanceTransform);
float4 viewPosition = mul(worldPosition, View);
output.Position = mul(viewPosition, Projection);

output.Position.z = log(0.001 * output.Position.z + 1) / log(0.001 * NearFar.y + 1) * output.Position.w;
//output.Position.z = log(output.Position.z / 0.001) / log(FarPlane / 0.001) * output.Position.w;

output.TextureCoordinate = input.TextureCoordinate * rect.zw + rect.xy;
output.Color = color;
output.Z = output.Position.z / output.Position.w;

return output;
}

// Hardware instancing reads the per-instance world transform from a secondary vertex stream.
VertexShaderOutput HardwareInstancingVertexShader(VertexShaderInput input,
float4x4 instanceTransform : BLENDWEIGHT,
float4 rect : TEXCOORD1,
float4 color : COLOR0)
{
// Per-instance semantics and this body are assumed (empty in the pasted code); whether the matrix
// needs a transpose depends on how the instance vertex stream is packed.
return VertexShaderCommon(input, transpose(instanceTransform), rect, color);
}

PixelShaderOutput PixelShaderFunction(VertexShaderOutput input)
{
PixelShaderOutput output;

clip(input.Color.a <= 0 ? -1 : 1);

float depthVal = Depth.Load(int3(input.Position.xy, 0));
depthVal = (pow(0.001 * NearFar.y + 1, depthVal) - 1) / 0.001;
depthVal /= NearFar.z / NearFar.w;
float zdiff = (depthVal - input.Z);
float c = Contrast(zdiff * SoftParticleScale, SoftParticleContrast);

if( c * zdiff <= zEpsilon )
{
}
//output.Color = float4(depthVal * 100, 0, 0, 1);
output.Color = Texture.Sample(Sampler, input.TextureCoordinate) * input.Color * c;

return output;
}

technique11 HardwareInstancing
{
pass Pass1
{
// Shader bindings assumed (the pass body was empty in the pasted code); profiles may differ.
SetVertexShader(CompileShader(vs_4_0, HardwareInstancingVertexShader()));
SetGeometryShader(NULL);
SetPixelShader(CompileShader(ps_4_0, PixelShaderFunction()));
}
}

[/spoiler]

##### Share on other sites

As long as you use some kind of linear z distribution, it should not matter which space you are in. It's best to use either camera/eye space or projection space, whichever is easier in your engine. World space is not really useful here; in general it is not very useful for billboard/particle/projection handling. I assume you already align your particles to the camera, so each particle will most likely use a constant depth. Use this depth (make it linear if necessary) and compare it with the (linear) camera depth from your g-buffer.
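
For example, something along these lines (just a sketch; the names are made up, and you would pass the particle's view-space depth down from the vertex shader):

// Sketch only: compare two linear view-space depths so both values are in the same space.
// sceneDepthLinear: g-buffer depth converted back to view-space Z
// particleDepthLinear: the particle fragment's view-space Z (e.g. viewPosition.z from the VS)
float SoftFade(float sceneDepthLinear, float particleDepthLinear, float fadeDistance)
{
// Positive when the scene surface lies behind the particle, 0 where they intersect.
float zdiff = sceneDepthLinear - particleDepthLinear;

// 0 at the intersection, 1 once the gap reaches fadeDistance.
return saturate(zdiff / fadeDistance);
}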

##### Share on other sites

Well, as far as I can tell, that is what I'm doing. The g-buffer value I'm reading in should be converted to projection space and compared with the projection-space particle depth. I must have done something wrong though, because the particles look exactly the same as before; there's no fade-out near the intersection area.

##### Share on other sites

Try flipping the order of the difference, or use abs(); your value might just be negative.

##### Share on other sites

Have you stepped through the shader to see where your values are going wrong?

##### Share on other sites

OK, so I ran through the shader a few times. It seems that input.Z is in the range 0-1 (which is what the NVIDIA contrast function seems to expect) and depthVal wasn't, so I removed these two lines:
depthVal = (pow(0.001 * NearFar.y + 1, depthVal) - 1) / 0.001;
depthVal /= NearFar.z / NearFar.w;

Now they're both in the 0-1 range... but all the particles are invisible. Stepping through one of the pixels came out with the following values:

depthVal = 0.210246200
input.Z = 0.184789200
zdiff = 0.025457070
c = 0.001296125
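
Plugging those numbers through the Contrast function (with SoftParticleScale = 1 and SoftParticleContrast = 2) gives:

ToRaise = saturate(2 * 0.025457070) = 0.050914140
c = 0.5 * pow(0.050914140, 2) = 0.001296125

so with a zdiff this small the particle ends up almost completely transparent.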

My only guess at this issue is that maybe my Z values aren't linear, but I don't know how to fix that if that's the case.

##### Share on other sites

What is your blending code?

Here's an example:

depthVal = normalized, linear depth value from the g-buffer, 1 is farthest
z = current depth of the pixel, normalized and linear

zdiff = depthVal-z;
alpha_blend_factor = smoothstep(0.0, 0.1, zdiff); //<-- play around with the 0.1 value

final_alpha_blend_factor = alpha_blend_factor * particle_color.a;
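
In your pixel shader that might look roughly like this (just a sketch, reusing the names from your posted code and assuming both depths are linear and in the same range):

// Sketch: smoothstep-based soft fade instead of the NVIDIA Contrast function.
float zdiff = depthVal - input.Z;
float fade = smoothstep(0.0, 0.1, zdiff); // tune the 0.1 fade range
float4 texColor = Texture.Sample(Sampler, input.TextureCoordinate);
output.Color = texColor * input.Color;
output.Color.a *= fade;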



##### Share on other sites

Sorry for not replying sooner; somehow I didn't get a notification email. I'm using the same blending function from the NVIDIA sample:

float Contrast(float Input, float ContrastPower)
{
//piecewise contrast function
bool IsAboveHalf = Input > 0.5;
float ToRaise = saturate(2*(IsAboveHalf ? 1-Input : Input));
float Output = 0.5*pow(ToRaise, ContrastPower);

Output = IsAboveHalf ? 1-Output : Output;

return Output;
}

The spoiler tag in the OP has the full shader code.

##### Share on other sites

Double-check your zdiff and depth reconstruction implementation. By the way, why are you using a log depth buffer rather than a linear one? Nevertheless, the blending seems OK: even if your example pixel is rendered, the very low c = 0.001296125 value will make it more or less invisible. Use test code first, until you can see the particle squares (e.g. uncomment the output.Color = float4(depthVal * 100, 0, 0, 1); line), then try to add some smooth blending.
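
Something like this also works for visualizing the raw difference before any contrast/blending is applied (just an illustration, using the names from your shader):

// Debug output: show zdiff directly, scaled so small differences become visible.
output.Color = float4(saturate(zdiff * 10.0).xxx, 1);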

##### Share on other sites

Well, the reconstruction code looks fine to me, but I'm particularly bad at this kind of math, so even if I stare at it for hours it's still going to look fine to me given my basic understanding.

We're using log depth because a) we didn't want all the artifacts associated with the standard z/w buffer, and b) it doesn't require disabling early-z. While I probably could switch, it'd be a bit of a pain, and I think this could work with log depth given the right math.