
lipsryme

Posted 03 December 2012 - 12:59 PM

I've now premultiplied the shadow matrices with the inverse view matrix (my camera's view) and multiply the result with my position in view space.
I still don't quite understand this part:

"in terms of X and Y the second cascade is 2x the width and height of the first cascade and is located 1 unit to the right and 2.25 units upwards. Using that, you could project using the first cascade matrix and then apply the appropriate offset and scale to get coordinates in terms of the second cascade."

I'm having a hard time understanding where those numbers come from, and how I would obtain my sample coordinates from them.
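My best guess at the relationship, with made-up bounds so I can at least check whether I have the math right: if two cascades are orthographic projections of the same axis, then coordinates projected with cascade 0 should map into cascade 1 with a single scale and offset derived from their centers and extents. (The `Ortho1D` struct and numbers below are purely illustrative, not from the sample.)

```cpp
#include <cassert>
#include <cmath>

// Hypothetical 1D orthographic "cascade": center and half-extent along one axis.
struct Ortho1D { float center; float halfExtent; };

// Project a world-space coordinate into the cascade's [-1, 1] post-projection range.
float WorldToNdc(Ortho1D c, float x) { return (x - c.center) / c.halfExtent; }

// Scale and offset that map cascade-0 NDC coordinates into cascade-1 NDC space:
//   ndc1 = ndc0 * scale + offset
float CascadeScale(Ortho1D c0, Ortho1D c1)  { return c0.halfExtent / c1.halfExtent; }
float CascadeOffset(Ortho1D c0, Ortho1D c1) { return (c0.center - c1.center) / c1.halfExtent; }
```

So "2x the width" would give a scale of 0.5, and the "1 unit right / 2.25 units up" offsets would fall out of the difference between the cascade centers, measured in the larger cascade's half-extents.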

Let me go through the process from the beginning.
I calculate the view-projection matrix of my light source, which is the shadow matrix.
Then I transform it from post-projection space to UV space like this:
// Apply the scale/offset/bias matrix, which transforms from [-1, 1]
// post-projection space to [0, 1] UV space
const float bias = Bias;
XMMATRIX texScaleBias;
texScaleBias.r[0] = XMVectorSet(0.5f,  0.0f, 0.0f,  0.0f);
texScaleBias.r[1] = XMVectorSet(0.0f, -0.5f, 0.0f,  0.0f);
texScaleBias.r[2] = XMVectorSet(0.0f,  0.0f, 1.0f,  0.0f);
texScaleBias.r[3] = XMVectorSet(0.5f,  0.5f, -bias, 1.0f);
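To sanity-check that matrix, here is the per-component math it performs (row-vector convention, as DirectXMath uses, so the translation row is added last), spelled out on the CPU:

```cpp
#include <cassert>

// The same scale/offset/bias as texScaleBias above, applied to one
// post-projection coordinate.
struct Uvz { float u, v, z; };

Uvz NdcToUv(float x, float y, float z, float bias) {
    Uvz r;
    r.u = x *  0.5f + 0.5f;   // [-1, 1] -> [0, 1]
    r.v = y * -0.5f + 0.5f;   // flipped, since texture V grows downwards
    r.z = z - bias;           // depth bias baked into the matrix
    return r;
}
```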


Now inside the shader function I take this shadow matrix (which I premultiplied with the inverse view) and multiply it with the view-space position of my geometry, which should give me the geometry's position in light space, correct?
Then I use the xy of that projected position in shadow space as the texture coordinate to sample the shadow map, and compare the result against the projected z. If I'm wrong somewhere, what's the mistake I'm making?
float SampleShadowCascade(in float3 positionVS, in uint cascadeIdx)
{
	// Radius, ShadowMapSize, ShadowMatrix, ShadowMap and ShadowSampler are
	// assumed to come from the effect's constant buffers / bound resources.
	// ShadowMatrix[cascadeIdx] already contains invView * lightViewProj *
	// texScaleBias, so this takes view space straight to shadow-map UV space.
	float3 shadowPos = mul(float4(positionVS, 1.0f), ShadowMatrix[cascadeIdx]).xyz;
	float2 shadowTexCoord = shadowPos.xy;
	float shadowDepth = shadowPos.z;

	// Edge tap smoothing
	const int NumSamples = (Radius * 2 + 1) * (Radius * 2 + 1);

	// Sub-texel position, used to weight the border taps of the kernel
	float2 fracs = frac(shadowTexCoord * ShadowMapSize);
	float leftEdge = 1.0f - fracs.x;
	float rightEdge = fracs.x;
	float topEdge = 1.0f - fracs.y;
	float bottomEdge = fracs.y;

	float shadowVisibility = 0.0f;

	[unroll(NumSamples)]
	for(int y = -Radius; y <= Radius; y++)
	{
		[unroll(NumSamples)]
		for(int x = -Radius; x <= Radius; x++)
		{
			float2 offset = float2(x, y) * (1.0f / ShadowMapSize);
			float2 sampleCoord = shadowTexCoord + offset;

			// The comparison sampler performs the depth test per tap
			float currentSample = ShadowMap.SampleCmpLevelZero(ShadowSampler,
			                          sampleCoord, shadowDepth);

			float xWeight = 1;
			float yWeight = 1;

			if(x == -Radius)
				xWeight = leftEdge;
			else if(x == Radius)
				xWeight = rightEdge;

			if(y == -Radius)
				yWeight = topEdge;
			else if(y == Radius)
				yWeight = bottomEdge;

			shadowVisibility += currentSample * xWeight * yWeight;
		}
	}

	// The fractional edge weights make the kernel sum to (Radius * 2)^2
	shadowVisibility /= (Radius * 2) * (Radius * 2);

	return shadowVisibility;
}


#8 lipsryme

Posted 03 December 2012 - 12:52 PM

Before going into the actual projection, I tried going through my shader part by part from the beginning, to see where it breaks.
It seems it starts with the cascades themselves, or rather with DepthVS.
Which is odd, since that should just be the Z component of the geometry's position in view space, and that's also how it is in your sample.
My Z value never gets beyond 1.0f, so the cascades never switch.
Is there anything wrong with the way I calculate my view-space position? I'm using the same code as for my lighting, which works fine.

Update: ah, of course, I'm scaling my linear depth by the far clip plane. I probably shouldn't do that.
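To check my understanding of why that breaks cascade selection, a small CPU-side sketch (the split distances below are made up for illustration):

```cpp
#include <cassert>

// Pick a cascade by comparing view-space depth against split distances.
// The splits are in world units, so depthVS must be too; if depthVS has been
// divided by the far clip it stays in [0, 1] and never passes the first split,
// leaving everything in cascade 0.
int SelectCascade(float depthVS, const float* splits, int numSplits) {
    int idx = 0;
    for (int i = 0; i < numSplits; ++i)
        if (depthVS > splits[i])
            idx = i + 1;
    return idx;
}
```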






#5 lipsryme

Posted 03 December 2012 - 12:49 PM

Normally, in basic shadow mapping, wouldn't you then compare the projected position's z component against the depth stored in the shadow map to check whether the pixel is shadowed? I don't really see that happening here. Is it because the comparison sampler is already doing this?
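My current understanding of what one tap of the comparison sampler does, sketched on the CPU (not actual API code):

```cpp
#include <cassert>

// One tap of a comparison sampler with a LESS_EQUAL comparison func:
// returns 1.0 if the reference (receiver) depth passes against the stored
// (occluder) depth, 0.0 otherwise. The hardware then filters these per-tap
// pass/fail results, not the raw depth values.
float SampleCmpTap(float storedDepth, float referenceDepth) {
    return (referenceDepth <= storedDepth) ? 1.0f : 0.0f;
}
```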


#4 lipsryme

Posted 03 December 2012 - 12:46 PM

Normally, in basic shadow mapping, wouldn't you then compare the projected position's z component against the depth stored in the shadow map to check whether the pixel is shadowed? I don't really see that happening here. Is it because the comparison sampler is already doing it?
But then why am I comparing the z position in shadow space against a sample taken at the xy position (plus some offset) in that same shadow space?