

lipsryme

Member Since 02 Mar 2010
Offline Last Active Dec 23 2014 08:06 AM

Topics I've Started

Tube Area Light - Falloff problem

27 October 2014 - 09:32 AM

I've implemented spherical and tube lights based on the "representative point" technique presented in Epic's SIGGRAPH 2013 course notes (http://blog.selfshadow.com/publications/s2013-shading-course/karis/s2013_pbs_epic_notes_v2.pdf).

I'm also using the inverse-square falloff presented in those notes, which works fine for point or spherical area lights but not for tube lights:

// Inverse-square falloff, windowed so it reaches zero at lightRadius:
float falloff = pow(saturate(1 - pow(lightDistance / lightRadius, 4)), 2) / ((lightDistance * lightDistance) + 1);

On tube lights the diffuse contribution looks like the following.

Note: these screenshots only show the diffuse light from the tube area light.

http://cl.ly/image/1s32140D2W1F (highest position from ground)

http://cl.ly/image/1X3T2X171016

http://cl.ly/image/0O3i1c2c0Q1u (closest to the ground)

http://cl.ly/image/0o0D2D1b0d3T (same but from a different angle)

 

As you can (hopefully) see, the diffuse light is shaped like some kind of 'P' depending on the viewing angle. Looking at it from the top it seems fine (shaped like a capsule).

I know of the ShaderToy example, but the falloff it uses doesn't work for me because I need to limit the light's range by the (sphere) radius.

 

I believe the code is correct, but here is how I calculate the tube light:

// Tube endpoints relative to the shaded point (tube runs along Z):
float3 L0 = CenterAndRadius.xyz + float3(0, 0, lightRadius) - input.PosWS;
float3 L1 = CenterAndRadius.xyz - float3(0, 0, lightRadius) - input.PosWS;
float distL0 = length(L0);
float distL1 = length(L1);

// Diffuse: analytic line-light integral from the course notes:
float NdotL0 = dot(N, L0) / (2.0f * distL0);
float NdotL1 = dot(N, L1) / (2.0f * distL1);
float NdotL = (2.0f * saturate(NdotL0 + NdotL1)) / (distL0 * distL1 + dot(L0, L1) + 2.0f);

// Specular: find the point on the segment closest to the reflection ray R:
float3 Ld = L1 - L0;
float L0dotLd = dot(L0, Ld);
float RdotL0 = dot(R, L0);
float RdotLd = dot(R, Ld);
float distLd = length(Ld);
float t = (RdotL0 * RdotLd - L0dotLd) / (distLd * distLd - RdotLd * RdotLd);

// Pull that point towards the ray by up to tubeRadius ("representative point"):
float3 closestPoint = L0 + Ld * saturate(t);
float3 centerToRay = dot(closestPoint, R) * R - closestPoint;
closestPoint = closestPoint + centerToRay * saturate(tubeRadius / length(centerToRay));
L = normalize(closestPoint);
lightDistance = length(closestPoint);

// Calculate normalization factor (widened roughness) for the area light:
alphaPrime = saturate(tubeRadius / (lightDistance * 2.0f) + roughness * roughness);
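
For completeness, this is roughly how I combine the terms afterwards (DiffuseColor and LightColor stand in for my material and light inputs; the falloff is the same one from above):

float falloff = pow(saturate(1 - pow(lightDistance / lightRadius, 4)), 2) / ((lightDistance * lightDistance) + 1);
float3 diffuseLight = DiffuseColor * LightColor * NdotL * falloff;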

Any ideas?


Forward+ Rendering - best way to update the light buffer?

06 October 2014 - 08:34 AM

I've got a prototype implementation working with point lights at the moment, based on the most recent AMD SDK sample. The problem is that their implementation uses a statically allocated array of XMFLOAT4 light data, containing position (center) and radius, which they use to create an ID3D11Buffer once.

Now, of course, I want to be able to instantiate point lights in my scene at the press of a button (and remove them again), so I'm wondering what you think the best approach to creating/updating this buffer at runtime would be. I guess I should have some kind of dirty flag that is set when a light is added or removed, or when its position/radius changes, right? But I still don't really know how to update the ID3D11Buffer itself. I would guess releasing and recreating it is not a good idea (which is what my prototype does right now, though only when a light is added).

But even then, wouldn't it be expensive to update the entire buffer when only one or a few lights changed position? Even more so if they keep moving all the time; I would have to update it constantly.
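
For reference, this is roughly what I imagine the dynamic-buffer version would look like; just a sketch, and MAX_LIGHTS, m_lightBuffer and the CPU-side lights vector are my own names:

// Created once, with dynamic usage so the CPU can rewrite it:
D3D11_BUFFER_DESC desc = {};
desc.ByteWidth      = sizeof(XMFLOAT4) * MAX_LIGHTS;
desc.Usage          = D3D11_USAGE_DYNAMIC;
desc.BindFlags      = D3D11_BIND_SHADER_RESOURCE;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
device->CreateBuffer(&desc, nullptr, &m_lightBuffer);

// Whenever the dirty flag is set (worst case once per frame):
D3D11_MAPPED_SUBRESOURCE mapped;
context->Map(m_lightBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
memcpy(mapped.pData, lights.data(), sizeof(XMFLOAT4) * lights.size());
context->Unmap(m_lightBuffer, 0);

WRITE_DISCARD hands back fresh memory each time, so the whole buffer has to be rewritten anyway, but it would avoid the release/recreate and any GPU stall.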


Alpha-Test and Forward+ Rendering (Z-prepass questions)

09 August 2014 - 12:27 PM

I'm currently trying to get the Z-prepass to work on alpha-tested geometry. I've looked at the "TiledLighting" sample from the AMD SDK and saw that they separate alpha-tested from regular opaque geometry during the prepass (they render opaque first, then everything alpha-tested). Is there any reason why? I'm also wondering why their alpha-tested prepass shader outputs the (float4) diffuse map color via SV_TARGET.

 

The prepass shader looks like the following:

float4 RenderSceneAlphaTestOnlyPS( VS_OUTPUT_POSITION_AND_TEX Input ) : SV_TARGET
{
    // Sample the diffuse map and kill fragments below the alpha
    // threshold so they don't write depth:
    float4 DiffuseTex = g_TxDiffuse.Sample( g_Sampler, Input.TextureUV );
    float fAlpha = DiffuseTex.a;
    if( fAlpha < g_fAlphaTest )
        discard;
    return DiffuseTex;
}
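
What confuses me is that, for comparison, I'd expect a depth-only prepass for fully opaque geometry to need no color output at all, something like this (hypothetical name):

void RenderSceneDepthOnlyPS( VS_OUTPUT_POSITION_AND_TEX Input )
{
    // Depth is written by the rasterizer; nothing to output.
}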

Screen-Space Subsurface Scattering artifacts (help)

28 June 2014 - 10:14 AM

Update: I just had another look at his sample app, and it shows the exact same artifacts when FOLLOW SURFACE is disabled; but when it's enabled they are completely gone, unlike in mine, and I have no idea why. I've gone over every line of his shaders and mine do exactly the same thing. I've also compared texture/RT formats: they're the same.

And the issue is not just with the skybox; I tried it with a bright white plane in the background and got the same artifacts. MSAA is not the culprit either.

 

I've implemented screen-space SSS based on Jimenez's sample, but I'm having big issues with artifacts that seem to occur when there are bright pixels near or behind the pixels the SSS is applied to (the skin of a character).

 

I've set the diffuse color to black so the artifacts can be seen clearly.

 

This is with "FOLLOW SURFACE off":

eDhWnks.png

 

 

This is with "FOLLOW SURFACE" on:

WgfDH5V.png

As you can see this makes it a little better, but the artifacts (the rainbow-colored lines) are still there.

 

Any ideas? Is this depth-related? The thing is, I'm not seeing these artifacts in his sample app, so this must be a problem in my renderer.

Since I'm using MSAA, I'm not using the stencil buffer to mask out the geometry, but rather the separate RT that I render the specular to.

So it looks like the following:

 

Render Mesh (non-skin shading):

    RT0: Color [R16G16B16A16]
    RT1: Specular.rgb (spec) / Specular.a (SSS mask) [R16G16B16A16]
    RT2: Linear depth (input.PositionCS.w) [R32_FLOAT]

 

Then a post-process binds the specular target and rejects pixels whose spec.a is > 0.0f.

The error happens during the SSS blur passes.

 

Let me show you my code:

#define SSSS_N_SAMPLES 17
const static float4 kernel[] =
{
        float4(0.546002, 0.63378, 0.748867, 0),
        float4(0.00310782, 0.000131535, 3.77269e-005, -2),
        float4(0.00982943, 0.00089237, 0.000275702, -1.53125),
        float4(0.0141597, 0.00309531, 0.00106399, -1.125),
        float4(0.0211795, 0.00775237, 0.00376991, -0.78125),
        float4(0.0340081, 0.01474, 0.00871983, -0.5),
        float4(0.0559159, 0.0280422, 0.0172844, -0.28125),
        float4(0.0570283, 0.0643862, 0.0411329, -0.125),
        float4(0.0317703, 0.0640701, 0.0532821, -0.03125),
        float4(0.0317703, 0.0640701, 0.0532821, 0.03125),
        float4(0.0570283, 0.0643862, 0.0411329, 0.125),
        float4(0.0559159, 0.0280422, 0.0172844, 0.28125),
        float4(0.0340081, 0.01474, 0.00871983, 0.5),
        float4(0.0211795, 0.00775237, 0.00376991, 0.78125),
        float4(0.0141597, 0.00309531, 0.00106399, 1.125),
        float4(0.00982943, 0.00089237, 0.000275702, 1.53125),
        float4(0.00310782, 0.000131535, 3.77269e-005, 2),
};

float4 SSS_Blur(in float2 TexCoord, in float2 dir)
{
	// Fetch color of current pixel:
	float4 colorM = InputTexture0.SampleLevel(PointSampler, TexCoord, 0);

	// Use SpecularTarget alpha as mask
	if (InputTexture2.Sample(PointSampler, TexCoord).a > 0.0f)
		return colorM;
			
	// Fetch linear depth of current pixel:
	float depthM = InputTexture1.SampleLevel(PointSampler, TexCoord, 0).r;

	// Calculate the sssWidth scale (1.0 for a unit plane sitting on the
	// projection window):
	float distanceToProjectionWindow = 1.0f / tan(0.5f * SSS_FOV);
	float scale = distanceToProjectionWindow / depthM;

	// Calculate the final step to fetch the surrounding pixels:
	float2 finalStep = 0.012f * scale * dir;
	finalStep *= colorM.a; // Modulate it using the alpha channel.
	finalStep *= 1.0f / 3.0f; // Divide by 3 as the kernels range from -3 to 3.
		
	// Accumulate the center sample:
	float3 colorBlurred = colorM.rgb * kernel[0].rgb;
				
	
	// Accumulate the other samples:
	[unroll]
	for (int i = 1; i < SSSS_N_SAMPLES; i++)
	{
		// Fetch color and depth for current sample:
		float2 offset = TexCoord + kernel[i].a * finalStep;
		float3 color = InputTexture0.SampleLevel(LinearClampSampler, offset, 0).rgb;

		// --- FOLLOW SURFACE ------------------------------------------------------------------
		// If the difference in depth is huge, we lerp color back to "colorM":
		float depth = InputTexture1.SampleLevel(LinearClampSampler, offset, 0).r;
		float s = saturate(300.0f * distanceToProjectionWindow * 0.012f * abs(depthM - depth));
		color.rgb = lerp(color.rgb, colorM.rgb, s);
		//--------------------------------------------------------------------------------------

		// Accumulate:
		colorBlurred.rgb += kernel[i].rgb * color.rgb;
	}

	
	return float4(colorBlurred, colorM.a);
}


float4 SSS_Convolution_H(in VSOutput input) : SV_TARGET0
{
	return SSS_Blur(input.TexCoord, float2(1, 0));
}


float4 SSS_Convolution_V(in VSOutput input) : SV_TARGET0
{
	return SSS_Blur(input.TexCoord, float2(0, 1));
}


float3 SSS_AddSpecular(in VSOutput input) : SV_TARGET0
{
	return InputTexture0.Sample(LinearClampSampler, input.TexCoord).rgb +
	       InputTexture1.Sample(LinearClampSampler, input.TexCoord).rgb;
}

UE4 IBL / shading confusion

27 June 2014 - 07:55 AM

A few months back I had a thread here about trying to implement their approach to IBL using the so-called split-sum approximation, and I thought I understood what it was trying to do (same with other aspects of their paper). I guess I was wrong, and I'm even more confused now.

 

For anyone who doesn't know which paper I'm referring to: http://blog.selfshadow.com/publications/s2013-shading-course/karis/s2013_pbs_epic_notes_v2.pdf

 

1. Metallic shading parameter: I'm still unsure I understand what this parameter actually does. What I thought it does is act as a [0-1] value that just attenuates the diffuse term, like so:

Diffuse *= 1.0f - metallic;

Because if we want the material to be more metallic, the diffuse term needs to decrease until it's fully gone for an actual metal; just attenuating the Fresnel reflectance does not completely remove it. Looking more closely at the pictures in the paper, though, it looks as if they control the actual Fresnel reflectance with it, something like:

FresnelReflectance = lerp(0.04, 1.0f, metallic);
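
Combining both guesses would give something like this (baseColor is my name for the albedo input; everything else is from the two lines above):

float3 diffuse = baseColor * (1.0f - metallic);
float  FresnelReflectance = lerp(0.04f, 1.0f, metallic);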

Does anyone know what the parameter actually does?

 

2. Cavity map: They describe this as small-scale shadowing, so is it just an AO map? But then they talk about it being a replacement for specular (do they mean specular intensity?).

 

3. Split-sum approximation: This is the most confusing part for me. My understanding was that they additionally precompute a 2D texture that handles the geometry and Fresnel portions of the BRDF, and that this lookup texture is used during the real-time lighting pass to attenuate the Fresnel reflectance so that it looks more realistic, e.g. it removes the glowing edges at high roughness. Am I wrong?

I've generated this 2D lookup texture and it looks almost exactly like the picture in the paper, except that it's rotated 45 degrees for some odd reason. I tried rotating it in Photoshop so it matches the paper, but then the lookups produce completely wrong results (did he rotate the texture artificially just for the paper?), while the original, unrotated texture gives reasonable(?) results, in that the edges of a sphere become increasingly reflective the smoother it is.
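
For reference, my LUT generation follows the IntegrateBRDF function from the course notes, which relies on the Hammersley, ImportanceSampleGGX and G_Smith helpers defined there; this is my reading of it, not necessarily a faithful copy:

float2 IntegrateBRDF( float Roughness, float NoV )
{
    // View vector in tangent space; N is (0, 0, 1):
    float3 V;
    V.x = sqrt( 1.0f - NoV * NoV ); // sin
    V.y = 0;
    V.z = NoV;                      // cos
    float3 N = float3( 0, 0, 1 );

    float A = 0;
    float B = 0;
    const uint NumSamples = 1024;
    for( uint i = 0; i < NumSamples; i++ )
    {
        float2 Xi = Hammersley( i, NumSamples );
        float3 H = ImportanceSampleGGX( Xi, Roughness, N );
        float3 L = 2 * dot( V, H ) * H - V;

        float NoL = saturate( L.z );
        float NoH = saturate( H.z );
        float VoH = saturate( dot( V, H ) );
        if( NoL > 0 )
        {
            // Accumulate the scale (A) and bias (B) applied to F0:
            float G = G_Smith( Roughness, NoV, NoL );
            float G_Vis = G * VoH / (NoH * NoV);
            float Fc = pow( 1 - VoH, 5 );
            A += (1 - Fc) * G_Vis;
            B += Fc * G_Vis;
        }
    }
    return float2( A, B ) / NumSamples;
}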

Let me give you a few screenshots:

 

This is the texture generated by me:

zB8UF1G.png

 

This is the texture shown in his paper:

 

TDb6Dhh.png

And here's a quick video showing what the Fresnel reflectance looks like using the formula described in the paper:

FresnelReflectance * EnvBRDF.x + EnvBRDF.y

https://www.youtube.com/watch?v=bdY1rvDPCB8&feature=youtu.be
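
In context, the lookup in my shader is roughly this (IntegrationLUT, PrefilteredColor and the (NdotV, roughness) indexing are my assumptions about the convention):

float2 EnvBRDF = IntegrationLUT.SampleLevel(LinearClampSampler, float2(NdotV, roughness), 0).xy;
float3 specularIBL = PrefilteredColor * (FresnelReflectance * EnvBRDF.x + EnvBRDF.y);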

 

 

Also, what does he mean by "can only afford a single sample for each" (why would you have more?):

 

Even with importance sampling, many samples still need to be taken. The sample count can be
reduced significantly by using mip maps [3], but counts still need to be greater than 16 for sufficient
quality. Because we blend between many environment maps per pixel for local reflections, we can only
practically afford a single sample for each

 

In his sample code he uses a sample count of 1024, which in my tests produces lots of dots on the env map from the random distribution; it only gets better with at least 5k samples. I don't see how he makes 1024 work. Is this just a case of keeping the precomputation fast because there are hundreds or thousands of probes?
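
For anyone following along, this is the prefiltering function from the course notes as I understand it (Hammersley and ImportanceSampleGGX are defined in the notes; EnvMap/EnvMapSampler are my names):

float3 PrefilterEnvMap( float Roughness, float3 R )
{
    // N and V are both taken to be the reflection direction:
    float3 N = R;
    float3 V = R;
    float3 PrefilteredColor = 0;
    float TotalWeight = 0;

    const uint NumSamples = 1024;
    for( uint i = 0; i < NumSamples; i++ )
    {
        float2 Xi = Hammersley( i, NumSamples );
        float3 H = ImportanceSampleGGX( Xi, Roughness, N );
        float3 L = 2 * dot( V, H ) * H - V;

        float NoL = saturate( dot( N, L ) );
        if( NoL > 0 )
        {
            // NoL-weighted average of the environment samples:
            PrefilteredColor += EnvMap.SampleLevel( EnvMapSampler, L, 0 ).rgb * NoL;
            TotalWeight += NoL;
        }
    }
    return PrefilteredColor / TotalWeight;
}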

