Injection Step of VPLs into 3D Grid


Hi Guys,

I'm currently trying to implement light propagation volumes in DirectX 11, and I've already done the RSM part, which covers position, normal, depth and flux map generation. But for injecting the virtual point lights into the light propagation volume, we have to write them into an x*y*z grid in which each element consists of 3 vectors, one per red, green and blue channel, and each vector holds the four coefficients of a two-band spherical harmonics expansion.

By definition, that means writing an injection shader that takes the 4 RSM render targets and applies what I've described above. But I'm totally confused about the pixel shader output, because it seems like we only multiply the flux by the spherical harmonics coefficients and return that value for each red, green and blue channel. Is that correct? If so, how are the other maps used in this step?

It's also unclear to me how this result gets written into the 3D grid.

Regards


You mean you're only using the flux? How do you decide which cell of the 3D grid you're going to write to (basically your SV_Position and SV_RenderTargetArrayIndex) if you don't use the position from the RSM? You can get it either from the depth or from the position texture; you don't need both. You also need the normal to build the cosine lobe: you're probably using zonal harmonics coefficients, but you need to rotate the cosine lobe so that it faces along your normal.
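
For example, if you only keep the depth map, recovering the world-space position could look something like this. This is just a minimal sketch: rsmDepth, pointSampler and lightInvViewProj are placeholder names for the RSM depth texture, a point sampler and the inverse of the RSM camera's view-projection matrix.

// Sketch: reconstruct the world-space position of an RSM texel from its depth.
// Assumes depth is the post-projection value written by the RSM pass and the
// usual row-vector mul() convention.
float3 worldPosFromRSMDepth( float2 uv, Texture2D<float> rsmDepth,
                             SamplerState pointSampler, float4x4 lightInvViewProj )
{
	float depth = rsmDepth.SampleLevel( pointSampler, uv, 0 );

	// UV (0..1, y down) -> NDC (-1..1, y up)
	float2 ndc = float2( uv.x * 2.0 - 1.0, 1.0 - uv.y * 2.0 );

	float4 clipPos  = float4( ndc, depth, 1.0 );
	float4 worldPos = mul( clipPos, lightInvViewProj );
	return worldPos.xyz / worldPos.w;
}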

I said that based on code I've seen before; that's why I said "it seems like".

Demo with source code for you to peruse http://blog.blackhc.net/2010/07/light-propagation-volumes/


struct VS_INJECTION_INPUT {
	uint posIndex : SV_VertexID;
};

struct VS_INJECTION_OUTPUT {
	float4 cellIndex : SV_POSITION;
	float3 normal : WORLD_NORMAL;
	float3 flux : LIGHT_FLUX;
};

struct GS_INJECTION_OUTPUT {
	uint depthIndex : SV_RenderTargetArrayIndex;
	float4 screenPos : SV_Position;
	
	float3 normal : WORLD_NORMAL;
	float3 flux : LIGHT_FLUX;
};

struct PS_INJECTION_OUTPUT {
	float4 redSH : SV_Target0;
	float4 greenSH : SV_Target1;
	float4 blueSH : SV_Target2;
};

// helper function
float4 SH_evaluateCosineLobe( float3 direction ) {
    direction = normalize( direction );
    
    return float4( SH_cosLobe_c0, -SH_cosLobe_c1 * direction.y, SH_cosLobe_c1 * direction.z, -SH_cosLobe_c1 * direction.x );
}

VS_INJECTION_OUTPUT VS_gpuInjection( VS_INJECTION_INPUT input ) {
	int2 RSMsize;
	worldPosition.GetDimensions( RSMsize.x, RSMsize.y );
	int3 rsmCoords = int3( input.posIndex % RSMsize.x, input.posIndex / RSMsize.x, 0 );
	
	float3 posWorld = worldPosition.Load( rsmCoords );
	float3 normalWorld = worldNormal.Load( rsmCoords );
	float3 flux = lightFlux.Load( rsmCoords );
	
	VS_INJECTION_OUTPUT output;
	output.cellIndex = float4( int3( (posWorld - lpv_minCorner) / lpv_cellSize + 0.5 * normalWorld ), 1.0 );
	
	output.normal = normalWorld;
	output.flux = flux;
	
	return output;
}

[maxvertexcount(1)]
void GS_gpuInjection( point VS_INJECTION_OUTPUT input[1], inout PointStream<GS_INJECTION_OUTPUT> OutputStream ) {
	GS_INJECTION_OUTPUT output;
	
	output.depthIndex = input[0].cellIndex.z;
	output.screenPos.xy = (float2(input[0].cellIndex.xy) + 0.5) / lpv_size.xy * 2.0 - 1.0;
	// invert y direction because y points downwards in the viewport?
	output.screenPos.y = -output.screenPos.y;
	output.screenPos.zw = float2( 0, 1 );

	output.normal = input[0].normal;
	output.flux = input[0].flux;
	
	OutputStream.Append( output );
}

PS_INJECTION_OUTPUT PS_gpuInjection( GS_INJECTION_OUTPUT input ) {
	PS_INJECTION_OUTPUT output;
	
	if( length( input.normal ) < 0.01 ) {
		discard;
	}
	
	
#if 0
	// this is what the paper says
	float4 SHcoeffs = SH_evaluateCosineLobe( input.normal );
	
	output.redSH = SHcoeffs * input.flux.r;
	output.greenSH = SHcoeffs * input.flux.g;
	output.blueSH = SHcoeffs * input.flux.b;
#else
	// the SHcoeffs model an intensity function
	// thus the integral over all intensities (over the sphere) should be the original flux
	// and that integral = Pi * totalFlux, so divide by Pi
	float4 SHcoeffs = SH_evaluateCosineLobe( input.normal ) / Pi;
	
	output.redSH = SHcoeffs * input.flux.r;
	output.greenSH = SHcoeffs * input.flux.g;
	output.blueSH = SHcoeffs * input.flux.b;
#endif
	
	return output;
}

This is his code for the injection part, and he just multiplies the flux by the SH coefficients (obtained from the normal) for the pixel shader injection output.
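
Note that the snippet omits the resources and constants it reads from; something along these lines has to exist elsewhere in the shader for it to compile. The texture and constant buffer names below are my guess at what the demo binds, while the cosine-lobe constants are the usual two-band zonal-harmonics values:

// Assumed declarations for the injection snippet above.
Texture2D<float4> worldPosition;	// RSM world-space position map
Texture2D<float4> worldNormal;		// RSM world-space normal map
Texture2D<float4> lightFlux;		// RSM flux map

cbuffer LPVConstants {
	float3 lpv_minCorner;	// world-space min corner of the LPV volume
	float  lpv_cellSize;	// world-space size of one grid cell
	float3 lpv_size;		// grid resolution, e.g. 32 x 32 x 32
};

// Two-band SH constants for the clamped cosine lobe
static const float Pi            = 3.14159265;
static const float SH_cosLobe_c0 = 0.886226925;	// sqrt(Pi) / 2
static const float SH_cosLobe_c1 = 1.023326707;	// sqrt(Pi / 3)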

I finally understood that I have to make a 3D texture (or texture array) in which each slice corresponds to a slice of the grid, and each cell is where the virtual point lights get injected.
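
On the CPU side that means creating three small render-target volumes, one per colour channel. A rough sketch of one of them, assuming a 32^3 grid, an existing ID3D11Device* device and an RGBA16F format to hold the 4 SH coefficients (none of this is taken from the demo):

// Sketch: create one LPV volume (e.g. the red SH grid); repeat for green and blue.
D3D11_TEXTURE3D_DESC desc = {};
desc.Width     = 32;
desc.Height    = 32;
desc.Depth     = 32;
desc.MipLevels = 1;
desc.Format    = DXGI_FORMAT_R16G16B16A16_FLOAT;	// 4 SH coefficients per texel
desc.Usage     = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture3D* lpvRedSH = nullptr;
device->CreateTexture3D( &desc, nullptr, &lpvRedSH );

// Render target view over all depth slices, so the geometry shader can select
// the slice to write to with SV_RenderTargetArrayIndex.
D3D11_RENDER_TARGET_VIEW_DESC rtvDesc = {};
rtvDesc.Format        = desc.Format;
rtvDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE3D;
rtvDesc.Texture3D.MipSlice    = 0;
rtvDesc.Texture3D.FirstWSlice = 0;
rtvDesc.Texture3D.WSize       = desc.Depth;

ID3D11RenderTargetView* lpvRedSH_RTV = nullptr;
device->CreateRenderTargetView( lpvRedSH, &rtvDesc, &lpvRedSH_RTV );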

But as you can see, for the VS input he uses SV_VertexID (or a position, in some other implementations):


struct VS_INJECTION_INPUT {
	uint posIndex : SV_VertexID;
};

Now my question is: how far apart is each slice from the others? Do I have to make an array of points from the 3D texture array and put it into a vertex buffer?

How can I provide this VertexID from the CPU side, and do I pass it through a vertex buffer?

I understand now what SV_VertexID is and how it can be used when we want to render without any vertex buffer: it gives us the index of the vertex.
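
So on the CPU side nothing is bound at all: you just issue a point-list draw with one vertex per RSM texel and SV_VertexID does the addressing. Roughly like this, assuming an ID3D11DeviceContext* context and a 256x256 RSM (adjust to your actual resolution):

// Sketch: inject one point per RSM texel without any vertex or index buffer.
const UINT rsmWidth  = 256;
const UINT rsmHeight = 256;

ID3D11Buffer* nullBuffer = nullptr;
UINT stride = 0, offset = 0;
context->IASetVertexBuffers( 0, 1, &nullBuffer, &stride, &offset );
context->IASetInputLayout( nullptr );	// the VS only reads SV_VertexID
context->IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_POINTLIST );

// With the injection shaders and the three SH render targets already bound:
context->Draw( rsmWidth * rsmHeight, 0 );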

In his VS code he calculates cellIndex this way:


output.cellIndex = float4( int3( (posWorld - lpv_minCorner) / lpv_cellSize + 0.5 * normalWorld ), 1.0 );

What is that lpv_minCorner? I saw that he computes that min corner while loading the models; can someone explain what it is and how I can generate it on the CPU or GPU side?

The 3D texture in LPV represents an actual AABB in world space; in your example it's described by its min/max corners, and you're trying to calculate the cell you need to write to (i.e. the cell where the VPL is located). The 0.5 * normal shift might be confusing: it's a bias to reduce bleeding. You can find most of the information you need here.
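
In other words, lpv_minCorner is just the minimum corner of that world-space box, computed on the CPU from whatever bounds you want the grid to cover (typically the scene AABB you can gather while loading the models). A rough sketch, where gridResolution and the scene bounds are made-up example values:

// Sketch: derive the LPV constants from a scene AABB computed at load time.
#include <algorithm>

struct Float3 { float x, y, z; };

const int gridResolution = 32;

Float3 sceneMin = { -10.0f, 0.0f, -10.0f };	// scene AABB min corner
Float3 sceneMax = {  10.0f, 8.0f,  10.0f };	// scene AABB max corner

// Use the largest extent so the cells stay cubical.
float largestExtent = std::max( sceneMax.x - sceneMin.x,
                      std::max( sceneMax.y - sceneMin.y, sceneMax.z - sceneMin.z ) );

float  lpv_cellSize  = largestExtent / gridResolution;	// goes into the constant buffer
Float3 lpv_minCorner = sceneMin;	// goes into the constant buffer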

So far I've made a grid that covers the whole environment with 3 textures (one each for the red, green and blue SH), implemented all the injection code above, and written the results to those textures.

Now I want to do the propagation part. This step was quite hard to understand at first, but with the help of many other articles and references I think I have an overall picture of what I have to do in theory.

Propagation is used to spread the light so that grid elements which received no injected VPLs still end up with a contribution instead of staying zero. Each grid element propagates to its six neighbouring cells, and for each of those directions there are 5 cosine lobe projections (one per visible face of the destination cell), so in the end we have 30 projections per grid element.

For the implementation, based on other articles (because I didn't understand the way it's described in the original LPV paper), we need two render passes: the first takes the previous injection result (I guess), samples the neighbouring cells, and renders into a different texture. Since I have 3 textures from the injection part, I also have to make another 3 textures to write the output computed from the previous ones. Is that right? Do you have any suggestions for implementing this part?

Two passes won't accomplish much: in each pass a cell only propagates to its immediate neighbours, which is too local to call global illumination. You will need a number of passes (depending on the 3D texture resolution), and to do that you can use the two sets of 3D textures you described with ping-pong buffering (first pass you read from A and write to B, then read from B and write to A, and so on).

This is quite an expensive procedure, and to reduce the number of iterations the Crytek paper introduces cascaded LPVs.
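
A rough sketch of that loop; LPVTextures and RunPropagationPass are made-up names standing for your three SH volumes (plus their SRVs/RTVs) and for one propagation draw that reads the source set and writes the destination set:

// Sketch: propagate by ping-ponging between two sets of SH volumes.
#include <d3d11.h>
#include <utility>

struct LPVTextures;	// red/green/blue SH volumes and their views
void RunPropagationPass( ID3D11DeviceContext* context,
                         const LPVTextures& source,	// bound as SRVs
                         LPVTextures& destination );	// bound as render targets

void PropagateLPV( ID3D11DeviceContext* context,
                   LPVTextures& volumeA,	// initially holds the injection result
                   LPVTextures& volumeB,
                   int iterations )	// e.g. ~8 for a 32^3 grid
{
	LPVTextures* read  = &volumeA;
	LPVTextures* write = &volumeB;

	for( int i = 0; i < iterations; ++i )
	{
		RunPropagationPass( context, *read, *write );

		// Unbind so the same texture is never SRV and RTV at the same time,
		// then swap the read/write roles for the next iteration.
		ID3D11ShaderResourceView* nullSRVs[3] = {};
		context->PSSetShaderResources( 0, 3, nullSRVs );
		std::swap( read, write );
	}
}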

