I'm only voxelizing it once, at the highest resolution. Then when it comes to creating the octree, I can use the voxels to subdivide the octree structure, which is much, much faster than having to test triangle-box overlap at every level. I am thinking that maybe it's better to pre-voxelize objects into a file and then load that file, so I won't have to voxelize every time I compile the project.
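Roughly what I mean by building from the voxels (an untested sketch, all names made up): OR each 2x2x2 block of the fine grid into the parent level, so the whole occupancy hierarchy falls out of the prevoxelized grid with no further geometry tests.
#include <cstdint>
#include <vector>

// One dense binary occupancy level, res^3 cells, filled by the voxelizer.
struct Level {
    int res;
    std::vector<uint8_t> occ;   // 1 = occupied
    uint8_t at(int x, int y, int z) const { return occ[(z*res + y)*res + x]; }
};

// Build coarser levels by OR-ing each 2x2x2 block of children: a parent node
// is occupied iff any of its eight child cells is occupied. The resulting
// level stack is the octree occupancy; actual node/child pointers can be laid
// out from it afterwards.
std::vector<Level> buildOccupancyPyramid(Level finest) {
    std::vector<Level> levels;
    levels.push_back(std::move(finest));
    while (levels.back().res > 1) {
        const Level& f = levels.back();
        int r = f.res / 2;
        Level c{r, std::vector<uint8_t>(size_t(r)*r*r, 0)};
        for (int z = 0; z < r; ++z)
            for (int y = 0; y < r; ++y)
                for (int x = 0; x < r; ++x)
                    for (int i = 0; i < 8; ++i)   // the 2x2x2 children
                        c.occ[(z*r + y)*r + x] |=
                            f.at(2*x + (i & 1), 2*y + ((i >> 1) & 1), 2*z + (i >> 2));
        levels.push_back(std::move(c));
    }
    return levels;
}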
You can voxelize an object once, and revoxelize it only if it deforms.
If you're going for only diffuse indirect lighting, there is a nice and fast method here:
http://graphics.cs.aueb.gr/graphics/docs/papers/RadianceHintsPreprint.pdf
It looks pretty fast and accurate, huh?
I'll be trying to implement it; I'll have to learn something about spherical harmonics first, though.
I found the source code for this technique. I'm posting it here.
First pass:
//----------------------------------------------------------------------------------
//
// Radiance hints generation fragment shader (first bounce: RSM sampling).
//
// G. Papaioannou (gepap@aueb.gr), 2011
//
//----------------------------------------------------------------------------------
//
// Abbreviations:
// CSS: Canonical Screen Space
// WCS: World Coordinate System
// ECS: Eye (camera) Coordinate System
// LCS: Light Clip Space Coordinate System
//
//----------------------------------------------------------------------------------
//
// The shader is executed for every RH (voxel) of the RH buffer (3D texture), i.e.
// for every fragment of a sweep plane through the volume. For every slice, a
// quad is drawn. The process is repeated for every RSM that contributes to the GI.
// For this reason, the blending equation should be set to ADD and blending coefs to
// GL_ONE (src), GL_ONE (tgt)
// If the blending parameters for color attachment 0 (distance data) can be separately
// set, use MIN as the blending equation.
//
// Notes:
// This is the 2nd order SH (4 coef) version of the demo/example shader source code.
// Higher-order SHs are similarly utilized. The shader could also be further optimized
// to avoid re-computing certain quantities.
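//
// Note: this excerpt omits the declarations the code relies on. From the usage
// below they would be along these lines (assumed, not from the original post):
// uniform int slice, samples, num_lights;
// uniform float R_wcs, spread;
// uniform vec3 bbox_min, bbox_max, resolution, light_pos, light_dir;
// uniform mat4 L, L_inv, L_ecs;
// uniform sampler3D Noise;
// uniform sampler2D Shadow, ShadowColor, ShadowNormal, Depth;
// plus the helpers PointWCS2CSS() and RGB2SH() defined in the full source.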
vec3 VectorLECS2WCS(in vec3 sample)
{
// For vectors, the transposed inverse matrix from Light ECS to WCS is used
// (i.e. the transposed L_ecs). You could also pass the matrix transposed
// outside the shader.
return (transpose(L_ecs)*vec4(sample,0)).xyz;
}
vec2 ShadowProjection( in vec3 point_WCS )
{
// note: projected points on the RSM are clamped to the map extents,
// not rejected
vec4 pos_LCS = L*vec4(point_WCS+vec3(0.01,0.01,0.01),1.0);
pos_LCS/=pos_LCS.w;
float reverse = sign(dot(light_dir,point_WCS-light_pos));
vec2 uv = vec2( reverse*0.5*pos_LCS.x + 0.5, reverse*0.5*pos_LCS.y + 0.5);
return clamp(uv,vec2(0.0,0.0),vec2(1.0,1.0));
}
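// NOTE (not part of the original post): RGB2SH is called in main() below but
// its definition wasn't included. A 2nd-order SH projection of the radiance
// "col" arriving from direction "dir" typically looks like this; the
// coefficient ordering in the original demo may differ:
void RGB2SH(in vec3 dir, in vec3 col, out vec4 shr, out vec4 shg, out vec4 shb)
{
vec4 sh = vec4(0.282095,          // Y(0,0)
               0.488603*dir.y,    // Y(1,-1)
               0.488603*dir.z,    // Y(1,0)
               0.488603*dir.x);   // Y(1,1)
shr = sh*col.r;
shg = sh*col.g;
shb = sh*col.b;
}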
// ---------------- Main shader function -----------------------------------------
void main(void)
{
// Determine the RH position (WCS)
int gy = int(gl_FragCoord.y);
int gx = int(gl_FragCoord.x);
int gz = slice;
vec3 extents = bbox_max-bbox_min;
vec3 stratum;
stratum.x = extents.x/(resolution.x-1);
stratum.y = extents.y/(resolution.y-1);
stratum.z = extents.z/(resolution.z-1);
vec3 pos = bbox_min + vec3(gx, gy, gz) * stratum;
// Project the RH location on the RSM and determine the
// center of the sampling disk
vec2 l_uv = ShadowProjection(pos);
// initialize measured distances from RH location to the RSM samples
vec4 SH_dist_ave = vec4(0,0,0,0);
float dist, dist_min=R_wcs, dist_max=0.0, dist_ave=0.0;
// Declare and initialize various parameters
vec4 smc; vec3 smp, smn;
vec4 SHr=vec4(0.0,0.0,0.0,0.0); // accumulated SH coefs for the incident radiance from
vec4 SHg=vec4(0.0,0.0,0.0,0.0); // all RSM samples
vec4 SHb=vec4(0.0,0.0,0.0,0.0);
vec3 normal;
vec4 color;
for (int i=0;i<samples;i++)
{
// produce a new sample location on the RSM texture
vec3 rnd = 2.0*texture3D(Noise, 14.0*pos/extents+vec3(i,0,0)/float(samples)).xyz-vec3(1.0,1.0,1.0);
vec2 uv = l_uv+vec2(rnd.x*spread*cos(6.283*rnd.y),rnd.x*spread*sin(6.283*rnd.y));
// produce a new sampling location in the RH stratum
vec3 p = pos+(0.5*rnd)*stratum;
// determine the WCS coordinates of the RSM sample position (smp) and normal (smn)
// and read the corresponding lighting (smc)
float depth = (texture2D(Shadow,uv)).z;
vec4 pos_LCS = vec4 (uv.x*2.0-1.0, uv.y*2.0-1.0, depth*2.0-1.0, 1.0);
vec4 isect4 = L_inv*pos_LCS;
smp = isect4.xyz/isect4.w;
smc=texture2D(ShadowColor,uv);
vec4 ntex = texture2D(ShadowNormal,uv);
smn = vec3(ntex.x*2.0-1.0,ntex.y*2.0-1.0,ntex.z);
smn = normalize(VectorLECS2WCS(normalize(smn)));
// Normalize distance to RSM sample
dist = distance(p,smp)/R_wcs;
// Determine the incident direction.
// Avoid very close samples (and numerical instability problems)
vec3 dir = dist<=0.007?vec3(0,0,0):normalize(p-smp);
float dotprod = max(dot(dir,smn),0.0);
float FF = dotprod/(0.1+dist*dist);
// Initialize visibility to 1
float depth_visibility = 1.0;
// set number of visibility samples along the line of sight. Can be set with #define
float vis_samples = 4.0; // 3 to 8
vec3 Qj;
vec3 Qcss;
// Check if the current RH point is hidden from view. If it is, then "visible" line-of-sight
// samples should also be hidden from view. If the RH point is visible to the camera, the
// same should hold for the visibility samples in order not to attenuate the light.
Qcss = PointWCS2CSS(p);
float rh_visibility = Qcss.z<(2.0*texture2D(Depth,0.5*Qcss.xy+vec2(0.5,0.5)).r-1.0)*1.1?1.0:-1.0;
// Estimate attenuation along line of sight
for (int j=1; j<vis_samples; j++)
{
// determine next point along the line of sight
Qj = smp+(j/vis_samples)*(pos-smp);
Qcss = PointWCS2CSS(Qj);
// modulate the visibility according to the number of hidden LoS samples
depth_visibility -= (rh_visibility*Qcss.z < rh_visibility*(2.0*texture2D(Depth,0.5*Qcss.xy+vec2(0.5,0.5)).r-1.0)) ? 0.0 : 1.0/vis_samples;
}
depth_visibility = clamp(depth_visibility,0.0,1.0);
FF *= depth_visibility;
color = smc*FF;
vec4 shr, shg, shb;
// encode radiance into SH coefs and accumulate to RH
RGB2SH (dir, color.rgb, shr, shg, shb);
SHr+=shr;
SHg+=shg;
SHb+=shb;
// update distance measurements
dist_max=(dist>dist_max )?dist:dist_max;
dist_min=(dist<dist_min )?dist:dist_min;
dist_ave+=dist;
}
// cast samples to float to resolve some weird compiler issue.
// For some reason, both floatXint and intXfloat are auto-boxed to int!
SHr /= 3.14159*float(samples);
SHg /= 3.14159*float(samples);
SHb /= 3.14159*float(samples);
dist_ave/=float(samples);
gl_FragData[0] = vec4 (dist_min,dist_max,dist_ave,1.0)/num_lights;
// distances must be divided by the number of lights this shader is run for,
// because they should not be accumulated. Ideally, the MIN/MAX frame buffer
// operator should be used instead of the ADD operator for this data channel.
// In that case (e.g. MIN), frag data[0] should contain:
// (dist_min, r_max-dist_max, dist_ave (not actually used), 1.0)
gl_FragData[1] = SHr;   // SH coefficients, accumulated via additive blending
gl_FragData[2] = SHg;
gl_FragData[3] = SHb;
}
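For the blending setup the header comments describe, the host-side GL state would look something like this (my sketch, not from the demo; it assumes a GL 4.0+ context with per-attachment blend state, functions loaded via GLEW/glad, and that the SH coefficients go to color attachments 1-3):
#include <GL/glew.h>

// Blend state matching the shader's header comments: ADD with ONE/ONE so the
// SH coefficients from successive RSMs accumulate, and MIN on attachment 0 so
// the distance channel keeps minima instead of summing.
void setupRadianceHintsBlending()
{
    glEnable(GL_BLEND);
    glBlendEquationi(0, GL_MIN);             // attachment 0: distance data
    glBlendFunci(0, GL_ONE, GL_ONE);         // (factors are ignored for MIN)
    for (GLuint buf = 1; buf <= 3; ++buf) {  // attachments 1..3: SH coefs
        glBlendEquationi(buf, GL_FUNC_ADD);
        glBlendFunci(buf, GL_ONE, GL_ONE);
    }
}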
Second pass:
//----------------------------------------------------------------------------------
//
// Radiance hints GI reconstruction fragment shader.
//
// G. Papaioannou (gepap@aueb.gr), 2011
//
//
//----------------------------------------------------------------------------------
//
// Abbreviations:
// CSS: Canonical Screen Space
// WCS: World Coordinate System
// ECS: Eye (camera) Coordinate System
//
//----------------------------------------------------------------------------------
//
// This shader assumes that a GI buffer-covering quad is being drawn. The vertex shader
// only passes the tex. coords through to the fragment shader. Tex coord set 0 is used
// for recovering the CSS coordinates of the current fragment. Bottom left: (0,0), top
// right: (1,1).
//
// Notes:
// This is the 2nd order SH (4 coef) version of the demo/example shader source code.
// Higher-order SHs are similarly utilized. The shader could also be further
// optimized to avoid re-computing certain quantities.
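//
// Note: as with the first pass, this excerpt omits declarations. From the
// usage below they would be roughly (assumed, not from the original post):
// uniform vec3 bbox_min, extents;
// uniform sampler2D RT_normals; uniform sampler3D Noise;
// plus the helpers PointCSS2WCS() and VectorECS2WCS(), and "depth",
// presumably fetched from a depth texture in an omitted part of the shader.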
// ---------------- Main shader function -----------------------------------------
void main(void)
{
// Accumulated global illumination, initialized at (0,0,0)
vec3 GI = vec3(0.0,0.0,0.0);
// Blending factor for current GI estimation frame
// If blending < 1, current result is blended with previous GI values (temporal smoothing)
float blending = 0.8;
// convert fragment position and normal to WCS
vec3 pos_css = vec3(2.0*gl_TexCoord[0].x-1.0, 2.0*gl_TexCoord[0].y-1.0, 2.0*depth-1.0);
vec3 pos_wcs = PointCSS2WCS(pos_css);
vec4 ntex = texture2D(RT_normals,gl_TexCoord[0].st);
vec3 normal_ecs = ntex.xyz;
normal_ecs.x = normal_ecs.x*2.0-1.0;
normal_ecs.y = normal_ecs.y*2.0-1.0;
normal_ecs = normalize(normal_ecs);
vec3 normal_wcs = normalize(VectorECS2WCS(normal_ecs));
// determine volume texture coordinate of current fragment location
vec3 uvw = (pos_wcs-bbox_min)/extents;
float denom = 0.05;
// Sample the RH volume at 4 locations: one directly above the shaded point,
// three on a ring 80 degrees away from the normal direction. All samples are
// moved away from the shaded point to avoid sampling RHs behind the surface.
//
// You can introduce a random rotation of the samples around the normal:
vec3 rnd = vec3(0,0,0); //texture3D(Noise,25*uvw).xyz - vec3(0.5,0.5,0.5);
// Generate the sample locations;
vec3 v_rand = vec3(0.5, 0.5, 0.5);
vec3 v_1 = normalize(cross(normal_wcs,v_rand));
vec3 v_2 = cross(normal_wcs,v_1);
vec3 D[4];
D[0] = vec3(1.0,0.0,0.0);
int i;
for (i=1; i<4; i++)
{
D[i] = vec3(0.1, 0.8*cos((rnd.x*1.5+i)*6.2832/3.0), 0.8*sin((rnd.x*1.5+i)*6.2832/3.0));
D[i] = normalize(D[i]);
}
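// (the post breaks off here; the remainder of the shader was not included)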
[quote]
I'm only voxelizing it once, at the highest resolution. Then when it comes to creating the octree, I can use the voxels to subdivide the octree structure, which is much, much faster than having to test triangle-box overlap at every level. I am thinking that maybe it's better to pre-voxelize objects into a file and then load that file, so I won't have to voxelize every time I compile the project.
[/quote]
Do we need an actual octree for this technique?
Can't we just use a mipmapped 3D texture?
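Something like this is all the setup the mipmapping route takes (a sketch, assuming a GL 4.2+ context with functions loaded via GLEW/glad):
#include <GL/glew.h>

// Allocate a 3D texture with a full mip chain; glGenerateMipmap then builds
// the coarser levels by averaging 2x2x2 blocks -- the "mipmapped 3D texture"
// alternative to an explicit octree. 256^3 gives 9 mip levels.
GLuint createVoxelVolume(int res /* e.g. 256 */, int mips /* e.g. 9 */)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glTexStorage3D(GL_TEXTURE_3D, mips, GL_RGBA8, res, res, res);
    // ... voxelize into mip level 0 (e.g. via image load/store), then:
    glGenerateMipmap(GL_TEXTURE_3D);
    // quadrilinear filtering so cone lookups can blend between levels
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}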
Saw that Radiance Hints paper a while ago, and since I've yet to see it used, I'm assuming there's something wrong with it in practice. Specifically, I'd worry about secondary occlusion; bounce lighting is a lot higher frequency than it's usually given credit for. And besides, realtime reflections are just fantastic to have (even if they're going to look odd on truly reflective surfaces).
That being said, I'd love to hear I'm wrong. If anyone has experience actually implementing radiance hints and the results it gives, I'd love to know your opinion.
[quote name='MrOMGWTF']
If you're going for only diffuse indirect lighting, there is a nice and fast method here:
http://graphics.cs.a...ntsPreprint.pdf
[/quote]
If you want a fast and proven technique for real-time diffuse indirect lighting, you should look into light propagation volumes, as proposed by Kaplanyan and Dachsbacher:
http://www6.incrysis.com/Light_Propagation_Volumes.pdf
I wouldn't call it easy to implement, but it's still a lot easier than the cone tracing technique.
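To give a feel for the idea, here's a heavily simplified, untested sketch of the propagation step: one scalar intensity per cell instead of 4 SH coefficients per color channel, and no geometry-volume occlusion. All names are made up.
#include <vector>

struct Volume {
    int n;                       // cells per axis
    std::vector<float> v;        // n*n*n values
    explicit Volume(int size) : n(size), v(size * size * size, 0.0f) {}
    float& at(int x, int y, int z) { return v[(z * n + y) * n + x]; }
};

// Each iteration every cell gathers flux from its 6 face neighbours; the
// gathered result is added to the accumulation volume and also feeds the
// next iteration, so injected VPL energy spreads through the grid.
void propagate(Volume& injected, Volume& accum, int iterations)
{
    static const int d[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
    Volume src = injected;       // iteration 0 propagates the injected VPLs
    accum = injected;
    for (int it = 0; it < iterations; ++it) {
        Volume dst(src.n);
        for (int z = 0; z < src.n; ++z)
        for (int y = 0; y < src.n; ++y)
        for (int x = 0; x < src.n; ++x) {
            float gathered = 0.0f;
            for (auto& nb : d) {
                int nx = x + nb[0], ny = y + nb[1], nz = z + nb[2];
                if (nx < 0 || ny < 0 || nz < 0 ||
                    nx >= src.n || ny >= src.n || nz >= src.n) continue;
                gathered += src.at(nx, ny, nz);
            }
            dst.at(x, y, z) = gathered / 6.0f;   // crude isotropic falloff
            accum.at(x, y, z) += dst.at(x, y, z);
        }
        src = dst;               // propagate the newly received flux next round
    }
}
The real technique stores the directional distribution as SH per cell and propagates through cell faces, but the gather-accumulate structure is the same.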
[quote]
Saw that Radiance Hints paper a while ago, and since I've yet to see it used, I'm assuming there's something wrong with it in practice. Specifically, I'd worry about secondary occlusion; bounce lighting is a lot higher frequency than it's usually given credit for. And besides, realtime reflections are just fantastic to have (even if they're going to look odd on truly reflective surfaces).
That being said, I'd love to hear I'm wrong. If anyone has experience actually implementing radiance hints and the results it gives, I'd love to know your opinion.
[/quote]
Sadly, Radiance Hints is a little-known technique. I found only one video of an implementation.
What do you mean by "bounce lighting is a lot higher frequency than is usually given credit"?
It's low frequency.
[quote name='MrOMGWTF' timestamp='1347289006' post='4978594']
If you're going for only diffuse indirect lighting, there is a nice and fast method here:
http://graphics.cs.a...ntsPreprint.pdf
If you want a fast and proven technique for real-time diffuse indirect lighting, you should look into light propagation volumes, as proposed by Kaplanyan and Dachsbacher:
http://www6.incrysis.com/Light_Propagation_Volumes.pdf
I wouldn't call it easy to implement, but it's still a lot easier than the cone tracing technique.
[/quote]
LPV doesn't support occlusion; that's its main drawback.
At least it's a pretty fast method for approximating GI.
The funny thing, actually, is this:
1. I was playing around with CryEngine 3.
2. I loaded the forest map.
3. I toggled the GI checkbox, and there was NO difference in image realism.
Is GI not that important in games? Well, maybe it's because it was a forest. A scene inside a building would need GI to look realistic, I think?
[quote name='Radikalizm']
If you want a fast and proven technique for real-time diffuse indirect lighting, you should look into light propagation volumes, as proposed by Kaplanyan and Dachsbacher: http://www6.incrysis...ion_Volumes.pdf
I wouldn't call it easy to implement, but it's still a lot easier than the cone tracing technique.
[/quote]
I'd recommend a more up-to-date iteration of LPV, which includes corrections and annotations for the original paper, plus a DX10.1 demo with source.
[quote name='Radikalizm' timestamp='1348365078' post='4982784']
If you want a fast and proven technique for real-time diffuse indirect lighting, you should look into light propagation volumes, as proposed by Kaplanyan and Dachsbacher: http://www6.incrysis...ion_Volumes.pdf
I wouldn't call it easy to implement, but it's still a lot easier than the cone tracing technique.
I'd recommend a more up-to-date iteration of LPV, which includes corrections and annotations for the original paper, plus a DX10.1 demo with source.
[/quote]
Thanks for the link, I wasn't sure which was the latest paper on the subject, as I got my information about the technique from GPU Pro 2.
[quote]
Is GI not that important in games? Well, maybe it's because it was a forest. A scene inside a building would need GI to look realistic, I think?
[/quote]
Even if you don't directly see any drastic changes to the scene, GI will still be a very important factor when it comes to realistic lighting in pretty much every setting.
EDIT:
[quote name='MrOMGWTF']
LPV doesn't support occlusion; that's its main drawback.
[/quote]
I think you're confusing the LPV technique with VPLs. Indirect occlusion can be incorporated into the LPV technique perfectly well by using a geometry volume: a second grid, built from RSM and depth-buffer samples, that stores fuzzy blocker information as SH and attenuates the flux during propagation.