Cone Tracing and Path Tracing - differences.



[quote name='gboxentertainment' timestamp='1346279316' post='4974599']
[quote name='PjotrSvetachov' timestamp='1346262028' post='4974500']
he says that in each node there is a pointer to a brick which makes me believe the bricks are stored in a compact way in the 3d texture
[/quote]

Yeh, I think it might just be this way. One thing I don't understand with this compact structure is how does trilinear filtering work with a compact structure? Wouldn't there be artifacts due to the blending of two neighbouring bricks in the 3d texture that are not actually neighbouring voxels in world space?
[/quote]
That's what the border on the 2x2x2 voxel chunks is for (it effectively makes them 3x3x3). In one of the papers Crassin explains how the trilinear sampling works, pointing out that he samples from the center of each voxel and therefore needs the 3x3x3 brick to prevent artifacts.

Ah, I think I get it now - the 3x3x3 bricks are conceptual, used only to obtain the trilinearly filtered color at each node.
So in the 3D texture I would store the bricks by taking offsets of 0.5*voxel size from each node, storing 3x3x3 voxels per brick.
For sampling, if it were a 512x512x512 3D texture, would we just sample the first node at texture coordinate (1,1,1) and the second node at texture coordinate (3,1,1), etc., so every node would be 2 texels apart?

Also, if I want to store other information like normals, would I store that in the w-direction of the 3d texture? Like, for the first node at texture coordinate (1,1,4) and the second node at (3,1,4), etc.?
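As an aside, the texel addressing being discussed here can be sketched in C++. This is a hypothetical layout (not Crassin's exact one) where each node's 3x3x3 brick occupies a non-overlapping 3-texel cube, so bricks sit 3 texels apart rather than 2; the border texels are what keep trilinear filtering from blending unrelated bricks:

```cpp
#include <cassert>

// Hypothetical brick layout: each octree node owns a 3x3x3 brick
// (2x2x2 voxels plus a 1-voxel border), packed on a 3-texel stride so
// trilinear filtering never reads a neighbouring, unrelated brick.
struct Texel3D { int x, y, z; };

const int kBrickStride = 3;

// Texel at the centre of brick (bx,by,bz) - the point you would sample
// to get the node's trilinearly filtered value.
Texel3D BrickCenterTexel(int bx, int by, int bz)
{
    return { bx * kBrickStride + 1,
             by * kBrickStride + 1,
             bz * kBrickStride + 1 };
}
```

Under this assumption the first node samples at (1,1,1) but the second at (4,1,1) - a 3-texel stride; a 2-texel stride only works if adjacent bricks share their border texels.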

The node structure is stored in linear memory. The bricks are stored in the 3D texture, but there are pointers in the node structure that tell you where the bricks are. So the brick belonging to the first node could be at (1,1,1) while the brick belonging to the second node could be at (36,4,13). Of course it's not likely, but it can be done. As for other information like normals, I'm not sure, but I believe it is stored in a separate 3D texture using the same brick layout. This way you only need one pointer to access all the information.
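A minimal sketch of what such a node record might look like (field names and bit widths are assumptions for illustration; the actual GigaVoxels layout differs in detail):

```cpp
#include <cassert>
#include <cstdint>

// One node of the sparse octree, stored in a linear node pool.
// childIndex points back into the node pool; brickPointer is a packed
// (x,y,z) texel coordinate into the 3D brick texture, so the brick can
// live anywhere in that texture, e.g. at (36,4,13).
struct OctreeNode {
    uint32_t childIndex;   // index of the first of 8 children, 0 = none
    uint32_t brickPointer; // packed brick texel coordinate
};

// 10 bits per axis is enough to address a 1024^3 brick texture.
uint32_t PackBrickPointer(uint32_t x, uint32_t y, uint32_t z)
{
    return (x & 0x3FFu) | ((y & 0x3FFu) << 10) | ((z & 0x3FFu) << 20);
}

void UnpackBrickPointer(uint32_t p, uint32_t& x, uint32_t& y, uint32_t& z)
{
    x = p & 0x3FFu;
    y = (p >> 10) & 0x3FFu;
    z = (p >> 20) & 0x3FFu;
}
```

Because the color and normal textures share the same brick layout, this single pointer addresses a node's data in both.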

I'm going to begin actually implementing this technique now and I'll continue to provide updates on each step with some of my code. All coding is done in DirectX 11 with C++.

I'll start off by voxelizing my simple scene which consists of a plane for a floor, 3 walls and several spheres and cubes. There are only 3 models loaded into a single vertex buffer (cube, sphere, plane) with instances of the cube and sphere in the scene.

I'm pretty much going to follow the GPU Pro 3 example "Practical Binary Surface and Solid Voxelization with Direct3D 11" and I will use conservative surface voxelization as used in Crassin's OpenGL Insights chapter.

At the moment I will use three 3D Textures to store position (with the scene transformed to voxel space [0...1]), color (with opacity stored in the alpha float) and normals of each voxel. These will be format R8G8B8A8_UNORM, but I will work out later a more compact way of storing my voxels. I'm leaving out texture coordinates for the meanwhile and am going to just assign direct material colors to each instance to be voxelized.

I am going to use the compute shader to voxelize because, for some reason, I can't seem to get OMSetRenderTargetsAndUnorderedAccessViews to write to any buffer objects in the pixel shader.

For now, my lowest octree level will be 512x512x512 nodes - each pointing to a brick of 3x3x3 voxels. So there would be 513x513x513 voxels in a full grid - but since this is a sparse structure, I think the 3D textures that I will use will be 513x513x3.
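The world-to-voxel-space transform mentioned above (scene mapped into [0...1], then into grid cells) could be sketched like this - a hypothetical helper, assuming a uniform scale from the scene's precomputed bounding box so voxels stay cubic:

```cpp
#include <algorithm>
#include <cassert>

struct Vec3 { float x, y, z; };

// Maps a world-space point into integer voxel coordinates.
struct VoxelGrid {
    Vec3  bbMin;      // scene bounding-box minimum
    float invExtent;  // 1 / longest bounding-box axis (uniform scale)
    int   resolution; // e.g. 512 cells per side

    void WorldToVoxel(const Vec3& p, int& vx, int& vy, int& vz) const {
        // into [0,1] voxel space, then into cells, clamped to the grid
        vx = std::min(int((p.x - bbMin.x) * invExtent * resolution), resolution - 1);
        vy = std::min(int((p.y - bbMin.y) * invExtent * resolution), resolution - 1);
        vz = std::min(int((p.z - bbMin.z) * invExtent * resolution), resolution - 1);
    }
};
```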

[quote name='gboxentertainment' timestamp='1346488176' post='4975359']
I'm going to begin actually implementing this technique now and I'll continue to provide updates on each step with some of my code. All coding is done in DirectX 11 with C++.

I'll start off by voxelizing my simple scene which consists of a plane for a floor, 3 walls and several spheres and cubes. There are only 3 models loaded into a single vertex buffer (cube, sphere, plane) with instances of the cube and sphere in the scene.

I'm pretty much going to follow the GPU Pro 3 example "Practical Binary Surface and Solid Voxelization with Direct3D 11" and I will use conservative surface voxelization as used in Crassin's OpenGL Insights chapter.

At the moment I will use three 3D Textures to store position (with the scene transformed to voxel space [0...1]), color (with opacity stored in the alpha float) and normals of each voxel. These will be format R8G8B8A8_UNORM, but I will work out later a more compact way of storing my voxels. I'm leaving out texture coordinates for the meanwhile and am going to just assign direct material colors to each instance to be voxelized.

I am going to use the compute shader to voxelize because, for some reason, I can't seem to get OMSetRenderTargetsAndUnorderedAccessViews to write to any buffer objects in the pixel shader.

For now, my lowest octree level will be 512x512x512 nodes - each pointing to a brick of 3x3x3 voxels. So there would be 513x513x513 voxels in a full grid - but since this is a sparse structure, I think the 3D textures that I will use will be 513x513x3.
[/quote]

Great, will you share the source code if you manage to successfully implement it?

Like I said, I will share some of my code - the full shader code and the relevant setup code for the textures, buffers, etc. I won't provide my entire engine source code yet - it's only experimental at this stage.

Anyway, at the moment I am trying to find the most efficient way of visualizing my voxels and octree structure for debugging. Is raycasting generally the best method to use? And would it be faster to raycast in the pixel shader or the compute shader (the GPU Pro 3 example does it in the pixel shader)?
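For what it's worth, the standard way to march a ray through a regular voxel grid is the Amanatides-Woo 3D DDA, which visits exactly the voxels the ray passes through. A CPU-side sketch (an assumed helper, not from the GPU Pro 3 sample; the same loop ports directly to a pixel or compute shader):

```cpp
#include <cassert>
#include <limits>

// March a ray through an n^3 voxel grid (Amanatides & Woo style DDA)
// and return the first occupied voxel, or false if the ray exits the
// grid. 'occupied' is any callable testing a cell; the ray origin is
// assumed to lie inside the grid already.
template <typename OccupiedFn>
bool RaycastVoxels(float ox, float oy, float oz,
                   float dx, float dy, float dz,
                   int n, OccupiedFn occupied,
                   int& hitX, int& hitY, int& hitZ)
{
    int x = (int)ox, y = (int)oy, z = (int)oz;
    int stepX = dx >= 0 ? 1 : -1, stepY = dy >= 0 ? 1 : -1, stepZ = dz >= 0 ? 1 : -1;
    const float inf = std::numeric_limits<float>::infinity();
    // parametric distance to the next voxel boundary on each axis
    float tMaxX = dx != 0 ? ((x + (stepX > 0)) - ox) / dx : inf;
    float tMaxY = dy != 0 ? ((y + (stepY > 0)) - oy) / dy : inf;
    float tMaxZ = dz != 0 ? ((z + (stepZ > 0)) - oz) / dz : inf;
    float tDeltaX = dx != 0 ? stepX / dx : inf; // distance per voxel step
    float tDeltaY = dy != 0 ? stepY / dy : inf;
    float tDeltaZ = dz != 0 ? stepZ / dz : inf;

    while (x >= 0 && x < n && y >= 0 && y < n && z >= 0 && z < n) {
        if (occupied(x, y, z)) { hitX = x; hitY = y; hitZ = z; return true; }
        if (tMaxX < tMaxY && tMaxX < tMaxZ) { x += stepX; tMaxX += tDeltaX; }
        else if (tMaxY < tMaxZ)             { y += stepY; tMaxY += tDeltaY; }
        else                                { z += stepZ; tMaxZ += tDeltaZ; }
    }
    return false;
}
```

Doing this per pixel in a full-screen pass is usually enough for a debug view; the compute-shader variant only pays off once you need shared-memory tricks.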

[quote name='gboxentertainment' timestamp='1346551382' post='4975615']
Like I said, I will share some of my code - the full shader code and the relevant setup code for the textures, buffers, etc. I won't provide my entire engine source code yet - it's only experimental at this stage.

Anyway, at the moment I am trying to find out the most efficient way of visualizing my voxels and octree structure for debugging. Is raycasting generally the best method to use? Would it be faster to raycast in the pixel shader or the compute shader (because the GPU Pro 3 example does it in the pixel shader)?
[/quote]

What's up with your implementation? It's been a while.

Glad you asked. I've spent all this time trying to get the voxelization working. I can't seem to get OMSetRenderTargetsAndUnorderedAccessViews working to voxelize via the pixel shader, so I have to use the compute shader, which I've had to learn along the way.

Anyway, a lot of time was spent trying to pass my instanced objects into the compute shader - i.e. using a single vertex buffer holding each different mesh, plus another buffer storing per-instance information, which I use to index the vertex buffer and extract my scene in the compute shader.

I've just managed to get all my triangles into voxel space (so the minimum vertex in my scene is now located at 0.5,0.5,0.5, with the scene contained in a 513x513x513 voxel coordinate system). I've also computed the bounding box for each triangle in the compute shader; now I just need to find the most efficient method of testing each triangle against each voxel within its bounding box, using 1 thread per triangle.
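The per-triangle candidate set can be sketched as follows - a hypothetical helper that clamps the triangle's voxel-space bounding box to the grid; each thread would then run a triangle-box overlap test (e.g. separating axis) on just the voxels in this range:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Int3 { int x, y, z; };

// Integer voxel range covered by a triangle's axis-aligned bounding box.
// Vertices are assumed to already be in voxel-space coordinates; the
// result is clamped to the grid.
void TriangleVoxelRange(const float v0[3], const float v1[3], const float v2[3],
                        int gridSize, Int3& lo, Int3& hi)
{
    int* lp[3] = { &lo.x, &lo.y, &lo.z };
    int* hp[3] = { &hi.x, &hi.y, &hi.z };
    for (int a = 0; a < 3; ++a) {
        float mn = std::min(v0[a], std::min(v1[a], v2[a]));
        float mx = std::max(v0[a], std::max(v1[a], v2[a]));
        *lp[a] = std::max(0, (int)std::floor(mn));
        *hp[a] = std::min(gridSize - 1, (int)std::floor(mx));
    }
}
```

One thread per triangle then loops lo..hi inclusive; thin triangles touch very few voxels, so the loop stays short in practice.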

Once I have successfully voxelized, I will post that code as a start.
MrOMGWTF, you mentioned that you already know how to do fast scene voxelization? Have you successfully implemented it? In the "Practical Binary Surface and Solid Voxelization with Direct3D 11" code, I have trouble understanding one thing that's done under the conservative surface voxelization method. I cannot understand the use of this statement: "if((flatDimensions & 3) >= 22)", used when they test the case for 1D bounding boxes (objects one voxel thick). I know that it is a bitwise AND operation, but there does not seem to be any way this statement can ever be true?
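For what it's worth, that reading looks correct: `x & 3` keeps only the two lowest bits, so the result is always in 0..3 and can never reach 22. As transcribed, the branch is dead code, so it is most likely a typo or transcription error in the sample (perhaps a different mask or constant was intended). A tiny exhaustive check:

```cpp
#include <cassert>

// (x & 3) masks to the two lowest bits, so its range is exactly {0,1,2,3}
// and the comparison against 22 can never hold.
bool FlatDimConditionEverTrue()
{
    for (unsigned x = 0; x < 4096; ++x)
        if ((x & 3u) >= 22u) return true;
    return false;
}
```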

[quote name='gboxentertainment' timestamp='1347174289' post='4978197']
MrOMGWTF, you mentioned that you already know how to do fast scene voxelization? Have you successfully implemented it? In the "Practical Binary Surface and Solid Voxelization with Direct3D 11" code, I have trouble understanding one thing that's done under the conservative surface voxelization method. I cannot understand the use of this statement: "if((flatDimensions & 3) >= 22)", used when they test the case for 1D bounding boxes (objects one voxel thick). I know that it is a bitwise AND operation, but there does not seem to be any way this statement can ever be true?
[/quote]

Yep, I know how to do fast voxelization.
And as I said before, it's in this paper (page 6):
http://graphics.snu.ac.kr/class/graphics2011/materials/paper09_voxel_gi.pdf
Since you're doing cone tracing, don't you have to voxelize a few times, each time at a lower resolution?

I'm only voxelizing it once, at the highest resolution. Then, when it comes to creating the octree, I can use the voxels to subdivide the octree structure - much, much faster than having to test triangle-box overlap at every level. I'm thinking that maybe it's better to pre-voxelize objects into a file and then load that file, so I won't have to voxelize every time I compile the project.

[quote name='gboxentertainment' timestamp='1347189980' post='4978262']
I'm only voxelizing it once, at the highest resolution. Then, when it comes to creating the octree, I can use the voxels to subdivide the octree structure - much, much faster than having to test triangle-box overlap at every level. I'm thinking that maybe it's better to pre-voxelize objects into a file and then load that file, so I won't have to voxelize every time I compile the project.
[/quote]

You can voxelize an object once, and revoxelize it only when it deforms.

You might like to check out this:

Cone-tracing GI based roughly on the Crassin approach, but approximated to run on the PS3.

[quote name='gboxentertainment' timestamp='1347269596' post='4978519']
You might like to check out this:

Cone-tracing GI based roughly on the Crassin approach, but approximated to run on the PS3.
[/quote]

Sounds awesome. I'll check it out later - I can't right now.

If you're going for only diffuse indirect lighting, there is a nice and fast method here:

It looks pretty fast and accurate, huh?
I'll be trying to implement it, though I'll have to learn something about spherical harmonics first.
I found the source code for this technique. I'm posting it here.

First pass:
[code]
//----------------------------------------------------------------------------------
//
//
// G. Papaioannou (gepap@aueb.gr), 2011
//
//----------------------------------------------------------------------------------
//
// Abbreviations:
// CSS: Canonical Screen Space
// WCS: World Coordinate System
// ECS: Eye (camera) Coordinate System
// LCS: Light Clip Space Coordinate System
//
//----------------------------------------------------------------------------------
//
// The shader is executed for every RH (voxel) of the RH buffer (3D texture), i.e.
// for every fragment of a sweep plane through the volume. For every slice, a
// quad is drawn. The process is repeated for every RSM that contributes to the GI.
// For this reason, the blending equation should be set to ADD and blending coefs to
// GL_ONE (src), GL_ONE (tgt)
// If the blending parameters for color attachment 0 (distance data) can be separately
// set, use MIN as the blending equation.
//
// Notes:
// This is the 2nd order SH (4 coef) version of the demo/example shader source code.
// higher order SHs are similarly utilized. The shader could also be further optimized
// in order to avoid re-computing certain quantities.

// ---------------- FLAGS ---------------------------------------------------------

// #define DEPTH_OCCL // if defined, depth-based RSM sample occlusion is enabled.

// ---------------- INPUT ---------------------------------------------------------

uniform ivec3 resolution; // The volume resolution
uniform int slice; // The current volume slice
uniform sampler3D Noise; // A pre-computed 3D noise texture (32X32X32). Value range (r,g,b): [0,1]
uniform float R_wcs; // Rmax: maximum sampling distance (in WCS units)
uniform vec3 bbox_min; // Bounding box limits of the radiance hints volume
uniform vec3 bbox_max; //
uniform sampler2D Shadow; // RSM depth
uniform sampler2D ShadowColor; // RSM vpl flux
uniform sampler2D ShadowNormal; // RSM normals
uniform sampler2D Depth; // camera depth buffer
uniform int samples; // Number of RSM samples
uniform mat4 L; // Light transformation matrix ( WCS -> LCS(RSM))
uniform mat4 L_inv; // Inverse light transformation matrix ( LCS(RSM) -> WCS)
uniform mat4 L_ecs; // Light modelview transformation matrix ( WCS -> Light ECS )
uniform mat4 MVP; // Final modelview and projection camera matrix ( WCS -> CSS )
uniform int num_lights; // Number of GI lights
uniform vec3 light_pos; // Current light position in WCS coords
uniform vec3 light_dir; // Current light direction in WCS coords

// ---------------- SH functions -------------------------------------------------

vec4 SHBasis (const in vec3 dir)
{
float L00 = 0.282095;
float L1_1 = 0.488603 * dir.y;
float L10 = 0.488603 * dir.z;
float L11 = 0.488603 * dir.x;
return vec4 (L11, L1_1, L10, L00);
}

void RGB2SH (in vec3 dir, in vec3 L, out vec4 sh_r, out vec4 sh_g, out vec4 sh_b)
{
vec4 sh = SHBasis (dir);
sh_r = L.r * sh;
sh_g = L.g * sh;
sh_b = L.b * sh;
}

// ---------------- Coordinate transformations -----------------------------------

vec3 PointWCS2CSS(in vec3 sample)
{
vec4 p_css = MVP*vec4(sample,1);
return p_css.xyz/p_css.w;
}

vec3 VectorLECS2WCS(in vec3 sample)
{
// For vectors, the transposed inverse matrix from Light ECS to WCS is used
// (i.e. the transposed L_ecs). You could also pass the matrix transposed
return (transpose(L_ecs)*vec4(sample,0)).xyz;
}

vec2 ShadowProjection( in vec3 point_WCS )
{
// note: projected points on the RSM are clamped to the map extents,
// not rejected
vec4 pos_LCS = L*vec4(point_WCS+vec3(0.01,0.01,0.01),1.0);
pos_LCS/=pos_LCS.w;
float reverse = sign(dot(light_dir,point_WCS-light_pos));
vec2 uv = vec2( reverse*0.5*pos_LCS.x + 0.5, reverse*0.5*pos_LCS.y + 0.5);
return clamp(uv,vec2(0.0,0.0),vec2(1.0,1.0));
}

// ---------------- Main shader function -----------------------------------------

void main(void)
{
// Determine the RH position (WCS)
int gy = int(gl_FragCoord.y);
int gx = int(gl_FragCoord.x);
int gz = slice;
vec3 extents = bbox_max-bbox_min;
vec3 stratum;
stratum.x = extents.x/(resolution.x-1);
stratum.y = extents.y/(resolution.y-1);
stratum.z = extents.z/(resolution.z-1);
vec3 pos = bbox_min + vec3(gx, gy, gz) * stratum;

// Project the RH location on the RSM and determine the
// center of the sampling disk

// initialize measured distances from RH location to the RSM samples
vec4 SH_dist_ave = vec4(0,0,0,0);
float dist, dist_min=R_wcs, dist_max=0.0, dist_ave=0.0;

// Declare and initialize various parameters
vec4 smc; vec3 smp, smn;
vec4 SHr=vec4(0.0,0.0,0.0,0.0); // accumulated SH coefs for the incident radiance from
vec4 SHg=vec4(0.0,0.0,0.0,0.0); // all RSM samples
vec4 SHb=vec4(0.0,0.0,0.0,0.0);
vec3 normal;
vec4 color;

for (int i=0;i<samples;i++)
{
// produce a new sample location on the RSM texture
vec3 rnd = 2.0*texture3D(Noise, 14.0*pos/extents+vec3(i,0,0)/float(samples)).xyz-vec3(1.0,1.0,1.0);

// produce a new sampling location in the RH stratum
vec3 p = pos+(0.5*rnd)*stratum;

// determine the WCS coordinates of the RSM sample position (smp) and normal (smn)
// and read the corresponding lighting (smc). NOTE: the four texture fetches below
// were missing from the code as posted and are reconstructed here so the loop
// compiles; the exact uv jittering in the original demo may differ.
vec2 uv = ShadowProjection(p);
float depth = texture2D(Shadow, uv).r;
smc = texture2D(ShadowColor, uv);
vec4 ntex = texture2D(ShadowNormal, uv);
vec4 pos_LCS = vec4 (uv.x*2.0-1.0, uv.y*2.0-1.0, depth*2.0-1.0, 1.0);
vec4 isect4 = L_inv*pos_LCS;
smp = isect4.xyz/isect4.w;
smn = vec3(ntex.x*2.0-1.0, ntex.y*2.0-1.0, ntex.z);
smn = normalize(VectorLECS2WCS(normalize(smn)));
smn = normalize(VectorLECS2WCS(normalize(smn)));

// Normalize distance to RSM sample
dist = distance(p,smp)/R_wcs;
// Determine the incident direction.
// Avoid very close samples (and numerical instability problems)
vec3 dir = dist<=0.007?vec3(0,0,0):normalize(p-smp);
float dotprod = max(dot(dir,smn),0.0);
float FF = dotprod/(0.1+dist*dist);

#ifdef DEPTH_OCCL
// ---- Depth-buffer-based RSM sample occlusion

// Initialize visibility to 1
float depth_visibility = 1.0;

// set number of visibility samples along the line of sight. Can be set with #define
float vis_samples = 4.0; // 3 to 8
vec3 Qj;
vec3 Qcss;

// Check if the current RH point is hidden from view. If it is, then "visible" line-of-sight
// samples should also be hidden from view. If the RH point is visible to the camera, the
// same should hold for the visibility samples in order not to attenuate the light.
Qcss = PointWCS2CSS(p);
float rh_visibility = Qcss.z<(2.0*texture2D(Depth,0.5*Qcss.xy+vec2(0.5,0.5)).r-1.0)*1.1?1.0:-1.0;

// Estimate attenuation along line of sight
for (int j=1; j<int(vis_samples); j++)
{
// determine next point along the line of sight
Qj = smp+(float(j)/vis_samples)*(pos-smp);
Qcss = PointWCS2CSS(Qj);
// modulate the visibility according to the number of hidden LoS samples
depth_visibility -= rh_visibility*Qcss.z<rh_visibility*(2.0*texture2D(Depth,0.5*Qcss.xy+vec2(0.5,0.5)).r-1.0)?0.0:1.0/vis_samples;
}
depth_visibility = clamp(depth_visibility, 0.0, 1.0);
FF *= depth_visibility;

#endif

color = smc*FF;
vec4 shr, shg, shb;
// encode radiance into SH coefs and accumulate to RH
RGB2SH (dir, color.rgb, shr, shg, shb);
SHr+=shr;
SHg+=shg;
SHb+=shb;
// update distance measurements
dist_max=(dist>dist_max )?dist:dist_max;
dist_min=(dist<dist_min )?dist:dist_min;
dist_ave+=dist;
}

// cast samples to float to resolve some weird compiler issue.
// For some reason, both floatXint and intXfloat are auto-boxed to int!
SHr/=(3.14159*float(samples));
SHg/=float(3.14159*float(samples));
SHb/=float(3.14159*float(samples));
dist_ave/=float(samples);

gl_FragData[0] = vec4 (dist_min,dist_max,dist_ave,1.0)/float(num_lights);
// distances must be divided by the number of lights this shader is run for,
// because they should not be accumulated. Ideally, the MIN/MAX frame buffer operator
// should be used instead of the ADD operator for this data channel. In that case (e.g. MIN),
// frag data[0] should contain: ( dist_min, r_max-dist_max, dist_ave (not actually used), 1.0 )

gl_FragData[1] = SHr;
gl_FragData[2] = SHg;
gl_FragData[3] = SHb;

}
[/code]
Second pass:
[code]
//----------------------------------------------------------------------------------
//
//
// G. Papaioannou (gepap@aueb.gr), 2011
//
//
//----------------------------------------------------------------------------------
//
// Abbreviations:
// CSS: Canonical Screen Space
// WCS: World Coordinate System
// ECS: Eye (camera) Coordinate System
//
//----------------------------------------------------------------------------------
//
// only passes the tex. coords through to the fragment shader. Tex coord set 0 is used
// for recovering the CSS coordinates of the current fragment. Bottom left: (0,0), top
// right: (1,1).
//
// Notes:
// This is the 2nd order SH (4 coef) version of the demo/example shader source code.
// higher order SHs are similarly utilized. The shader could also be further optimized
// in order to avoid re-computing certain quantities.

// ---------------- INPUT ---------------------------------------------------------

uniform sampler2D RT_normals; // Deferred renderer normal and depth buffers:
uniform sampler2D RT_depth; //

uniform sampler3D Noise; // A pre-computed 3D noise texture (32X32X32). Value range (r,g,b): [0,1]
uniform mat4 MVP; // Final modelview and projection camera matrix: WCS -> CSS
uniform mat4 MVP_inv; // Final inverse modelview and projection camera matrix: CSS -> WCS
uniform mat4 P_inv; // Inverse camera projection matrix: CSS -> ECS
uniform float R_wcs; // Rmax: maximum sampling distance (in WCS units)
uniform float factor; // GI contribution multiplier
uniform vec3 bbox_min; // Bounding box limits of the radiance hints volume
uniform vec3 bbox_max; //
uniform sampler3D points[4]; // Radiance Hint buffers:
// points[0]: dist_min, dist_max, dist_ave, 1.0
// points[1]: Lr(1,1) Lr(1,-1) Lr(1,0) Lr(0,0)
// points[2]: Lg(1,1) Lg(1,-1) Lg(1,0) Lg(0,0)
// points[3]: Lb(1,1) Lb(1,-1) Lb(1,0) Lb(0,0)

// ---------------- SH functions -------------------------------------------------

vec4 SHBasis (const in vec3 dir)
{
float L00 = 0.282095;
float L1_1 = 0.488603 * dir.y;
float L10 = 0.488603 * dir.z;
float L11 = 0.488603 * dir.x;
return vec4 (L11, L1_1, L10, L00);
}

vec3 SH2RGB (in vec4 sh_r, in vec4 sh_g, in vec4 sh_b, in vec3 dir)
{
vec4 Y = vec4(1.023326*dir.x, 1.023326*dir.y, 1.023326*dir.z, 0.886226);
return vec3 (dot(Y,sh_r), dot(Y,sh_g), dot(Y,sh_b));
}

// ---------------- Coordinate transformations -----------------------------------

vec3 VectorECS2WCS(in vec3 sample)
{
vec4 vector_WCS = transpose(P_inv*MVP)*vec4(sample,0);
return vector_WCS.xyz;
}

vec3 PointCSS2WCS(in vec3 sample)
{
vec4 p_wcs = MVP_inv*vec4(sample,1.0);
return p_wcs.xyz/p_wcs.w;
}

// ---------------- Main shader function -----------------------------------------

void main(void)
{
// Accumulated global illumination, initialized at (0,0,0)
vec3 GI = vec3(0.0,0.0,0.0);

// Blending factor for current GI estimation frame
// If blending < 1, current result is blended with previous GI values (temporal smoothing)
float blending = 0.8;

float depth = texture2D(RT_depth,gl_TexCoord[0].st).r;
if (depth==1.0)
{
discard; // background pixel: no GI contribution
}

vec3 extents = bbox_max-bbox_min;
ivec3 sz = textureSize(points[0],0);

// convert fragment position and normal to WCS
vec3 pos_css = vec3(2.0*gl_TexCoord[0].x-1.0, 2.0*gl_TexCoord[0].y-1.0, 2.0*depth-1.0);
vec3 pos_wcs = PointCSS2WCS(pos_css);
vec4 ntex = texture2D(RT_normals,gl_TexCoord[0].st);
vec3 normal_ecs = ntex.xyz;
normal_ecs.x = normal_ecs.x*2.0-1.0;
normal_ecs.y = normal_ecs.y*2.0-1.0;
normal_ecs = normalize(normal_ecs);
vec3 normal_wcs = normalize(VectorECS2WCS(normal_ecs));

// determine volume texture coordinate of current fragment location
vec3 uvw = (pos_wcs-bbox_min)/extents;

float denom = 0.05;

// Sample the RH volume at 4 locations, one directly above the shaded point,
// three on a ring 80degs away from the normal direction. All samples are
// moved away from the shaded point to avoid sampling RHs behind the surface.
//
// You can introduce a random rotation of the samples around the normal:
vec3 rnd = vec3(0,0,0); //texture3D(Noise,25*uvw).xyz - vec3(0.5,0.5,0.5);

// Generate the sample locations;
vec3 v_rand = vec3(0.5, 0.5, 0.5);
vec3 v_1 = normalize(cross(normal_wcs,v_rand));
vec3 v_2 = cross(normal_wcs,v_1);
vec3 D[4];
D[0] = vec3(1.0,0.0,0.0);
int i;
for (i=1; i<4; i++)
{
D[i] = vec3(0.1, 0.8*cos((rnd.x*1.5+float(i))*6.2832/3.0), 0.8*sin((rnd.x*1.5+float(i))*6.2832/3.0));
D[i] = normalize(D[i]);
}

for (i=0; i<4; i++)
{
vec3 sdir = normal_wcs*D[i].x + v_1*D[i].y + v_2*D[i].z;
vec3 uvw_new = 0.5*normal_wcs/vec3(sz) + sdir/vec3(sz) + uvw;

vec4 rh_shr = texture3D(points[1],uvw_new);
vec4 rh_shg = texture3D(points[2],uvw_new);
vec4 rh_shb = texture3D(points[3],uvw_new);

vec3 rh_pos = bbox_min+extents*uvw_new;
vec3 path = rh_pos - pos_wcs;
float dist = length(path);
float rel_dist = dist/R_wcs;
dist/=length(extents/vec3(sz));
path = normalize(path);
float contrib = dist>0.005?1.0:0.0;
GI+= contrib*SH2RGB (rh_shr, rh_shg, rh_shb, -normal_wcs);
denom+=contrib;
}
GI*=factor/denom;

// Note: tone mapping is normally required on the final GI buffer
gl_FragColor = vec4(GI,blending);
}
[/code]

[quote name='gboxentertainment' timestamp='1347189980' post='4978262']
I'm only voxelizing it once, at the highest resolution. Then, when it comes to creating the octree, I can use the voxels to subdivide the octree structure - much, much faster than having to test triangle-box overlap at every level. I'm thinking that maybe it's better to pre-voxelize objects into a file and then load that file, so I won't have to voxelize every time I compile the project.
[/quote]

Do we need an actual octree for this technique?
Can't we just use a mipmapped 3D texture?
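A mipmapped 3D texture does work as a simpler substitute, but it's dense, so the memory cost is fixed no matter how empty the scene is. Rough arithmetic for an RGBA8 volume:

```cpp
#include <cassert>
#include <cstdint>

// Bytes for a dense n^3 RGBA8 3D texture including its full mip chain.
// Each mip level halves every axis, so the chain adds roughly 1/7 extra.
uint64_t DenseMipChainBytes(uint64_t n)
{
    uint64_t total = 0;
    for (; n >= 1; n /= 2)
        total += n * n * n * 4; // 4 bytes per RGBA8 voxel
    return total;
}
```

At 512^3 the base level alone is 512 MiB (about 585 MiB with the chain), which is why the sparse octree plus bricks matters at that resolution; at 128^3 or 256^3 a dense mip chain becomes far more practical.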

I saw that Radiance Hints paper a while ago, and since I've yet to see it used I'm assuming there's something wrong with it in practice. Specifically, I'd worry about secondary occlusion; bounce lighting is a lot higher frequency than it's usually given credit for. And besides, real-time reflections are just fantastic to have (even if they'll look odd on truly reflective surfaces).

That being said, I'd love to hear I'm wrong. If anyone has experience actually implementing radiance hints and the results it gives I'd love to know your opinion.

[quote name='MrOMGWTF' timestamp='1347289006' post='4978594']
If you're going for only diffuse indirect lighting, there is a nice and fast method here:
http://graphics.cs.a...ntsPreprint.pdf
[/quote]

If you want a fast and proven technique for real-time diffuse indirect lighting you should look into light propagation volumes as proposed by Kaplanyan and Dachsbacher
http://www6.incrysis.com/Light_Propagation_Volumes.pdf

I wouldn't call it easy to implement, but it's still a lot easier than the cone tracing technique

[quote name='Frenetic Pony' timestamp='1348361865' post='4982777']
I saw that Radiance Hints paper a while ago, and since I've yet to see it used I'm assuming there's something wrong with it in practice. Specifically, I'd worry about secondary occlusion; bounce lighting is a lot higher frequency than it's usually given credit for. And besides, real-time reflections are just fantastic to have (even if they'll look odd on truly reflective surfaces).

That being said, I'd love to hear I'm wrong. If anyone has experience actually implementing radiance hints and the results it gives I'd love to know your opinion.
[/quote]

Sadly, Radiance Hints is a little-known technique. I've found only one video of an implementation.
What do you mean by "bounce lighting is a lot higher frequency than is usually given credit"?
It's low frequency.

[quote name='Radikalizm']
[quote name='MrOMGWTF' timestamp='1347289006' post='4978594']
If you're going for only diffuse indirect lighting, there is a nice and fast method here:
http://graphics.cs.a...ntsPreprint.pdf
[/quote]

If you want a fast and proven technique for real-time diffuse indirect lighting you should look into light propagation volumes as proposed by Kaplanyan and Dachsbacher
http://www6.incrysis.com/Light_Propagation_Volumes.pdf

I wouldn't call it easy to implement, but it's still a lot easier than the cone tracing technique
[/quote]

LPV doesn't support occlusion - that's its con.
At least it's a pretty fast method of approximating GI.
Actually, the funniest thing is this:
1. I was playing around with CryEngine 3.
2. I loaded the forest map.
3. I toggled the GI checkbox, and there was NO difference in image realism.

Maybe GI isn't that important in games? Well, maybe it was because it was a forest. A scene inside a building would need GI to be realistic, I think?

[quote name='Radikalizm']
If you want a fast and proven technique for real-time diffuse indirect lighting you should look into light propagation volumes as proposed by Kaplanyan and Dachsbacher
[url="http://www6.incrysis.com/Light_Propagation_Volumes.pdf"]http://www6.incrysis...ion_Volumes.pdf[/url]

I wouldn't call it easy to implement, but it's still a lot easier than the cone tracing technique
[/quote]I'd recommend a [url="http://blog.blackhc.net/2010/07/light-propagation-volumes/"]more up to date iteration of LPV[/url], which includes corrections and annotations for the original paper, plus a DX10.1 demo with source.

[quote name='Necrolis' timestamp='1348391062' post='4982855']
If you want a fast and proven technique for real-time diffuse indirect lighting you should look into light propagation volumes as proposed by Kaplanyan and Dachsbacher
[url="http://www6.incrysis.com/Light_Propagation_Volumes.pdf"]http://www6.incrysis...ion_Volumes.pdf[/url]

I wouldn't call it easy to implement, but it's still a lot easier than the cone tracing technique
[/quote]I'd recommend a [url="http://blog.blackhc.net/2010/07/light-propagation-volumes/"]more up to date iteration of LPV[/url], which includes corrections and annotations for the original paper, plus a DX10.1 demo with source
[/quote]

Thanks for the link - I wasn't too sure of the latest paper on the subject, as I get my information about the technique from GPU Pro 2.

[quote name='MrOMGWTF' timestamp='1348381188' post='4982834']
Maybe GI isn't that important in games? Well, maybe it was because it was a forest. A scene inside a building would need GI to be realistic, I think?
[/quote]

Even if you don't directly see any drastic changes to the scene, GI will still be a very important factor when it comes to realistic lighting in pretty much every setting.

EDIT:

[quote name='MrOMGWTF']
LPV doesn't support occlusion, that's its con.
[/quote]

I think you're confusing the LPV technique with VPLs. Indirect occlusion can perfectly well be incorporated in the LPV technique by using a geometry volume.

[quote name='MrOMGWTF' timestamp='1348381188' post='4982834']
[quote name='Frenetic Pony' timestamp='1348361865' post='4982777']
I saw that Radiance Hints paper a while ago, and since I've yet to see it used I'm assuming there's something wrong with it in practice. Specifically, I'd worry about secondary occlusion; bounce lighting is a lot higher frequency than it's usually given credit for. And besides, real-time reflections are just fantastic to have (even if they'll look odd on truly reflective surfaces).

That being said, I'd love to hear I'm wrong. If anyone has experience actually implementing radiance hints and the results it gives I'd love to know your opinion.
[/quote]

Sadly, Radiance Hints is a little-known technique. I've found only one video of an implementation.
What do you mean by "bounce lighting is a lot higher frequency than is usually given credit"?
It's low frequency.

[quote name='MrOMGWTF' timestamp='1347289006' post='4978594']
If you're going for only diffuse indirect lighting, there is a nice and fast method here:
[url="http://graphics.cs.a...ntsPreprint.pdf"]http://graphics.cs.a...ntsPreprint.pdf[/url]
[/quote]

If you want a fast and proven technique for real-time diffuse indirect lighting you should look into light propagation volumes as proposed by Kaplanyan and Dachsbacher
[url="http://www6.incrysis.com/Light_Propagation_Volumes.pdf"]http://www6.incrysis...ion_Volumes.pdf[/url]

I wouldn't call it easy to implement, but it's still a lot easier than the cone tracing technique
[/quote]

LPV doesn't support occlusion - that's its con.
At least it's a pretty fast method of approximating GI.
Actually, the funniest thing is this:
1. I was playing around with CryEngine 3.
2. I loaded the forest map.
3. I toggled the GI checkbox, and there was NO difference in image realism.

Maybe GI isn't that important in games? Well, maybe it was because it was a forest. A scene inside a building would need GI to be realistic, I think?
[/quote]

Wow, that's more impressive than I'd given it credit for. I suppose thin geometry is going to cause light bleeding, but then again that's a problem with voxel cone tracing as well.

Regardless, yes, bounce lighting is a lot higher frequency than is generally acknowledged. First off, it's responsible for reflections, which are obviously "high frequency". Secondly, if you look around any average house with a lot of windows, you're certainly going to see a number of fairly high-frequency shadows from indirect light. It's just that it's not as obvious and high frequency as direct light from a source, and so it's easier to approximate with something smooth and low frequency.

Also, as stated, occlusion is supported in LPV. Not that it's perfect - which is the real challenge of any GI method. Another weakness to note (as far as I know) is that it doesn't deal well with distance. Propagation just isn't a good way to get light bouncing from a mountain on one side of your map to a mountain on the other side, even though it really should. That's an extreme example you'd probably be able to ignore well enough, I know, but you still run into problems with it.

[quote name='Frenetic Pony' timestamp='1348438762' post='4983031']
Another weakness to note (as far as I know) is that it doesn't deal well with distances. Propagation just isn't a good way to get light bouncing from that mountain on one side of your map to the mountain on the other side, even though it really should. An extreme example you'd probably be able to ignore well enough I know, but you still do run into problems with it.
[/quote]

Sadly enough there's no real-time GI solution which covers all problem areas, and I don't think we should expect one all too soon either. Right now it's not about completely being physically correct, but more about being believable within the confines of your targeted environments.

I must say the Radiance Hints technique does look impressive; I hadn't heard anything about it before, but it's definitely worth implementing IMO.
Seeing as the paper about it is still quite young, it maybe hasn't gained a lot of public interest yet; that could be the reason not many implementations of it exist.