Results on Voxel Cone Tracing

11 comments, last by Lightness1024 11 years, 6 months ago

...
I mean that there is a white wall, a green wall, and a blue wall occluding the green wall. The green wall will still be illuminating the white wall, but it shouldn't, because the blue wall is occluding the green wall. Shouldn't you stop tracing at the first intersection you find? Also, you do cone tracing for each pixel, yeah?



Cone tracing voxel mipmaps means you progressively look at higher and higher mipmap levels. A higher-level mipmap stores an occlusion distribution built from its child voxels (not concrete occluders). In his case, I think he's just storing average occlusion for the voxel, and not something that varies by direction/position/etc.
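To illustrate the idea of "occlusion built from child voxels", here is a minimal sketch (not the thesis code) of building one coarser mipmap level of an occlusion volume: each parent voxel averages the opacity of its 2x2x2 children, so coarse voxels end up partially transparent.

```cpp
#include <vector>

// Sketch: downsample an n*n*n opacity grid (values in [0,1], flat
// z-major layout) into an (n/2)^3 grid by averaging 2x2x2 child blocks.
// The parent stores an average occlusion, not a concrete occluder.
std::vector<float> downsampleOcclusion(const std::vector<float>& src, int n) {
    int half = n / 2;
    std::vector<float> dst(half * half * half, 0.0f);
    auto at = [&](int x, int y, int z) {
        return src[(z * n + y) * n + x];
    };
    for (int z = 0; z < half; ++z)
        for (int y = 0; y < half; ++y)
            for (int x = 0; x < half; ++x) {
                float sum = 0.0f;
                for (int dz = 0; dz < 2; ++dz)
                    for (int dy = 0; dy < 2; ++dy)
                        for (int dx = 0; dx < 2; ++dx)
                            sum += at(2 * x + dx, 2 * y + dy, 2 * z + dz);
                // Average of the 8 children: a half-covered block
                // becomes a voxel with 0.5 opacity, not fully opaque.
                dst[(z * half + y) * half + x] = sum / 8.0f;
            }
    return dst;
}
```

A block with 4 opaque and 4 empty children yields a parent voxel with opacity 0.5, which is exactly why coarse samples must be accumulated rather than treated as binary hits.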
jameszhao00, you're right. The voxels of the highest-resolution mipmap are either completely opaque or completely transparent, but the lower-resolution voxels are usually partially transparent due to the averaging. Therefore, calculating occlusion is not as simple as looking for the first intersection (i.e. the first fully opaque voxel); when sampling the voxels we need to keep track of the accumulated opacity of all the voxels we have sampled so far. When the accumulated opacity reaches 1.0 we can stop tracing, because the next samples would be completely occluded.
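The accumulation described above is standard front-to-back compositing with early termination. A minimal sketch (the sample opacities stand in for whatever the cone reads from the mipmapped voxels):

```cpp
#include <vector>

// Sketch: composite partial opacities front to back along a cone and
// stop once occlusion saturates. Returns accumulated opacity in [0,1].
float accumulateOcclusion(const std::vector<float>& sampleAlpha) {
    float alpha = 0.0f;
    for (float a : sampleAlpha) {
        alpha += (1.0f - alpha) * a;  // front-to-back "over" compositing
        if (alpha >= 1.0f) break;     // fully occluded: stop tracing
    }
    return alpha;
}
```

A fully opaque first sample terminates the march immediately, which is the "stop at the first intersection" special case; two 0.5-opacity samples accumulate to 0.75, not 1.0, which is why partially transparent coarse voxels cannot simply be treated as hits.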
Hey man, I read your thesis, nice work there! Too specialized for your particular scene (not scalable enough) in my opinion, but still interesting, notably the novel part where you use screen-space sky visibility.
Anyway, the sparse voxel octree will let you go down to 512*512*512 leaf precision, which is better than your 128 precision, particularly for light leaks in interiors.
Also, I haven't read of a leak-suppression technique in Crassin's paper, but it should clearly be possible to apply the central-difference scheme used in Crytek's LPV to try to suppress leaks. Careful though: I have implemented it, and I can tell you it sometimes causes severe artifacts depending on the empirical strength of the anisotropic value you choose. But the good part is that with a 512 division it should be much less noticeable.

About the specular, you should really try to fix it, because it is one of the things that gives this technique most of its wow factor, and it only requires ONE cone trace where you already have 20! So it should not hurt your perf.
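The single specular cone is traced along the view direction mirrored about the surface normal, with an aperture driven by surface roughness. A minimal sketch of just the reflection direction (the cone-trace call itself is hypothetical and not shown):

```cpp
// Sketch: reflect the incoming view direction v about the unit surface
// normal n, using r = v - 2 (v . n) n. The specular cone is then traced
// along r with an aperture derived from the material's roughness.
struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 reflectDir(Vec3 v, Vec3 n) {
    float d = 2.0f * dot(v, n);
    return { v.x - d * n.x, v.y - d * n.y, v.z - d * n.z };
}
```

This matches the GLSL built-in `reflect(v, n)`, so in a shader no extra code is needed; the cost really is just one more cone on top of the diffuse ones.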

About the perf using the sparse octree: you will get lower performance. I'm sure you are familiar with the horrors Crassin had to cope with, using a global shared list of sleeping threads and multiple passes until it is empty, to say nothing of the bricks' two-pass lateral communication. This is just hell on earth, and I feel like he must be a god of GPU debugging to have gotten that working OK, or he is just bullshitting us in his paper.

The only point is to get better precision while avoiding the 9 GB of data a dense grid would otherwise need. Reminder: in his paper he already needs 1 GB, which is huge.

One question for you: I didn't follow very thoroughly the part in Crassin's paper where he talks about the light-view buffer used to perform multi-bounce GI, though it seems like a crucial part of his method. Did you implement that at all?

This topic is closed to new replies.
