Voxel Cone Tracing, raymarch collision testing problems

12 comments, last by spek 11 years, 4 months ago
I believe UE4 already uses a cascaded volume approach, right? I could be wrong, but I definitely remember lower and lower resolutions being present...

Regardless, one of the big things Epic did was to make sure not to revoxelize EVERYTHING in the scene each frame. They tag static geometry and just re-use the octrees for that geometry unless it's passing into a higher resolution cascade or moves.

Of course, they also use half- or quarter-sized buffers, and mention some magic "scattering" they do to upres it. Not sure what you'd do for thin geometry, but then again thin geometry is a problem to begin with. Anyway, even if you're asking for help, thanks for all the posts on it! Definitely a cool idea, and I'm always in support of "realtiming" it. Gameplay designers should never be told what they can and can't do with something, if and when that's possible.
Voxelizing is not the real problem in my case, because I "pre-voxelized" my models. This also works for dynamic stuff, simply by multiplying the voxel positions with the object matrix. For animated objects it's more tricky, but not impossible either. The octree construction therefore doesn't cost that much energy. Though it could still be faster by indeed only inserting static voxels once. This is a bit difficult due to camera-distance-based LOD though, so in my case quite a lot will change with each step you make anyway.
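To illustrate the trick above: re-using a pre-voxelized model for a dynamic object just means transforming each stored voxel position by the object's matrix each frame, instead of re-rasterizing the mesh into the volume. A minimal CPU sketch (all types and names are hypothetical, not from any actual engine):

```cpp
#include <array>
#include <vector>

struct Vec3 { float x, y, z; };

// Column-major 4x4 matrix, applied to a point with w = 1.
using Mat4 = std::array<float, 16>;

static Vec3 transformPoint(const Mat4& m, const Vec3& p) {
    return {
        m[0] * p.x + m[4] * p.y + m[8]  * p.z + m[12],
        m[1] * p.x + m[5] * p.y + m[9]  * p.z + m[13],
        m[2] * p.x + m[6] * p.y + m[10] * p.z + m[14],
    };
}

// World-space voxel centers for this frame: one matrix multiply per
// voxel, far cheaper than voxelizing the triangle mesh again.
std::vector<Vec3> voxelsToWorld(const std::vector<Vec3>& modelVoxels,
                                const Mat4& objectMatrix) {
    std::vector<Vec3> world;
    world.reserve(modelVoxels.size());
    for (const Vec3& v : modelVoxels)
        world.push_back(transformPoint(objectMatrix, v));
    return world;
}
```

For rigid objects this is exact; for skinned/animated meshes you would additionally have to weight each voxel by nearby bones, which is the "more tricky" part mentioned above.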

The whole performance goes to hell with the final screen pass that fires rays into the octree. But I have to say that my code is not optimized, and the hardware is getting dated.
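A much-simplified CPU sketch of why that final pass hurts, assuming a dense voxel grid rather than the sparse octree (hypothetical names throughout): every pixel walks many voxels, so the cost is roughly pixels × steps per ray, which dominates the frame time on older hardware.

```cpp
#include <cmath>
#include <vector>

struct Grid {
    int size;                     // grid is size^3 voxels
    std::vector<float> occupancy; // 1.0 = solid, 0.0 = empty

    float at(int x, int y, int z) const {
        if (x < 0 || y < 0 || z < 0 || x >= size || y >= size || z >= size)
            return 0.0f; // outside the volume counts as empty
        return occupancy[(z * size + y) * size + x];
    }
};

// March a ray in fixed-size steps until it hits an occupied voxel.
// Returns the travelled distance, or -1 if nothing was hit.
float raymarch(const Grid& g, float ox, float oy, float oz,
               float dx, float dy, float dz,
               float step, int maxSteps) {
    for (int i = 0; i < maxSteps; ++i) {
        float t = i * step;
        int x = (int)std::floor(ox + dx * t);
        int y = (int)std::floor(oy + dy * t);
        int z = (int)std::floor(oz + dz * t);
        if (g.at(x, y, z) > 0.5f)
            return t;
    }
    return -1.0f;
}
```

The real octree version trades this fixed stepping for node descent (and cone tracing widens the sample footprint with distance), but the per-pixel loop structure, and thus the cost profile, is the same.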

Thin walls are indeed a problem, though I haven't seen that many artifacts with them yet (but I've also only tried it in a few pretty simple environments). But thin geometry is also a problem with propagating volumes, 3D-texture-based raymarching, and pretty much every other realtime technique I can think of.


By the way, is UDK4 already available? I guess not, but if it is, then I really wonder if their VCT implementation runs on a "normal" gamer computer. I guess they wouldn't implement a technique that only runs on 1% of computers... unless they're planning to use VCT as the default GI technique... I could be wrong, but Crysis 2 didn't really show realtime GI either, while they showed LPV proudly.


Pretty sure the Elemental demo was running on a GTX 680/Core i7, so don't feel too bad about your performance not being up to snuff. I'd been wondering how easy pre-voxelization was, apparently not difficult, which makes this even more viable for actual games! Being limited by polycounts because you have to keep re-rasterizing everything would suuuuuck.

And yeah, LPV isn't terribly "realtime". The propagation is slow, the distance is severely limited. Maybe they've made improvements?
Ah, so Unreal 4 has banding artifacts AND a turbo GPU as well. It makes me feel less ashamed hehe. Though if I want to record another video in realtime, I'll have to buy such a card and throw the 2009 laptop in the shredder. I was thinking about baking the VCT results into a lightmap, or per vertex, so older hardware can fall back on simplified techniques without looking radically different. That's one of the problems I have right now. The GI has been switched so many times over the past years (lightmap, AO map, secondary pointlights, hand-drawn AO, LPV, VCT, shitty GI, ...) that it's hard to perfect a scene. The 3D scenes I once made look terrible now, as they haven't been tweaked for the current GI yet. So IF I finally choose VCT, I still have to have a backup method that roughly produces the same results (but not in realtime).

Using pre-voxelized models is pretty easy indeed. It costs some extra VBOs of course, and you have to keep in mind that multiple voxels may insert themselves into the same octree node. I'm using a max-blend filter instead of averaging when injecting voxels. It may also have consequences for your compute shader implementation, when it comes to safely inserting multiple voxels into the same node at the same time.
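A sketch of that max-blend injection (node layout is hypothetical): when several voxels map to the same octree node, keep the per-channel maximum instead of a running average. Shown here with C++ atomics via a compare-exchange loop; on the GPU the same idea would use atomic operations on a packed integer, which sidesteps the read-modify-write race an average has when many threads hit one node.

```cpp
#include <algorithm>
#include <atomic>
#include <cstdint>

struct Node {
    std::atomic<uint32_t> packedColor{0}; // 0x00RRGGBB
};

static uint32_t pack(uint8_t r, uint8_t g, uint8_t b) {
    return (uint32_t(r) << 16) | (uint32_t(g) << 8) | uint32_t(b);
}

// Per-channel max, applied atomically via compare-exchange on the
// packed value, so concurrent injections into one node stay safe.
void injectMaxBlend(Node& node, uint8_t r, uint8_t g, uint8_t b) {
    uint32_t old = node.packedColor.load();
    for (;;) {
        uint8_t nr = std::max<uint8_t>(r, (old >> 16) & 0xFF);
        uint8_t ng = std::max<uint8_t>(g, (old >> 8) & 0xFF);
        uint8_t nb = std::max<uint8_t>(b, old & 0xFF);
        uint32_t desired = pack(nr, ng, nb);
        if (desired == old)
            return; // already at the max; nothing to write
        if (node.packedColor.compare_exchange_weak(old, desired))
            return; // we won the race
        // another thread updated the node; old was refreshed, retry
    }
}
```

Unlike an averaging scheme, this needs no per-node counter and is order-independent: whichever thread injects last, the node ends up with the channel-wise maximum of all contributions.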

In Crysis 2 I didn't see anything in the GI change at all when moving lights or closing doors. But maybe this feature wasn't enabled on my hardware, no idea. Or maybe they decided that handmade prebaked results still beat the realtime techniques and used those instead. I'll make sure I can still manually override the GI results by painting (per vertex) darker or brighter areas. In a horror game, lighting shouldn't always be realistic anyway!

