Voxel Cone Tracing, more drama

Started by
17 comments, last by gboxentertainment 11 years, 2 months ago
You never know what kind of tricks they use hehe. But I'm pretty sure VCT (or LPV, or any other realtime GI technique) isn't enough to make the lighting truly "good". In my implementation, I have the option to manually override it by storing multiplier colors per vertex. So, if the GI results suck (or leak light, or are too bright/dark, or...), the artist can still polish things a bit with the magic hand.
Using the low(est) mipmap levels for an overall blur might not be a bad idea, especially to light particles and such. I liked the idea of having GI "everywhere" in the LPV approach. VCT, or the textured variant I'm trying right now, does not provide this by default, unless you grab it from the higher mipmap levels. I just succeeded in baking a first bounce into the voxels. The engine will have a quality setting where you can toggle the GI between:
"Suicide" = (slow) 2 realtime bounces
"Smart" = (medium) First bounce baked into the voxels (using static lights only), 2nd bounce done realtime
"Fake" = (fast) both bounces baked per vertex. Not realtime at all, but fast and not too different from the realtime variant
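The three quality modes above could be sketched roughly like this. Everything here (the `GIQuality` enum, the bounce-count split) is an illustrative assumption about how such a toggle might be structured, not the engine's actual API:

```cpp
// Hypothetical sketch of the three GI quality modes described above.
enum class GIQuality { Suicide, Smart, Fake };

struct GIConfig {
    int realtimeBounces;   // cone-traced bounces computed per frame
    int bakedBounces;      // bounces baked into voxels / vertices offline
};

GIConfig configFor(GIQuality q) {
    switch (q) {
        case GIQuality::Suicide: return {2, 0}; // slow: both bounces realtime
        case GIQuality::Smart:   return {1, 1}; // 1st bounce baked (static lights), 2nd realtime
        case GIQuality::Fake:    return {0, 2}; // everything baked per vertex
    }
    return {0, 0};
}
```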
One of my main issues with changing and trying GI techniques all the time is that my lighting gets messed up each time. It requires careful tweaking of the scene to get it looking right, but once the GI technique changed, it was all wrong again. So having the ability to toggle is kind of important for me. Hopefully this will be the last time (at least for the next few years) I implement GI :D
Attenuation can indeed be skipped, I think. Due to the mipmapping nature, distant lit surfaces already have less influence as they get mixed with other surfaces. The reason we apply attenuation to something like a point light is not because it's realistic, but because that whole lighting method works differently (read: "fakish").
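For contrast, here is the classic point-light attenuation term the post calls "fakish"; with cone tracing this term can be dropped, since the mip chain already mixes distant surfaces away. The constants are the usual textbook-style illustrative values, not from anyone's engine:

```cpp
// Classic 1 / (kc + kl*d + kq*d^2) point-light falloff.
// Constants are illustrative defaults, not engine values.
float pointLightAttenuation(float d,
                            float kc = 1.0f,
                            float kl = 0.09f,
                            float kq = 0.032f) {
    return 1.0f / (kc + kl * d + kq * d * d);
}
```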
Cheers!

While I can barely understand what is going on, it is a very interesting discussion :D


And the nice specular that reflects the lit white wall: this effect is underrated. We never had world-aware speculars before; classic lighting only gives specular from point sources :) Good job!

That particular reflection is made with RLR, not with GI raymarching :) Though I have that effect implemented too, it's only useful in situations where RLR isn't. RLR produces high-quality reflections, but only for geometry present on the screen, as it's a screen-space effect. Glossy "GI reflections" produce reflections in any situation, but only vague ones (unless you use an awfully detailed VCT octree or 3D texture resolution). It's still a useful effect, but it has to be used in combination with other reflection techniques, I think.
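A rough sketch of why glossy GI reflections come out vague: the reflection cone's footprint grows with distance, and the footprint picks the mip level of the voxel texture to sample, so distant hits are read from coarse, blurry mips. The exact formulas here are an assumption for illustration (a 0.25 m level-0 voxel matches a 128^3 texture over 32 m), not the poster's code:

```cpp
#include <algorithm>
#include <cmath>

// Cone footprint (radius) at distance t for a cone with the given half-angle.
float coneRadiusAt(float t, float halfAngleRad) {
    return t * std::tan(halfAngleRad);   // footprint grows linearly with distance
}

// Map that footprint to a mip level of the voxel 3D texture.
// voxelSize = world size of a level-0 voxel; bigger footprint -> higher (blurrier) mip.
float mipForRadius(float radius, float voxelSize) {
    return std::log2(std::max(radius / voxelSize, 1.0f));
}
```

So a rougher surface (wider cone) or a more distant reflected object both push the lookup toward higher mips, which is exactly the "only vague" behavior described above.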

Here's a glossy reflection shot. I get the feeling there is some inaccuracy, as the vertical wall bends to the right when looking at the floor reflection. But it doesn't matter that much for glossy reflections anyway :p

Neat screenshots!

Just wondering, did you solve the issues you were having? And were those screenies generated with the 3D texture or with the SVO approach?

Hey!

The shots are using the 3D texture method, thus probably roughly the same technique you are using. It solved all the "blocky artifacts" (probably due to better mipmapping), and it runs quite a lot faster as well. But, of course, I'm restricted to a limited area around the camera. The 128^3 texture covers a 32 m cube. There is a second, coarser grid, but I haven't implemented it further in the raymarcher yet.
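The camera-centered 3D texture implies a simple world-to-texture mapping; a 128^3 texture over a 32 m cube gives 0.25 m voxels. A minimal sketch, assuming a grid centered on some `gridCenter` (names and layout are illustrative, not the engine's):

```cpp
struct Vec3 { float x, y, z; };

// World extent covered by the 128^3 texture, in meters (32 m cube -> 0.25 m voxels).
const float kGridExtent = 32.0f;

// Map a world position into [0,1)^3 texture space of the camera-centered grid.
Vec3 worldToTexCoord(Vec3 worldPos, Vec3 gridCenter) {
    return {
        (worldPos.x - gridCenter.x) / kGridExtent + 0.5f,
        (worldPos.y - gridCenter.y) / kGridExtent + 0.5f,
        (worldPos.z - gridCenter.z) / kGridExtent + 0.5f,
    };
}
```

Anything outside the cube falls outside [0,1) and would have to fall back to the coarser second grid mentioned above.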

The results are acceptable now. As said, the worst artifacts are gone, though there is some weird banding here and there, and in narrow corners I would get leaks or inter-collisions. This is solved by multiplying the result with a sort of AO term (meaning if the rays collide nearby already, the color gets darkened).
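The "darken when rays collide nearby" trick could look something like this; the linear ramp and the `aoRange` parameter are assumptions for illustration, the engine may well use a different falloff curve:

```cpp
// Darken the GI result when the cone's first hit is very close,
// hiding leaks/inter-collisions in narrow corners.
// Returns a multiplier in [0,1]: 0 = fully darkened, 1 = untouched.
float occlusionDarkening(float firstHitDistance, float aoRange) {
    float t = firstHitDistance / aoRange;   // linear ramp up to aoRange
    return t < 1.0f ? t : 1.0f;
}
```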

As for the "T-junction" issue, for some reason I catch more light. Maybe I'm using slightly different cone angles here. But also, to get things lit, I cheat a bit by letting the rays penetrate walls here and there, as you suggested. This can lead to receiving false light, but most of the time you won't really notice. And the "maximum occlusion factor" is adjustable per room, so if a room really suffers from leaking, this factor can be brought down so the rays don't cheat (or cheat less).
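One way to read the per-room "maximum occlusion factor" is as a clamp on the accumulated opacity during the cone march, so cones can partially see through thin walls. A minimal sketch under that assumption (the front-to-back blend is standard; the clamp is my guess at the cheat):

```cpp
// Front-to-back opacity accumulation, clamped to a per-room maximum.
// maxOcclusion = 1.0 means no cheating (walls fully block);
// lower values deliberately let some light leak through.
float accumulateOpacity(float accumulated, float sampleAlpha, float maxOcclusion) {
    float a = accumulated + (1.0f - accumulated) * sampleAlpha; // standard blend
    return a < maxOcclusion ? a : maxOcclusion;                 // the "cheat" clamp
}
```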

Correctly inserting multiple voxels into the same cell has not been solved yet. Instead, I just apply a max-filter so the colors won't add up. This is acceptable, though when going to the bigger grid it gets more difficult, as a single cell there always gets filled by a lot of smaller voxels. One possible solution is to simply produce a second array of larger voxels that are already "mixed"; small objects would be skipped then.
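The max-filter itself is simple: keep the per-channel maximum instead of summing, so a cell can never get brighter than its brightest contributor. A sketch (the `Color` struct is illustrative; on the GPU this would typically be an atomic-max style operation):

```cpp
struct Color { float r, g, b; };

// Merge a voxel fragment into a cell with a per-channel max,
// so overlapping fragments don't brighten the cell additively.
Color maxFilter(Color cell, Color frag) {
    return { cell.r > frag.r ? cell.r : frag.r,
             cell.g > frag.g ? cell.g : frag.g,
             cell.b > frag.b ? cell.b : frag.b };
}
```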

All in all, I'm quite happy with it. But... I need to see it working in some more complicated environments first. I only have a few test rooms so far, and with these things it often happens that what works there totally sucks in another environment. So, still being a bit skeptical ;)

Have you been able to implement soft shadowing using cone tracing? According to Crassin's thesis, soft shadowing can be done using the opacity (alpha channel) values. Because these should already have been calculated during the GI cone tracing process, it should just be a matter of accumulating the opacity values with another cone trace in the fragment shader. I'm currently trying to work out how to do this.
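The accumulation itself might look like the sketch below: march a cone toward the light, sample opacity (in the real shader, from the voxel mip matching the cone footprint), accumulate front-to-back, and use the complement as visibility. `sampleOpacity` is a hypothetical stand-in for the 3D texture lookup; this is a CPU-side illustration of the idea, not shader code:

```cpp
#include <functional>

// Cone-traced soft shadow: accumulate opacity along the cone toward the
// light, front-to-back. Returns visibility: 1 = fully lit, 0 = fully shadowed.
float coneTraceShadow(float maxDist, float stepSize,
                      const std::function<float(float)>& sampleOpacity) {
    float alpha = 0.0f;
    for (float t = stepSize; t < maxDist && alpha < 0.99f; t += stepSize) {
        float a = sampleOpacity(t);      // opacity from the voxel mip chain at distance t
        alpha += (1.0f - alpha) * a;     // front-to-back accumulation
    }
    return 1.0f - alpha;
}
```

Wider cones would sample higher mips, so partially-filled coarse voxels give fractional opacity, which is where the soft penumbra comes from.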

This topic seems to have died shortly after my last post. So hopefully my latest attempt at voxel cone tracing can keep this discussion alive:

[attachment=13583:vct0.jpg][attachment=13584:vct1.jpg]

