Voxel Cone Tracing Experiment - Progress



#1 gboxentertainment   Members   -  Reputation: 766


Posted 17 February 2013 - 06:45 AM

Hi all,

 

I thought I might share with you all the latest progress of my voxel cone tracing engine, implemented in OpenGL 4.3 and based on Alex Nankervis' voxel cone tracing method (http://www.geeks3d.com/20121214/voxel-cone-tracing-global-illumination-in-opengl-4-3/) - except that I have built my own engine from scratch and implemented a number of things differently.

 

Here are a number of screenshots showing my results so far. Of course, there are many things that still need improvement, and you will notice a number of artifacts in my test engine:

 

Basic scene with cone-traced soft shadows, specular reflections, as well as specular highlights from the light:

gibox0.jpg gibox1.jpg

 

Single-bounce GI vs "unlimited" multi-bounce GI ("unlimited" in the sense that the intensity of each subsequent bounce converges towards zero):

gibox2-1.jpg gibox2.jpg

 

First voxelization of the scene (done only on the first frame), and then the revoxelization loop of the scene after cone tracing, which gives the "unlimited" bounces:

gibox2-2.jpg gibox2-3.jpg

 

Just showing off the quality of the specular reflections:

gibox3.jpg gibox4.jpg

 

and finally...

Single-bounce emissive vs multi-bounce emissive ([edit] I made a mistake here: I accidentally placed the green wall outside of the scene volume, so it was not correctly lit, which is why there is no color bleeding). The last image shows that there is a major problem with the revoxelization of the emissive scene, which leads to flickering artifacts (I guess this may not be a problem in a real game, since it can pass as a flickering-light effect):

giboxemissive1.jpg giboxemissive0.jpg giboxemissive2.jpg

 

To give you guys an idea of the scale of this scene, here are the specs:

  • 64x64x64 voxels for the entire scene
  • Runs at around 30-35 fps at 1024x768 on my GTX 485M, i7-2720QM, 8GB RAM, Windows 7 64-bit. Drops to about 23 fps if I get close to the Buddha model.
  • 64-bit OpenGL 4.3
  • 1 3D texture using dominant-axis voxelization (plus a second 3D texture for image atomic average operations, which reduces flickering artifacts significantly but doesn't eliminate them - see the averaging sketch just after this list).
  • (4 diffuse cones traced + 1 specular cone traced + 1 shadow cone traced in direction of light) x 2 (2nd time for revoxelization to achieve "unlimited" multiple bounces).
  • Buddha model is the most costly object, with over 500,000 vertices.
  • I also apply the lighting and shadows prior to voxelization (this is done in every voxelization pass). Until someone comes up with a convincing explanation, I don't see any advantage to splatting light into the 3D texture after voxelization from a light-map texture containing world-position information, which is less accurate.
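
For reference, here is a sketch of the kind of atomic averaging write this refers to, along the lines of the listing in Crassin's OpenGL Insights chapter (the helper names are illustrative, not my exact code). Voxel fragments that land in the same texel are averaged through a compare-and-swap loop instead of overwriting each other, which is what damps the flickering:

    uint convVec4ToRGBA8(vec4 v) {
        return (uint(v.w) & 0xFFu) << 24u | (uint(v.z) & 0xFFu) << 16u |
               (uint(v.y) & 0xFFu) << 8u  | (uint(v.x) & 0xFFu);
    }
    vec4 convRGBA8ToVec4(uint v) {
        return vec4(float(v & 0xFFu),          float((v >> 8u)  & 0xFFu),
                    float((v >> 16u) & 0xFFu), float((v >> 24u) & 0xFFu));
    }

    // Running-average write: rgb holds the average (0..255), alpha holds the sample count.
    void imageAtomicRGBA8Avg(layout(r32ui) coherent volatile uimage3D img,
                             ivec3 coords, vec4 val)
    {
        val.rgb *= 255.0;
        val.a = 1.0;                                  // one new sample
        uint newVal = convVec4ToRGBA8(val);
        uint prevStoredVal = 0u;
        uint curStoredVal;
        // Loop while another voxel fragment modified the texel between our read and write.
        while ((curStoredVal = imageAtomicCompSwap(img, coords, prevStoredVal, newVal))
                != prevStoredVal)
        {
            prevStoredVal = curStoredVal;
            vec4 rval = convRGBA8ToVec4(curStoredVal);
            rval.rgb *= rval.a;                       // denormalize back to a running sum
            vec4 curValF = rval + val;                // add the new sample, bump the count
            curValF.rgb /= curValF.a;                 // renormalize to the average
            newVal = convVec4ToRGBA8(curValF);
        }
    }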

Things that I need to improve (hopefully I can get some advice from this community):

 

  • If you look at the specular reflection under the red box, part of it is missing, which I think is caused by some opacity problem during the cone trace of the base mip level:

gibox0-1.jpg gibox0-2.jpg

 

  • You will have noticed that the soft shadows are very voxelized and have holes in them - again, some opacity issue that might be related to the red-box specular reflection problem. What I had to do was increase the size of just the initial cone-trace step, because the shadow cone tracing was originally producing the self-shadowing artifacts shown in the second image, even though the smaller step size produced more accurate shadows without holes in them:

giboxshadows3.jpg giboxshadows4.jpg

 

  • In the images above of the emissive test, emissive objects really bring out the incorrect filtering, because I believe I am using 2x2x2 bricks during the mip-mapping process. Another explanation could be that I am not distributing my four diffuse cones evenly enough (a sketch of one possible even distribution follows at the end of this list).
  • In some cases, activating multiple bounces actually makes the lighting look worse than a single-bounce due to the scene progressively reducing in intensity for each bounce. However, I think I can address this by turning up the intensity of the direct lighting in each bounce pass.
  • I need to implement an octree structure using 3x3x3 bricks.
  • I will probably implement some sort of performance debugging which shows the cost in ms of each action.
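
On the cone distribution point above, here is a hypothetical even layout for four diffuse cones (purely illustrative - the tilt and spacing are assumptions, not my current distribution): each cone is tilted roughly 45 degrees off the surface normal, with the four spaced 90 degrees apart in azimuth.

    // Hypothetical even spread of four diffuse cones about surface normal N.
    vec3 diffuseConeDir(vec3 N, int i)
    {
        vec3 up = (abs(N.y) < 0.99) ? vec3(0.0, 1.0, 0.0) : vec3(1.0, 0.0, 0.0);
        vec3 T  = normalize(cross(up, N));
        vec3 B  = cross(N, T);
        float az = 1.5707963 * float(i);                  // 0, 90, 180, 270 degrees
        return normalize(N + cos(az) * T + sin(az) * B);  // ~45 degrees off the normal
    }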

Edited by gboxentertainment, 17 February 2013 - 07:37 AM.



#2 mrheisenberg   Members   -  Reputation: 356


Posted 17 February 2013 - 10:20 AM

Wow, this is amazing - how much video memory does the whole thing take up?



#3 gboxentertainment   Members   -  Reputation: 766


Posted 17 February 2013 - 03:35 PM

Wow, this is amazing - how much video memory does the whole thing take up?

I believe by itself it's about 127MB so far (according to GPU-Z).



#4 Lightness1024   Members   -  Reputation: 694


Posted 18 February 2013 - 09:22 AM

I'd like to know how you do the multi-bounce with that method? Great work :)



#5 gboxentertainment   Members   -  Reputation: 766


Posted 18 February 2013 - 04:55 PM

I'd like to know how you do the multi-bounce with that method? Great work :)

Good question, I actually couldn't believe it myself when I heard about unlimited bounces. The way you would do a second bounce with voxel cone tracing is by revoxelizing the scene after the first bounce - this captures all of the lighting, which includes the indirect lighting captured by the first trace. A second mip-map filtering pass will then filter this captured indirect+direct lighting through the 3D texture mip-map levels. The second cone trace will then capture this indirect+direct lighting and thus a second bounce will automatically result - quite clever really.

 

If you re-use the same 3D texture for every bounce, you can just loop the process so that with every frame, you are voxelizing a scene that has the lighting information from the previous frame already captured.
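
A minimal GLSL sketch of that loop from the voxelization side (the uniform and helper names here are illustrative assumptions, not my actual shader): the colour written into each voxel this frame already includes indirect light cone-traced from the previous frame's volume, so each new frame contributes one more bounce.

    uniform sampler3D prevVoxelTex;   // last frame's voxelized + mip-mapped scene
    layout(r32ui) coherent volatile uniform uimage3D voxelImage;

    vec3 coneTraceDiffuse(sampler3D vol, vec3 worldPos, vec3 normal);   // assumed helper
    void imageAtomicRGBA8Avg(layout(r32ui) coherent volatile uimage3D img,
                             ivec3 coords, vec4 val);                   // averaging write

    void storeVoxel(ivec3 voxelCoord, vec3 worldPos, vec3 normal,
                    vec3 albedo, vec3 directLight)
    {
        // Indirect term traced against the previous frame's volume = the previous bounces.
        vec3 indirect = coneTraceDiffuse(prevVoxelTex, worldPos, normal);
        vec3 color    = albedo * (directLight + indirect);
        imageAtomicRGBA8Avg(voxelImage, voxelCoord, vec4(color, 1.0));
    }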

 

Of course, in my implementation it is somewhat sensitive, and there are still a lot of issues which require a lot of tweaking.



#6 Lightness1024   Members   -  Reputation: 694


Posted 19 February 2013 - 08:52 AM

Oh I see. thanks :)

Because in the original paper, the explanation of the multi-bounce technique is lost in the middle of a very long discussion about how tricky it is, rather than covering the fundamentals. Frankly, I never got it. Now it makes sense, though it sounds painfully slow.

Is there a way to optimize the reconstruction?

Maybe on the basis that the voxels have the same geometric source, and therefore the same positions, and the octree will have the same structure - so perhaps just re-feed the lighting into the existing voxels?

..

Ah, wait - you're using a grid and not an octree; that may accelerate things quite a lot.

We have yet to see somebody other than Crassin do an octree implementation of that method. I think we can wait... because frankly, from the paper it sounds like something that would take months of blind debugging. The note about the iterative pass system for constructing the octree's nodes, with waiting queues of threads causing collisions on the previous pass, is crazy. Not to mention the goddamn "brick" stuff (the 3x3x3 clusters) - that one is just ultimate crazy. I can't believe anyone can get this implementation right. Even Crassin must have bugs in his stuff.



#7 gboxentertainment   Members   -  Reputation: 766


Posted 19 February 2013 - 05:29 PM

Ah, wait - you're using a grid and not an octree; that may accelerate things quite a lot.

 

Yes, you are correct. Unlimited bounces require dynamic voxelization, and with an octree structure that would just be too expensive to do every frame for an entire scene. Because I'm not using an octree, my implementation only costs roughly double the number of cone traces (the scene only needs to be revoxelized and mip-mapped once per frame).

 

I was originally going to try to implement an octree structure; however, after hearing about sparse textures on the new AMD GPUs, I'm not sure whether it would be worth the time and effort. Unless someone can prove me wrong, I believe that sparse textures may make octrees irrelevant - or alternatively, for extremely large scenes, you could use some sort of high-level (low-detail) octree structure and use sparse textures for the finer details.

 

Then again, if the next generation of consumer NVIDIA cards has dynamic parallelism available (which is currently only available on the overvalued Teslas), then I believe that octrees can become very fast and it would be reasonable to build them per frame (e.g. the galaxy clusters tech demo).



#8 gboxentertainment   Members   -  Reputation: 766


Posted 20 February 2013 - 07:19 AM

I've recently discovered that the previous images I posted of a purely emissive scene were set up wrongly - I had mistakenly left part of the spotlight in my calculations, so it was not a truly emissive scene.

 

It took a bit of tweaking to get the purely emissive lighting working properly:

giboxemissive3.jpg

 

The infinite-bounce lighting seems to be extremely sensitive to adjustments in parameters. If I get it wrong, this occurs:

giboxemissivewrong.jpg

 

Here's another problem that I just can't seem to fix:

giboxemissive4.jpg giboxemissive5.jpg

You will notice that when the Buddha is placed to the left of the light-sphere, there is a volume of space where, if it falls within it, it blocks out all light to the red wall. This does not occur at all when it is to the right of the light-sphere.

It occurs in the single-bounce scene as well, so it must be some sort of mip-mapping error. I've tested the scene with only one cone traced along each surface normal and the problem still occurs, so it can't have anything to do with not distributing the cones evenly enough.

 

Another issue you will notice is that the red wall seems to be much better lit than the blue wall, even though the sphere is slightly closer to the blue wall. If I move the red wall a little further to the right, the lighting becomes a bit more even:

giboxemissive6.jpg

This has nothing to do with proximity to the light - it seems to be something to do with the mip-map filtering or voxelization, where objects in certain locations are not voxelized or mip-map filtered correctly.

 

The key difficulty of implementing physically correct emissive lighting is the amount of tweaking required in all of the indirect lighting parameters. The mip-map filtering that I have done really loses its effect at higher intensities. When I move the light-sphere, there is no smooth transition between higher-level voxels, so you get jumps in the lighting. I guess emissive lighting can only be used subtly, as it would be infeasible for lighting an entire scene. I would probably also put a limit on the number of bounces.


Edited by gboxentertainment, 20 February 2013 - 07:47 AM.


#9 Lightness1024   Members   -  Reputation: 694


Posted 21 February 2013 - 09:42 AM

I think the same way, but it is common, as you did, to discover bugs by random chance and to improve quality by fixing them.

Though sometimes fixing a bug degrades quality or, worse, invalidates all of your parameter fine-tuning and you have to redo everything.

It happened to me I don't know how many times when I implemented light propagation volumes.

I made it support the sun and fine-tuned that nicely; then I added support for spot lights, and fine-tuning the spots broke the sun. There was no way to make them work together because of the huge intensity differences - the spots would be ridiculously weak and contribute nothing to the indirect solution.

It caused accumulation problems, because LPV works by accumulating Reflective Shadow Map pixels (VPLs) into the light volume, and if you add 1000 VPLs to one voxel you must choose your range very carefully so that it respects the precision of the FP16 format. That gave me terrible color tints, with curious jumps from green to pink when I varied a light's intensity, just because some floating-point increment would kick in at different times for different color channels.

So the "fix" is to rescale everything but doing that is a hell. nothing can stick to theretical values that would ensure energy conservation for instance. because none of these values work in practice. (for example, the flux should be solid-angle*intensity thus the flux through one face of a cube is 4*PI/6 * intensity, using this formula creates energy gain, an epsilon has to be used to empirically modulate that so that propagation attenuates cutely...)

All of this shi** is hell on earth, and really makes one hope that we could simply do stupid Monte Carlo path tracing at each frame and be done with it.

one day....



#10 gboxentertainment   Members   -  Reputation: 766


Posted 22 February 2013 - 10:13 PM

really makes one hope that we could simply do stupid Monte Carlo path tracing at each frame and be done with it.

 

I've thought about implementing real-time path tracing with random parameter filtering to minimize the noise, but I got lost in the process of creating an acceleration structure. Now I've gone back to VCT, and I probably still need to implement an acceleration structure (octrees) anyway. Maybe one day I'll get back into path tracing. In the meantime, I've gotten too far into this VCT engine, and it's at least getting some half-decent results despite the many artifacts. It doesn't have to be perfect, but I guess it would be good to iron out as many of the artifacts as possible.

 

Anyhow, here's my latest results on standard lighting + colored emissive lighting in the same scene:

giboxemissive7.jpg giboxemissive8.jpg

 

The main question is: would emissive objects still have shadows from the spotlight in the scene? Anyhow, I've worked out a way of making objects not cast a shadow if they are emissive. As for having non-emissive objects cast shadows from the emissive object's light, this implementation probably doesn't have the accuracy for that to be possible; in any case, I cannot make the emissive object strong enough with the current mip-map level fade ratio, so I will need to add some sort of emissive intensity factor to the mip-map process.

 

[EDIT]

So it turns out that emissive objects do cast shadows provided that the object is not brighter than the scene light.

giboxemissive9.jpg

 

Also, I forgot to mention that in an attempt to make the emissive object physically based, those artifacts occur with multiple bounces. Something to do with opacity issues again.

 

[EDIT 2]

I've also just realised that in order to increase the emissive object's intensity, I would need to find a way of altering the mip-map of that particular object. Because the mip-map filtering process takes the entire scene (inside the 3D texture), objects are not treated individually. I think the only way around this is to have an emissivity falloff factor included in each voxel during voxelization.


Edited by gboxentertainment, 23 February 2013 - 12:31 AM.


#11 gboxentertainment   Members   -  Reputation: 766


Posted 23 February 2013 - 08:03 AM

Here's transparency:

giboxtransparency0.jpg giboxtransparency1.jpg

 

I'm not entirely sure how correct this is - the back face seems to be on top in the depth order. I've tried placing the code:

	glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
	glEnable(GL_BLEND);
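	// Note: the relative order of these blend calls and glEnable(GL_DEPTH_TEST) has no
	// effect on the blending itself; what usually decides which face "wins" is whether
	// depth writes are still enabled (glDepthMask) and whether transparent faces are
	// drawn back to front.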

 

both before and after enabling the depth test, but it doesn't make any difference. You will notice that when I go behind the glass Buddha only his back shows, whereas when I am in front of him both his back and front faces are shown. There is no back-face culling that I am aware of.

 

Other things I have to add are refractions and caustics.



#12 gboxentertainment   Members   -  Reputation: 766

Like
0Likes
Like

Posted 23 February 2013 - 09:24 AM

Cone-traced refraction without transparency (Translucency):

 

giboxrefraction.jpg

 

So it seems that I've got three choices:

  1. Render using alpha-blended transparency, which cannot provide refractions (but still looks cool).
  2. Cone trace the refraction by copying the specular cone trace and replacing the reflection vector with a refraction vector (see the sketch after this list) - very slow (13fps with the Buddha) and still very blurry even with a 0.01 cone ratio. At some view directions, the object does not look translucent at all.
  3. Render refraction using cube maps - this may provide the most accurate image, but I'd imagine it would also be quite slow. I might give this a go though.
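
A minimal sketch of what option 2 amounts to (coneTraceSpecular and eta are assumptions for illustration): the cone direction simply comes from refract() instead of reflect(), and the rest of the specular cone trace is reused.

    vec3 refractionConeDir(vec3 viewDir, vec3 normal, float eta)
    {
        vec3 dir = refract(normalize(viewDir), normalize(normal), eta);
        if (dot(dir, dir) == 0.0)                       // total internal reflection
            dir = reflect(normalize(viewDir), normalize(normal));
        return dir;  // then: color = coneTraceSpecular(worldPos, dir, coneRatio);
    }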

giboxrefraction1.jpg

 

The sphere is a lot faster, as expected [27 fps].

I still would like to get clearer refractions.

 

Here's a translucent cube:

giboxrefraction2.jpg

 

One thing that is missing from cone-traced refraction is that the back faces are not captured - I'm not sure whether there is a way to do this properly or to fake it.


Edited by gboxentertainment, 23 February 2013 - 09:56 AM.


#13 Frenetic Pony   Members   -  Reputation: 1186


Posted 23 February 2013 - 04:22 PM

I'm going to go ahead and say "accurate refraction, who cares?" I don't particularly see the appeal of it, not at the performance hit being talked about, and not over just a screen-space refraction anyhow.

 

It's nice in theory, but if you actually look at the results it's not really that spectacular unless you are looking at some very specific object designed to show it off.

 

A thing that might be worth exploring more is voxel anti-aliasing. If your mesh partially covers a voxel, might it be better to have that voxel "partially" filled? Assuming you can get it working, I wonder what the cone-traced shadows would look like. At best it might also help with emissive stuff "popping" into a neighboring voxel, and its associated light popping as well. You could get as many emissive objects as you'd want in a scene; unlimited shadow-casting lights definitely sounds like a worthy goal, even if the shadowing is going to be a bit blocky or something.

 

Great work so far :)


Edited by Frenetic Pony, 23 February 2013 - 04:23 PM.


#14 gboxentertainment   Members   -  Reputation: 766


Posted 23 February 2013 - 07:38 PM

Really, I've just been trying to get refractions as accurate as I can, for the sake of seeing if I can. But you are correct; I think I will just push this one aside. Being a natural engineer, I sometimes become too obsessed with the little details - although that's what got me into graphics programming in the first place.

 

More importantly though, I do need to work out a way of either getting more accurate cone-traced soft shadows, or somehow masking the very voxelized core of the shadow that is closest to the occluder. I think the latter will be more achievable:

 

My idea is to render a shadow map and blend it with the cone-traced soft shadow. What I would start off doing is fading out the shadow map at a certain distance from each occluder. I remember being able to do this with Unreal Engine 3's shadows, but I never understood the technical specifics of it. Does anyone know of a way of doing this?
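
One possible way to do the blend, as a minimal sketch (the names and the distance-based fade are assumptions, not an existing implementation): let the shadow map mask the blocky core near the occluder and fade it out with distance, so the cone-traced penumbra takes over further away.

    float blendedShadow(float coneShadow,   // 0..1 visibility from the shadow cone trace
                        float mapShadow,    // 0..1 visibility from a standard shadow map
                        float occluderDist, // estimated receiver-to-occluder distance
                        float fadeDist)     // distance over which the shadow map fades out
    {
        float t = clamp(occluderDist / fadeDist, 0.0, 1.0);  // 0 near occluder, 1 far away
        return mix(mapShadow, coneShadow, t);
    }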

 

givox7.jpg

 

Another thing I have to resolve: if you look at the translucent cube, you will notice its shadow is incorrect. This is related to the issue in the following image, where the reflection of the red box only shows the top and side voxels as opaque, and anything in shadow is transparent:

 

gibox0-1.jpg

 

Technically, the base mip should be completely opaque with an alpha value of 1 - even when I try forcing the base mip alpha values to 1 the problem still persists.

 

A thing that might be worth exploring more is voxel anti-aliasing. If your mesh partially covers a voxel, might it be better to have that voxel "partially" filled?

 

So currently, my voxels are just blended together, and when I sample for the cone trace I take a voxel offset in six directions:

 

givox8.jpg

 

which pretty much achieves what you are saying - unless you are referring to something else? Anyhow, I have heard that an octree structure might be able to provide more accurate voxel anti-aliasing, which is something I will implement further down the track.
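
For illustration, here is a minimal sketch of a directional neighbour blend in that spirit (the weighting and one-texel offsets are assumptions for the example, not my exact scheme): pick the offset sign along each axis from the cone direction and weight the three taps by the squared direction components so they sum to one.

    vec4 sampleVoxelDirectional(sampler3D voxels, vec3 uvw, vec3 dir,
                                float texelSize, float lod)
    {
        vec3 w = dir * dir;                 // weights sum to 1 for a unit direction
        vec3 s = sign(dir) * texelSize;     // offset towards the cone direction
        vec4 cx = textureLod(voxels, uvw + vec3(s.x, 0.0, 0.0), lod);
        vec4 cy = textureLod(voxels, uvw + vec3(0.0, s.y, 0.0), lod);
        vec4 cz = textureLod(voxels, uvw + vec3(0.0, 0.0, s.z), lod);
        return w.x * cx + w.y * cy + w.z * cz;
    }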



#15 gboxentertainment   Members   -  Reputation: 766


Posted 26 February 2013 - 08:16 AM

Here's some more cone-traced transparency tests, where I've made everything semi-transparent:

 

giboxtransparency2.jpg giboxtransparency3.jpg giboxtransparency4.jpg

 

Just as a test, when I make everything else completely opaque, this is what I see when I look through a transparent sphere:

giboxtransparency5.jpg

 

Just like the specular issue, everything else becomes transparent (notice the red wall especially) when it should not.

I know that it is to do with the way values are accumulated from the cone trace after the samples are filtered.

 

Filtering is required for smooth speculars and smooth refraction; however, because every object is only surface-voxelized, the opaque surface voxels are filtered together with the transparent empty voxels inside each mesh. When the cone traces through an object, it does not saturate immediately after hitting the object, due to the semi-opacity, and thus continues on, adding values from behind the object - which leads to the transparent results.
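
For reference, the standard front-to-back accumulation used along a cone (a generic sketch, not my exact code) shows why: whatever opacity the filtered surface voxels fail to reach is left over as transmittance, and that lets samples from behind the object contribute.

    void accumulate(inout vec4 acc, vec4 sampleVal)
    {
        acc.rgb += (1.0 - acc.a) * sampleVal.a * sampleVal.rgb;  // colour weighted by remaining transmittance
        acc.a   += (1.0 - acc.a) * sampleVal.a;                  // opacity saturates towards 1
    }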

 

I guess the best way to resolve this is to use solid voxelization; however, I'm pretty sure that Crassin uses surface voxelization according to his OpenGL Insights chapter, and in one of his videos he shows a very defined specular image with the moving hand, and there are no opacity issues.

 

I'm not sure whether this is because I am not filtering correctly, since I am not using 3x3x3 bricks. The issue is still present when I use a 512x512x512 voxel resolution for this small scene.


Edited by gboxentertainment, 26 February 2013 - 08:18 AM.


#16 gboxentertainment   Members   -  Reputation: 766


Posted 28 February 2013 - 06:03 AM

I might as well show off the effects that my current engine can achieve, before I proceed to improving it by implementing solid voxelization and octrees:

 

giboxbumps0.jpg gibox6.jpg

gibox7.jpg gibox8.jpg

giboxemissive10.jpg gibox-cyber.jpg

 

I'm trying to put together a video soon as well.



#17 gboxentertainment   Members   -  Reputation: 766


Posted 06 March 2013 - 04:20 AM

Stress test:

 

100 spheres ~24fps

gibox-stress0.jpg gibox-stress1.jpg

 

1000 spheres ~12fps

gibox-stress2.jpg



#18 bwhiting   Members   -  Reputation: 647


Posted 06 March 2013 - 05:08 AM

Looks really good! Video footage is a must to get a better idea of how it is in action/motion :D

How much time have you spent optimizing i.e. is there room for improvement or have you exhausted everything?



#19 gboxentertainment   Members   -  Reputation: 766


Posted 06 March 2013 - 05:58 AM

Looks really good! Video footage is a must to get a better idea of how it is in action/motion
How much time have you spent optimizing i.e. is there room for improvement or have you exhausted everything?

 

Actually, I have not optimized very much at all. No octrees... yet. The scene is entirely within one 64x64x64 3D texture, and the entire scene is revoxelized every frame.

I never even purposely implemented any other screen-space techniques - apart from the cone tracing itself, which is naturally performed per screen pixel.

The only other optimization that I had done was substituting a low-poly version of the Stanford Buddha model for the voxelization pass.

 

I am in the process of creating a tech demo video that shows off each effect. Recently I've been sidetracked by a sudden obsession with trying to implement tessellation. I'd also like to fix the shadow issue - I believe that switching to solid voxelization will resolve the problem... hopefully. But I guess I'll end up finishing the video before that.



#20 gboxentertainment   Members   -  Reputation: 766


Posted 19 March 2013 - 08:17 AM

I have finally put up a video:

[embedded video]