Voxel Cone Tracing, more drama

Started by spek. 18 posts in this topic.

Oh, one easier-to-answer little side question... I noticed performance suffers quite a lot when choosing a different struct size for my octree nodes (each node is stored in a VBO). Not sure what I did, I believe reducing the size from 128 bytes to 64 or so. I thought that would make things a bit faster, but performance actually dropped a lot. Maybe that was a bug elsewhere, but anyway: what is a desirable struct size for the GPU? Right now my voxels are 64 bytes and octree nodes 128 bytes (using some padding to reach 128 bytes).
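For reference, a minimal sketch of what I mean by the padding (GLSL-style, std430 layout; all names here are hypothetical, not my actual node). A vec4 is 16-byte aligned in the std140/std430 layouts, so building the node out of 16-byte pieces makes the total size explicit and keeps a single node from straddling cache lines:

[code]
// Hypothetical 128-byte octree node built from 16-byte aligned members.
struct OctreeNode
{
    ivec4 children[2];  // 8 child indices            -> 32 bytes
    vec4  brickCoord;   // xyz = brick atlas position -> 16 bytes
    vec4  colorOpacity; // rgb = albedo, a = opacity  -> 16 bytes
    vec4  pad[4];       // explicit padding           -> 64 bytes, 128 total
};

layout(std430, binding = 0) buffer NodeBuffer
{
    OctreeNode nodes[];
};
[/code]

Whether 64 or 128 bytes ends up faster likely depends on the access pattern and the hardware's cache line size, so this only illustrates keeping the size a clean power of two.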
jcabeleira:
[quote name='spek']
Let's walk through the issues. But first: there might be bugs in my implementation that contribute to these errors. Then again, ALL the raymarching / 3D-texture techniques I've tried so far show the same problems, not just VCT. So maybe I keep making the same mistakes.
[/quote]
Then you must be doing something wrong. I have an implementation working with 3D textures that gives flawless results for the diffuse GI (smooth and realistic lighting, no artifacts or banding).
[quote name='spek']
1- Light doesn't spread that far
In Tower22, the environments are often narrow corridors. If you shine a light there, the opposite wall, floor, or ceiling catches light, making an "area" around a spotlight. But that's pretty much it. Not that I expect too much from a single bounce, but in Unreal 4 the area is noticeably affected by incoming light, even if it's just a narrow beam falling through a ceiling gap. The light gradually fades in or out, rather than popping in a messy way on surfaces that suddenly catch a piece of light.
[/quote]
Try removing the opacity calculation from the cone tracing and see if the light spreads further. I'm suggesting this because I've seen the voxel opacity block too much light and cause some of the symptoms you describe. In your case the light has to travel through corridors, which is problematic: in the mipmapped representation of the scene, the opacity of the walls is propagated into the empty space of the corridor, causing unwanted occlusion.
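To make that concrete, here's a rough sketch of the usual front-to-back accumulation inside a single cone trace (sampleVoxel() is a hypothetical helper returning rgb radiance plus alpha opacity for the current cone diameter; origin, dir, startDist, coneHalfAngle, and maxDist are assumed inputs). Commenting out the opacity update is exactly the experiment I mean:

[code]
vec4 accum = vec4(0.0);
float dist = startDist;
while (accum.a < 1.0 && dist < maxDist)
{
    // cone footprint grows with distance
    float diameter = 2.0 * dist * tan(coneHalfAngle);
    vec4 s = sampleVoxel(origin + dir * dist, diameter);

    // front-to-back compositing: earlier opacity blocks later light
    accum.rgb += (1.0 - accum.a) * s.a * s.rgb;
    accum.a   += (1.0 - accum.a) * s.a; // <- disable this line to test leakage

    dist += diameter * 0.5;
}
[/code]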
[quote name='spek']
Assuming the light only comes from the top-left corner, how do the shadowed parts of the red carpet receive light if there's only 1 bounce? The first bounce fires the light back into the air, which explains the ceiling, spheres, and walls receiving some "red" from the carpet. But the floor itself should remain mostly black.
[/quote]
The red carpet receives some reflected light from the object in the center of the room, which is directly lit by the sun.
[quote name='spek']
This has to do with mipmapping problems, see picture. The higher mipmap levels aren't always smoothed, making them look Minecrafted. This happens because when I sample from the brick corners in their child nodes, those locations may not be filled by geometry at that exact spot (thus black pixels). If so, the mipmapper samples from the brick center instead, to prevent the result turning black. Well, it's a difficult story.
[/quote]
You'll need your mipmapping to work perfectly for the technique to work; taking shortcuts like this will hurt the quality badly. Make sure the octree mipmapping gives the same results as a mipmapped 3D texture.
[quote name='spek']
My previous post explained it pretty well with pictures, and the conclusion was that you will always have banding errors to some degree, unless you do truly crazy tricks. In fact, user jcabeleira showed me a picture of Unreal 4's banding artifacts. However, those bands looked far less horrible than mine. Is it because they apply very strong blurring afterwards? I think their initial input already looks smoother. Also, in the Crassin video the glossy reflections look pretty smooth. Probably he uses a finer octree, more rays, more steps, and so on. Or is there something else I should know about?
[/quote]
The banding artifacts in the Unreal 4 demo are smooth because their mipmapping is good; I'm convinced that most of your problems are caused by your mipmapping not being right yet. And yes, Crassin probably doesn't show the banding because he used a high resolution (which is only possible because his test scene is really small).

spek:
Thanks again, you both.

 

Indeed, I think the mipmapping issues are causing the tiled look. It will be hard to solve, but eventually I'll find something. I've also been thinking about scrapping the whole brick idea (which is already problematic on my somewhat older hardware) and falling back on 3D textures covering the world, like you described earlier. But instead of raymarching through the textures (thus sampling 3 textures each step), I could still use the VCT octree to check whether there is anything useful to sample at a given point. So, a hybrid solution. I think sampling textures is actually faster than traversing an SVO, but the hybrid might be worth a try, especially if the opacity calculation becomes more complicated (see below).

 

If I may ask, how do your textures cover the world (texture count, resolution, how many cubic centimeters each pixel covers)? I suppose you use a cascaded approach like LPV does, with multiple differently sized textures following the camera. Do you actually mipmap anything, or just interpolate between the multiple textures?

 

 

 

Anyhow, you said your solution doesn't show banding errors. How did you manage that? Even with more steps and only using the finest mipmap level (which is smooth, as you can see in the shots above), banding keeps occurring because of the sampling coordinate offsets. In the shots above you'll see the banding on the wall; the finest brown bands are sampled from smoothed bricks (mipmap level 0). For the record:

* the smallest octree nodes are 25 cm per side

* the ray takes about 4 steps in each node (slightly fewer further out, as the travel distance increases each step depending on the cone angle)

Of course, there could still be a bug in the sample coordinates, but I'd say band-free sampling is simply impossible, at least not with the way I push the ray forward.
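To be explicit about how I push the ray forward, the per-step bookkeeping looks roughly like this (variable names approximate). Because the step size and mip level both follow the growing cone diameter, the sample points drift relative to the voxel grid, which is where I believe the banding comes from:

[code]
// voxelSize is the edge length of the finest node (25 cm here)
float diameter = 2.0 * dist * tan(coneHalfAngle);
float mipLevel = max(0.0, log2(diameter / voxelSize)); // coarser data further out
dist += max(diameter, voxelSize) * 0.25;               // ~4 samples per node near the apex
[/code]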

 

 

 

>> Try to remove the opacity calculation

Good idea, and I just did: no blocking at all, just to see if those dark T-junction corridor parts would catch light now (and to exclude other possible bugs). And... yes! A lot more light everywhere. As you say, the corridors quickly close in, blocking light at the higher mipmap levels. But of course, simply removing the opacity also leads to light leaks (and much longer rays = slower).

 

Unless there is another smart trick, the only way to fix this is to give the voxels more info. In my case the environment typically consists of multiple rooms and corridors close to each other. Voxels could record which room they belong to; if the ray suddenly samples values from another room while the occlusion factor is already high, you know you probably skipped a wall. But that sounds like one of those half-working solutions.

 

 

>> The red carpet receives some reflected light from the object in the center

True, but would that really produce that much light? Maybe my math is wrong, but each pixel in my case launches 9 rays and the result is their average. In this particular scenario the carpet further away may hit the object with only 1 or 2 rays, while the carpet right beneath it hits it many more times. So in my case the distant carpet would probably be too dark, or the carpet below the object too bright. In the shot, though, the light spreads more equally (realistically).
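For context, my diffuse gather is a plain average over the 9 cones, roughly like the sketch below (coneDirs[], tangentToWorld(), and coneTrace() are hypothetical stand-ins for my actual setup). A cosine-weighted average, as shown, would dampen the grazing cones; whether that explains the difference with the UE4 shot is pure speculation on my part:

[code]
vec3 gatherDiffuse(vec3 pos, vec3 n)
{
    vec3 sum = vec3(0.0);
    float wSum = 0.0;
    for (int i = 0; i < 9; ++i)
    {
        vec3 dir = tangentToWorld(n, coneDirs[i]); // tilt the cones around the normal
        float w = max(dot(n, dir), 0.0);           // Lambert weight per cone
        sum  += w * coneTrace(pos, dir, DIFFUSE_HALF_ANGLE).rgb;
        wSum += w;
    }
    return sum / max(wSum, 1e-4); // weighted average instead of a plain mean
}
[/code]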

 

 

@Frenetic Pony

Thanks for the OpenGL demo, though it doesn't run on this computer, hehe. I tried digging out the shaders, but aside from some common and mipmapping shaders, I couldn't find the raymarching part.

 

Probably Unreal 4's GI lighting solution isn't purely VCT indeed, but all in all it seems good enough for true realtime graphics. One of my ideas is a 2-bounce system. Sure, my computer is too slow to even do 1 bounce properly, but I could make a quality setting that toggles between 0, 1, or 2 realtime bounces. If I pick only 1, the first bounce is baked into the geometry (using static lights only). Not 100% realtime then, but then none of the solutions in today's games are. A supercomputer could eventually toggle to 2 realtime bounces.

 

And yes, extras like SSAO, secondary point lights, or manually overriding the coloring of some parts should stay. In the horror game I'm making, we don't always want realistic lighting! Horror scenarios often have large contrasts between bright and dark.

 

Not sure if I get the Pixar method right... you mean they just add some color to all voxels in the scene?

 

Cheers

jcabeleira:
[quote name='spek']
If I may ask, how do your textures cover the world (texture count, resolution, how many cubic centimeters each pixel covers)? I suppose you use a cascaded approach like LPV does, with multiple differently sized textures following the camera. Do you actually mipmap anything, or just interpolate between the multiple textures?
[/quote]

The biggest limitation of my implementation is precisely the world coverage, which is currently very limited. I'm using a 128x128x128 volume represented by six 3D textures (6 textures for the anisotropic voxel representation that Crassin uses) that covers an area of 30x30x30 meters.
I'm planning on implementing the cascaded approach very soon, which should only require small changes to the cone tracing algorithm. Essentially, when the cone exits the smaller volume it should start sampling immediately from the bigger volume. I don't think it's necessary to interpolate between the two volumes when moving from one to the other, in particular for the diffuse GI, which tends to smooth everything out. But if some kind of seam or artifact appears, an interpolation scheme like the one used for LPVs could be used here too.
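In pseudo-shader terms the hand-off could be as simple as the sketch below (uniform names hypothetical). One subtlety: the LOD needs rebasing between the volumes, because a coarse voxel covers more world space than a fine one:

[code]
vec4 sampleCascades(vec3 worldPos, float mipLevel)
{
    // try the fine cascade centered on the camera first
    vec3 uvw = (worldPos - fineOrigin) / fineSize;
    if (all(greaterThanEqual(uvw, vec3(0.0))) && all(lessThanEqual(uvw, vec3(1.0))))
        return textureLod(fineVolume, uvw, mipLevel);

    // otherwise fall through to the coarse cascade
    uvw = (worldPos - coarseOrigin) / coarseSize;
    // e.g. if coarse voxels are 4x larger, shift the LOD by log2(4) = 2
    return textureLod(coarseVolume, uvw, max(mipLevel - 2.0, 0.0));
}
[/code]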

[quote name='spek']
Anyhow, you said your solution doesn't show banding errors. How did you manage that? Even with more steps and only using the finest mipmap level (which is smooth, as you can see in the shots above), banding keeps occurring because of the sampling coordinate offsets.
[/quote]

I didn't have to do anything. The only banding I get is in the specular reflection, and it's smooth, as seen in the UE4 screenshot I showed you. For the diffuse GI you shouldn't get any banding whatsoever, because tracing with wide cone angles smooths everything out.
What do you mean by the banding being caused by the sampling coordinate offsets?

[quote name='spek']
>> The red carpet receives some reflected light from the object in the center
True, but would that really produce that much light? Maybe my math is wrong, but each pixel in my case launches 9 rays and the result is their average. In this particular scenario the carpet further away may hit the object with only 1 or 2 rays, while the carpet right beneath it hits it many more times. So in my case the distant carpet would probably be too dark, or the carpet below the object too bright. In the shot, though, the light spreads more equally (realistically).
[/quote]

Now that you mention it, it probably shouldn't receive that much light. From what I've seen in my implementation, the light bleeding with VCT tends to be a bit excessive, probably because no distance attenuation is applied to the cone tracing (another thing for my TODO list). I'm not sure whether UE4 uses distance attenuation; their GDC presentation doesn't mention it, and I believe Crassin's paper doesn't either. That's definitely something we should investigate.

[quote name='spek']
Probably Unreal 4's GI lighting solution isn't purely VCT indeed, but all in all it seems good enough for true realtime graphics. One of my ideas is a 2-bounce system. Sure, my computer is too slow to even do 1 bounce properly, but I could make a quality setting that toggles between 0, 1, or 2 realtime bounces. If I pick only 1, the first bounce is baked into the geometry (using static lights only). Not 100% realtime then, but then none of the solutions in today's games are. A supercomputer could eventually toggle to 2 realtime bounces.
[/quote]

A few days ago I implemented a 2-bounce VCT by voxelizing the scene once with direct lighting, and then voxelizing it again with direct lighting plus the diffuse GI (generated by cone tracing the first voxel volume). The results are similar to the single-bounce GI, with the difference that surfaces that were previously too dark because they couldn't receive bounced light are now properly lit, resulting in more uniform lighting.
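In outline (my reading of it, names made up):

[code]
// two-bounce VCT, sketched:
//   volume0 = voxelize(scene, directLight);
//   volume1 = voxelize(scene, directLight + coneTraceDiffuse(volume0));
// the final per-pixel pass then cone traces volume1, so every pixel
// effectively receives two bounces of indirect light
[/code]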
spek:
I've been making 3D textures with the world injected directly into them as well, using 3 grids, so I can compare with VCT. I did that a few times before, although I didn't really use the "cone sampling" concept back then, which led to serious undersampling issues. Instead I just fired some rays on a fine grid, repeated the whole thing on a coarser grid, and lerped between the results based on distance.


However, I'm running into some old enemies again. Probably you recognize them (and hopefully fixed them as well :) ).

* Mipmapping
Probably I should do it manually, because simply calling glGenerateMipmap( GL_TEXTURE_3D ) kills the framerate. Instead I could loop through all mipmap levels and re-inject all voxels for each level. Injecting is more costly, but there are far fewer voxels than pixels in a 128^3 texture (times 6, and 2 or 3 grids).


* Injecting multiple voxels into the same pixel
The voxels are 25 cm per side in my case, so when inserting them into a bigger grid, or when thin walls/objects are close to each other, multiple voxels can end up injected into the same pixel. Additive blending leads to overly bright values. Max filtering works well for the finest grid, but doesn't allow partially occluding a cell (for example, you'd want at least 16 voxels before a 1 m cell fully occludes).

I should be averaging, perhaps by summing the number of voxels inserted into a particular cell (i.e., additive blend first, then divide by that count). But there is a catch: the values are spread over 6 directional textures, so you might insert only half the occlusion of a voxel into a cell for a particular side. How do you average that?


* Edit
Still super slow due to my lazy mipmapping approach so far, but the results look much better than what I had with VCT. Indeed no banding, except for specular reflections using a very narrow cone. And the light seems to spread further as well. Yet fixing the problems stated above will be a bitch, as will the occlusion problem: making the walls occlude as they should and block light in narrow corridors, while reducing the occlusion gives leaks. I guess the only true solution is using more, narrower rays. I'm curious what the framerate will do; if it's higher than with VCT, maybe I can spend a few more rays, though I'm more interested in eventually adding a bounce.

As for the limited size, right now I'm making 2 grids: one 128^3 texture covering 32 m (thus 25 cm per pixel), and a second grid covering 128 m (thus 1 m per pixel). Far enough for my mostly indoor scenes. Outdoor scenes or really big-ass indoor areas should switch over to coarser grids. Having flexible sizes is not impossible to implement; we could eventually fade over to a larger or smaller grid when walking from one area into another. That may lead to some weird flickers during the transition though...


Merry Christmas btw!
jcabeleira:

[quote name='spek' timestamp='1356369029' post='5013972']
Probably I should do it manually, because simply calling glGenerateMipmap( GL_TEXTURE_3D ) kills the framerate. Instead I could loop through all mipmap levels and re-inject all voxels for each level. Injecting is more costly, but there are far fewer voxels than pixels in a 128^3 texture (times 6, and 2 or 3 grids).
[/quote]

 

Yeah, I can confirm that glGenerateMipmap(GL_TEXTURE_3D) kills the framerate. I'm not sure why, but it seems the driver performs the mipmapping on the CPU.

I think the best alternative is simply a shader that computes each mipmap level from the previous one, because it runs fast and is fairly easy to do. Re-injecting the voxels into each mipmap level, as you suggest, seems overkill and may not give the desired results, because you need the information about the empty voxels of the previous mipmap level to obtain partially transparent voxels on the new level. Are you doing this?
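As a sketch of the idea (written as a GL 4.3 compute shader for brevity; on older hardware the same 2x2x2 box filter can be done by rendering each slice of the destination level into an FBO). One dispatch per mip level, with the destination level bound as an image:

[code]
#version 430
layout(local_size_x = 4, local_size_y = 4, local_size_z = 4) in;
layout(binding = 0) uniform sampler3D srcTex;                  // the volume texture
layout(binding = 1, rgba8) writeonly uniform image3D dstLevel; // level being built
uniform int srcLod;                                            // previous (finer) level

void main()
{
    ivec3 dst = ivec3(gl_GlobalInvocationID);
    ivec3 src = dst * 2;
    vec4 sum = vec4(0.0);
    for (int z = 0; z < 2; ++z)
        for (int y = 0; y < 2; ++y)
            for (int x = 0; x < 2; ++x)
                sum += texelFetch(srcTex, src + ivec3(x, y, z), srcLod);
    // plain box filter: empty children (alpha = 0) dilute the result,
    // which is what yields the partially transparent coarse voxels
    imageStore(dstLevel, dst, sum * 0.125);
}
[/code]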

 

[quote name='spek' timestamp='1356369029' post='5013972']
I should be averaging, perhaps by summing the number of voxels inserted into a particular cell (i.e., additive blend first, then divide by that count). But there is a catch: the values are spread over 6 directional textures, so you might insert only half the occlusion of a voxel into a cell for a particular side. How do you average that?
[/quote]

 

Yeah, averaging is the right thing to do. Regarding the directional textures, you should probably average them as usual too. Just make sure you don't increment the counter when the voxel doesn't contribute to the radiance (i.e., when the weight of the voxel for that particular directional texture is <= 0.0).

 

PS: Merry Christmas to you too and to everyone else reading this ;D

spek:

I think the mipmapping is so damn slow because it goes through all pixels, several times, for 6 textures. I've replaced it with a manual shader now. It injects the voxels again (as points), so those points perform a simple box filter only at the places where it's needed. The results are slightly different from the automatic mipmap (can't say worse or better, it varies a bit), probably because my filter is a bit different and because the plotting & sampling coordinates aren't 100% the same. Getting those right is a bitch with Cg and 3D textures. I haven't tried skipping the voxel injection and performing a simplified mipmapping yet... I could render one long horizontal quad (as a 2D object) to catch all layers at once. Far fewer draw calls, but more useless pixels to filter.

 

Anyhow, the framerate rose from 3 to 15 fps, which is not bad at all for my old nVidia 9800M craptop card! For the record:

- the framerate was already pretty low due to lots of other effects (somewhere around ~24)

- the GI effect includes an upscale filter that brings the quarter-resolution GI buffer back to full size, polishing the jagged edges

- only 1 grid is used so far (128^3 texture, each pixel covering 25 cm)

- 9 diffuse rays, 1 specular ray

- with VCT, the framerate was ~5; both the construction & the raymarching go a lot faster with simple texturing

 

And more importantly, the results finally look sort of satisfying. Maybe I can show a Christmas shot today or tomorrow, hehe. I still think a second bounce is needed if you really want a single light to illuminate a corridor "completely". But more important for now is to implement the second grid first, and to add some baking options so the produced GI can be stored per vertex or in a lightmap for older videocards that can't run this technique realtime properly. That would also allow baking a first bounce (with static lights only) and doing a second bounce realtime...

 

 

Averaging

Right now some of the corners appear as brighter spots in the result, probably because they got a double dose of light indeed. But summing & averaging... For example, say I have 2 RED voxels being inserted into the same pixel: one faces exactly in the +X direction, the other only a little bit. The injection code looks like this:

[code]
<<enable additive blending>>
...
float3 ambiCube;
ambiCube.x = dot( float3( +1, 0, 0 ), voxelNormal );
ambiCube.y = dot( float3( 0, +1, 0 ), voxelNormal );
ambiCube.z = dot( float3( 0, 0, +1 ), voxelNormal );
ambiCube = abs( ambiCube );

// Insertion
if ( voxelNormal.x > 0 )
{
    outputColor_PosX.rgba = ambiCube.xxxx * float4( voxelLittenColor.rgb, 1 );
    ++outputCounter_PosX;
}
... and so on for the 5 other directions
[/code]

So the accumulated result could be rgba{ 1, 0, 0, 1 } + rgba{ 0.1, 0, 0, 0.1 } = rgba{ 1.1, 0, 0, 1.1 }.

 

When dividing by an integer count (2 in this case), I get a result that's too dark. Yet if the other voxel had been rotated slightly further (not contributing to the +X axis at all), the result would have been bright red. Dividing by its own occlusion sum (1.1) gives a correct result in this particular example, but not if I had inserted only the second voxel; in that case it would get too bright: rgba{ 0.1, 0, 0, 0.1 } / 0.1 = rgba{ 1, 0, 0, 1 }.
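One possible way out, though I haven't tried it: accumulate the directional weight itself (a float) instead of an integer count, and normalize with a floor of one. The two-voxel case then divides by 1.1 as desired, while a lone grazing voxel divides by 1.0 and stays dim instead of being boosted to full red:

[code]
// hedged sketch: rgb = sum of weight * color, a = sum of weights
vec4 accum = texelFetch(accumTexPosX, cell, 0); // hypothetical accumulation texture
vec3 resolved = accum.rgb / max(accum.a, 1.0);  // the floor avoids amplifying
                                                // a single partial contribution
[/code]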

 

That's why I haven't found a good way yet. I had the same problem with plenty of other similar GI techniques btw (LPV for example). When the voxels are as big as your cells, I'd just use max filtering instead of averaging, but that doesn't work well when inserting the voxels into a much coarser grid.

 

 

Thanks for helping,

Ciao!

Another user:

About the red carpet being too lit: I think some UE4 talk mentioned a system for coarse multi-bounce tracing, based on some idea I couldn't grasp. So they might very well have that carpet lit by the ceiling, or even by volumetric scattering; the air in that sample looks very foggy, so they may have some kind of light scattering technique (an energy diffuser). Or some local light probes to complement lighting coming from large area lights like the sky.

The ground lighting outside the borders of a direct light that lit the ground (and thus the ceiling indirectly, in 1-bounce solutions) was an artifact that also happened in my implementation of LPV, for a reason I never found. I always suspected incrementally amplified errors in the propagation, caused by the strong blurring of the SH encoding.

In VCT it could come from the lowest-resolution voxel mipmaps, which get everything so mixed up that they just end up emitting a vague blur of energy everywhere.

But when the sources are mostly the ground facing upward... I agree that this shot looks weird because of that.

 

About the distance attenuation: it should not be done. We always treat air as a non-participating medium that does not attenuate energy with distance. So the earlier question has to be answered with simple radiance and flux concepts. A voxel emits a given radiance over a given surface, which amounts to a given total energy, the flux. The resulting irradiance on a distant surface patch is computable via the solid angle, and only that should be considered (along with the "virtual" area of the patch as if it were rotated to face the source of radiance, which is what the Lambert term stands for and why every radiance formula has it).
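Spelled out, the point is that the distance falloff already lives inside the solid-angle term, so no extra attenuation factor should be added:

[code]
E = L * cos(theta_r) * dOmega,   with   dOmega ~= (A_s * cos(theta_s)) / d^2

// E   = irradiance arriving at the receiving patch
// L   = radiance emitted by the voxel, A_s = its area, d = distance
// the 1/d^2 is already inside the solid angle dOmega
[/code]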

Well, my 2 cents. I hope I'm not missing the point too much :)

spek:
You never know what kind of tricks they use, hehe. But I'm pretty sure VCT (or LPV, or any other realtime GI technique) alone isn't enough to make the lighting truly "good". In my implementation I have the option to manually override by storing multiplier colors per vertex. So if the GI results suck (leak light, get too bright or dark, or...), the artist can still polish a bit with the magic hand.
 
Using the low(est) mipmap levels for an overall blur might not be a bad idea, especially to light particles and such. I liked how the LPV approach has GI "everywhere"; VCT, or the textured variant I'm trying right now, doesn't provide that by default, unless you grab it from the higher mipmap levels. I just succeeded in baking a first bounce into the voxels. The engine will have a quality setting that toggles the GI between:
"Suicide" = (slow) 2 realtime bounces
"Smart" = (medium) first bounce baked into the voxels (using static lights only), 2nd bounce done realtime
"Fake" = (fast) both bounces baked per vertex. Not realtime at all, but fast and not too different from the realtime variant
 
One of my main issues with changing and trying GI all the time is that my lighting gets messed up each time. It requires careful tweaking of the scene to get it to look right, and once the GI technique changed, it was all wrong again. So having the ability to toggle is kind of important for me. Hopefully this will be the last time (at least for the next few years) I implement GI :D
[attachment: post-80126-0-78925500-1356733118_thumb.j]
 
Attenuation can indeed be skipped, I think. Due to the mipmapping, distant lit surfaces already have less influence as they get mixed with other surfaces. The reason we apply attenuation to something like a point light is not that it's realistic, but that its whole lighting method works differently (read: "fakish").
 
Cheers!
Another user:

And the nice specular that reflects the lit white wall... this effect is underrated. We never had world-aware speculars before; classic lighting only gives speculars from point sources :) Good job!

spek:

That particular reflection is made with RLR, not with GI raymarching :) Though I have that effect implemented too; it's only useful in situations where RLR isn't. RLR produces high-quality reflections, but only for things present on screen, since it's a screen-space effect. Glossy "GI reflections" work in any situation, but only vaguely (unless you make an awfully detailed VCT octree or 3D texture). It's still a useful effect, but it has to be combined with other reflection techniques, I think.

spek:

Here's a glossy reflection shot. I have a feeling there's some inaccuracy, as the vertical wall bends to the right in the floor reflection. But it doesn't matter that much for glossy reflections anyway :p

Another user:

Neat screenshots!

Just wondering, did you solve the issues you were having? And were those screenies generated with the 3D texture or with the SVO approach?

spek:

Hey!

The shots use the 3D texture method, thus probably roughly the same technique you are using. It solved all the "blocky artifacts" (probably due to better mipmapping), and it runs quite a lot faster as well. But of course I'm restricted to a limited area around the camera. The 128^3 texture covers 32 m; there is a second, coarser grid, but I haven't implemented it in the raymarcher yet.

 

The results are acceptable now. As said, the worst artifacts are gone, though there is some weird banding here and there, and in narrow corners I'd get leaks or inter-collisions. This is solved by multiplying the result with a sort of AO term (meaning that if the rays collide nearby already, the color gets darkened).

 

As for the "T-junction" issue, for some reason I catch more light. Maybe I'm using slight different cone angles here. But also, to get things light, I cheat a bit by letting the rays penetrate walls here and there a bit as you suggested. This can lead to receiving false light, but most of the time, you won't really notice. And the "maximum occlusion factor"  is adjustable per room. So if a room suffers from leaking really, this factor can be brought down so the rays don't cheat (or less).

 

 

Correctly inserting multiple voxels into the same cell hasn't been solved yet. Instead I just apply a max filter so the colors won't add up. This is acceptable, though it gets more difficult with the bigger grid, as a single cell always gets filled by a lot of smaller voxels. One possible solution is to simply produce a second array of larger, already "mixed" voxels; small objects would then be skipped.

 

All in all I'm quite happy with it. But... I have to see it working in some more complicated environments first. I only have a few test rooms so far, and with these things it often happens that a technique totally sucks in another environment. So I'm still a bit skeptical ;)

Another user:

Have you been able to implement soft shadowing using cone tracing? According to Crassin's thesis, soft shadowing can be done using the opacity (alpha channel) values. Because these should already be calculated during the GI cone tracing process, it should just be a matter of accumulating the opacity values in another cone trace in the fragment shader. I'm currently trying to work out how to do this.
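A rough sketch of how that could look, assuming the same cone-marching helpers as the GI trace (sampleVoxelAlpha() is hypothetical): the cone apex sits at the surface, aims at the light, and only alpha is accumulated. A wider half-angle gives a softer penumbra:

[code]
float shadowCone(vec3 pos, vec3 toLight, float lightDist, float halfAngle)
{
    float occ = 0.0;
    float dist = voxelSize; // small offset to avoid self-shadowing at the apex
    while (occ < 1.0 && dist < lightDist)
    {
        float diameter = 2.0 * dist * tan(halfAngle);
        float a = sampleVoxelAlpha(pos + toLight * dist, diameter);
        occ  += (1.0 - occ) * a; // same front-to-back alpha as the GI trace
        dist += diameter * 0.5;
    }
    return 1.0 - occ; // 1 = fully lit, 0 = fully shadowed
}
[/code]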


This topic seems to have died shortly after my last post, so hopefully my latest attempt at voxel cone tracing can keep the discussion alive:

 

[attachment=13583:vct0.jpg][attachment=13584:vct1.jpg]
