Another reason for the transparency is that you are sampling voxels along the view ray, but none of the samples hits the exact center of an opaque voxel, so they never reach full opacity. This is because the voxel volume texture is configured to use bilinear filtering: whenever you sample an opaque voxel without hitting its center, you get a partially transparent result, the interpolation between the opaque voxel and its transparent neighbors.
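To make the effect concrete, here is a minimal sketch (names and values are illustrative, not from any engine) of a 1D analogue of that bilinear filtering: an opaque voxel sits between empty ones, and any sample that misses its exact center comes back partially transparent.

```python
# 1D analogue of GL_LINEAR filtering over voxel alphas. An opaque voxel
# (alpha = 1.0) sits next to empty ones (alpha = 0.0); sampling anywhere
# but the opaque voxel's center returns an interpolated, partial opacity.

def sample_filtered(voxels, x):
    """Linearly interpolate voxel alphas along one axis, like GL_LINEAR."""
    i = int(x)                          # index of the left voxel
    t = x - i                           # fractional offset from its center
    a = voxels[i]
    b = voxels[min(i + 1, len(voxels) - 1)]
    return a * (1.0 - t) + b * t

voxels = [0.0, 1.0, 0.0]                # transparent, opaque, transparent

print(sample_filtered(voxels, 1.0))     # exact center: fully opaque, 1.0
print(sample_filtered(voxels, 1.25))    # off-center: only 0.75 opacity
```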
Posted by jcabeleira on 30 September 2013 - 10:13 AM
What I described is an alternative to picking the nearest cubemap in a forward pass, like you mentioned. The environment probes that I'm talking about are exactly like deferred lights except they don't do point-lighting in the pixel shader and instead do cubemap lighting. This is the exact same technique Crytek uses for environment lighting (in fact, much of the code is shared between them and regular lights). So, in short, they do work like deferred lights. Just not like deferred point lights.
You're right, I didn't know they used light probes like that. It's a neat trick, useful when you want to give GI to some key areas of the scene.
Posted by jcabeleira on 26 September 2013 - 04:23 AM
Well, despite what the article says, Mantle is not really revolutionary. It's simply taking a step backward, trading portability for performance. I'm not saying it is bad, I'm just saying that it's a classic tradeoff choice.
I'm not even sure it will bring that much performance benefit, because it may speed up draw calls but it won't speed up graphics that are bound by fill rate. Once again, it's a tradeoff that may prove useful for some developers, but it sure isn't a one-size-fits-all option.
Posted by jcabeleira on 10 September 2013 - 02:58 AM
It is, in a way, not as bad as before, but not really great. I could play with the scale factors and so on.
However, you also seem to agree that the base problem is there and that it might be difficult or impossible to overcome without "cheating" a little bit.
Any other ideas, someone?
Actually it looks good, that's pretty much how LPVs look when applied to a scene on their own. Maybe your second volume should be larger to extend the reach of the GI; ensure you have a setup of volumes that gives you GI for the whole room (no black walls). Once you have that, you can try a more "decent" scene and add direct lighting to see how the GI looks on it, you may be surprised with the results. ;)
Then for rendering I took (3*Cascade1Value + 2*Cascade2Value + Cascade3Value)/6 for a position in the finest cascade. I know that summing things up like that is physically completely wrong, I just tried it to see the visual result.
It's not physically correct, but then again neither is computing the GI in 2 different volumes. We only do this due to technical limitations; ideally we would want a single high-resolution volume covering the entire scene. Since we can't get it, we use several volumes with different coverage areas to get a fairly decent estimate of the scene radiance. The effect takes some hand tuning, so play with the volume weights until you get something that looks good.
Posted by jcabeleira on 09 September 2013 - 10:46 AM
Yes, the light will travel a greater distance in the larger volumes. But I think you're missing the point. I assume you are using the cascades like this:
if distance < 5 then
    radiance = sampleFromCascade1()
else if distance < 20 then
    radiance = sampleFromCascade2()
else
    radiance = 0
However, they are not mutually exclusive, they should be combined like this:
radiance = sampleFromCascade1() + sampleFromCascade2() <- (you can weight each cascade to tune the effect)
Why? Because the larger cascades are used to provide a global illumination effect for a large area. With them, two distant walls can bleed light onto each other because the larger cascades cover the area between them. However, the detail of the illumination won't be great because the resolution is too low for the covered area.
The smaller cascades are used to refine the detail of the global illumination where it's important: near the viewer. In this case you're using a small propagation volume to simulate the interaction between smaller objects that are near the viewer.
By combining the two you complement the rough global illumination of the large cascade with the fine detail of the smaller one. You'll get rough light bouncing for the entire scene, and rough plus fine light bouncing for nearby objects.
Of course, with this approach you might be adding some duplicate bounces but in general that is not noticeable and the whole effect looks very good.
BTW, the gradient of the lighting I can see on the walls in your screenshots looks exactly as expected. Your implementation seems to be on the right track.
Posted by jcabeleira on 26 August 2013 - 09:38 AM
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
This should be done after the texture has been created and bound. These calls set texture state, so they need a bound texture to apply to. ;)
Posted by jcabeleira on 01 July 2013 - 07:23 AM
In fact at E3 I noticed that Knack has dynamic reflections and soft shadows, which looked like they might be generated using voxelization.
If you look at the video you can see that it's screen space reflections, because the reflection disappears when the reflected object goes off screen. Pay attention to the glowing lights on the wall on the right: http://www.youtube.com/watch?feature=player_detailpage&v=iG98LuaYj_g#t=267s.
IMO, soft shadows are probably done with a common shadow mapping technique.
Posted by jcabeleira on 14 May 2013 - 03:39 PM
Hello, guys. Please forgive the shameless self-promotion, I'd just like to share and discuss the results I got with my voxel cone tracing implementation. In my last post I presented an early version of my own implementation of global illumination using voxel cone tracing; since then I've made a lot of improvements and ended up creating a demo, which you can find here. Very soon I'll be writing a set of articles explaining the implementation in more detail, but for now I'll outline the most important aspects:
My implementation uses two 128x128x128 3D textures to store the voxelized scene, a small one and a larger one: the small one covers a 30x30x30 m volume of the scene while the larger one covers a 120x120x120 m volume. Together they provide a voxelized representation of the scene with enough quality and range for the GI processing. The volumes are fixed at the world origin; they do not follow the camera, because that makes the lighting unstable. I know this is impractical for a video game, but it's enough to cover the small scene of my demo.
No sparse voxel octree is used. I concluded from my early experiments that traversing the tree takes a huge toll on performance, probably because the divergent code paths and incoherent memory accesses are not very GPU friendly. In general, volume textures seem to provide much better performance than octrees at the expense of a huge memory waste (because of empty voxels). I haven't profiled one method against the other, my choice was based simply on experiments, so please feel free to argue against these conclusions and provide your own insight on the subject.
The scene is voxelized in real-time using a variant of the technique described by Cyril Crassin in the free OpenGL Insights chapter that is available here. The voxelization is quite fast and can be performed in real-time without a great impact on performance, which allows the lighting to be dynamic: if any object or light source moves, you'll see the lighting change accordingly.
Once the voxel volume is ready, it is possible to simulate a second bounce of GI by revoxelizing the scene into a second volume while using the first volume to perform cone tracing for each fragment that is inserted into the volume. Note however that this is very expensive, and the modest improvement in visual quality doesn't compensate for the cost, so it's disabled by default in the demo.
The diffuse lighting is calculated in groups of 3x3 pixels to improve performance, and an edge refinement pass is done later to fix the difficult cases. For each group of 3x3 pixels, 16 cones are traced in all directions with 32 samples per cone. The tracing of each cone is done pretty much like Crassin describes in his paper: for each sample inside the cone we read the corresponding mipmap level of the volume texture and accumulate both the color and the opacity of the voxel information that was read.
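The per-cone accumulation described above can be sketched as classic front-to-back compositing. This is a rough illustration of the idea, not the demo's actual code; the sample values are made up:

```python
# Front-to-back compositing of the (color, alpha) pairs read from the
# mipmapped voxel volume along one cone, from the nearest sample outward.

def trace_cone(samples):
    """samples: list of (color, alpha) read along the cone, near to far."""
    color, alpha = 0.0, 0.0
    for c, a in samples:
        # each new sample is attenuated by the opacity accumulated so far
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
        if alpha >= 1.0:            # fully opaque: nothing behind matters
            break
    return color, alpha

# an empty sample, then a half-opaque bright voxel, then an opaque one
result = trace_cone([(0.0, 0.0), (1.0, 0.5), (0.8, 1.0)])
print(result)  # (0.9, 1.0)
```

Note the early-out once opacity saturates, which is also what makes fully opaque voxels terminate the cone.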
The specular lighting is calculated by tracing a cone along the reflection vector. Voxel cone tracing gives the flexibility to generate both sharp and glossy reflections, which is really neat. In general both look good, but the limited resolution of the voxel volume prevents the accurate rendering of glass and mirror materials. Sharp reflections are also problematic because they require an insane number of samples per cone (200 in the demo) to avoid missing scene features like thin walls. The tracing is optimized to skip large portions of empty space by sampling a lower resolution mipmap level of the voxel volume and checking its opacity (similar to sphere tracing): if the opacity is zero then there are no objects nearby and we can skip that space altogether. This is essentially a GPU-friendly approximation of what is done with a sparse voxel octree.
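The empty-space skipping can be illustrated with a toy 1D march (mock data and step sizes, purely an assumption-laden sketch, not the demo's tracer): the ray first probes a coarse, downsampled opacity level, and only steps at fine granularity when the coarse level says geometry is nearby.

```python
# 1D sketch of mipmap-based empty-space skipping: probe a coarse level
# first; zero opacity there means the whole cell is empty, so take a big
# step, otherwise step finely and test the full-resolution voxels.

def march(fine, coarse, coarse_factor, small_step, large_step):
    """Return the position where the ray first hits an opaque fine voxel."""
    pos = 0.0
    while pos < len(fine):
        if coarse[int(pos) // coarse_factor] == 0.0:
            pos += large_step       # coarse cell is empty: skip ahead
        elif fine[int(pos)] > 0.0:
            return pos              # hit an occupied fine voxel
        else:
            pos += small_step       # near geometry: step carefully
    return None                     # ray left the volume

fine   = [0, 0, 0, 0, 0, 0, 1, 0]  # opacity per fine voxel
coarse = [0, 0, 0, 1]              # 2x downsampled opacity (max per pair)
print(march(fine, coarse, 2, 0.5, 2.0))  # 6.0
```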
So this concludes the overview of the implementation. Please feel free to leave your comments.
Don't forget to try the demo (I recommend running it on an Nvidia GTX 680 or similar; it's untested on AMD hardware so I have no idea how it runs there).
Here is some eye candy :
Posted by jcabeleira on 04 April 2013 - 03:40 AM
That happens because of the mipmapping and the lack of correct derivatives. In forward rendering, pixels are processed in 2x2 groups; even if some of the pixels don't fall inside the primitive, they are still processed as if they did, just to provide the screen space derivatives that you need to perform mipmapping.
In deferred rendering you don't have such derivatives: you're still processing pixels in 2x2 groups, but you're reading your data from a buffer, so at the edges of objects some pixels may belong to a different primitive than the one you'd expect.
The simplest solution is to disable mipmapping for that texture, or to supply your g-buffer with the screen space derivatives and use them when sampling from the texture.
Posted by jcabeleira on 25 December 2012 - 05:43 PM
Probably I should do it manually, because when simply calling "glGenerateMipmap(GL_TEXTURE_3D)", the framerate dies immediately. Instead I could loop through all mipmap levels and re-inject all voxels for each level. Injecting is more costly, but there are far fewer voxels than pixels in a 128^3 texture (times 6, and 2 or 3 grids).
Yeah, I can confirm that glGenerateMipmap(GL_TEXTURE_3D) kills the framerate, not sure why but it seems the driver performs the mipmapping on the CPU.
I think the best alternative is to simply create a shader that computes each mipmap level based on the previous one, because it runs fast and is fairly easy to do. Re-injecting the voxels into each mipmap level as you suggest seems overkill and may not give you the desired results, because you need the information about the empty voxels of the previous mipmap level to obtain partially transparent voxels on the new level. Are you doing this?
I should be averaging, eventually by summing up the amount of voxels being inserted in a particular cell (thus additive blend first, then divide through its value). But there is a catch, the values are spread over 6 directional textures, so it could happen you only insert half the occlusion of a voxel into a cell for a particular side. How to average that?
Yeah, averaging is the right thing to do. Regarding the directional textures, you should probably average them as usual too. Just ensure you don't increment the counter when the voxel does not contribute to the radiance (when the weight of the voxel for that particular directional texture is <= 0.0).
PS: Merry Christmas to you too and to everyone else reading this ;D
Posted by jcabeleira on 23 December 2012 - 06:49 PM
If I may ask, how do your textures cover the world (texture count, resolution, cubic cm coverage per pixel)? I suppose you use a cascaded approach like they do in LPV, thus having multiple sized textures following the camera. Do you really mipmap anything, or just interpolate between the multiple textures?
The biggest limitation of my implementation is precisely the world coverage which is currently very limited. I'm using a 128x128x128 volume represented by six 3D textures (6 textures for the anisotropic voxel representation that Crassin uses) that covers an area of 30x30x30 meters.
I'm planning on implementing the cascaded approach very soon, which should only require small changes to the cone tracing algorithm. Essentially, when the cone exits the smaller volume it should start sampling immediately from the bigger volume. I think it's not necessary to interpolate between the two volumes when moving from one to the other, in particular for the diffuse GI effect, which tends to smooth everything out; but if some kind of seam or artifact appears, then an interpolation scheme like the one used for LPVs can be applied here too.
Anyhow, you said your solution didn't show banding errors. How did you manage that? Even with more steps and only using the finest mipmap level (which is smooth as you can see in the shots above), banding keeps occurring because of the sampling coordinate offsets.
I didn't have to do anything; the only banding I get is in the specular reflection, which is smooth, as seen in the UE4 screenshot I showed you. For the diffuse GI you shouldn't get any banding whatsoever, because tracing with wide cone angles smooths everything out.
What do you mean with the banding being caused by the sampling coordinate offsets?
>> The red carpet receives some reflected light from the object in the center
True, but would that really result in that much light? Maybe my math is wrong, but each pixel in my case launches 9 rays, and the result is the average of them. In this particular scenario, the carpet further away may only hit the object with 1 or 2 rays, while the carpet beneath it hits the object many more times. In my case the distant carpet would probably either be too dark, or the carpet below the object too bright. In the shot the light spreads more equally (realistically), though.
Now that you mention it, it probably shouldn't receive that much light. From what I've seen of my implementation, the light bleeding with VCT tends to be a bit excessive, probably because no distance attenuation is applied to the cone tracing (another thing for my TODO list). I'm not sure if UE4 uses distance attenuation or not; their GDC presentation doesn't mention anything about it, and I believe Crassin's paper doesn't either. That's definitely something we should investigate.
Probably Unreal4's GI lighting solution isn't purely VCT indeed, but all in all, it seems to be good enough for true realtime graphics. One of the ideas I have is to make a 2-bounce system. Sure, my computer is too slow to even do 1 bounce properly, but I could make a quality setting that toggles between 0, 1 or 2 realtime bounces. In case I only pick 1, the first bounce is baked (using the static lights only) into the geometry. Not 100% realtime then, but then none of the solutions in today's games are either. A supercomputer could eventually toggle to 2 realtime bounces.
A few days ago I implemented a 2 bounce VCT by voxelizing the scene once with direct lighting and then voxelizing the scene again with direct lighting and diffuse GI (generated by tracing the first voxel volume). The results are similar to the single bounce GI with the difference that surfaces that were previously too dark because they couldn't receive bounced light are now properly lit thus resulting in a more uniform lighting.
Posted by jcabeleira on 21 December 2012 - 08:11 AM
Let's walk through the issues. But first, there might be bugs in my implementation that contribute to these errors. Then again, ALL raymarch / 3D-texture related techniques I tried so far are showing the same problems, not VCT in particular. So maybe I'm always making the same mistakes.
Then you must be doing something wrong; I have an implementation working with 3D textures that gives flawless results for the diffuse GI (smooth and realistic lighting, no artifacts or banding).
1- Light doesn't spread that far
In Tower22, the environments are often narrow corridors. If you shine a light there, the opposite wall, floor or ceiling would catch light, making an "area" around a spotlight. But that's pretty much it. Not that I expect too much from a single bounce, but in Unreal4, the area is noticeably affected by incoming light, even if it's just a narrow beam falling through a ceiling gap. The light gradually fades in or out, not just pops in a messy way on surfaces that suddenly catch a piece of light.
Try removing the opacity calculation from the cone tracing and see if the light spreads further. I'm saying this because I've seen the voxel opacity block too much light and cause some of the symptoms you describe. In your case the light has to travel through corridors, which is problematic because in the mipmapped representation of the scene the opacity of the walls is propagated to the empty spaces of the corridor, causing unwanted occlusion.
Assuming the light only comes from the top-left corner, then how do the shadowed parts of the red carpet receive light if they only use 1 bounce? The first bounce would fire the light back into the air, explaining the ceiling, spheres and walls receiving some "red" from the carpet. But the floor itself should remain mostly black.
The red carpet receives some reflected light from the object in the center of the room which is directly lit from the sun.
This has to do with mipmapping problems, see picture. The higher mipmap levels aren't always smoothed, making them look MineCrafted. This is because when I sample from the brick corners in their child nodes, those locations may not be filled by geometry at that exact location (thus black pixels). If so, the mipmapper samples from the brick center instead to prevent the result turning black. Well, difficult story.
You'll need to have your mipmapping working perfectly for the technique to work; taking shortcuts like this will hurt the quality badly. Make sure the octree mipmapping gives the same results as a mipmapped 3D texture.
My previous post explained it with pictures pretty well, and the conclusion was that you will always have banding errors to some degree, unless you do truly crazy tricks. In fact, user Jcabeleira showed me a picture of Unreal4 banding artifacts. However, those bands looked way less horrible than mine. Is it because they apply very strong blurring afterwards? I think their initial input already looks smoother. Also in the Crassin video, the glossy reflections look pretty smooth. Probably he uses a finer octree, more rays, more steps, and so on. Or is there something else I should know about?
The banding artifacts from the Unreal 4 demo are smooth because their mipmapping is good; I'm convinced that most of your problems are caused by the fact that your mipmapping isn't right yet. And yes, Crassin doesn't show the banding, probably because he used a high resolution (which is only possible because his test scene is really small).
Posted by jcabeleira on 13 December 2012 - 05:20 AM
>> I've seen these problems appear in the video of Unreal Engine 4 Elemental demo
In a strange way, that sounds like a relief. If the smart guys over there didn't solve it yet, then I don't have to be ashamed hehe. And more importantly, it indicates that with some blurring or other post-enhancements, the artifact is probably not that noticeable. At least, I didn't see it when watching that movie.
The artifact is actually very noticeable, but in the video they barely show VCT being used for sharp/glossy reflections, which is why you haven't noticed it. In the screenshot below, taken from their video, you can see the smooth banding effect I told you about. In general, VCT reflections look as if the reflected objects are made of aligned neon lights, which is why they don't look very good even for glossy reflections.
Doing different averaging when mipmapping isn't easy, at least not in the way I construct the whole thing. I had to make some crazy workarounds, as OpenCL on my computer doesn't allow writing pixels into a 3D texture. Then again, "vacuum" nodes are sort of ignored, as they don't have an "occlusion" factor either. If in the image above the ray samples on the empty side of a brick, it won't directly stop. However, in order to prevent skipping the wall I should at least take 2 (or 4?) steps inside a node, and maybe increase the occlusion value for "geometry pixels" on higher levels to ensure the ray stops in time. Does anyone know if the guys that made the VCT techniques are doing such tricks as well?
If you do a bottom-up approach for the mipmap generation, going from the leaf nodes to the root node, you should be able to choose whatever mipmapping scheme you want, right?
One other thing I could try - but I'm afraid it leads to other errors - is doing an "inflate" filter: all non-filled brick pixels copy the color (and maybe occlusion) from their neighbors that actually are filled.
That could avoid missing the walls, but it would also make the reflected scene look... well... inflated, which would be particularly bad for small objects, which would get deformed.
Yeah, man, I feel ya.
Global Illumination.... argh!
Posted by jcabeleira on 12 December 2012 - 08:21 AM
Your analysis of the problem is pretty accurate. You'll always get rays leaking through geometry with cone tracing due to the fact that a voxel averages the opacity of the underlying geometry. Moreover, when doing reflections you must ensure that the distance between samples is small enough so that you don't miss the walls and if possible ensure that you hit the center of the voxels to obtain maximum opacity and immediately kill the rest of the ray.
Regarding the banding, you'll always get a banding effect when using cone tracing to render reflections due to the fact that the voxels are not sampled at their center. The problem is that the reflection rays for neighbour pixels will intersect the same voxel at slightly different positions which will yield different opacities depending on the linear interpolation of the texel and give a banding effect. The strange thing is that this banding should be smooth while yours shows harsh transitions.
Unfortunately, none of the above problems is easy to solve. I've seen these problems appear in the video of the Unreal Engine 4 Elemental demo, so I assume they also suffer from them to some extent. I can also tell you from my experience that, in general, glossy and sharp reflections rendered through voxel cone tracing yield poor quality due to these and other limitations.
Regarding the color darkening, you should be careful when creating the mipmaps. Instead of just averaging the voxels you should probably add a weighting factor so that empty voxels are ignored. You may even need to apply a different approach regarding color and opacity, intuitively I'd say that empty voxels should be ignored when calculating the color average to avoid darkening (because empty = black) while for opacity you should take the average of the voxels. Of course, to do this you'd need a custom mipmap creation shader.
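The custom weighting suggested above can be sketched as follows. This is my own illustrative toy (names and numbers made up, unverified against any engine): when reducing child voxels to one parent, the color average skips empty children (empty = black would darken it), while the opacity is averaged over all children.

```python
# Sketch of a custom mipmap reduction: color averaged over NON-empty
# children only, opacity averaged over all children.

def downsample(children):
    """children: list of (color, alpha) voxels; an empty voxel has alpha 0."""
    filled = [(c, a) for c, a in children if a > 0.0]
    color = sum(c for c, _ in filled) / len(filled) if filled else 0.0
    alpha = sum(a for _, a in children) / len(children)
    return color, alpha

# one bright opaque child among three empty ones:
print(downsample([(0.8, 1.0), (0.0, 0.0), (0.0, 0.0), (0.0, 0.0)]))
# color stays 0.8 instead of darkening to 0.2; alpha becomes 0.25
```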
Posted by jcabeleira on 11 October 2012 - 02:49 PM
The indirect light does indeed get shadowed due to the cone tracing, although not perfectly due to the approximations introduced by the voxelizations and the tracing itself. At the SIGGRAPH presentation they mentioned that they were still using SSAO to add some small-scale AO from features that weren't adequately captured by the voxelization, but I think that's a judgement call that you'd have to make for yourself.
The shadowing is better than you may think; it can even be used for direct lighting. A friend of mine has been exploring the use of area lights with voxel cone tracing and has obtained very promising results. He was able to inject area lights into the voxel volume and get very realistic lighting and soft shadows running perfectly in real-time. The whole thing looked like it was rendered offline, even with a modest configuration of cone and sample counts.
UE4 still needs AO and deferred? That's a pretty curious statement. Cone tracing's second step (the actual cone tracing) should be done from a deferred target, otherwise you risk killing your framerate. Secondly, AO is the base effect that cone tracing can achieve; thirdly, they have SSAO to complement it, like MJP says.
They only need SSAO because they're using a small number of cones for the sake of performance. If they decided to use a higher number of cones (from my experiments, 16 cones would be enough), they would get excellent quality AO for free (at the cost of frame rate, of course) for both large and small scale details.