jcabeleira

Posted 23 December 2012 - 06:53 PM

>> If I may ask, how do your textures cover the world (texture count, resolution, cubic cm covered per voxel)? I suppose you use a cascaded approach like the one used for LPVs, with multiple differently sized textures following the camera. Do you actually mipmap anything, or just interpolate between the multiple textures?

The biggest limitation of my implementation is precisely the world coverage, which is currently very limited. I'm using a 128x128x128 volume represented by six 3D textures (six textures for the anisotropic voxel representation that Crassin uses) that covers an area of 30x30x30 meters.
I'm planning to implement the cascaded approach very soon, which should only require small changes to the cone tracing algorithm. Essentially, when the cone exits the smaller volume it should immediately start sampling from the bigger volume. I don't think it's necessary to interpolate between the two volumes when moving from one to the other, particularly for the diffuse GI effect, which tends to smooth everything out; but if some kind of seam or artifact appears, an interpolation scheme like the one used for LPVs could be applied here too.
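For reference, 128 voxels over 30 m works out to roughly 23 cm per voxel. As a rough illustration of the cascade hand-off described above, here is a minimal C++ sketch; it is an assumption of how the lookup could work, not the actual implementation, and the camera-centered Cascade type is invented for the example:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // One cascade: an axis-aligned, camera-centered cube of voxel data.
    struct Cascade {
        Vec3  center;   // follows the camera
        float halfSize; // e.g. 15.0f for the 30x30x30 m fine volume
    };

    // Pick the finest cascade that still contains the sample point. When
    // the cone marches out of the small volume, sampling simply continues
    // in the bigger one; no cross-fade between the two volumes.
    int selectCascade(const Cascade* cascades, int count, Vec3 p) {
        for (int i = 0; i < count; ++i) {
            const Cascade& c = cascades[i];
            if (std::fabs(p.x - c.center.x) < c.halfSize &&
                std::fabs(p.y - c.center.y) < c.halfSize &&
                std::fabs(p.z - c.center.z) < c.halfSize)
                return i;
        }
        return count - 1; // outside everything: fall back to the coarsest
    }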

>> Anyhow, you said your solution didn't show banding errors. How did you manage that? Even with more steps and only using the finest mipmap level (which is smooth, as you can see in the shots above), banding keeps occurring because of the sampling coordinate offsets. In the shots above you'll see the banding on the wall; the finest brown bands on the wall are sampled from smoothed bricks (mipmap level 0). For reference:
>> * the smallest octree nodes are 25 cm³
>> * the ray takes about 4 steps in each node (slightly fewer, as the travel distance increases each step depending on the cone angle)
>> Of course, there could still be a bug in the sample coordinates, but I'd say band-free sampling is just impossible, at least not with the way I push the ray forward.

I didn't have to do anything. The only banding I get is in the specular reflection, which is smooth, as seen in the UE4 screenshot I showed you. For the diffuse GI you shouldn't get any banding whatsoever, because tracing with wide cone angles smooths everything out.
What do you mean by the banding being caused by the sampling coordinate offsets?
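For context, this is the general shape of such a cone march in most VCT implementations (a simplified single-volume C++ sketch, not the actual code; sampleVoxelMip is a hypothetical fetch from the prefiltered voxel data). Because the sampling mip level and the step size both grow with the cone diameter, a wide diffuse cone quickly reads heavily prefiltered data, which is what averages the step-offset banding away:

    #include <cmath>

    struct Vec3 { float x, y, z; };
    struct Vec4 { float r, g, b, a; };

    // Hypothetical fetch: filtered sample of the prefiltered voxel volume
    // (radiance in rgb, occlusion in a) at the given mip level.
    Vec4 sampleVoxelMip(Vec3 p, float mip);

    // March one cone, compositing radiance front to back until opaque.
    // halfAngleTan = tan(cone half angle); large for diffuse GI cones.
    Vec3 traceCone(Vec3 origin, Vec3 dir, float halfAngleTan, float voxelSize) {
        Vec3 radiance = {0.0f, 0.0f, 0.0f};
        float occlusion = 0.0f;
        float t = voxelSize; // start one voxel out to avoid self-sampling
        while (occlusion < 0.95f && t < 30.0f) {
            float diameter = 2.0f * halfAngleTan * t;    // cone width here
            float mip = std::log2(diameter / voxelSize); // matching mip level
            Vec3 p = { origin.x + dir.x * t,
                       origin.y + dir.y * t,
                       origin.z + dir.z * t };
            Vec4 s = sampleVoxelMip(p, mip > 0.0f ? mip : 0.0f);
            float w = 1.0f - occlusion; // front-to-back blend weight
            radiance.x += w * s.r;
            radiance.y += w * s.g;
            radiance.z += w * s.b;
            occlusion  += w * s.a;
            t += diameter * 0.5f; // step grows with the cone, as noted above
        }
        return radiance;
    }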

>> The red carpet receives some reflected light from the object in the center
>> True, but would that really result in that much light? Maybe my math is wrong, but in my case each pixel launches 9 rays and the result is the average of them. In this particular scenario, the distant carpet may hit the object with only 1 or 2 rays, while the carpet beneath it hits it many more times. In my case the distant carpet would probably either be too dark, or the carpet below the object too bright. In the shot, though, the light spreads more equally (and realistically).

Now that you mention it, it probably shouldn't receive that much light. From what I've seen in my implementation, the light bleeding with VCT tends to be a bit excessive, probably because no distance attenuation is applied to the cone tracing (another item for my TODO list). I'm not sure whether UE4 uses distance attenuation; their GDC presentation doesn't mention anything about it, and I believe Crassin's paper doesn't either. That's definitely something we should investigate.
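One plausible fix, purely an assumption since neither source documents an attenuation scheme, is to weight each cone sample by a distance falloff inside the marching loop; the lambda constant below is an invented tuning parameter:

    // Hypothetical distance falloff for cone-traced bounce light; not from
    // Crassin's paper or the UE4 presentation. Larger lambda kills distant
    // light bleeding faster. Applied per sample inside the cone march, e.g.
    //     radiance += weight * attenuate(t, lambda) * sampleColor;
    float attenuate(float t, float lambda) {
        return 1.0f / (1.0f + lambda * t * t); // inverse-square-style falloff
    }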

>> The Unreal 4 GI solution probably isn't purely VCT indeed, but all in all it seems to be good enough for true real-time graphics. One of the ideas I have is to make a 2-bounce system. Sure, my computer is too slow to even do 1 bounce properly, but I could add a quality setting that toggles between 0, 1, or 2 real-time bounces. If I pick only 1, the first bounce is baked into the geometry (using the static lights only). Not 100% real-time then, but then again, none of the solutions in today's games are. A supercomputer could eventually toggle to 2 real-time bounces.

A few days ago I implemented 2-bounce VCT by voxelizing the scene once with direct lighting only, and then voxelizing it again with direct lighting plus diffuse GI (generated by cone tracing the first voxel volume). The results are similar to the single-bounce GI, with the difference that surfaces that were previously too dark, because they couldn't receive bounced light, are now properly lit, resulting in more uniform lighting.
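In outline, that gives a frame pipeline like the sketch below. All of the pass names and types here (voxelizeDirect, voxelizeDirectPlusGI, and so on) are hypothetical stand-ins for the corresponding renderer stages, shown only to make the order of operations concrete:

    // Two-bounce VCT, as described above: the second voxelization pass adds
    // first-bounce diffuse GI (cone-traced from the first volume) on top of
    // direct lighting, so the final cone trace returns a second bounce.

    struct VoxelVolume;
    struct Scene;
    struct GBuffer;

    VoxelVolume voxelizeDirect(const Scene& scene);               // bounce 0 source
    VoxelVolume voxelizeDirectPlusGI(const Scene& scene,
                                     const VoxelVolume& bounce1); // adds traced GI
    void shadeWithConeTracing(const GBuffer& gbuffer,
                              const VoxelVolume& volume);         // final gather

    void renderFrame(const Scene& scene, const GBuffer& gbuffer) {
        VoxelVolume v1 = voxelizeDirect(scene);           // direct light only
        VoxelVolume v2 = voxelizeDirectPlusGI(scene, v1); // direct + 1st bounce
        shadeWithConeTracing(gbuffer, v2);                // yields 2 bounces
    }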
