gboxentertainment

Posted 20 March 2013 - 06:34 AM

Any idea what's causing that slowdown between a hundred spheres and a thousand spheres?

 

I'm fairly sure this is mostly down to rendering the spheres themselves rather than the cone tracing, though I've yet to test it. What I'd like to do next is implement a compute-shader particle system that is globally lit by the VCT and see whether performance improves.
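Roughly what I have in mind, as a minimal GLSL sketch - assuming the pre-filtered voxel scene lives in a mipmapped 3D texture and the particles sit in an SSBO; all the names here are hypothetical, and a single coarse mip sample stands in for a proper diffuse cone trace:

```glsl
#version 430
// Hypothetical sketch only - not my actual code. Assumes the pre-filtered
// voxelized scene lives in a mipmapped 3D texture (voxelTex) and the
// particles sit in an SSBO. One coarse mip sample stands in for a full
// diffuse cone trace, just to show the shape of the idea.
layout(local_size_x = 256) in;

struct Particle { vec4 pos; vec4 color; };
layout(std430, binding = 0) buffer Particles { Particle particles[]; };

uniform sampler3D voxelTex; // pre-filtered voxel scene, mipmapped
uniform vec3 gridMin;       // world-space origin of the voxel grid
uniform float gridSize;     // world-space extent of the voxel grid

void main() {
    uint i = gl_GlobalInvocationID.x;
    if (i >= uint(particles.length())) return;

    // Map the particle's world position into [0,1] voxel-texture coordinates.
    vec3 uvw = (particles[i].pos.xyz - gridMin) / gridSize;

    // Cheapest possible "global lighting": one coarse mip sample in place
    // of a full set of diffuse cones.
    vec3 indirect = textureLod(voxelTex, uvw, 3.0).rgb;
    particles[i].color.rgb = indirect;
}
```

Since the particles never touch the voxelization pass, the whole system should only add one dispatch per frame on top of the existing VCT cost.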

 

 

Somewhere in my head there's this idea of implementing blocks with an alpha value to cover cases where a mesh partly occupies a voxel's space but not enough to register as solid. As in, when the Buddha model is moving back and forth, instead of its voxelization popping from 0 to 1 in a binary fashion from voxel space to voxel space, there'd be a smooth transition, filling in an alpha value for that voxel space as more of the mesh takes it up.

 

Very interesting idea! I think it may be possible now that you put it that way. You are correct: there is no gradual transition as each triangle moves between voxel spaces. Now that I think about it, the semi-transparency of the alpha values is calculated during the mip-mapping stage and filtered in the VCT pass. If I can bring this filtering back to the voxelization pass, I may just be able to get what you describe - and by pre-filtering I may be able to keep the same cost.
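Something along these lines is what I'm picturing for the voxelization pass - a hypothetical fragment-counting scheme where each rasterized fragment bumps a per-voxel counter and a later pass normalizes the count into an alpha (all names are made up, not code I've actually written):

```glsl
#version 430
// Hypothetical fragment-shader sketch of fractional voxel alpha. Instead of
// flagging a voxel as solidly occupied, each rasterized fragment bumps a
// per-voxel counter; a later pass would divide by the expected maximum
// fragments-per-voxel to turn the count into an alpha. All names made up.
layout(r32ui, binding = 0) uniform uimage3D voxelCount;

in vec3 worldPos;           // passed down from the voxelization geometry shader
uniform mat4 worldToVoxel;  // world space -> integer voxel coordinates

void main() {
    ivec3 v = ivec3((worldToVoxel * vec4(worldPos, 1.0)).xyz);
    // More of the mesh inside this voxel means more fragments land here,
    // so the normalized count behaves like a soft occupancy alpha.
    imageAtomicAdd(voxelCount, v, 1u);
}
```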

 

[EDIT]

Now that I think about it a bit more, the only way to get a gradual transition of each triangle moving between voxels is to calculate a voxel occupancy percentage. As this technique uses the hardware rasterizer to split the triangles into voxels, I can't imagine there is a way to do this efficiently - unless someone can suggest one. Then again, if this could be done easily, general anti-aliasing wouldn't be the hassle it is today. Supersampling seems to be the most effective method quality-wise, which in voxel cone tracing would just mean increasing the voxel resolution. It's possible that I could take the concepts of an anti-aliasing technique such as MSAA, SMAA or FXAA and apply them to the voxelization stage.
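As a rough illustration of the MSAA idea applied to voxelization - assuming the voxelization pass rasterizes into a multisampled target so that gl_SampleMaskIn is populated, with made-up image and uniform names; I haven't tried this:

```glsl
#version 430
// Hypothetical sketch of the MSAA idea applied to voxelization: rasterize
// into a multisampled target so gl_SampleMaskIn is populated, then count
// covered samples as an estimate of how much of this fragment's footprint
// the triangle actually covers. NUM_SAMPLES must match the render target;
// coverageSum and worldToVoxel are made-up names.
layout(r32ui, binding = 0) uniform uimage3D coverageSum;

in vec3 worldPos;
uniform mat4 worldToVoxel;

const int NUM_SAMPLES = 8;  // samples per pixel in the MSAA target

void main() {
    ivec3 v = ivec3((worldToVoxel * vec4(worldPos, 1.0)).xyz);
    // Fraction of covered samples approximates the triangle's coverage of
    // this voxel-sized footprint - a cheap stand-in for occupancy percentage.
    uint covered = uint(bitCount(gl_SampleMaskIn[0]));
    // Accumulate in fixed point; divide by NUM_SAMPLES when normalizing
    // the counts into alpha in a later pass.
    imageAtomicAdd(coverageSum, v, covered);
}
```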

