Voxel Cone Tracing Experiment - Progress

27 comments, last by gboxentertainment 10 years, 10 months ago

The specular reflections look great! Any idea what's causing that slowdown between a hundred spheres and a thousand spheres? As in, any idea what's costing the most between the two? An increase in tighter specular cones being traced, increased voxelization, etc.?

As for what I meant by anti-aliasing, I realized I'm not sure what I meant, or rather I'm not sure how you'd go about implementing it. With voxelization the way everyone does it, you're always producing solid voxel blocks. Somewhere in my head there's this idea of implementing blocks with an alpha value to cover cases where a mesh is partly in a voxel's space but not enough to register as solid. As in, when the Buddha model is moving back and forth, instead of its voxelization popping from 0 to 1 in a binary fashion from voxel space to voxel space, there'd be a smooth transition, filling in an alpha value of that voxel space as more of the mesh takes it up.

I'm just not sure how to do it; there are already holes, and thin geometry etc. is already a problem. In my head, though, it makes cone-traced shadows a lot smoother in terms of movement.


Any idea what's causing that slowdown between a hundred spheres and a thousand spheres?

I'm pretty sure this is mostly to do with actually rendering the spheres themselves and not the cone tracing. I've yet to test it, though. What I would like to do next is implement some sort of compute-shader particle system that is globally lit by the VCT and see if there is better performance.

Somewhere in my head there's this idea of implementing blocks with an alpha value to cover cases where a mesh is partly in a voxel's space but not enough to register as solid. As in, when the Buddha model is moving back and forth, instead of its voxelization popping from 0 to 1 in a binary fashion from voxel space to voxel space, there'd be a smooth transition, filling in an alpha value of that voxel space as more of the mesh takes it up.

Very interesting idea! I think it may be possible now that you mention it in that way. You are correct, there is no gradual transition when moving each triangle between voxel spaces. Now that I think about it, the semi-transparent alpha values are calculated during the mip-mapping stage and filtered in the vct pass. If I can bring this filtering back to the voxelization pass, I may just be able to get what you describe, and by pre-filtering I may be able to keep the same cost.

[EDIT]

Now that I think about it a bit more, the only way to get a gradual transition when moving each triangle between voxels is to calculate a voxel occupancy percentage. As this technique uses the hardware rasterizer to split the triangles into voxels, I can't imagine there being a way to do this efficiently, unless someone can suggest one. However, if this could be done easily, then general anti-aliasing wouldn't be the hassle it is today. Supersampling seems to be the most effective method quality-wise, which in voxel cone tracing would just mean increasing the voxel resolution. It's possible that I could take the concepts of an anti-aliasing technique such as MSAA, SMAA or FXAA and apply them to the voxelization stage.

Looking at Crassin's video of the animated hand in the Sponza scene, I see that he was able to move the hand without any of the artifacts present in my engine. Even if I turn my voxelization up to 256x256x256 (which, for my small scene, is a much higher effective resolution than the 512x512x512 that Crassin uses in the large Sponza scene), I still get flickering artifacts in parts of my models when different voxels are interchanged due to the change in location of each poly. This is most apparent with the Buddha model, where the flickering between light and dark occurs most at the hands.

I found a clue on this site about how to possibly solve this problem: http://realtimevoxels.blogspot.com.au/

The author mentions a similar problem when he tries to implement a cascaded view-centred voxel approach, as an alternative to octrees. His solution is:

"Any slight movement of the camera causes different fragments to be rendered, which causes different voxels to be created. We can fix this by clamping the axis-aligned orthographic projections to world-space voxel-sized increments."

Here is how I apply my orthographic projections in the geometry shader:


#version 430

layout(triangles) in;
layout(triangle_strip, max_vertices=3) out;

in VSOutput
{
	vec4 ndcPos;
	vec4 fNorm;
	vec2 fTexCoord;
	vec4 worldPos;
	vec4 shadowCoord;
} vsout[];

out GSOutput
{
	vec4 ndcPos;
	vec4 fNorm;
	vec2 fTexCoord;
	vec4 worldPos;
	vec4 shadowCoord;
	mat4 oProj;
} gsout;

uniform mat4 voxSpace;

void main()
{
	mat4 oProj = mat4(0);
	vec4 n = normalize(vsout[0].fNorm);

	// Select the dominant axis of the normal and build the matching
	// axis-aligned orthographic projection (shared by all three vertices).
	float maxC = max(abs(n.x), max(abs(n.y), abs(n.z)));
	float x,y,z;
	x = abs(n.x) < maxC ? 0 : 1;
	y = abs(n.y) < maxC ? 0 : 1;
	z = abs(n.z) < maxC ? 0 : 1;

	vec4 axis = vec4(x,y,z,1);

	if(axis == vec4(1,0,0,1))
	{
		oProj[0] = vec4(0,0,-1,0);
		oProj[1] = vec4(0,-1,0,0);
		oProj[2] = vec4(-1,0,0,0);
		oProj[3] = vec4(0,0,0,1);
	}
	else if(axis == vec4(0,1,0,1))
	{
		oProj[0] = vec4(1,0,0,0);
		oProj[1] = vec4(0,0,1,0);
		oProj[2] = vec4(0,-1,0,0);
		oProj[3] = vec4(0,0,0,1);
	}
	else if(axis == vec4(0,0,1,1))
	{
		oProj[0] = vec4(1,0,0,0);
		oProj[1] = vec4(0,-1,0,0);
		oProj[2] = vec4(0,0,-1,0);
		oProj[3] = vec4(0,0,0,1);
	}

	// Outputs are rewritten for every vertex, since their values become
	// undefined after each EmitVertex().
	for(int i = 0; i < gl_in.length(); i++)
	{
		gsout.oProj = oProj;
		gsout.fNorm = n;

		gl_Position = oProj*voxSpace*gl_in[i].gl_Position;
		gsout.ndcPos = oProj*voxSpace*vsout[i].ndcPos;
		gsout.fTexCoord = vsout[i].fTexCoord;
		gsout.worldPos = vsout[i].worldPos;
		gsout.shadowCoord = vsout[i].shadowCoord;

		EmitVertex();
	}
	EndPrimitive();
}

Here, ndcPos is worldSpace*vPos from the vertex shader, and the "voxSpace" world-to-voxel-space transformation is obtained from:


// Axis-aligned bounds of the scene in world space.
sceneBounds[0] = glm::vec3(-2.5f, -0.005f, -3);
sceneBounds[1] = glm::vec3(2.5f, 2, 2);

// Uniform scale that fits the largest scene dimension into one unit.
glm::vec3 sceneScale = 1.0f/(sceneBounds[1] - sceneBounds[0]);
voxScale = glm::min(sceneScale.x, glm::min(sceneScale.y, sceneScale.z));
voxOff = -sceneBounds[0];

// Reading right to left: shift sceneBounds[0] to the origin, scale the largest
// dimension into [0,1], expand to [0,2], then shift into the [-1,1] cube used by
// the voxelization projection.
models->voxSpace = glm::translate(glm::mat4(1.0f), glm::vec3(-1.0f, -1.0f, -1.0f))
                 * glm::scale(glm::mat4(1.0f), glm::vec3(2.0f))
                 * glm::scale(glm::mat4(1.0f), glm::vec3(voxScale))
                 * glm::translate(glm::mat4(1.0f), voxOff);

So I already apply the transformation gsout.oProj*voxSpace*ndcPos; can anyone elaborate on what "clamping to world-space voxel-sized increments" actually means?

So I already apply the transformation gsout.oProj*voxSpace*ndcPos; can anyone elaborate on what "clamping to world-space voxel-sized increments" actually means?

I don't think that trick applies to you. The realtimevoxels approach uses a voxel volume that follows the camera to provide GI around the viewer for arbitrarily large scenes.

The clamping they suggest is to be applied to the volume position, not to the voxels/fragments/triangles. That is, instead of the volume following the camera position exactly, it will instead snap to the nearest voxel position.

And although in theory it's a good idea, in practice it doesn't fix the flickering, because as the volume moves a slightly different portion of the scene is included in it, which is enough to change the entire lighting in a noticeable way.
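
For reference, the snap itself just amounts to flooring the volume origin to whole-voxel steps before rebuilding the world-to-voxel matrix each frame. A minimal sketch, written in GLSL syntax to match the shader code above even though this would normally run on the CPU; the names are illustrative:

// Snap a camera-following volume origin to whole-voxel steps so that a small
// camera move does not change which world positions land in which voxel.
vec3 snapVolumeOrigin(vec3 desiredOrigin, float voxelWorldSize)
{
	return floor(desiredOrigin / voxelWorldSize) * voxelWorldSize;
}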

That is, instead of the volume following the camera position exactly, it will instead snap to the nearest voxel position.

Ah, that makes sense now, thanks.

I have also been trying to implement some sort of cascaded approach as a faster alternative to octrees. I'm sure that if I make the base volume large enough, with the second volume being the next mip level up, it may be feasible, but I guess I'll just have to get it working first to find out.

Somewhere in my head there's this idea of implementing blocks with an alpha value to cover cases where a mesh is partly in a voxel's space but not enough to register as solid. As in, when the Buddha model is moving back and forth, instead of its voxelization popping from 0 to 1 in a binary fashion from voxel space to voxel space, there'd be a smooth transition, filling in an alpha value of that voxel space as more of the mesh takes it up.

I figured out how to do voxel "anti-aliasing": during the voxelization stage in the geometry shader, I take the vertex position along its dominant axis (transformed into voxel space, where each voxel is one unit) and subtract the integer part of that value. i.e. if a vertex's dominant axis is x and its x position is 32.4 (in a 64x64x64 grid), then it fills 0.4 of voxel 32. I pass this value to the fragment shader as a density and multiply it into the final alpha.
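
Roughly, the geometry-shader side of it looks something like this (a hedged sketch rather than the exact code from my engine; voxPos is assumed to already be in voxel units, one unit per voxel, and dominantAxis is 0, 1 or 2 for x, y or z):

// Fraction of the voxel covered along the dominant axis, e.g. 32.4 -> 0.4 of voxel 32.
float voxelCoverage(vec3 voxPos, int dominantAxis)
{
	float p = dominantAxis == 0 ? voxPos.x : (dominantAxis == 1 ? voxPos.y : voxPos.z);
	return p - floor(p);	// equivalent to fract(p); passed on as the density value
}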

The problem is, it doesn't seem to help at all; in some cases it makes things worse.

I figured out how to do voxel "anti-aliasing": during the voxelization stage in the geometry shader, I take the vertex position along its dominant axis (transformed into voxel space, where each voxel is one unit) and subtract the integer part of that value. i.e. if a vertex's dominant axis is x and its x position is 32.4 (in a 64x64x64 grid), then it fills 0.4 of voxel 32. I pass this value to the fragment shader as a density and multiply it into the final alpha.

The problem is, it doesn't seem to help at all; in some cases it makes things worse.

What's making it worse? I just visualized the alpha filling in, but as I've said, I've not done it myself, nor have I seen anyone else even suggest it. Any videos to show what's going on?

What's making it worse? I just visualized the alpha filling in, but as I've said, I've not done it myself, nor have I seen anyone else even suggest it. Any videos to show what's going on?

I guess I stated my assumption in the wrong manner. What I meant to say was that the way I was implementing it was making the flickering worse; something to do with the way I accumulate my lighting. I envision that having a one-voxel offset from the surface may provide what I want. Anyhow, lately I haven't had time to think about this and have decided to put it on hold until a later stage.

In the meantime, I am trying to get better filtering for my voxels.

Original 2x2x2 bricks:

[attachment=15278:giboxemunfilt.jpg]

New 3x3x3 bricks (additional border):

[attachment=15279:giboxemfilt.jpg]

There is no noticeable performance loss at all.

However, for optimal filtering, I'm going to need to use octrees instead of directly using 3D textures because 3D textures do not allow me to sample at offsets of less than 1 texel.

Here's an update after tweaking some parameters (increasing the diffuse cone trace from 0.3 to 1.0 and adjusting the bounce attenuation). I've also turned off the ugly cone-traced shadows; I need to work out a better way of getting shadows:

[attachment=16355:giboxv2-0.png][attachment=16356:giboxv2-1.png]

[attachment=16357:giboxv2-2.png][attachment=16358:giboxv2-3.png]

[attachment=16359:giboxv2-4.png][attachment=16360:giboxv2-5.png]

As you can see, the diffuse is a lot more realistic and consistent - especially with emissive object lighting.

The last image demonstrates multiple bounces with emissive lighting.

Now I'm hoping to work out a way of getting accurate soft shadows that also work with emissive object lighting. It seems the fastest way would be to treat every emissive object as a light and switch to deferred rendering for deferred shadow mapping. The darkness of the shadowing would be a function of the emissivity parameter. But with point-light sources I would need some sort of dual-paraboloid shadow mapping, which would be slow.
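
As a rough sketch of that last idea (purely illustrative; the emissivity parameter and the shadow-map visibility term are assumed inputs, not existing code):

// Scale shadow darkness by the emitter's emissivity: a dim emitter casts only a faint
// shadow, a fully bright one a full shadow. visibility is a standard shadow-map test
// result (1 = lit, 0 = occluded); both parameter names are illustrative.
float emissiveShadow(float visibility, float emissivity)
{
	return mix(1.0, visibility, clamp(emissivity, 0.0, 1.0));
}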

I wonder if there is another way, such as ray-traced shadows. I've already seen the artifacts of cone-traced shadows, which are very apparent with the sphere; the shadows show up very voxelized (unless someone has an idea of how to filter them better during sampling).

The biggest limitation of cone-traced shadows is that I have to trace a cone in the direction of every light, meaning that performance is highly dependent on the number of light sources in the scene.
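
To illustrate where that cost comes from, the per-light shadow test boils down to something like this (a sketch only; coneTraceOcclusion(origin, dir, aperture) is a hypothetical helper standing in for a single occlusion cone trace, not a function from my engine):

// One occlusion cone per light: cost grows linearly with the number of lights.
float shadowVisibility(vec3 pos, vec3 lightPos[4], int numLights)
{
	float vis = 1.0;
	for(int i = 0; i < numLights; i++)
	{
		vec3 dir = normalize(lightPos[i] - pos);
		vis *= 1.0 - coneTraceOcclusion(pos, dir, 0.1);	// aperture value is illustrative
	}
	return vis;
}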

I was thinking another way would be to implement some sort of ambient occlusion, which is similar to tracing a shadow cone, but instead of tracing one in the direction of every light source, I would just trace a fixed number of cones that capture averaged visibility.
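
Something like the following is what I have in mind (again a hedged sketch: the cone count, directions and aperture are placeholders, and it reuses the same hypothetical coneTraceOcclusion() helper as above):

// Averaged visibility from a fixed set of cones around the surface normal,
// independent of the number of lights in the scene.
float ambientVisibility(vec3 pos, vec3 n)
{
	// Build an arbitrary tangent frame around the normal.
	vec3 t = normalize(abs(n.y) < 0.99 ? cross(n, vec3(0.0, 1.0, 0.0)) : cross(n, vec3(1.0, 0.0, 0.0)));
	vec3 b = cross(n, t);

	// One cone along the normal plus four tilted cones.
	float occ = coneTraceOcclusion(pos, n, 0.577);
	occ += coneTraceOcclusion(pos, normalize(n + t), 0.577);
	occ += coneTraceOcclusion(pos, normalize(n - t), 0.577);
	occ += coneTraceOcclusion(pos, normalize(n + b), 0.577);
	occ += coneTraceOcclusion(pos, normalize(n - b), 0.577);

	return 1.0 - occ / 5.0;
}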

