# Voxelization cracks

## Recommended Posts

```hlsl
[maxvertexcount(3)]
void GS(triangle GSInputPositionNormalTexture input[3],
        inout TriangleStream< PSInputPositionNormalTexture > output_stream) {

    const float3 abs_n = abs(cross(input[1].p_view - input[0].p_view,
                                   input[2].p_view - input[0].p_view));
    const uint dir_xy = (abs_n.x > abs_n.y)       ? 0u : 1u;
    const uint dir    = (abs_n.z > abs_n[dir_xy]) ? 2u : dir_xy;

    PSInputPositionNormalTexture output[3];

    // Project the triangle in the dominant direction for rasterization (p),
    // but not for lighting (p_view).
    [unroll]
    for (uint i = 0u; i < 3u; ++i) {

        [flatten]
        switch (dir) {
        case 0u:
            output[i].p.xy = input[i].p_view.yz;
            break;
        case 1u:
            output[i].p.xy = input[i].p_view.zx;
            break;
        default:
            output[i].p.xy = input[i].p_view.xy;
            break;
        }

        // [m_view] * [voxels/m_view] -> [voxels]
        output[i].p.xy *= g_voxel_inv_size;
        ...
    }

    // For each projected triangle, a slightly larger bounding triangle ensures
    // that any projected triangle touching a pixel will necessarily touch the
    // center of this pixel and thus will get a fragment emitted by the rasterizer.
    const float2 delta_10 = normalize(output[1].p.xy - output[0].p.xy);
    const float2 delta_21 = normalize(output[2].p.xy - output[1].p.xy);
    const float2 delta_02 = normalize(output[0].p.xy - output[2].p.xy);
    // [voxels] * [2 m_ndc/voxels] -> [-1,1]
    const float voxel_to_ndc = 2.0f * g_voxel_grid_inv_resolution;
    // Move the vertices outwards for conservative rasterization.
    output[0].p.xy = (output[0].p.xy + normalize(delta_02 - delta_10)) * voxel_to_ndc;
    output[1].p.xy = (output[1].p.xy + normalize(delta_10 - delta_21)) * voxel_to_ndc;
    output[2].p.xy = (output[2].p.xy + normalize(delta_21 - delta_02)) * voxel_to_ndc;

    ...
}
```

I tried to voxelize the scene into a regular 3D voxel grid centered around the camera. This results in some odd cutouts and temporal disappearances of triangles, which seem to depend on the camera angle.

With conservative rasterization:

```hlsl
output[0].p.xy = (output[0].p.xy) * voxel_to_ndc;
output[1].p.xy = (output[1].p.xy) * voxel_to_ndc;
output[2].p.xy = (output[2].p.xy) * voxel_to_ndc;
```

Without conservative rasterization:

```hlsl
output[0].p.xy = (output[0].p.xy + normalize(delta_02 - delta_10)) * voxel_to_ndc;
output[1].p.xy = (output[1].p.xy + normalize(delta_10 - delta_21)) * voxel_to_ndc;
output[2].p.xy = (output[2].p.xy + normalize(delta_21 - delta_02)) * voxel_to_ndc;
```

This should shift the voxel-space vertex coordinates outwards by one voxel. However, if I scale this offset (by a factor of 2, for example), the problems remain. So I wonder whether this is still somehow related to conservative rasterization or not?
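For reference, the vertex expansion can be checked on the CPU. This is a Python sketch of the same math (not from the original shader), useful for verifying the offset directions on paper:

```python
import math

def normalize(v):
    """Unit-length 2D vector."""
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

def expand_triangle(v0, v1, v2):
    """Offset each vertex outwards along the bisector of its two adjacent
    edge directions, mirroring the geometry-shader code (voxel units)."""
    d10 = normalize((v1[0] - v0[0], v1[1] - v0[1]))
    d21 = normalize((v2[0] - v1[0], v2[1] - v1[1]))
    d02 = normalize((v0[0] - v2[0], v0[1] - v2[1]))
    o0 = normalize((d02[0] - d10[0], d02[1] - d10[1]))
    o1 = normalize((d10[0] - d21[0], d10[1] - d21[1]))
    o2 = normalize((d21[0] - d02[0], d21[1] - d02[1]))
    return ((v0[0] + o0[0], v0[1] + o0[1]),
            (v1[0] + o1[0], v1[1] + o1[1]),
            (v2[0] + o2[0], v2[1] + o2[1]))
```

Note that moving each vertex one voxel along its corner bisector offsets the adjacent edges by less than one voxel at sharp corners, so very thin triangles can still slip between pixel centers; that might be worth ruling out as a cause of the cracks.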

It gets even more extreme: complete planes disappear depending on the camera angle. For reference, my voxel grid constants:

```hlsl
float g_voxel_size                = 0.1f;       // The size of a voxel in meters (view space)
float g_voxel_inv_size            = 10.0f;
uint  g_voxel_grid_resolution     = 128u;       // The resolution of the voxel grid
float g_voxel_grid_inv_resolution = 0.0078125f; // = 1/128
```

Edited by matt77hias

##### Share on other sites

Back face culling or that `? :` stuff.

I just read about conservative rasterization. Are you rasterizing twice? I thought voxels are generated by marching cubes. Why would I rasterize a triangle into voxel space?

##### Share on other sites

@matt77hias What do you mean by disappearing voxels for some camera angles? They should not depend on the camera angle; are they not captured in world space?

@arnero You can use rasterization to determine which triangle overlaps which voxels and build a voxelization entirely on the GPU.

##### Share on other sites
Just now, turanszkij said:

> @matt77hias What do you mean by disappearing voxels for some camera angles? They should not depend on the camera angle; are they not captured in world space?

I guessed something similar. Are you sure you use an orthographic projection fitting the voxel volume? It seems some kind of perspective projection messes things up... maybe.

##### Share on other sites

Or maybe you mean you try to visualize the voxels with ray marching and that's where you get the errors? In that case, you should refine your steps, because stepping along the ray can sometimes completely miss voxels (because of large steps or shooting at a voxel edge).
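If fixed-step ray marching is the suspect, a grid traversal in the style of Amanatides and Woo visits every voxel the ray crosses and cannot skip any. A minimal Python sketch of the idea (function names are my own, not from the thread):

```python
import math

def traverse_voxels(origin, direction, grid_resolution, max_steps=256):
    """Amanatides-Woo style grid traversal: visits every voxel the ray
    passes through, unlike fixed-size stepping which can skip voxels."""
    voxel = [int(math.floor(c)) for c in origin]
    step = [1 if d > 0 else -1 for d in direction]
    t_max = []    # ray parameter at the next voxel boundary, per axis
    t_delta = []  # ray parameter needed to cross one voxel, per axis
    for c, d, s in zip(origin, direction, step):
        if d == 0:
            t_max.append(math.inf)
            t_delta.append(math.inf)
        else:
            boundary = math.floor(c) + (1 if s > 0 else 0)
            t_max.append((boundary - c) / d)
            t_delta.append(abs(1.0 / d))
    visited = []
    for _ in range(max_steps):
        if not all(0 <= v < grid_resolution for v in voxel):
            break
        visited.append(tuple(voxel))
        # Advance along the axis whose boundary is hit first.
        axis = t_max.index(min(t_max))
        voxel[axis] += step[axis]
        t_max[axis] += t_delta[axis]
    return visited
```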

##### Share on other sites

My voxelization is constructed in view space, not in world space. I evaluate the rendering equation in view space as well, so I can avoid some extra transforms this way. Furthermore, I will eventually use a clip map (a sort of cascaded grid) instead of a regular grid, which is better for LOD purposes and easier when centered around the camera. This explains the dependence on the camera angles, and especially on the x axis, which changes the slope of the ground plane with regard to the camera.

I do not really use an explicit projection matrix while voxelizing on the GPU. I use an explicit orthographic projection matrix on the CPU for view frustum culling before voxelization and an implicit orthographic projection matrix in this geometry shader.

The visualization is another pass that converts the view space position in a similar way as the voxelization pass to retrieve a location inside a 3D texture. So this is merely just for debugging (no cone tracing and thus no ray marching, just one texture sample).

Edited by matt77hias

##### Share on other sites

So you do something similar to Crysis 2, which created an incomplete voxelization from screen space for GI?

But then, why the switch on the dominant direction, why a GS, and why don't you just voxelize based on the Z buffer?

##### Share on other sites
1 hour ago, JoeJ said:

> But then, why the switch on the dominant direction, why a GS

I want to avoid cracks (how ironic, looking at my renderings) in the voxelization of solid triangles. You can always get cracks, independent of your coordinate system. Therefore, I select the most detailed rasterization out of the three primary axes inside the GS. https://developer.nvidia.com/content/basics-gpu-voxelization
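The dominant-axis selection can be illustrated outside HLSL. A Python sketch of the same idea (the swizzle table mirrors the switch in the geometry shader above):

```python
def dominant_axis_projection(v0, v1, v2):
    """Pick the axis along which the triangle's projected area is largest
    (largest absolute normal component) and project onto the other two axes."""
    e1 = [b - a for a, b in zip(v0, v1)]
    e2 = [b - a for a, b in zip(v0, v2)]
    # Cross product e1 x e2 gives the (unnormalized) triangle normal.
    n = (e1[1] * e2[2] - e1[2] * e2[1],
         e1[2] * e2[0] - e1[0] * e2[2],
         e1[0] * e2[1] - e1[1] * e2[0])
    axis = max(range(3), key=lambda i: abs(n[i]))
    # Same swizzles as the geometry shader: yz, zx or xy.
    keep = {0: (1, 2), 1: (2, 0), 2: (0, 1)}[axis]
    return axis, [tuple(v[i] for i in keep) for v in (v0, v1, v2)]
```

Projecting along the largest normal component maximizes the projected area, which yields the most fragments and hence the fewest cracks for that triangle.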

1 hour ago, JoeJ said:

> why don't you just voxelize based on the Z buffer?

Like clustered shading for light sources (i.e., generate the depth map, calculate the maximum depth per tile and create n equidistant bins for each tile based on the maximum depth of that tile)? Currently, I just want to try the easiest approach: a regular grid (i.e., the same equidistant bins everywhere). It is also easier to transfer the voxel data from a structured buffer, representing a regular 3D grid, to a 3D texture.
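For illustration, a common way to flatten a regular 3D grid index into a structured-buffer index (the actual ordering inside FlattenIndex is an assumption here; only the round trip matters):

```python
def flatten_index(index, resolution):
    """3D voxel index -> flat buffer index (x fastest, z slowest)."""
    x, y, z = index
    return x + resolution * (y + resolution * z)

def unflatten_index(flat, resolution):
    """Flat buffer index -> 3D voxel index; inverse of flatten_index."""
    x = flat % resolution
    y = (flat // resolution) % resolution
    z = flat // (resolution * resolution)
    return (x, y, z)
```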

##### Share on other sites
2 hours ago, turanszkij said:

> Or maybe you mean you try to visualize the voxels with ray marching and that's where you get the errors?

I think it is possible to step over a voxel, since I just use one sample for the visualization (i.e. no marching based on opacity). But I basically use the same calculations and data for calculating the indices during the voxelization and visualization passes.

Voxelization PS:

```hlsl
// Valid range: [-R/2,R/2]x[R/2,-R/2]x[-R/2,R/2]
const float3 voxel   = input.p_view * g_voxel_inv_size * float3(1.0f, -1.0f, 1.0f);
// Valid range: [0,R)x(R,0]x[0,R)
const  int3  s_index = floor(voxel + 0.5f * g_voxel_grid_resolution);
const uint3  index   = (uint3)s_index;

[branch]
if (any(s_index < 0 || index >= g_voxel_grid_resolution)) {
    return;
}

...

const uint flat_index = FlattenIndex(index, g_voxel_grid_resolution);
```
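The index math of this pixel shader can be checked in isolation. A Python sketch under the posted constants (voxel size 0.1 m, resolution 128; not from the thread):

```python
import math

def voxel_index(p_view, voxel_size=0.1, resolution=128):
    """View-space position (meters) -> voxel grid index, mirroring the
    pixel-shader math (y is flipped; grid centered on the camera)."""
    half_res = 0.5 * resolution
    index = tuple(int(math.floor(c * s / voxel_size + half_res))
                  for c, s in zip(p_view, (1.0, -1.0, 1.0)))
    in_bounds = all(0 <= i < resolution for i in index)
    return index, in_bounds
```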

Voxelization to Texture CS:

```hlsl
[branch]
if (any(thread_id >= g_voxel_grid_resolution)) {
    return;
}

const uint flat_index = FlattenIndex(thread_id, g_voxel_grid_resolution);

...

voxel_texture[thread_id] = ...;
```

Visualization PS:

```hlsl
// Valid range: [-R/2,R/2]x[R/2,-R/2]x[-R/2,R/2]
const float3 voxel   = p * g_voxel_inv_size * float3(1.0f, -1.0f, 1.0f);
// Valid range: [0,R)x(R,0]x[0,R)
const int3   s_index = floor(voxel + 0.5f * g_voxel_grid_resolution);
return g_voxel_texture[s_index].xyz;
```

1 hour ago, JoeJ said:

> incomplete voxelization from screen space for GI

It is not incomplete and covers more than the "screen space" of the lighting.

It is basically just the same as world space voxelization, only the rotation of the grid is based on the camera orientation and the camera always occupies the center voxel. So you will not favor certain directions more or less when calculating the indirect illumination contribution.

Edited by matt77hias

##### Share on other sites
1 hour ago, matt77hias said:

> It is basically just the same as world space voxelization, only the rotation of the grid is based on the camera orientation and the camera always occupies the center voxel. So you will not favor certain directions more or less when calculating the indirect illumination contribution.

Ah ok, I was confused by the term 'view space'.

I would download the voxelized data and visualize a cube for each voxel to be sure the bug is not just in your visualization.

I assume aligning the grid to the view direction is not a good idea, because voxels will pop like crazy. You will probably end up using a world space orientation and also snapping the position to a world space grid to get consistent results.

To avoid an advantage for corner directions, clipping triangles and other processing by a sphere would be correct, but that only makes sense if it turns out to speed things up as well. Maybe when using cascades, but I doubt it.
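The snapping suggestion can be sketched as follows (a minimal Python illustration; `snap_grid_center` is a hypothetical helper name, not from the thread):

```python
import math

def snap_grid_center(camera_pos, voxel_size=0.1):
    """Snap a world-space grid center to whole-voxel increments so the
    voxelization does not shimmer as the camera moves continuously."""
    return tuple(math.floor(c / voxel_size) * voxel_size for c in camera_pos)
```

With the grid center quantized this way, small camera translations leave the voxel boundaries fixed in world space, so voxels only change when the camera crosses a full voxel.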
