Voxelization cracks

24 comments, last by cowcow 6 years ago
2 minutes ago, JoeJ said:

I would download the voxelized data and visualize a cube for each voxel to be sure the bug is not just in your visualization.

The voxels are occupied. That said, the filled voxels for a single plane occur in multiple consecutive slices (due to the slope of the plane).

 

4 minutes ago, JoeJ said:

I assume aligning the grid to view direction is no good idea because voxels will pop like crazy.

I guess that the values stored in the voxels can indeed start fluctuating while moving and changing the camera orientation. That said, indirect illumination is generally much smoother and lower-frequency than direct illumination. But I am curious about the final results too.

For the current debug visualization, the popping is quite strange, since the voxels become black. I wouldn't be surprised if the color changed, but black indicates that the voxel is empty.

🧙


I switched to world space and visualized the voxels, but somehow I still see a camera dependence on the floor and some popping at the walls?

[screenshot: screenshot-2018-02-23-21-34-49.png]

I also used a slightly different encoding than WickedEngine (https://turanszkij.wordpress.com/2017/08/30/voxel-based-global-illumination/):


static const float g_max_length = 1.0f;

uint EncodeRadiance(float3 L) {
    const float L_length = length(L);
    if (L_length == 0.0f) {
        return 0u; // avoid the division by zero below
    }
    const float L_inv_length = 1.0f / L_length;

    // [0, inf) -> [0,1] -> [0,255]
    const uint encoded_length = uint(255.0f * saturate(L_length / g_max_length));
    // [0,1]^3 -> [0,255]^3 (the components of L/|L| lie in [0,1])
    const uint3 color = uint3(255.0f * L * L_inv_length);
    
    // |........|........|........|........|
    // | length |  red   |  green |  blue  |
    const uint encoded_L = (encoded_length << 24u)
                         | (color.x        << 16u) 
                         | (color.y        <<  8u) 
                         |  color.z;
    return encoded_L;
}

float3 DecodeRadiance(uint encoded_L) {
    static const float u_to_f = 1.0f / 255.0f;
    
    // [0,2^32-1] -> [0,255]^4
    const uint4 uL = 0xFF & uint4(encoded_L >> 24u,
                                  encoded_L >> 16u,
                                  encoded_L >>  8u,
                                  encoded_L);
    // [0,255]^4 -> [0,1]^4
    const float4 fL = uL * u_to_f;
    
    // [0,1] -> [0, g_max_length]
    const float L_length = fL.x * g_max_length;
    // [0,1]^3 -> [0, L_length]^3
    const float3 L = fL.yzw * L_length;
    
    return L;
}

 

🧙

Seems the visualization itself is wrong, because some voxels have fewer than 6 faces. Wrong volume data alone can't cause this.

(Look at the single black voxel at the base of a column, near the center of the left half of the image.)

About the encoding, I think it would be better to use more bits for luminance. Something like YUV, for example: 16 bits for brightness and 2×8 for color: http://softpixel.com/~cwright/programming/colorspace/yuv/

This way you would have enough range to support area lights made of voxels.

15 hours ago, JoeJ said:

About the encoding, I think it would be better to use more bits for luminance. Something like YUV, for example: 16 bits for brightness and 2×8 for color: http://softpixel.com/~cwright/programming/colorspace/yuv/

You mean Y: 16 bits and UV: 16 bits?

🧙

15 hours ago, JoeJ said:

Seems the visualization itself is wrong, because some voxels have fewer than 6 faces. Wrong volume data alone can't cause this.

Culling? I am confused about the culling terminology of D3D. If the near face of a cube is parallel to the camera near/far plane and inside the camera frustum, and the cube faces have a clockwise winding order:

 

camera far plane

    far face front
    --------------
    far face back

    the void of the cube

    near face back
    --------------
    near face front

camera near plane
     /\
      |
     eye

Rasterizer state:

CullMode               Back
FrontCounterClockwise  FALSE

 

Is there a possibility that fragments will be generated for the near face front and the far face front (though the normal is pointing away from the camera)? Or does the culling not involve the orientation of the geometric normal at all?

🧙

14 minutes ago, matt77hias said:

You mean Y: 16 bits and UV: 16 bits?

Yes. Maybe some exponential mapping for Y, or just a half float.

(But i'm still using float4 myself and have not tried anything of this yet...)

5 minutes ago, matt77hias said:

Or does the culling not involve the orientation of the geometric normal at all?

In OpenGL culling only depends on winding order, IIRC

 

4 minutes ago, JoeJ said:

In OpenGL culling only depends on winding order, IIRC

Then it should be the visualization.

 

5 minutes ago, JoeJ said:

(But i'm still using float4 myself and have not tried anything of this yet...)

It will probably be better anyway, since the g_max_length seriously caps my radiance.

🧙

A cube as a single triangle strip:


0,1,1
1,1,1
0,1,0
1,1,0
1,0,0
1,1,1
1,0,1
0,1,1
0,0,1
0,1,0
0,0,0
1,0,0
0,0,1
1,0,1

This is a unit cube spanning [0,1]^3, with its left/lower/near corner at the origin. I multiply this with my voxel size and add it to the voxel's left/lower/near corner


const float3 offset  = OffsettedUnitCube(i) * g_voxel_size;
const float3 p_world = input[0].p_world + offset;

inside a GS. Finally, I transform to camera and projection space.

input[0].p_world is generated in the VS (executed for resolution^3 points)


const uint3 index = UnflattenIndex(vertex_index, uint3(g_voxel_grid_resolution, g_voxel_grid_resolution, g_voxel_grid_resolution));
output.p_world = VoxelIndexToWorld(index);

Transforming between voxel indices and world space:


int3 WorldToVoxelIndex(float3 p_world) {
    // Valid range: [-R/2,R/2]x[R/2,-R/2]x[-R/2,R/2]
    const float3 voxel = (p_world - g_voxel_grid_center) 
                       * g_voxel_inv_size * float3(1.0f, -1.0f, 1.0f);
    // Valid range: [0,R)x(R,0]x[0,R)
    return int3(floor(voxel + 0.5f * g_voxel_grid_resolution));
}
float3 VoxelIndexToWorld(uint3 voxel_index) {
    // Valid range: [-R/2,R/2]x[R/2,-R/2]x[-R/2,R/2]
    const float3 voxel = voxel_index - 0.5f * g_voxel_grid_resolution;
    return voxel * g_voxel_size * float3(1.0f, -1.0f, 1.0f) + g_voxel_grid_center;
}

 

As a side note, I am not really sure whether I should round or floor. With floor, the camera looks straight at the shared corner of four voxels; with rounding, at the center of a single voxel (noticeable for a very low voxel grid resolution). But flooring is a bit more symmetrical, I guess.

 

The y direction seems to be severely wrong.

[screenshot: screenshot-2018-02-24-16-04-56.png]

 

🧙

On 24-2-2018 at 2:31 PM, matt77hias said:

Is there a possibility that fragments will be generated for the near face front and the far face front (though the normal is pointing away from the camera)? Or does the culling not involve the orientation of the geometric normal at all?

Of course, fragments will only be generated for the near face front, which is facing towards the camera.

So using no culling already results in a large improvement :)

🧙

This topic is closed to new replies.
