So far I've made the octree and the bricks, and mipmapped the whole thing. There are some differences from the original CVT technique: for one thing, I'm making 2x2x2-pixel bricks instead of 3x3x3. I can still smoothly interpolate the bricks over the geometry without getting a tiled look.

However, the actual raymarching penetrates walls (thus sampling light behind an obstacle) too often. I know why (see the picture), but how do I solve it? In short, when doing linear sampling in a brick, the occlusion value I get is often too small to stop the ray, so the result partially picks up geometry further behind. Another problem is that my results are too dark in some cases, when the ray samples from a higher mipmapped level that was mixed with black pixels (no-geometry / vacuum octree nodes). In the image you can see that the results really depend on where exactly I sample within a brick. When mipmapping (= blurring with unfilled neighbor nodes), all these problems get worse. One extra annoying thing is that this also creates "banding" artifacts.
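On the too-dark results: the usual cure is to weight colors by occlusion (alpha) when averaging child texels into a mip level, so empty black voxels don't drag the color down. A minimal Python sketch of that idea (the function name and texel tuples are mine, not from my engine; alpha itself still gets a plain average):

```python
def downsample(texels):
    """Average 8 child texels (r, g, b, a) into one parent texel.

    Colors are weighted by their occlusion (alpha), so empty/black
    voxels contribute nothing to the averaged color; only the
    occlusion itself is a plain average over all children.
    """
    total_a = sum(t[3] for t in texels)
    avg_a = total_a / len(texels)
    if total_a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    r = sum(t[0] * t[3] for t in texels) / total_a
    g = sum(t[1] * t[3] for t in texels) / total_a
    b = sum(t[2] * t[3] for t in texels) / total_a
    return (r, g, b, avg_a)
```

With this, one solid red voxel plus seven vacuum voxels averages to full red at 1/8 occlusion, instead of a nearly black color.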

There are 2 simple things I can do. A: sample with "nearest" instead of "linear" filtering, or B: take a lot more (smaller) steps to make sure each node gets sampled multiple times. However, solution A will lead to "Minecraft"-style blocky results, and B makes an already heavy technique even slower, and still doesn't guarantee rays won't penetrate unless I take an awful lot of samples.
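A possible middle ground between A and B (standard in volume rendering, not something from my current code) is to keep linear filtering but rescale the sampled occlusion to the actual step length, so one long step through a half-occluded voxel blocks as much as many short steps would. A sketch, assuming the stored alpha is calibrated for a step of one voxel size:

```python
def corrected_alpha(sampled_alpha, step_len, voxel_size):
    """Rescale an occlusion sample to the current step length.

    (1 - alpha) is the transmittance over one voxel-sized step;
    transmittance over a step of length L is then
    (1 - alpha) ** (L / voxel_size).
    """
    return 1.0 - (1.0 - sampled_alpha) ** (step_len / voxel_size)
```

So a step twice the voxel size turns a 0.5 occlusion sample into 0.75, which helps the ray stop before it reaches geometry behind the wall.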

As for the (compute) shader code, let's illustrate the path of a single ray:

rayPos   = startPoint + smallOffset
occluded = 0
color    = (0,0,0)
radius   = 0.25   // my smallest nodes start at a size of 0.25 m3

while (occluded < 1)
{
    // Get a node from the octree. The deepest level
    // depends on the current cone radius
    node = traverseOctree( rayPos, radius )

    // Check if there might be geometry (thus nodes)
    // at the current cone size level
    if (node.size <= radius)
    {
        // Sample brick
        // localOffset depends on the ray position inside the node
        localOffset = absToLocalPosition( rayPos, node.worldPos )
        texCoord3D  = node.brickCoord + localOffset
        colorAndOcclusion = sampleBrick( texCoord3D, rayDirection )

        // Add
        occluded += colorAndOcclusion.w
        color    += colorAndOcclusion.rgb
    }

    // Increase cone size so we take bigger steps,
    // but also sample from higher (more blurry) mipmapped nodes
    radius += coneAngleFactor

    // March!
    rayPos += rayDirection * radius
}

So, to put it simply: the ray keeps moving until the occlusion value reaches 1 or higher. When there might be geometry at the ray position, we add the values sampled from a brick, stored as 2x2x2 pixels in a 3D texture. Probably important to know as well: the color and occlusion we sample also depend on the ray direction, and on which way the voxels were facing.
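One thing worth noting about plainly adding the values: a sample taken when occluded is already 0.9 still contributes at full strength, which is part of why stuff behind obstacles bleeds through. The textbook alternative is front-to-back compositing, where each new sample is weighted by the remaining transmittance (1 - occluded). A Python sketch of just that accumulation step (my own helper, not the original shader):

```python
def accumulate(color, occluded, sample_rgb, sample_a):
    """Front-to-back compositing of one raymarch sample.

    The new sample is weighted by the transmittance (1 - occluded)
    built up so far, so samples behind occluders fade out smoothly.
    """
    weight = (1.0 - occluded) * sample_a
    color = tuple(c + weight * s for c, s in zip(color, sample_rgb))
    occluded += weight
    return color, occluded
```

Note that occluded then approaches 1 but never exceeds it, so the loop condition needs an epsilon (e.g. occluded < 0.99) or a maximum step count.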

// Using 6 textures to store bricks.
// Colors & occlusion get spread over the 6 axes (-X, +X, -Y, ...)
float3 dotValues;
dotValues.x = dot( float3( 1,0,0 ), rayDirection );
dotValues.y = dot( float3( 0,1,0 ), rayDirection );
dotValues.z = dot( float3( 0,0,1 ), rayDirection );
dotValues   = abs( dotValues );

if (rayDirection.x > 0)
    colorX = tex3D( brickTex_negativeX, texcoord );
else
    colorX = tex3D( brickTex_positiveX, texcoord );

if (rayDirection.y > 0)
    colorY = tex3D( brickTex_negativeY, texcoord );
else
    colorY = tex3D( brickTex_positiveY, texcoord );

if (rayDirection.z > 0)
    colorZ = tex3D( brickTex_negativeZ, texcoord );
else
    colorZ = tex3D( brickTex_positiveZ, texcoord );

float4 result = colorX * dotValues.xxxx +
                colorY * dotValues.yyyy +
                colorZ * dotValues.zzzz;

That means when the ray travels almost parallel to a wall, it only gets a bit occluded by that wall (which makes sense, I'd say).
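To sanity-check that directional weighting, here is the same blend in Python, with a dict of six (r, g, b, a) samples standing in for the six tex3D lookups (keys and names are mine). A ray parallel to a wall that only occludes on its X faces picks up zero occlusion, while a head-on ray gets fully blocked:

```python
def directional_sample(bricks, ray_dir):
    """Blend the three axis-aligned brick samples by |dot(axis, dir)|.

    `bricks` maps '-x', '+x', '-y', '+y', '-z', '+z' to an
    (r, g, b, a) sample; per axis we pick the face looking toward
    the ray, mirroring the shader's rayDirection sign tests.
    """
    dx, dy, dz = (abs(d) for d in ray_dir)
    sx = bricks['-x'] if ray_dir[0] > 0 else bricks['+x']
    sy = bricks['-y'] if ray_dir[1] > 0 else bricks['+y']
    sz = bricks['-z'] if ray_dir[2] > 0 else bricks['+z']
    return tuple(a * dx + b * dy + c * dz
                 for a, b, c in zip(sx, sy, sz))
```

One caveat of this scheme: for diagonal rays |dx| + |dy| + |dz| is larger than 1, so the three weights don't sum to 1 and the blended occlusion can come out stronger than any single face's value.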

Well, anyone have experience with this?

Greets,

Rick