How to get voxel opacity during GPU surface voxelization


Hi everyone, I'm currently working on my SVO-like GI system and I have a big problem: HOW to get the correct opacity (instead of binary occupancy) per voxel during my GPU voxelization, which is crucial for the quality of volumetric cone tracing.

I am using the traditional six-main-axis voxelization method with multisampling, and I am wondering whether I can get a non-binary opacity per voxel from fragment voxelization.

I also noticed that the GDC talk on The Tomorrow Children introduced some interesting voxelization methods, but unfortunately they didn't give details about how the voxel occupancy is calculated.

Any suggestions would be appreciated.


Isn't the opacity determined by your geometry's opacity? I simply use the transparency from the diffuse texture (the alpha component) and it works well for voxelization. Don't forget that you have to take the opacity into account during filtering later on as well.

EDIT: And don't forget that you somehow have to alpha-blend multiple fragments that fall into one voxel as well.

Guess you could use coverage from multisampling?

Isn't the opacity determined by your geometry's opacity? I simply use the transparency from the diffuse texture (the alpha component) and it works well for voxelization. Don't forget that you have to take the opacity into account during filtering later on as well.

EDIT: And don't forget that you somehow have to alpha-blend multiple fragments that fall into one voxel as well.

Yes. Just like the traditional SVO approach, I use the diffuse texture's transparency for the voxel opacity, and I also implemented six-face anisotropic mipmapping. But the cone tracing result is still not good enough... I already realized that I should use associated (premultiplied-alpha) color for this, but I don't know what 'alpha-blend multiple fragments that fall into one voxel' exactly means. Would you be kind enough to show me more details? :)

I also noticed that The Tomorrow Children used a six-face surface voxelization method, which treats the mesh itself as an anisotropically voxelized (instead of isotropically voxelized) volume. They also pointed out that solid voxelization would greatly help the cone tracing. Does anybody have related experience with this approach?
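For what it's worth, the 'associated color' matters most when building the anisotropic mips: along each main axis the two child voxels are composited front-to-back with the 'over' operator before averaging across the perpendicular directions, as typically done for anisotropic voxel mips. A minimal GLSL sketch of that combine step (the function name is made up):

    // Front-to-back "over" compositing of two premultiplied-alpha voxels
    // along one main axis; the front voxel partially occludes the back one.
    vec4 compositeAlongAxis(vec4 front, vec4 back)
    {
        return front + (1.0 - front.a) * back;
    }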

Guess you could use coverage from multisampling?

Yes, I realize there may be a relationship between the MSAA coverage and the real occupancy of a voxel. But the question is: HOW to implement this? In fact, I already use 8x multisampling in my GPU voxelization implementation, but until now I have not worked out an effective way to derive the correct opacity (density) of a voxel from its 2D sub-pixel coverage.

In fact, what I'm really concerned with is IMPROVING the quality of voxel cone tracing. If traditional GPU voxelization is the best I can do, I'll go with it. Maybe I should focus on the cone tracing instead? What do you think? :ph34r:

I don't know about the technical MSAA details, but if you can read the number of covered samples in the pixel shader, the density would simply be surfaceAlpha * coveredSamples / 8.0.

If this works, you can accumulate density-weighted color samples: targetVoxel += vec4(surfaceRGB * density, density),

and in a final pass 'normalize' all voxels: voxel.xyz /= voxel.w; voxel.w = min(1.0, voxel.w);

This should give a good average if the voxel is covered by multiple surface samples, and also a transparent value if it's only partially covered.
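A minimal sketch of this as a voxelization fragment shader (GLSL 4.3+), assuming 8x MSAA and realizing the accumulation with atomic adds on integer images (in line with the atomic_add suggestion further down); all uniform and varying names are hypothetical:

    #version 430
    // Hypothetical accumulation targets: one fixed-point channel per image.
    layout(r32ui) coherent uniform uimage3D uAccumR;
    layout(r32ui) coherent uniform uimage3D uAccumG;
    layout(r32ui) coherent uniform uimage3D uAccumB;
    layout(r32ui) coherent uniform uimage3D uAccumW;

    uniform sampler2D uDiffuse;

    in vec3 vVoxelPos;   // voxel-space position from the geometry shader
    in vec2 vUV;

    void main()
    {
        vec4 surface = texture(uDiffuse, vUV);
        // Fraction of the 8 sub-pixel samples covered by this fragment.
        float coverage = float(bitCount(gl_SampleMaskIn[0])) / 8.0;
        float density  = surface.a * coverage;

        // Atomics only work on integer images, so accumulate the
        // density-weighted color as fixed point.
        ivec3 p = ivec3(vVoxelPos);
        imageAtomicAdd(uAccumR, p, uint(surface.r * density * 255.0 + 0.5));
        imageAtomicAdd(uAccumG, p, uint(surface.g * density * 255.0 + 0.5));
        imageAtomicAdd(uAccumB, p, uint(surface.b * density * 255.0 + 0.5));
        imageAtomicAdd(uAccumW, p, uint(density * 255.0 + 0.5));
    }

The final pass then divides the accumulated RGB by the accumulated weight, exactly as described above.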

The quality improvement would be less popping on dynamic geometry (of course you would still get better results from the binary approach by increasing the voxel resolution).

You would need to describe (or show) your 'big problem' in more detail.

There are issues with voxel tracing for GI - I don't think it will become a generic, practical solution suited to every kind of game.

I don't know about the technical MSAA details, but if you can read the number of covered samples in the pixel shader, the density would simply be surfaceAlpha * coveredSamples / 8.0. [...]

Hmmm... this idea sounds like coverage-to-alpha. Sorry for my late reply. I will give it a quick try and post a more detailed description of the problem and the results here. :)

I already realized that I should use associated (premultiplied-alpha) color for this, but I don't know what 'alpha-blend multiple fragments that fall into one voxel' exactly means. Would you be kind enough to show me more details? [...]

Like with traditional rendering, where multiple objects can end up covering shared pixels on screen, in the case of VCT you end up with a voxel that is occupied by more than one object - so which object's color do you store in the voxel? The answer is that you store both. You have to take their coverage of the voxel into account, or at least distribute uniformly over all objects falling into one voxel. That means if two objects fall into one voxel, the voxel color is objectA.rgb * objectA.a + objectB.rgb * objectB.a, and afterwards you normalize the result.

Since alpha blending would need the current state of your voxel texture in the fragment shader, you would sadly have to sample the texture you are currently rendering to - which is not possible. If you use other instructions, like imageStore, you can do a moving average instead; for example, take a look at this: https://rauwendaal.net/2013/02/07/glslrunningaverage/ . Please tell me if you need further advice.
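For reference, the core of the running average from the linked article looks roughly like this (GLSL, adapted here to a global image uniform). RGBA8 voxel values are packed into an r32ui image; the alpha channel counts the fragments averaged so far (so it is limited to 255), and the loop retries until no other thread has modified the voxel between our read and our compare-and-swap:

    layout(r32ui) coherent volatile uniform uimage3D uVoxels;

    vec4 convRGBA8ToVec4(uint val)
    {
        return vec4(float( val & 0x000000FFu),
                    float((val & 0x0000FF00u) >>  8u),
                    float((val & 0x00FF0000u) >> 16u),
                    float((val & 0xFF000000u) >> 24u));
    }

    uint convVec4ToRGBA8(vec4 val)
    {
        return (uint(val.w) & 0x000000FFu) << 24u |
               (uint(val.z) & 0x000000FFu) << 16u |
               (uint(val.y) & 0x000000FFu) <<  8u |
               (uint(val.x) & 0x000000FFu);
    }

    // val is expected with rgb in [0,1] and a == 1.0, so alpha accumulates
    // the number of blended fragments.
    void imageAtomicRGBA8Avg(ivec3 coords, vec4 val)
    {
        val.rgb *= 255.0;
        uint newVal = convVec4ToRGBA8(val);
        uint prevStoredVal = 0u;
        uint curStoredVal;
        // Loop as long as the destination got changed by another thread.
        while ((curStoredVal = imageAtomicCompSwap(uVoxels, coords,
                                                   prevStoredVal, newVal))
               != prevStoredVal)
        {
            prevStoredVal = curStoredVal;
            vec4 rval = convRGBA8ToVec4(curStoredVal);
            rval.xyz = rval.xyz * rval.w;   // denormalize (w = count so far)
            vec4 curValF = rval + val;      // add the new contribution
            curValF.xyz /= curValF.w;       // renormalize to the average
            newVal = convVec4ToRGBA8(curValF);
        }
    }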

Wouldn't it be faster to use just atomic_add operations, without a spinlock?

Downside is the need for uvec4 (instead of just RGBA8) voxelization data, so four times the memory.

But that memory could be packed to RGB8 in the final normalization stage and then reused for the next main axis.

(Additionally, RGB8 is too little precision anyway if you want to use emissive surfaces or multi-bounce integration.)
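A minimal sketch of that final normalization/packing stage as a compute shader, assuming the hypothetical fixed-point accumulation images from the fragment-shader sketch above:

    #version 430
    layout(local_size_x = 4, local_size_y = 4, local_size_z = 4) in;

    layout(r32ui) readonly  uniform uimage3D uAccumR;
    layout(r32ui) readonly  uniform uimage3D uAccumG;
    layout(r32ui) readonly  uniform uimage3D uAccumB;
    layout(r32ui) readonly  uniform uimage3D uAccumW;
    layout(rgba8) writeonly uniform image3D  uVoxelsOut;

    void main()
    {
        ivec3 p = ivec3(gl_GlobalInvocationID);
        float w = float(imageLoad(uAccumW, p).x);
        if (w == 0.0)
        {
            imageStore(uVoxelsOut, p, vec4(0.0));
            return;
        }

        // voxel.xyz /= voxel.w - the fixed-point scale cancels out here.
        vec3 rgb = vec3(imageLoad(uAccumR, p).x,
                        imageLoad(uAccumG, p).x,
                        imageLoad(uAccumB, p).x) / w;

        // voxel.w = min(1.0, voxel.w), undoing the *255 fixed-point encoding.
        float alpha = min(1.0, w / 255.0);
        imageStore(uVoxelsOut, p, vec4(rgb, alpha));
    }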

In the case of VCT you end up with a voxel that is occupied by more than one object - so which object's color do you store in the voxel? The answer is that you store both.

You suggest the same thing as I did, but I would not call this the 'true' answer.

Averaging by area gives nice filtered results, but the 'correct' thing would be to occlude along the selected main axis, so that the closer surface wins.

I don't know if this could be done per sub-pixel with MSAA samples, and it's probably not worth it anyway, but it's something to keep in mind.

Another reason for popping artefacts on rotating objects is the binning to one of the six main axes - we would need to add weighted samples to the three closest main axes to prevent this.

High cost, but if we use a large enough number of MSAA samples, all popping artefacts would be fixed.

And then we're left with the unsolvable problems of voxel cone tracing:

If two close thin walls fall into a single voxel, how do we calculate the lighting between them? Let's increase the voxel resolution.

But even then, a traced cone becomes wider with distance; it might not fit between the walls and will sample occluded space - and because it uses mipmaps, it does not profit from our increased voxel resolution. We have to increase the cone count and marching density as well - and we reach the point where traditional ray tracing becomes faster.

So in practice voxel cone tracing imposes restrictions on both performance and level design - I guess that's the reason it's not used that often yet.

Thanks for clearing that up, JoeJ. Do you speak from experience - did you implement voxelization with multisampling enabled?

Of course I don't want to give "the only true answer" - there are already so many possible solutions for all the downsides of VCT. From my experience, VCT is so resource-hungry that you want to try the cheapest solutions first. The cheapest solution would probably be this approach with the moving average: https://www.seas.upenn.edu/~pcozzi/OpenGLInsights/OpenGLInsights-SparseVoxelization.pdf (take a look at page 310). What JoeJ described in terms of vector packing is given as GLSL code in that article.

No, I experimented with voxel-based GI many years ago (CPU only), but I found it too slow and gave up in favor of a tree of disc-shaped samples covering just the surface.

According to the paper, going lock-free is 2-20(!) times faster, but this means the cheapest version does NOT use a moving average but simple accumulation instead.

For a moving average you need the spinlock anyway, independent of the data format (to calculate the average after every update); accumulation just adds everything up and divides to get the average afterwards.

