Questions about surfel

6 comments, last by zhangdoa 4 years, 9 months ago

Hi everyone here,

Hope you've had a great day writing something that shines at 60 FPS. :)

I've found a great talk about a GI solution: "Global Illumination in Tom Clancy's The Division", a GDC talk given by Ubisoft's Nikolay Stefanov. Everything looks nice, but I have some questions I can't resolve: what is the "surfel" he talks about, and how should a "surfel" be represented?

From what I've found searching around, there are only some academic papers, and they don't look close to my problem domain: the "surfel" those papers talk about uses points as the topology primitive rather than triangulated meshes. Are these "surfel"s the same terminology and concept?

At 10:55 he says they "store an explicit surfel list each probe 'sees'", which sounds the same as storing the surfel list of the first ray-casting hits from the probe in certain directions (which he mentions a few minutes later). I already have a similar probe-capturing stage in my engine's GI baking process: at each probe's position I render a G-buffer cubemap facing the six coordinate axes. But what I store in the cubemap is the rasterized texel data (world position, normal, albedo and so on), which is bounded by the cubemap's resolution. Even if I tagged some kind of surface ID during asset creation to mimic a "surfel", it still wouldn't transfer accurately to the "explicit surfel list each probe 'sees'" if I keep using the traditional cubemap approach. Do I need to ray cast on the CPU to get an accurate result?

Thanks for any kind of help.


A "surfel" is just a sample point located on the surface of a mesh. Imagine picking a bunch of points that lie on the triangles making up a mesh: each of those points is a surfel. Since they lie on a surface, a surfel can have an associated position, normal, albedo, roughness, etc., which you can determine by interpolating the triangle's vertices and/or sampling texture maps.
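To make that concrete, here's a minimal sketch (in Python, with made-up names; a real engine would do this on triangle buffers) of spawning one surfel by picking a uniformly distributed point on a triangle and interpolating its attributes barycentrically:

```python
import random

def sample_surfel(v0, v1, v2, n0, n1, n2):
    """Pick a uniformly distributed point on triangle (v0, v1, v2) and
    interpolate its attributes barycentrically; the result is one surfel."""
    r1, r2 = random.random(), random.random()
    # sqrt trick gives a uniform distribution over the triangle's area.
    s = r1 ** 0.5
    a, b, c = 1.0 - s, s * (1.0 - r2), s * r2
    position = tuple(a * p + b * q + c * r for p, q, r in zip(v0, v1, v2))
    normal   = tuple(a * p + b * q + c * r for p, q, r in zip(n0, n1, n2))
    # Albedo/roughness would be fetched the same way, or from texture maps.
    return {"position": position, "normal": normal}
```

The same barycentric weights would also drive UV interpolation for sampling albedo or roughness maps at the surfel's location.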

In The Division's case, they generate surfels by rasterizing cubemaps. Imagine each probe shooting out a bunch of rays in all directions and spawning a surfel wherever each ray hits something. That's basically what's happening, except that because you're rasterizing to cubemap faces, the "rays" are aligned to six grids. If you're rasterizing depth/normal/albedo/etc., then the surfel's attributes are already available in your G-buffer.
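As a rough illustration of what "rays aligned to six grids" means, here's one way (Python; the face/axis convention below is just one of several possible layouts, not necessarily what any particular API or engine uses) to map a cubemap texel back to the ray direction it stands in for:

```python
def texel_direction(face, x, y, size):
    """Map a cubemap texel (face index, pixel coords, face resolution)
    to its unit ray direction. Conventions vary; this is one layout."""
    # Pixel center mapped to [-1, 1] on the face plane.
    u = 2.0 * (x + 0.5) / size - 1.0
    v = 2.0 * (y + 0.5) / size - 1.0
    d = {
        0: ( 1.0,   -v,   -u),   # +X
        1: (-1.0,   -v,    u),   # -X
        2: (   u,  1.0,    v),   # +Y
        3: (   u, -1.0,   -v),   # -Y
        4: (   u,   -v,  1.0),   # +Z
        5: (  -u,   -v, -1.0),   # -Z
    }[face]
    n = sum(c * c for c in d) ** 0.5
    return tuple(c / n for c in d)
```

Each texel of the probe's G-buffer cubemap corresponds to one such direction, so "rasterize the cubemap, then read back the G-buffer" plays the role of the ray cast.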

The extra part they add on top of that is de-duplicating surfels seen by multiple probes: if two probes see the exact same point in 3D space, they collapse it down to one surfel. This saves computation, since they don't have to calculate lighting for the same surfel multiple times. Because of that de-duplication, they end up with a list of surfels per probe, where the list elements are surfel indices.
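A hedged sketch of that de-duplication step (Python; the talk doesn't prescribe a scheme, so the coarse-grid hashing and cell size here are my own assumptions): snap each hit point to a spatial cell, allocate one surfel per cell, and store per-probe lists of surfel indices:

```python
def deduplicate(per_probe_hits, cell_size):
    """Collapse hit points falling in the same spatial cell into one
    shared surfel; return unique surfels plus per-probe index lists."""
    surfels = []      # unique surfel positions
    index_of = {}     # grid-cell key -> surfel index
    probe_lists = []
    for hits in per_probe_hits:
        indices = []
        for p in hits:
            key = tuple(int(c // cell_size) for c in p)
            if key not in index_of:
                index_of[key] = len(surfels)
                surfels.append(p)
            indices.append(index_of[key])
        probe_lists.append(indices)
    return surfels, probe_lists
```

With this layout, lighting is computed once per entry in `surfels`, and each probe just gathers its own `probe_lists` entries.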

1 hour ago, MJP said:


Thanks for the explanation, MJP.

If a surfel is a sample point, how do I eliminate the sample-rate/accuracy problem when sampling via cubemaps? A texture has limited resolution, so if two probes see the same geometry from different distances, they could "see the same exact point" but might also produce different surfels for the same region/triangle of the geometry. Or should I not worry about surfel duplication at this level?
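One way to put numbers on this concern (a back-of-envelope estimate, not anything from the talk): a cubemap face covers a 90° frustum, so at distance d the face spans roughly 2d of world space, and one texel of an N×N face covers roughly 2d/N at the face center. Any merge tolerance therefore has to scale with the farthest probe's texel footprint:

```python
def texel_footprint(distance, face_resolution):
    """Approximate world-space width of one cubemap texel at the face
    center: a 90-degree face spans 2 * distance across, divided into
    face_resolution texels."""
    return 2.0 * distance / face_resolution

near = texel_footprint(2.0, 32)  # probe 2 m from a wall -> 0.125 m per texel
far  = texel_footprint(8.0, 32)  # probe 8 m from the same wall -> 0.5 m per texel
# A merge cell smaller than `far` can leave the two probes with
# distinct surfels for the same patch of wall.
```

So with cubemap-based capture the duplication question becomes a tolerance question: merge points closer than the worst-case texel footprint, at the cost of some positional accuracy.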

Surfel de-duplication is definitely a good idea; shading becomes nigh on half the cost here. There's another paper that builds a bit on that first one, though I forget whether it says exactly how it does de-duplication or just glosses over it:

https://morgan3d.github.io/articles/2019-04-01-ddgi/

Either way, it also has a neat hack for reducing light leaking, which is otherwise a big concern when using probes for GI. And since I'm on a roll: it's better to store the resulting probes using ambient dice:

https://www.ppsloan.org/publications/AmbientDice.pdf

which provide better results for memory/performance than other bases, and can be used for specular approximation without costly ray tracing:

https://torust.me/2019/06/25/ambient-dice-specular.html

12 hours ago, Frantic PonE said:


Thanks, Frantic PonE.

I've heard about DDGI before, but since it depends on GPU ray tracing I didn't evaluate it further. Ambient dice look like a nice alternative to, and an advance on, the traditional SH9 or HL2 approach; I'll give them a try once I've finished the GI pipeline. Thanks for the references!

On 7/23/2019 at 4:34 AM, zhangdoa said:


Yeah, the way it uses raytracing is a bit silly and arbitrary, like they had to fit it in somehow because the research is sponsored by Nvidia.

But thinking about it, a flat surfel list/G-buffer list could be built just by "dilating" the scene texels: choose super-low-res mipmaps so sample points are far likelier to overlap. That's the lazy, hacky way to do it; I'm sure there's some much cleverer neighborhood-sorting scheme.

On 7/25/2019 at 6:47 AM, Frantic PonE said:


Now that I've implemented the whole geometry-data baking process the original talk introduced, I've observed that the surfel overlap rate is quite tightly coupled to probe location and density, but the de-duplication process takes an acceptable time if we only eliminate duplicates within a finite area. I've also implemented (part of) an SVOGI module before, and it makes me think some sort of voxelization might also work well for the offline geometry-data baking, now that we've been staring at the surfel approach for some days.

This topic is closed to new replies.
