Hi guys.
As you know, translucency is a big topic and can be really expensive to simulate, so we prebake certain things, like a 'local thickness map'. This map basically stores, for each surface point, the distance to the point behind it (i.e. the thickness of the object at that point), and it can be mapped locally onto a mesh, like this one:
The offline approach described by the paper reads as follows:
To streamline this, we rely on a normal-inverted computation of Ambient Occlusion (AO), which can be done offline (using your favorite modeling and rendering software) and stored in a texture. Since ambient occlusion determines how much environmental light arrives at a surface point, we use this information for the inside of the shape (since we flipped the normals), which basically averages all the light transport happening inside the shape.
Though I'm not fully sure what exactly they mean. Are they implying that it's a brute-force AO computation (ray tracing) with inverted normals?
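If it is the brute-force reading, I imagine it looks roughly like this: flip the normal, Monte-Carlo sample rays over the hemisphere around the flipped normal, intersect them with the mesh, and average a distance-weighted occlusion term (near hits mean a thick, occluded interior). This is only my guess at the approach, not the paper's code; `baked_thickness` and the abstract `hit_fn` (returning the hit distance or `None`) are hypothetical names.

```python
import math
import random

def sample_hemisphere(normal, rng):
    """Uniform direction on the hemisphere around `normal` (rejection sampling)."""
    while True:
        d = [rng.uniform(-1.0, 1.0) for _ in range(3)]
        length = math.sqrt(sum(v * v for v in d))
        if 1e-6 < length <= 1.0:
            d = [v / length for v in d]
            if sum(d[i] * normal[i] for i in range(3)) < 0.0:
                d = [-v for v in d]  # flip into the hemisphere
            return d

def baked_thickness(point, normal, hit_fn, max_dist, n_rays=256, seed=0):
    """Average 'inside occlusion' at one texel: flip the normal, shoot rays,
    and weight hits by distance (a near hit contributes more occlusion)."""
    rng = random.Random(seed)
    inv_n = [-v for v in normal]
    occ = 0.0
    for _ in range(n_rays):
        d = sample_hemisphere(inv_n, rng)
        t = hit_fn(point, d)           # distance to the far surface, or None
        if t is not None and t < max_dist:
            occ += 1.0 - t / max_dist  # near hit -> more occlusion / thicker
    return occ / n_rays                # 0 = thin, 1 = thick (invert before storing)

# Toy check: if every interior ray hits geometry immediately, occlusion saturates.
print(baked_thickness((0, 0, 0), (0, 0, 1), lambda p, d: 0.0, max_dist=1.0))  # → 1.0
```

The stored map would then be `1 - occ` (or however the paper normalizes it) so that thin regions let more light through.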
So my question really is: what approaches can I take to bake this map?
You reached the bottom, thanks for your time!