
Doing SubSurfaceScattering / backlight


So we want to use SSS for materials like wax, candles, snow, and jelly, but also for thin objects like plant leaves or curtains. And of course, skin.
 
Papers with lots of formulas are too hard for me, plus I don't have time to read them all in detail. But from what I understand, SSS in general boils down to:
 
* Blurring radiance maps (diffuse light)
** To get less harsh shadows
** Either in screen space or texture space
** Result = blurred diffuse + specular
* For backlighting, render the object inverted (reversed culling) and shade the back sides
** Compare the light's shadow map with the eye depth map to get the theoretical distance a ray travels through the object
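The depth-comparison idea in that last bullet can be sketched in a few lines. This is rough pseudo-Python, assuming both depths are linear distances from the light along the same ray (the function name is made up):

```python
def estimate_thickness(shadow_map_depth, pixel_depth_from_light):
    """Distance a light ray travels inside the object before reaching
    the shaded pixel, via the shadow-map comparison described above.

    shadow_map_depth:       depth of the first light-facing surface,
                            stored in the light's shadow map
    pixel_depth_from_light: distance from the light to the pixel
                            currently being shaded (the far side)
    """
    # Clamp to zero: a pixel in front of the stored depth is fully lit.
    return max(pixel_depth_from_light - shadow_map_depth, 0.0)
```

The thickness then drives how much light is assumed to be transmitted through the object.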
 
 
I got it... globally at least. But there are a couple of things I miss or don't understand, and in terms of actually implementing it, backlight in particular seems really expensive. In my case only a few objects actually need SSS, and many of the lights don't have shadow maps, or are baked into a static lightmap / probes. Well, on to the questions:
 
 
* When/where/how to blur?
I started gauss-blurring the diffuse parts of the screen wherever there are "SSS pixels". The blur radius depends on camera distance: it increases as we get closer. Then the final composition simply does
result := blurredDiffuse + specular (not blurred!).
 
Which obviously... sucks. It looks, well, blurry. Softer, yes, but also less detailed. I could just as well blur the albedo map, or soften the normals.
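For reference, the naive approach above boils down to something like this sketch (Python pseudocode; the distance scaling is my own assumption about keeping a roughly constant world-space blur footprint, not a known-good formula):

```python
def blur_radius_px(base_radius_px, camera_distance, reference_distance=1.0):
    # Screen-space radius grows as the camera gets closer, so the blur
    # covers a roughly constant world-space area.
    return base_radius_px * reference_distance / max(camera_distance, 1e-4)

def compose(blurred_diffuse, specular):
    # Final composition as described: blur only the diffuse term,
    # keep specular sharp.
    return [d + s for d, s in zip(blurred_diffuse, specular)]
```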
 
Shouldn't we involve the normals / view angles? To decide the blur radius, and/or to mix between the blurred and original (harsh) diffuse map? Looking at Unreal, where I just made a very basic SSS material, it doesn't seem to do any blurring (or it's so subtle that I can't see it). Instead, it looks more like some sort of partial view-angle-dependent ambient term: unshaded parts get greenish if I set the "SSS tint" to green. This generates a nice transition from lit to SSS-color (instead of black) at the edges. In code terms, something like this:
lightInfluence = saturate( dot( normal, lightVector ) );
lightDiffuse   = lightInfluence * lightColor;

// Check if we are looking straight at the light
lookingAtSrc = saturate( dot( eyeToLightVector, -lightVector ) );
// Wrap it with some magic numbers
lookingAtSrc = 0.25 + lookingAtSrc * 0.75;

// Not sure if this really happens, but translucency seems to reduce where the light hits directly
// ...or the direct light is just so much stronger that the SSS contribution becomes hardly visible
SSSresult = (translucentColorRGB * lookingAtSrc) * (1.0 - lightInfluence);

// Compose result
result = lightDiffuse + SSSresult;

But this is just some fake magic. My gut says Unreal is doing something more sophisticated.

 
 
 
* Color tints / layers
Reading about skin, there seems to be a shift towards red, as that wavelength penetrates further. One way to fake that is to multiply the blurred diffuse with a (per-material) color parameter. But again, when to do so? What are the parameters or values that tell you to use more or less tint and/or blur?
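The "multiply by a tint" fake could look like this sketch (Python pseudocode; the `strength` parameter and the lerp-towards-tint formulation are my own assumptions, e.g. driven per material or by curvature):

```python
def tint_blurred_diffuse(blurred_diffuse, tint_rgb, strength):
    """Shift the blurred diffuse toward a material tint (e.g. red for
    skin). strength in [0,1]: 0 leaves the color untouched, 1 fully
    multiplies by the tint."""
    return [d * (1.0 - strength + strength * t)
            for d, t in zip(blurred_diffuse, tint_rgb)]
```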
 
 
* Fake backlight
Ears, fingertips, the sides of the nose and nostrils are typically more reddish - because light penetrates here. But do games really calculate backlight / compare shadow maps to see where rays travel through? Because it's A: expensive, and B: doesn't involve ambient/indirect light. It may work if you have the sun as one dominant light, but indoor scenes with lots of weak non-shadowcasting lights would be a nightmare.
 
Or do they simply do some sort of cheap sheen effect? Or use an artist-made "thickness texture" to mix between the original and tinted/blurred diffuse textures? I guess a mixture of both, but just checking.
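The thickness-texture idea would be a plain lerp, something like this (Python pseudocode; the [0,1] convention where 0 means thin is my assumption):

```python
def mix(a, b, t):
    # Standard linear interpolation, like HLSL's lerp().
    return a + (b - a) * t

def backlit_color(diffuse, tinted_blurred, thickness):
    """Blend between the tinted/blurred term and the plain diffuse
    using an artist-authored thickness in [0,1] (0 = thin). Thin
    regions show more of the tinted back-light."""
    return [mix(tb, d, thickness)
            for d, tb in zip(diffuse, tinted_blurred)]
```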
 
 
* Better backlight
For thin materials like curtains or plant leaves, backlight is extra important. I've seen how they did it cheaply for the vegetation in Crysis (1), but again, it didn't really involve indirect light. In my case an important portion of the light is baked into lightmaps and probes.
 
How about:
1- Render translucent stuff inverted (backfaces)
2- Apply lighting on it as usual, but diffuse-only (and of course you can use pre-calculated probes or lightmaps as well)
3- Write the Z values of the inverted geometry into a buffer, used for thickness calculations later on
...
Now you could simply add the back results to the front. But I guess it's too bright, and also too sharp, as if it were glass. Again we need some blur here; the thicker the object, the more blur, until at some point light just can't penetrate.
 
4- Add the back result to the screen diffuse buffer (see first question) BEFORE BLURRING IT
5- The addition strength depends on the thickness comparison with the front depth. Thicker = less addition of the back / more internal scattering
6- Blur the buffer, apply your tint colors
7- Run your final shaders to compose the end result ((translucent) blurredDiffuse + specular), as asked above
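Step 5 above could use a Beer-Lambert-style exponential falloff, something like this sketch (Python pseudocode; the `sigma` extinction parameter is a hypothetical per-material knob, not part of the plan above):

```python
import math

def back_contribution(back_diffuse, thickness, sigma=1.0):
    """Attenuate the lit back-face result by an exponential falloff in
    thickness: thicker = less of the back side is added to the front
    (more internal scattering / absorption)."""
    attenuation = math.exp(-sigma * max(thickness, 0.0))
    return [c * attenuation for c in back_diffuse]
```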
 
I think it might work, but rendering & lighting everything twice in a dense jungle... Are there smarter tricks here?
 


Check out this paper & Demo

http://www.iryoku.com/separable-sss-released
https://www.cg.tuwien.ac.at/research/publications/2015/Jimenez_SSS_2015/Jimenez_SSS_2015-paper.pdf

For fake backlight you could precompute some textures to get object density.
At each texel position, trace rays inside the model to find distances to the backsides.
From that, compute some kind of inverted bent normal (or SH) and a density.
You can then sample the backlight at the current pixel position, but in the precomputed direction. The more density, the less light.
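The "more density, less light" part of that precompute might reduce to something like this (Python pseudocode; the mapping from average interior ray distance to a [0,1) density is an arbitrary choice of mine, just to make the idea concrete):

```python
def texel_density(ray_distances):
    """Offline precompute: given distances from a texel to the backside
    along many interior rays, derive a scalar density in [0, 1).
    Thin spots (short rays) give low density."""
    avg = sum(ray_distances) / len(ray_distances)
    return avg / (1.0 + avg)

def transmitted(light_rgb, density):
    # Runtime: the denser the material at this texel, the less
    # backlight makes it through.
    return [c * (1.0 - density) for c in light_rgb]
```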

Edit: here are the details (a slightly different approach):
http://colinbarrebrisebois.com/2011/03/07/gdc-2011-approximating-translucency-for-a-fast-cheap-and-convincing-subsurface-scattering-look/

The problem is that when working with shadow maps, self-occlusion will break it.
A fix would be to offset the sampling position by bent normal * precomputed backface distance, and to use multiple samples.
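That offset would look something like this (Python pseudocode; names are made up, and in practice you'd jitter this position for the multiple samples):

```python
def offset_sample_pos(world_pos, bent_normal, backface_distance):
    """Push the shadow-map sampling point through the object along the
    precomputed bent normal, so the front surface does not shadow
    itself when estimating transmitted light."""
    return [p + n * backface_distance
            for p, n in zip(world_pos, bent_normal)]
```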

 

There should be no problem when generating shadow maps from back faces.

Edited by JoeJ


The Head demo is very impressive, although I must say the ultra quality of the mesh and textures themselves is doing at least half the job. Even with all tricks disabled, it still looks good. And that's my main beef with these techniques: big effort for little result. Apart from cutscenes, how many times do you really come face-to-face with a (non-gasmask-wearing) character that close, without dropping dead within the next 100 milliseconds, allowing you to count the pimples?

 

At this point I have a blurred version of the screen available. But the question is whether I really need it (in case simpler tricks can do the job as well), and if so, how to mix between max-blurred and non-blurred pixels, given a normal, eye vector, et cetera. I bet the papers explain it, but I always have a hard time understanding them.

 

 

Personally I'm a bit more interested in environment usage. Curtains, plants, rubber hoses, chunks of ice, that kind of stuff. The Frostbite paper you showed (thanks!) seems better suited for that, at a low cost. Yet there are still a few problems:

 

* The precomputed thickness map is per object & doesn't work (well) for animated / morphing objects

Since I also want to use it for environment meshes, it would mean each mesh may need a unique map, as each of them has a unique shape. A bit expensive. I could bake the thickness into vertices though...

 

 

Speaking of which, is the (float) thickness value an average over the pixel-normal hemisphere? Or just the distance a ray travels straight down to the other side of the mesh, using the inverted normal as a direction?

 

 

* Outer walls don't have a backside

Say I have a room made of snow/ice. Or wax, whatever. Geometry-wise there is nothing behind the walls, thus no backlight, obviously. But I still want to give them that "SSS look" somehow... Here you have to use internal scattering again. Which brings me back to the original question - how? I could look into blurring, and/or do it the Unreal way, whatever that exactly is.

 

 

* Lightmaps / Ambient

Since ambient light is so important in my case, having backlight for realtime lights only would be a shame. Of course, with some artist smartness you can adapt the lights in your scene when there are translucent objects (which doesn't happen that often), but having the ability to involve the backlight coming from the lightmap somehow would be nice. Maybe it can be done like this:

 

- In the lighting pass (where you render spheres/cones to splat light on the screen - when using deferred rendering)...

- Render the backsides of translucent stuff, as if they were light volumes as well

- Incoming color = lightmap pixel; incoming direction = inverted normal, OR the global incoming direction baked into your lightmap, if you have it

- Perform the rest of the shader math, as described in the slideshow
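The per-pixel math in those steps might reduce to something like this (Python pseudocode; it's one plausible reading of the idea above, not an established technique - the linear thickness attenuation in particular is a placeholder):

```python
def lightmap_backlight(lightmap_rgb, inv_normal, incoming_dir, thickness):
    """Treat the baked lightmap sample as incoming back-side radiance:
    weight it by alignment between the inverted surface normal and the
    baked incoming direction, then attenuate by thickness in [0,1]."""
    ndotl = max(sum(a * b for a, b in zip(inv_normal, incoming_dir)), 0.0)
    attenuation = max(1.0 - thickness, 0.0)
    return [c * ndotl * attenuation for c in lightmap_rgb]
```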

 

 

 

Excuse me for the vague questions. I always write these on the fly, not really knowing where the real issue is ;)


* Precomputed thickness map is per object & doesn't work (well) for animated / morphing objects

I don't think it's a problem for skinned meshes, and morph targets could store their own thickness vertex map.
For the generation I would trace many rays in a hemisphere, possibly narrowed to 120 degrees or less,
with a cosine-weighted or uniform distribution... all adjustable in the editor.
Summing up all rays, weighted by some constant / ray length, gives the normal,
and averaging the weighted angles of all rays against this normal (giving a cone angle) can also be used to decide how much blur to apply at this location.
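A minimal sketch of that precompute in Python pseudocode - the 1/(1+length) weighting is one arbitrary choice for "some constant / ray length", meant to stay adjustable as you say:

```python
import math

def bent_normal_and_density(ray_dirs, ray_lengths):
    """Offline per-texel precompute: sum interior ray directions
    weighted so that short rays (close backsides) dominate, then
    normalise to get an inverted bent normal. The average ray length
    serves as a density/thickness value."""
    acc = [0.0, 0.0, 0.0]
    for d, length in zip(ray_dirs, ray_lengths):
        w = 1.0 / (1.0 + length)
        for i in range(3):
            acc[i] += d[i] * w
    norm = math.sqrt(sum(c * c for c in acc)) or 1.0
    bent = [c / norm for c in acc]
    density = sum(ray_lengths) / len(ray_lengths)
    return bent, density
```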

For blur, you can blur and mipmap the shadow map instead of the screen.

I assume it's a faster and better-quality approach than rendering backfaces and blurring, but I totally lack experience on the shader side :)

Hopefully you get some more answers...

Edited by JoeJ


Thanks again guys!

 

A nice & clear presentation you posted there, Styves! Maybe not perfect, but using a good old lookup texture / BRDF instead sounds a whole lot easier! The only little problem is that when using a deferred pipeline like I do, you also have to store which BRDF to use, assuming that besides skin we would have a couple more profiles. You could put all possible BRDF LUT textures into one big texture, and use one g-buffer parameter as an index/offset.
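The index/offset trick could be as simple as this (Python pseudocode; the vertical-strip atlas layout is just one assumed convention):

```python
def atlas_uv(u, v, profile_index, num_profiles):
    """Remap a per-profile LUT coordinate (u, v) in [0,1] into one big
    atlas texture holding all BRDF LUTs stacked vertically. The
    profile index comes from a g-buffer channel."""
    return u, (profile_index + v) / num_profiles
```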

 

For larger, not-so-curvy surfaces, pre-integrated shading won't do a whole lot, and neither does it give us backlight. But for that I'd like to give the Frostbite approach a try. That probably means each vertex will get two extra attributes: "curvature" (for fake SSS) and "thickness" (for backlight). The only thing that still won't work is backlight coming out of a lightmap or probe... Oh well, let's just make & check this out first!

 

 

Well, I'm informed now, thanks again!
