

Styves

Member Since 17 Dec 2009
-----

Topics I've Started

Real-time skin shader

15 October 2010 - 11:52 PM

Hey hey.

Thought I'd stop by and post something that I've been working on for the last while: a skin shader that aims to render realistic faces while maintaining high performance and avoiding texture-space and image-space diffusion. I'm running this in CryEngine2, so I'm pretty limited in what I can do: I can't do lightmap passes, so texture-space diffusion is out of the question (and it's slow), and I can't add custom post-process stages, so image-space diffusion is off the table too.

So for those of you who are interested in a cheap alternative to diffusion approximations and the like, read on!

I've been working around these limits by approximating the core concepts of various rendering models (basically just creating layers and faking blurs heh). Here's a render from a few days ago (a few things have changed but the shader is mostly the same):



Head model available here.

I'm using the Kelemen/Szirmay-Kalos BRDF as proposed by NVIDIA, but with 4 terms for my specular model (instead of the 1 used on that page): 4 Beckmann distributions saved to a texture and weighted at the end with a simple dot product.
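For anyone curious, here's a minimal HLSL sketch of that four-lobe specular term. It's not my exact shader: packing one Beckmann lobe per channel of the lookup texture, the lobe weights and the fixed F0 are just placeholder assumptions.

```hlsl
// Precomputed Beckmann lookup: each channel of beckmannTex holds one Beckmann
// distribution (a different roughness), indexed by N.H.
sampler2D beckmannTex;                               // assumed 1D lookup packed into a 2D texture
static const float4 lobeWeights = float4(0.4, 0.3, 0.2, 0.1);  // illustrative weights

float KelemenSzirmayKalosSpec(float3 N, float3 L, float3 V, float specIntensity)
{
    float3 H     = normalize(L + V);
    float  NdotH = saturate(dot(N, H));
    float  NdotL = saturate(dot(N, L));

    // Fetch all four Beckmann lobes in one tap and collapse them with a dot product.
    float4 lobes = tex2D(beckmannTex, float2(NdotH, 0.5));
    float  ph    = dot(lobes, lobeWeights);

    // Schlick Fresnel on H.V with a fixed F0 for skin (~0.028).
    float base = 1.0 - saturate(dot(V, H));
    float fres = 0.028 + (1.0 - 0.028) * pow(base, 5.0);

    // Kelemen/Szirmay-Kalos normalization: divide by |L + V|^2 (the unnormalized half vector).
    float spec = max(ph * fres / dot(L + V, L + V), 0.0);
    return spec * NdotL * specIntensity;
}
```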

Specularity:



For the actual subsurface scattering I do a mix of things. I use a multi-layer system (like those found in popular modeling programs such as Maya or 3ds Max, for example the MentalRay FastSkin shader). I also use blended normals for local normal-based diffusion (Naughty Dog, "Character Lighting and Shading") to get some nice scattering effects, as well as some of my own little tweaks (a tighter RGB relationship in the blended normals to keep red bleeding lower for a more pleasing look).
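A rough sketch of the per-channel blend is below. The blend amounts are made up for illustration, and keeping green/blue tied to the same softened normal is the "closer RGB relationship" tweak I mentioned.

```hlsl
// Per-channel blended-normals diffuse: red uses a heavily softened normal so it
// scatters further, while green/blue share a less-softened one.
float3 BlendedNormalsDiffuse(float3 bumpedN, float3 geomN, float3 L)
{
    // Blend the bumped normal toward the smooth geometric normal per channel group.
    float3 softN_R  = normalize(lerp(bumpedN, geomN, 0.7));   // illustrative amounts
    float3 softN_GB = normalize(lerp(bumpedN, geomN, 0.3));

    float diffR  = saturate(dot(softN_R,  L));
    float diffGB = saturate(dot(softN_GB, L));

    return float3(diffR, diffGB, diffGB);
}
```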

The texture maps for each layer (subdermal and epidermal) can be done by hand, or they can be generated by the shader (merely a few simple color operations on the diffuse texture, and sampling a mip-map for the subdermal texture for a blurry effect). The overall tint can be changed through the shader's settings. The approximated textures generated by the shader are good enough for regular use, but pre-made textures have the benefit of having more control and artistic freedom. The model used in the example images uses a generated epidermal map and a pre-made subdermal texture (for back scattering, see below).
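Roughly, the generated maps amount to something like this sketch. The tints and the mip level are placeholder values, not my actual settings.

```hlsl
// "Generated" layer maps: the subdermal map is a blurry, red-shifted version of
// the diffuse (sampled from a coarse mip), the epidermal map a slightly tinted copy.
void GenerateLayerMaps(sampler2D diffuseTex, float2 uv,
                       out float3 epidermal, out float3 subdermal)
{
    float3 diffuse = tex2D(diffuseTex, uv).rgb;

    // Blurry base for the subdermal layer: force a coarse mip level.
    float3 blurry = tex2Dlod(diffuseTex, float4(uv, 0.0, 4.0)).rgb;

    epidermal = diffuse * float3(0.95, 0.90, 0.85);            // slight warm shift
    subdermal = lerp(blurry, blurry * float3(1.0, 0.4, 0.3),   // push toward red
                     0.5);
}
```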

Texture Maps (yes, my translucency texture sucks :P):


For backscattering (yes, the shader supports back scattering, although it's primitive and will probably need changing eventually) I simply light the model using a wrapped negative NdotL value (opposite of the light) and mask it using a translucency texture (placed in the alpha channel of the subdermal texture). It's not physically correct but it does the job just fine. I'd love to get translucent shadow maps working but limitations are stopping that from happening and I haven't found any other decent alternatives yet (if someone knows of any or has any ideas, please let me know! :D).
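In shader terms it boils down to something like the sketch below; the wrap amount and the red tint are illustrative only.

```hlsl
// Back scattering: light the surface from "behind" with a wrapped negative N.L
// and mask it with the translucency map stored in the subdermal alpha channel.
float3 BackScatter(float3 N, float3 L, float3 lightColor, float translucencyMask)
{
    float wrap = 0.5;   // how far the transmitted light wraps around (example value)
    float back = saturate((-dot(N, L) + wrap) / (1.0 + wrap));

    // Tint the transmitted light red-ish and scale by the hand-painted mask.
    return back * translucencyMask * lightColor * float3(1.0, 0.3, 0.2);
}
```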

Back scattering (mind the jittery shadows):



For each layer I calculate a different lighting result. For the diffuse texture, I simply calculate blended normals as usual, using a standard NdotL. For the epidermal layer, I do the same but with a slightly more wrapped NdotL, and for the subdermal I use an even more wrapped NdotL. This is to simulate the layers being "blurred" so that when weighted together at the end of the shader, they create a proper subsurface effect. The weights for each layer can be controlled in the shader's properties.
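Here's a simplified sketch of the idea. It ignores the blended-normals part for brevity, and the wrap values and layer weights are just examples of the tweakables exposed in the material.

```hlsl
// Per-layer wrapped lighting: each layer uses a wider wrap to fake a larger
// blur radius, then the three results are weighted together.
float WrapLight(float3 N, float3 L, float wrap)
{
    return saturate((dot(N, L) + wrap) / (1.0 + wrap));
}

float3 CombineLayers(float3 N, float3 L,
                     float3 diffuseMap, float3 epidermalMap, float3 subdermalMap)
{
    float litDiffuse   = WrapLight(N, L, 0.0);   // sharp
    float litEpidermal = WrapLight(N, L, 0.3);   // softer
    float litSubdermal = WrapLight(N, L, 0.6);   // softest

    float3 w = float3(0.5, 0.3, 0.2);            // layer weights (example values)
    return diffuseMap   * litDiffuse   * w.x
         + epidermalMap * litEpidermal * w.y
         + subdermalMap * litSubdermal * w.z;
}
```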

To properly scatter shadows I do something very similar to blended normals. For each layer, after blending the normals, I blend between a hard shadow and a soft shadow, allowing a red bleed to occur in the shadows. The strength of the sharpening and softening for each channel changes per layer (diffuse is very hard, epidermal is softer and subdermal is very soft).

Rather than performing blurs on the shadow maps, which can hurt performance, I cheat... big time. I simply run pow() on each channel (green and blue are merged in both blended normals and blended shadows), giving a harder shadow. This hardens and shrinks the radius of the green/blue channels while the red channel remains nice and smooth. The result is desaturated a bit so it doesn't come off as too red. Since I'm doing this in CryEngine2, I decided to pre-blur the shadow maps, since jittering can get ugly after hardening the shadows (it's merely a screen texture, so blurring it only needs to be done once; I'm doing a mere 4-tap blur, i.e. 4 texture samples surrounding the original). I've had some good results using this method but it still needs a bit of work.
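A stripped-down sketch of the shadow trick is below. The pow() exponent is a per-layer setting in practice, and the desaturation amount here is only an example.

```hlsl
// Shadow "bleeding": shadowMask is the pre-blurred screen-space shadow term
// (0 = shadowed, 1 = lit). pow() hardens and shrinks the green/blue penumbra
// while red stays soft, then the result is slightly desaturated.
float3 ScatterShadow(float shadowMask, float hardnessGB)
{
    float r  = shadowMask;                      // keep red soft
    float gb = pow(shadowMask, hardnessGB);     // harden green/blue (merged)

    float3 s = float3(r, gb, gb);

    // Pull the red bleed back a bit so the shadow edge isn't too saturated.
    float lum = dot(s, float3(0.299, 0.587, 0.114));
    return lerp(float3(lum, lum, lum), s, 0.75);
}
```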

Bleeding shadows (100% diffuse vs 100% epidermal/subdermal (no diffuse))



The final effects are detail bump mapping (basically a tiled detail normal map layered over the original to give the impression of more detail), rim lighting, and the option to change the skin's melanin amount. These are pretty standard effects really (the melanin shader just lerps between 1 and the diffuse texture multiplied by its luminance, using a melanin value).
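The melanin tweak is basically just the sketch below; folding the result back into the albedo at the end is an assumption about how you'd combine it, not a requirement.

```hlsl
// Melanin control: lerp between 1 and (diffuse * luminance) by the melanin amount.
float3 ApplyMelanin(float3 diffuse, float melanin)
{
    float  lum  = dot(diffuse, float3(0.299, 0.587, 0.114));
    float3 tint = lerp(float3(1.0, 1.0, 1.0), diffuse * lum, melanin);

    // Fold the tint back into the albedo (assumed combination step).
    return diffuse * tint;
}
```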

Here's a comparison between standard Lambertian lighting with a physically based Blinn-Phong specular (Naty Hoffman), the skin shader without SSS, and then with SSS.



Performance is great: it's roughly the same cost as a standard fully-featured material shader, and you probably wouldn't notice much impact if you swapped it in for, say, a metal shader.

Compared to texture-space diffusion, which is orders of magnitude slower than this, I'd say I've hit a good mark. It's not as physically accurate as NVIDIA's Head Demo, but looking at the results and the simplicity of the shader, I think I'll be able to close the gap eventually. I hope to push forward and obtain something more correct, but I think this is a good step forward.

To anyone who read this far: thanks for reading. ;) Here's a last-minute screen capture.



[Edited by - Styves on October 18, 2010 2:22:46 AM]

Bloom Downscaling (D3D10)

07 September 2010 - 05:51 PM

Heyhey.

About a week or two ago I got into D3D programming, since I'll soon be having an interview overseas, and some D3D experience would be quite helpful (it'll increase my odds of getting the job).

Anyway, I've got some pretty good stuff done so far. The basics are mostly done (models, texturing, lighting/shading, sound, input, you name it). Hell, I even have some POM (parallax occlusion mapping) working, as well as some other features. I just recently got some post-processing in too, namely tonemapping/bloom, which is where my problem comes in.

I'm having a problem with downscaling. I've tried cutting down the resolution of the bloom textures, but instead of rendering the full scene to a downscaled image, it's only rendering a small piece.

For example, let's say my scene is 1280x720 and I'm trying to downscale by 4x. Rather than getting a 320x180 texture of the whole scene, I'm getting a texture containing only the first 320x180 pixels of the original. How do I go about solving this?

I'm rendering it all at full resolution for now, which, suffice it to say, is pretty slow. I'm really hoping someone will have at least part of the answer as to why it's not working.

I think the issue might stem from the way I'm rendering the bloom. I think I'm doing the same thing as the HDRFormats sample in the DXSDK, but correct me if I'm wrong. Here's what I'm doing:

- Render the scene to a texture.
- Set to a new render target (so another texture - bright pass).
- Render a quad with the scene texture and apply the bright pass shader.

And for each layer of bloom, I do this:
- Set to another render target (bloom horizontal).
- Render a quad with the bright pass texture and apply the bloom shader.
- Set to another render target (bloom vertical).
- Render a quad with the horizontal bloom texture and apply the vertical bloom shader.

And then I finally render a quad using a tonemapping shader which combines the scene with the bloom results. I might be doing this wrong and it might be why I'm only getting a small chunk of the frame when trying to downscale, but I'm unsure.
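To make the setup concrete, here's roughly the kind of downsample/bright-pass shader I have in mind, in D3D10-style HLSL. It's not my actual code; the names, the threshold and the 4-tap box filter are placeholders, and the quad's UVs are assumed to span the full [0,1] range of the source texture even though the target is quarter size.

```hlsl
// Combined downsample + bright-pass pixel shader (illustrative sketch).
Texture2D    sceneTex    : register(t0);
SamplerState linearClamp : register(s0);

cbuffer BloomParams : register(b0)
{
    float2 texelSize;        // 1.0 / source resolution
    float  brightThreshold;  // e.g. 1.0
};

float4 DownsampleBrightPS(float4 pos : SV_Position,
                          float2 uv  : TEXCOORD0) : SV_Target
{
    // 4-tap box filter around the pixel center of the smaller target.
    float3 c = 0;
    c += sceneTex.Sample(linearClamp, uv + texelSize * float2(-0.5, -0.5)).rgb;
    c += sceneTex.Sample(linearClamp, uv + texelSize * float2( 0.5, -0.5)).rgb;
    c += sceneTex.Sample(linearClamp, uv + texelSize * float2(-0.5,  0.5)).rgb;
    c += sceneTex.Sample(linearClamp, uv + texelSize * float2( 0.5,  0.5)).rgb;
    c *= 0.25;

    // Keep only the bright part for bloom.
    return float4(max(c - brightThreshold, float3(0.0, 0.0, 0.0)), 1.0);
}
```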

So here are my final questions:

1. If my method is poor, what's a better way of doing it? Details, tutorial links, snippets, anything you have to help explain it would be hot (I'm still a noob, don't forget. ;P)

2. If my method isn't so poor, then what do you think is the cause of my problem?

Your help is greatly appreciated.
- Styves

NFAA - A Post-Process Anti-Aliasing Filter (Results, Implementation Details).

24 August 2010 - 05:21 PM

Helloooo people of GameDev.net! How ya been?

I've got something I'd like to share with you, along with implementation details and results. I'm sure some of you with a deferred renderer will appreciate this.

It's a new method of post-processing anti-aliasing that I finished today. I haven't thought of a name for it yet (if anyone has any ideas, I'd be infinitely grateful). - (I found a name, see bottom of post) -

Anyway, Here's how it works:

1. Gather samples around the current pixel for edge detection. I use 8, one for each neighboring pixel (up, down, left, right, and the diagonals); I initially used 3 but that wasn't enough.
2. Use them to create a normal-map image of the entire screen. This is the edge detection. How you do this is entirely up to you. Scroll down the page to see some sample code of how I do it.
3. Average 5 samples of the scene, using the new normal image as offset coordinates (quincunx pattern).

As simple as it sounds, it actually works extremely well in most cases, as seen below. Images rendered with CryEngine2.

Left is post-AA, right is no AA. Image scaled by 200%.


Gif animation comparing the two. Also scaled by 200%.


Here's the heart of my algorithm, the edge detection.
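A minimal sketch of a pass along those lines is below. It's not my exact code: the Sobel-style weights, the sampler names and filterStrength are all illustrative.

```hlsl
// Edge detection + filter as described above: luminance differences between the
// 8 neighbours build a screen-space "normal" (edge direction/strength), which
// then offsets 5 scene taps in a quincunx pattern.
sampler2D sceneTex;
float2    pixelSize;        // 1.0 / screen resolution
float     filterStrength;   // e.g. 1.0 - 2.0

float Lum(float2 uv)
{
    return dot(tex2D(sceneTex, uv).rgb, float3(0.299, 0.587, 0.114));
}

float4 NFAA_PS(float2 uv : TEXCOORD0) : COLOR0
{
    // 8 neighbour taps for edge detection.
    float t  = Lum(uv + pixelSize * float2( 0, -1));
    float b  = Lum(uv + pixelSize * float2( 0,  1));
    float l  = Lum(uv + pixelSize * float2(-1,  0));
    float r  = Lum(uv + pixelSize * float2( 1,  0));
    float tl = Lum(uv + pixelSize * float2(-1, -1));
    float tr = Lum(uv + pixelSize * float2( 1, -1));
    float bl = Lum(uv + pixelSize * float2(-1,  1));
    float br = Lum(uv + pixelSize * float2( 1,  1));

    // Sobel-style gradients act as the "normal map" of the screen.
    float2 normal;
    normal.x = (tl + 2.0 * l + bl) - (tr + 2.0 * r + br);
    normal.y = (tl + 2.0 * t + tr) - (bl + 2.0 * b + br);
    normal  *= pixelSize * filterStrength;

    // 5 scene taps in a quincunx pattern, offset along the detected edge.
    float3 col = tex2D(sceneTex, uv).rgb;
    col += tex2D(sceneTex, uv + normal * float2( 0.5, -0.5)).rgb;
    col += tex2D(sceneTex, uv + normal * float2(-0.5,  0.5)).rgb;
    col += tex2D(sceneTex, uv + normal * float2( 0.5,  0.5)).rgb;
    col += tex2D(sceneTex, uv + normal * float2(-0.5, -0.5)).rgb;

    return float4(col / 5.0, 1.0);
}
```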


Pros:
- Easily implemented: it's just a post effect.
- Pretty fast, mostly texture-fetch heavy (8 samples for edge detection, 5 for scene). < 1ms on a 9800GTX+ at 1920x800.
- Can solve aliasing on textures, lighting, shadows, etc.
- Flexible: in a deferred renderer, it can be applied to any buffer you want, so you can use real AA on the main buffer and post-AA on things like lighting.
- Works on shader model 2.0.
- Since it's merely a post-process pixel shader, it scales very well with resolution.

Cons:
- Can soften the image due to filtering textures (double-edged sword).
- Inconsistent results (some edges appear similar to 16x MSAA while others look like 2x).
- Can cause artifacts with too high of a normal-map strength/radius.
- Doesn't account for "walking jaggies" (temporal aliasing) and can't compensate for details on a sub-pixel level (like poles or wires in the distance), since that requires creating detail out of thin air.

Notes on the cons and how they might possibly be fixed:
- Image softening can be solved by using a depth-based approach instead.
- Inconsistent results and artifacts can probably be solved with higher sample counts.
- The last one will require some form of temporal anti-aliasing (check Crytek's method in "Reaching the Speed of Light" if you want to do it via post-processing as well).

That's it. Use the idea as you want, but if you get a job or something because of it, let 'em know I exist. ;)

Edit: New name = Normal Filter AA, because we're doing a normal filter for the edge detection. Less confusing than the previous name. :)

[Edited by - Styves on September 3, 2010 1:41:37 AM]

Lens "Diffraction"/Anamorphic Lens Flares

27 January 2010 - 07:00 PM

I've been working on a lens flare shader recently (done entirely in post-processing). So far it looks great, but there's one thing I'm curious about, and that's "diffraction" (that's what it was called on Wikipedia anyway). Basically, this is what it looks like (image from Wikipedia). Colors aren't important, but that's the basic idea.

So here are my questions: What would it take to pull off this effect? Can it be done entirely in a pixel shader, or do I also need to edit the vertex shader? Can it even be done in image space? I think it can, but confirmation would be great. :)

My lens flares are calculated the "Masaki Kawase" way, by taking a sample of a glow/bloom buffer and inverting its coordinates with some scaling to produce a flare. I do this a few times to create the effect. I'm hoping I can do the same for this.

I'd also like to know if there's a more performance-friendly way of achieving an anamorphic flare, using the same technique above (or something similar anyway). Something like this:

Currently I'm using Masaki's technique for streaks, but that's both hard on performance (I have no source code access, so I can't change the resolution of the render target I'm using) and doesn't produce results that I'm pleased with.

Thoughts?

[Edited by - Styves on February 4, 2010 9:04:29 PM]
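For reference, the Kawase-style ghost pass I'm describing boils down to something like this sketch; the ghost count, scales and weights are placeholders.

```hlsl
// Ghost flares: sample the glow/bloom buffer at the pixel's position mirrored
// through the screen centre, scaled per ghost, and sum the results.
sampler2D glowTex;

float4 GhostFlaresPS(float2 uv : TEXCOORD0) : COLOR0
{
    // Flip around the screen centre (0.5, 0.5).
    float2 flipped = 1.0 - uv;

    float scales[4]  = { 0.5, 1.0, 1.6, 2.4 };    // example per-ghost scaling
    float weights[4] = { 0.5, 0.3, 0.15, 0.05 };

    float3 result = 0;
    for (int i = 0; i < 4; i++)
    {
        // Scale the flipped coordinate about the centre so the ghosts spread
        // along the line through the middle of the screen.
        float2 ghostUV = (flipped - 0.5) * scales[i] + 0.5;
        result += tex2D(glowTex, ghostUV).rgb * weights[i];
    }
    return float4(result, 1.0);
}
```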
