Styves · Member Since 17 Dec 2009
Topics I've Started
15 October 2010 - 11:52 PM
Thought I'd stop by and post something that I've been working on for a while now: a skin shader that aims to produce realistic faces while maintaining high performance and avoiding texture/image-space diffusion. I'm running this in CryEngine2, so I'm pretty limited in what I can do: I can't do lightmap passes, so texture-space diffusion is out of the question (and it's slow anyway), and I can't add custom post-process stages, which rules out image-space diffusion as well.
So for those of you who are interested in a cheap alternative to diffusion approximations and the like, read on!
I've been working around these limits by approximating the core concepts of various rendering models (basically just creating layers and faking blurs heh). Here's a render from a few days ago (a few things have changed but the shader is mostly the same):
Head model available here.
I'm using the Kelemen/Szirmay-Kalos BRDF as proposed by NVIDIA. I'm using 4 terms for my specular model (instead of the 1 used on that page), using 4 Beckmann distributions (saved to a texture) and weighting them at the end with a simple dot product.
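The actual shader is HLSL/Cg inside CryEngine2, so here's only a rough Python sketch of the math described above (the roughness values and weights are made up, and Fresnel is omitted for brevity):

```python
import math

def beckmann(ndoth, m):
    # Unnormalized Beckmann distribution (no 1/pi factor); in the shader
    # this would be precomputed into a lookup texture.
    alpha = math.acos(min(ndoth, 1.0))
    t = math.tan(alpha)
    return math.exp(-(t * t) / (m * m)) / (m * m * ndoth ** 4)

def specular_4term(ndoth, ndotl, roughnesses, weights):
    # Four Beckmann lobes combined with a weighted sum - the "simple dot
    # product" of the four lobe values against the four weights.
    ph = sum(w * beckmann(ndoth, m) for m, w in zip(roughnesses, weights))
    return max(ph, 0.0) * max(ndotl, 0.0)
```

In the real shader the weighted sum is literally a `dot()` of a float4 of lobe samples against a float4 of weights, which is why it stays cheap.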
For the actual subsurface scattering I do a mix of things. I use a multi-layer system (like those found in popular modeling programs like Maya or 3ds Max, for example the MentalRay FastSkin shader). I also use blended normals for local normal-based diffusion (Naughty Dog's "Character Lighting and Shading") to get some nice scattering effects, along with some of my own little tweaks (a closer RGB relationship in blended normals to keep red bleeding lower for a pleasing look).
The texture maps for each layer (subdermal and epidermal) can be done by hand, or they can be generated by the shader (merely a few simple color operations on the diffuse texture, and sampling a mip-map for the subdermal texture for a blurry effect). The overall tint can be changed through the shader's settings. The approximated textures generated by the shader are good enough for regular use, but pre-made textures have the benefit of having more control and artistic freedom. The model used in the example images uses a generated epidermal map and a pre-made subdermal texture (for back scattering, see below).
Texture Maps (yes, my translucency texture sucks :P):
For backscattering (yes, the shader supports back scattering, although it's primitive and will probably need changing eventually) I simply light the model using a wrapped negative NdotL value (opposite of the light) and mask it using a translucency texture (placed in the alpha channel of the subdermal texture). It's not physically correct but it does the job just fine. I'd love to get translucent shadow maps working but limitations are stopping that from happening and I haven't found any other decent alternatives yet (if someone knows of any or has any ideas, please let me know! :D).
Back scattering (mind the jittery shadows):
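The wrapped negative NdotL trick above can be sketched like this in Python (a stand-in for the shader code; the `wrap` value is illustrative):

```python
def saturate(x):
    # Clamp to [0, 1], as the HLSL saturate() intrinsic does.
    return min(max(x, 0.0), 1.0)

def back_scatter(ndotl, translucency, wrap=0.5):
    # Light the surface with the *negative* NdotL (the side facing away
    # from the light), wrapped so it falls off softly past the silhouette,
    # then mask by the translucency map (alpha of the subdermal texture).
    back = saturate((-ndotl + wrap) / (1.0 + wrap))
    return back * translucency
```

Faces pointing directly away from the light (NdotL = -1) get the full translucency contribution; faces pointing at the light get none.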
For each layer I calculate a different lighting result. For the diffuse texture, I simply calculate blended normals as usual, using a standard NdotL. For the epidermal layer, I do the same but with a slightly more wrapped NdotL, and for the subdermal layer I use an even more wrapped NdotL. This simulates the layers being "blurred" so that, when weighted together at the end of the shader, they create a proper subsurface effect. The weights for each layer can be controlled in the shader's properties.
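The per-layer wrapping can be sketched like so (Python stand-in; the wrap amounts and layer weights are made-up defaults, not the shader's actual values):

```python
def wrap_diffuse(ndotl, wrap):
    # Wrapped Lambert: wrap=0 is standard NdotL, larger values let light
    # "bleed" past the terminator, faking a blurrier layer.
    return max((ndotl + wrap) / (1.0 + wrap), 0.0)

def layered_diffuse(ndotl, weights=(0.5, 0.3, 0.2), wraps=(0.0, 0.3, 0.6)):
    # Diffuse, epidermal and subdermal terms with increasing wrap,
    # weighted together at the end of the shader.
    return sum(w * wrap_diffuse(ndotl, k) for w, k in zip(weights, wraps))
```

Because the weights sum to 1, a surface facing the light still receives full intensity; only the falloff toward the terminator changes per layer.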
To properly scatter shadows I do something very similar to blended normals. For each layer, after blending the normals, I blend between a hard shadow and a soft shadow, allowing a red bleed to occur on the shadows. The strength of sharpening and softening for each channel is changed per layer (diffuse is very hard, epidermal is softer and subdermal is very soft). Rather than performing blurs on the shadow maps, which can hurt performance, I cheat... big time. I simply run pow() on each channel (green & blue are merged in both blended normals and blended shadows), giving a harder shadow. This hardens and shrinks the radius of the green/blue channels while the red channel remains nice and smooth. The result is desaturated a bit so it doesn't come off as too red. Since I'm doing this in CryEngine2, I decided to pre-blur the shadow maps, since jittering can get ugly after hardening the shadows (it's merely a screen texture, so blurring it only needs to be done once). I'm doing a mere 4-tap blur (4 texture samples surrounding the original). I've had some good results with this method but it still needs a bit of work.
Bleeding shadows (100% diffuse vs 100% epidermal/subdermal (no diffuse))
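The pow() cheat described above boils down to something like this (Python sketch; `hardness_gb` is an illustrative per-layer parameter, and the desaturation step is left out):

```python
def bleed_shadow(shadow, hardness_gb):
    # 'shadow' is a soft (pre-blurred) shadow term in [0, 1].
    # Raising the merged green/blue channel to a power > 1 hardens and
    # shrinks it, while red keeps the original soft falloff - so red
    # "bleeds" out of the shadow edge without ever blurring the shadow map.
    r = shadow
    gb = shadow ** hardness_gb
    return (r, gb, gb)
```

A half-lit pixel shows the effect directly: with a hardness of 2 the penumbra keeps 50% red but only 25% green/blue, which reads as a reddish shadow edge.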
The final effects are detail bump mapping (basically a tiled detail normal map layered over the original to give the impression of more detail), rim lighting and the option to change the skin's melanin amount. These are pretty standard effects really (the melanin shader just lerps between 1 and the diffuse texture multiplied by its luminance, using a melanin value).
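As described, the melanin control is a single lerp; a Python sketch (the Rec. 601 luma weights are my assumption, not necessarily what the shader uses):

```python
def lerp(a, b, t):
    return a + (b - a) * t

def apply_melanin(diffuse, melanin):
    # Per-channel lerp between 1.0 and diffuse * luminance, controlled by
    # a melanin amount in [0, 1]; the result would multiply the diffuse
    # term, darkening and saturating the skin as melanin rises.
    lum = 0.299 * diffuse[0] + 0.587 * diffuse[1] + 0.114 * diffuse[2]
    return tuple(lerp(1.0, c * lum, melanin) for c in diffuse)
```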
Here's a comparison between standard Lambertian lighting with a physically based Blinn-Phong specular (Naty Hoffman), the skin shader without SSS, and then with SSS.
Performance is great: it's about the same as a standard fully-featured material shader, so you probably wouldn't notice much impact if you swapped it in for, say, a metal shader.
Compared to texture-space diffusion, which is orders of magnitude slower than this, I'd say I've hit a good mark. It's not as physically accurate as NVIDIA's Head Demo, but looking at the results and the simplicity of the shader, I think I'll be able to close the gap eventually. I hope to push forward and obtain something more correct, but I think this is a good step forward.
To anyone who read: thanks for reading. ;) Here's a last minute screen-capture.
[Edited by - Styves on October 18, 2010 2:22:46 AM]
07 September 2010 - 05:51 PM
About a week or two ago I got into D3D programming, as I'll soon be having an interview overseas, and some D3D experience would be quite helpful (it'll increase my odds of getting the job).
Anyway, I've got some pretty good stuff done so far. The basics are mostly done (models, texturing, lighting/shading, sound, input, you name it). Hell, I even have some POM working as well as some other features. I just recently got some post-processing in too, namely, tonemapping/bloom. Which is where my problem comes in.
I'm having a problem with downscaling. I've tried cutting down the resolution of the bloom textures, but instead of rendering the full scene to a downscaled image, it's only rendering a small piece.
For example, let's say my scene is 1280x720 and I'm trying to downscale by 4x. Rather than getting a 320x180 texture of the whole scene, I'm getting a texture containing only the first 320x180 pixels of the original. How do I go about solving this?
I'm rendering it all at full resolution for now, which, suffice to say, is pretty slow. I'm really hoping someone will have at least part of the answer as to why it's not working.
I think the issue might stem from the way I'm rendering the bloom. I think I'm doing the same thing as the HDRFormats sample in the DXSDK, but correct me if I'm wrong. Here's what I'm doing:
- Render the scene to a texture.
- Set to a new render target (so another texture - bright pass).
- Render a quad with the scene texture and apply the bright pass shader.
And for each layer of bloom, I do this:
- Set to another render target (bloom horizontal).
- Render a quad with the bright pass texture and apply the bloom shader.
- Set to another render target (bloom vertical).
- Render a quad with the horizontal bloom texture and apply the vertical bloom shader.
And then I finally render a quad using a tonemapping shader which combines the scene with the bloom results. I might be doing this wrong and it might be why I'm only getting a small chunk of the frame when trying to downscale, but I'm unsure.
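To make the intended result concrete, here's a CPU-side numpy sketch of the pass sequence above (arrays stand in for render targets; the threshold and blur radius are made up). Note what a correct downscale produces: a small image covering the *whole* source, i.e. each output pixel averages a 4x4 block, rather than a crop of the top-left corner.

```python
import numpy as np

def downscale4(img):
    # Average 4x4 blocks: the downscale quad's texcoords must span the
    # entire source texture, not just its first 320x180 pixels.
    h, w = img.shape[:2]
    crop = img[:h - h % 4, :w - w % 4]
    return crop.reshape(h // 4, 4, w // 4, 4, -1).mean(axis=(1, 3))

def bright_pass(img, threshold=0.8):
    # Keep only the energy above the threshold.
    return np.maximum(img - threshold, 0.0)

def blur_separable(img, radius=2):
    # Box blur, horizontal then vertical (stand-in for the two bloom
    # passes); np.roll wraps at the edges, fine for a sketch.
    k = 2 * radius + 1
    out = sum(np.roll(img, s, axis=1) for s in range(-radius, radius + 1)) / k
    return sum(np.roll(out, s, axis=0) for s in range(-radius, radius + 1)) / k
```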
So here are my final questions:
1. If my method is poor, what's a better way of doing it? Details, tutorial links, snippets, anything you have to help explain it would be hot. (I'm still a noob, don't forget. ;P)
2. If my method isn't so poor, then what do you think is the cause of my problem?
Your help is greatly appreciated.
24 August 2010 - 05:21 PM
I've got something I'd like to share with you, along with implementation details and results. I'm sure some of you with a deferred renderer will appreciate this.
It's a new method of post-processing anti-aliasing that I finished today. I haven't thought of a name for it yet (if anyone has any ideas, I'd be infinitely grateful). - (I found a name, see bottom of post) -
Anyway, Here's how it works:
1. Gather samples around the current pixel for edge detection. I use 8, one for each neighboring pixel (up, down, left, right and the diagonals); I initially used 3 but that wasn't enough.
2. Use them to create a normal-map image of the entire screen. This is the edge detection. How you do this is entirely up to you. Scroll down the page to see some sample code of how I do it.
3. Average 5 samples of the scene using the new normal image as offset coordinates (quincunx pattern).
As simple as it sounds, it actually works extremely well in most cases, as seen below. Images rendered with CryEngine2.
Left is post-AA, right is no AA. Image scaled by 200%.
Gif animation comparing the two. Also scaled by 200%.
Here's the heart of my algorithm, the edge detection.
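In lieu of the original shader snippet, here's a rough numpy sketch of steps 1-3 (central-difference gradients stand in for the 8-sample edge detection, and `strength` is an illustrative parameter, not the shader's):

```python
import numpy as np

def luminance(img):
    # Rec. 601 luma weights.
    return img[..., 0] * 0.299 + img[..., 1] * 0.587 + img[..., 2] * 0.114

def nfaa(img, strength=1.0):
    lum = luminance(img)
    # Luminance gradients act as the per-pixel "normal map" offset;
    # flat regions produce zero offset and pass through untouched.
    gx = (np.roll(lum, -1, axis=1) - np.roll(lum, 1, axis=1)) * strength
    gy = (np.roll(lum, -1, axis=0) - np.roll(lum, 1, axis=0)) * strength
    h, w = lum.shape
    ys, xs = np.mgrid[0:h, 0:w]
    out = np.zeros_like(img)
    # Quincunx: the centre tap plus four diagonal taps pushed along the offset.
    for dy, dx in ((0, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)):
        sy = np.clip(np.round(ys + dy * gy).astype(int), 0, h - 1)
        sx = np.clip(np.round(xs + dx * gx).astype(int), 0, w - 1)
        out += img[sy, sx]
    return out / 5.0
```

In a real shader the offsets stay fractional and bilinear filtering does the averaging, which is where the softening on textures comes from.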
Pros:
- Easily implemented: it's just a post effect.
- Pretty fast, mostly texture-fetch heavy (8 samples for edge detection, 5 for the scene). < 1ms on a 9800GTX+ at 1920x800.
- Can solve aliasing on textures, lighting, shadows, etc.
- Flexible: in a deferred renderer it can be applied to any buffer you want, so you can use real AA on the main buffer and post-AA on things like lighting.
- Works on shader model 2.0.
- Since it's merely a post-process pixel shader, it scales very well with resolution.
Cons:
- Can soften the image due to filtering textures (double-edged sword).
- Inconsistent results (some edges appear similar to 16x MSAA while others look like 2x).
- Can cause artifacts with too high a normal-map strength/radius.
- Doesn't account for "walking jaggies" (temporal aliasing) and can't compensate for sub-pixel detail (like poles or wires in the distance), since that would require creating detail out of thin air.
Notes on the cons and how they might possibly be fixed:
- Image softening can be solved by using a depth-based approach instead.
- Inconsistent results and artifacts can probably be solved with higher sample counts.
- The last one will require some form of temporal anti-aliasing (check Crytek's method in "Reaching the Speed of Light" if you want to do it via post-processing as well).
That's it. Use the idea as you want, but if you get a job or something because of it, let 'em know I exist. ;)
Edit: New name = Normal Filter AA, because we're doing a normal filter for the edge detection. Less confusing than the previous name. :)
[Edited by - Styves on September 3, 2010 1:41:37 AM]
27 January 2010 - 07:00 PM