About CryZe

  1. Metallic in UE 4

    Non-metallic materials are achromatic most of the time and thus don't tint the reflections in any way. In their implementation, the more metallic the material is, the more its reflections are colored by the diffuse color. Also, the more metallic the material is, the more strongly it reflects light specularly.
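A minimal CPU-side sketch of that blend, assuming a simple metallic workflow where the dielectric f0 is a fixed 0.04 (the names and the 0.04 constant are illustrative, not UE4's actual code):

```python
def lerp(a, b, t):
    """Component-wise linear interpolation between two 3-tuples."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def metallic_split(base_color, metallic, dielectric_f0=0.04):
    """Split a base color into diffuse albedo and specular f0.

    As metallic goes to 1, the diffuse term vanishes and the
    specular reflectance takes on the (chromatic) base color,
    so reflections become tinted by it.
    """
    diffuse = lerp(base_color, (0.0, 0.0, 0.0), metallic)
    specular_f0 = lerp((dielectric_f0,) * 3, base_color, metallic)
    return diffuse, specular_f0
```

For a fully metallic material the diffuse term is black and the specular reflectance equals the base color; for a non-metal the specular f0 stays a small achromatic constant.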
  2. Deferred SSDO

    I had the same idea some time ago as well, but actually there's no need to encode the screen-space approximated visibility function into an SH basis. They do it because they need to pass the function from one pass to the next (DSSDO -> lighting). It would be much better to just sample the screen space along the direction of the light and lerp the light's color with the occluder's color based on how confident the algorithm is that the occluder is actually occluding the light. This way there's no need for any spherical harmonics, as you have the full screen-space approximated visibility function (technically it's more than just a visibility function, as you also get chromatic information), and you can plug it straight into your rendering equation.
  3. The way it works with the geometry shader is that you project each vertex inside the geometry shader onto the different faces and then output the resulting vertices with an SV_RenderTargetArrayIndex semantic, which tells the rasterizer which face the projected vertex you are outputting belongs to.
  4. No need for random directions - just use spherical coordinates (and if your BRDF is isotropic, you only need the elevation). Every time you use dot(L, N) in your shader, use cos(theta), and so on. Then you can just integrate it numerically in Mathematica or something.   I don't have a source for this, but I *think* this approach actually introduces a bias towards the poles, assuming an unstratified sampling pattern. As long as you keep the following in mind, you can use it without any problem: [eqn]\int_{\Omega} (\mathrm{n}\cdot \omega) \; \mathrm{d}\omega=\int_\theta \int_\phi \cos(\theta)\sin(\theta) \; \mathrm{d}\phi \; \mathrm{d}\theta[/eqn]
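The sin(theta) Jacobian in that equation is easy to sanity-check numerically. A sketch with plain midpoint-rule integration (resolution and tolerance chosen arbitrarily):

```python
import math

def integrate_hemisphere(f, n_theta=256, n_phi=256):
    """Midpoint-rule integral of f(theta, phi) over the upper hemisphere,
    including the sin(theta) area element from spherical coordinates."""
    total = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * (math.pi / 2) / n_theta
        for j in range(n_phi):
            phi = (j + 0.5) * (2 * math.pi) / n_phi
            total += f(theta, phi) * math.sin(theta)
    return total * (math.pi / 2 / n_theta) * (2 * math.pi / n_phi)

# The cosine lobe integral from the equation above; analytically it is pi.
cosine_integral = integrate_hemisphere(lambda theta, phi: math.cos(theta))
```

Dropping the sin(theta) factor is exactly the pole bias mentioned above: samples taken uniformly in (theta, phi) bunch up near theta = 0 unless each one is weighted by sin(theta).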
  5. I guess that's the problem. The thing is that it should include the view direction as well, since Fresnel's law also applies when light scatters out of the surface towards the viewer, not just when light scatters into the surface. Just take a look at section 5.3 of the paper: their diffuse model applies their modified Fresnel term twice, once for the view direction and once for the light direction.
  6. Not quite. If you assume that your surface is perfectly flat, your microfacet distribution, as you mentioned, returns 1 where N=H and 0 where N!=H, so your BRDF only returns a nonzero value when N=H. But H is still calculated from L and V, so LDotH and VDotH are still equal (that's always the case, since H is the halfway vector). Your BRDF requires H to be N to return a nonzero value, and if that's the case, LDotH equals LDotN and VDotH equals VDotN. And since LDotH and VDotH are equal, LDotN and VDotN are equal as well. Thus, Helmholtz reciprocity still applies in this case.     Yeah, I don't know. Something is really weird in the situation Hodgman pointed out :(
  7. I think Helmholtz reciprocity doesn't apply to diffuse light at all, because diffuse light actually is the same as subsurface scattering, just at such a small scale that one can approximate it by evaluating it at the entrance point. Diffuse light is the light scattering inside the surface, which is simply specular reflection happening thousands of times inside the surface. Helmholtz reciprocity isn't supposed to hold for this process as a whole, because it's not just a single reflection. But it works for all the little specular reflections inside the material and for the "macro" specular reflection on the surface.   And it works for Cook-Torrance: the halfway vector is the same whether you calculate it from (L+V)/length(L+V) or (V+L)/length(V+L), and thus the microfacet distribution function returns the same value, since it relies on NDotH. Fresnel relies on LDotH, but that's the same as VDotH. And the geometric term is the product of a "sub geometric term" evaluated once for NDotV and once for NDotL, and since scalar multiplication is commutative, the result is the same whether you swap NDotV and NDotL or not. The same applies to the NDotL * NDotV in the denominator of Cook-Torrance.
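That symmetry argument can be checked numerically. Below is a sketch of a Cook-Torrance-style specular BRDF (Blinn-Phong NDF, Schlick Fresnel, Cook-Torrance geometric term; the constants and vectors are illustrative, not from any post above), evaluated with L and V swapped:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cook_torrance(n, l, v, f0=0.04, ns=32.0):
    """Scalar Cook-Torrance-style specular BRDF (illustrative constants)."""
    h = normalize(tuple(a + b for a, b in zip(l, v)))
    ndoth = max(dot(n, h), 0.0)
    ndotl = max(dot(n, l), 1e-6)
    ndotv = max(dot(n, v), 1e-6)
    vdoth = max(dot(v, h), 1e-6)
    d = (ns + 2) / (2 * math.pi) * ndoth ** ns            # Blinn-Phong NDF
    f = f0 + (1 - f0) * (1 - vdoth) ** 5                  # Schlick Fresnel
    g = min(1.0, 2 * ndoth * min(ndotv, ndotl) / vdoth)   # Cook-Torrance G
    return d * f * g / (4 * ndotl * ndotv)

n = (0.0, 0.0, 1.0)
l = normalize((0.3, 0.1, 0.8))
v = normalize((-0.5, 0.2, 0.6))
# Swapping L and V leaves the value unchanged, exactly as argued above:
# H is symmetric in L and V, LDotH = VDotH, and min/multiplication commute.
```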
  8.   Yep, the BRDF needs both the PI and the division by NDotL, while the implementation in a standard shader doesn't need those.
  9. How to make a shiny effect

    Recent Mario games make heavy use of rim lighting, which also seems to be multiplied with a "rim reflectance texture". It looks like they might also use some environment maps to create this look.   I actually doubt that they are using any traditional lighting methods at all.
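A common form of such a rim term (a guess at the general technique, not Nintendo's actual shader; the exponent and reflectance parameter are placeholders) is a power of the complement of N dot V, scaled by a per-texel rim reflectance:

```python
def rim_term(n_dot_v, power=4.0, rim_reflectance=1.0):
    """Rim lighting: brightest at grazing angles (n_dot_v near 0),
    zero where the surface faces the viewer (n_dot_v near 1)."""
    n_dot_v = min(max(n_dot_v, 0.0), 1.0)  # saturate
    return rim_reflectance * (1.0 - n_dot_v) ** power
```

Multiplying this by a texture lets artists paint where the rim glow appears, which matches the "rim reflectance texture" idea above.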
  10. Cook-Torrance / BRDF General

    The only difference between my GGX and yours is that I'm using the Walter GGX geometric term, which is derived with Smith's "black box" method for converting any distribution function into a perfectly matching geometric term. The thing is that the Cook-Torrance geometric term is more or less completely absurd and unrealistic: shadowing and masking should depend on the microfacet distribution, but the Cook-Torrance geometric term completely ignores it. That's why you get that unrealistic cut at 45 degrees. I'd recommend you take a look at Naty Hoffman's presentation and Disney's presentation. You should also check out Disney's BRDF Explorer and the MERL database: with the BRDF Explorer, you can validate how well your BRDF matches actual materials from the MERL database.    Also, your GGX is the correctly normalized version ;)
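A sketch of the GGX distribution and the matching Smith G1 term from Walter et al., with a numerical check of the NDF normalization (the integral of D(m)(n.m) over the hemisphere should be 1 for a correctly normalized distribution; parameter names are mine):

```python
import math

def ggx_d(cos_theta_h, alpha):
    """GGX / Trowbridge-Reitz normal distribution function."""
    c2 = cos_theta_h * cos_theta_h
    denom = c2 * (alpha * alpha - 1.0) + 1.0
    return alpha * alpha / (math.pi * denom * denom)

def smith_g1_ggx(cos_theta, alpha):
    """Walter's Smith G1 term matching the GGX distribution."""
    c2 = cos_theta * cos_theta
    tan2 = (1.0 - c2) / max(c2, 1e-12)
    return 2.0 / (1.0 + math.sqrt(1.0 + alpha * alpha * tan2))

def ndf_normalization(alpha, n=4096):
    """Integrate D(m) (n.m) over the hemisphere; ~1 if D is normalized."""
    dtheta = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dtheta
        total += ggx_d(math.cos(t), alpha) * math.cos(t) * math.sin(t) * dtheta
    return total * 2.0 * math.pi
```

The same check, run against an unnormalized variant, is a quick way to tell which of two GGX formulations is the "correctly normalized version" mentioned above.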
  11. Cook-Torrance / BRDF General

    I've approximated the diffuse transmittance integral and created a BRDF which is pretty lightweight but also fairly physically accurate. Use this instead of Lambert if you want proper energy conservation. It's based on GGX roughness though, so you might need to convert your roughness to GGX roughness. It's actually just a single MAD instruction per light if you implement it; the rest can be done on a per-pixel basis.
  12. Cook-Torrance / BRDF General

    You can improve this part of the code this way:

    float g_min = min(NdotV, NdotL);
    float G = saturate(2 * NdotH * g_min / VdotH);

    Also, don't ever use max(0, dot(a, b)). Instead use saturate(dot(a, b)), which compiles into a single instruction. It is, but is using the complement of Fspecular actually a good one? I don't think so (unless you're using Fdiffuse). I think someone should approximate a diffuse BRDF using the equation I posted above; that would be a much better approximation.

    To get back to the original topic: the last one is the correct Cook-Torrance microfacet model. Sometimes you find (ns + 2) / (2 * pi) or (ns + 2) / (8 * pi) as the normalization factor for Blinn-Phong. The second one is already pre-multiplied with the 1/4, while the first one is the distribution function for the microfacet model. And this one is the correct Beckmann NDF. I wouldn't recommend the Beckmann NDF though; it's pretty damn slow in comparison to other NDFs because of the 2 reciprocals and the exponential function. (Y u no use GGX xD)

    This is the BRDF I'm using: GGX as the distribution function, Schlick's approximation as the Fresnel term, and Walter's geometric term for the GGX distribution function. I color-coded everything for implementation details: the grey parts are just parts of the BRDF and don't need to be implemented, the green parts can be calculated once per pixel, and the red parts are the only parts that actually need to be calculated per light.
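The (ns + 2) / (2 * pi) factor can be verified the same way as any NDF normalization: it is exactly what makes the Blinn-Phong distribution integrate to 1 against the projected-area measure. A quick numerical sketch:

```python
import math

def blinn_phong_d(cos_theta_h, ns):
    """Normalized Blinn-Phong NDF: (ns + 2) / (2 pi) * cos(theta_h)^ns."""
    return (ns + 2.0) / (2.0 * math.pi) * cos_theta_h ** ns

def check_normalization(ns, n=8192):
    """Integrate D(h) (n.h) over the hemisphere; should come out ~1."""
    dtheta = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dtheta
        total += blinn_phong_d(math.cos(t), ns) * math.cos(t) * math.sin(t) * dtheta
    return total * 2.0 * math.pi
```

With (ns + 2) / (8 * pi) instead, the same integral comes out to 1/4, which is the pre-multiplied 1/(4 NdotL NdotV) denominator mentioned above folded into the normalization.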
  13. Cook-Torrance / BRDF General

    Tiago, that's an approximation even with the real Fresnel equations, but it is not really correct. Fresnel depends on the microfacets oriented towards the halfway vector, but diffuse is actually all the light that is not reflected, not absorbed, and scattered back out, independent of the microfacets oriented towards the halfway vector. So simply taking the complement of a single Fresnel value won't do it; you would need to solve the integral over all the microfacet orientations with a modified microfacet model. Your approximation might actually be worse than having no factor for diffuse at all. If anything, I'd use the macro surface normal instead of the halfway vector (just for diffuse though; for specular you should use the microfacet normal).
  14. Shading metals

    I know this website, I've used it for quite some time :)   I just wanted to let you know that you need to work with all 3 channels and the complex numbers. Like I said, you could do it per pixel, per vertex, or per draw call on the CPU and simply input f0 (and maybe also cf0) into the shader as constants. It depends on where you get your values: if n2Red, n2Green, n2Blue were stored in a texture, you would have to do it per pixel, or you bake your f0 values into a texture and use the baked texture instead. But if you can do it per draw call, go ahead and calculate it on the CPU.   Yes, that should be enough. You should worry more about the microfacets though: Blinn-Phong is not such a good distribution term for metals, and even more important is the shadowing and masking of the microfacets - the geometry / visibility term.
  15. Shading metals

    Oh wait, you want to do metals with the Fresnel equations? If that's the case, your formula from the other thread won't work. Metals usually have complex indices of refraction (complex numbers). Fresnel's equations do work with complex numbers; your implementation just doesn't. You need complex multiplication, complex addition, and the absolute value (which you didn't implement) needs to work with complex numbers as well. Also, since metals have chromatic reflections, you would have to calculate your Fresnel term for all 3 color channels. I'd use Schlick's approximation, reduce most of it to constant time, and reduce other parts of the formula to scalar calculations, so that only the necessary parts get calculated for all color channels.

    Here's approximately how that code should look. You should probably calculate the constant part per vertex or per draw call on the CPU, if possible:

    float2 f0CmplxRed   = cmplxDiv(cmplxSub(n1Red,   n2Red),   cmplxAdd(n1Red,   n2Red));
    float2 f0CmplxGreen = cmplxDiv(cmplxSub(n1Green, n2Green), cmplxAdd(n1Green, n2Green));
    float2 f0CmplxBlue  = cmplxDiv(cmplxSub(n1Blue,  n2Blue),  cmplxAdd(n1Blue,  n2Blue));

    float3 f0Sqrt = 0;
    f0Sqrt.r = cmplxAbs(f0CmplxRed);
    f0Sqrt.g = cmplxAbs(f0CmplxGreen);
    f0Sqrt.b = cmplxAbs(f0CmplxBlue);

    float3 f0  = f0Sqrt * f0Sqrt;
    float3 cf0 = 1 - f0;

    foreach (light)
    {
        float factor = pow(1 - dot(L, H), 5);
        float3 fresnel = f0 + cf0 * factor;
    }
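The constant part of that shader maps directly onto Python's built-in complex arithmetic, which makes it easy to prototype the CPU-side precomputation. A sketch (the gold-like per-channel IOR values are illustrative placeholders, not measured data):

```python
def f0_from_ior(n1, n2):
    """Normal-incidence reflectance f0 = |(n1 - n2) / (n1 + n2)|^2.

    n1 and n2 may be complex; metals have complex indices of
    refraction, and abs() on a complex number gives its magnitude.
    """
    r = (n1 - n2) / (n1 + n2)
    return abs(r) ** 2

def schlick(f0, l_dot_h):
    """Schlick's approximation, evaluated per light with the constant f0."""
    return f0 + (1.0 - f0) * (1.0 - l_dot_h) ** 5

# Per-channel complex IOR for a gold-like metal (illustrative values),
# with air (n1 = 1) as the incident medium.
n2 = {"r": 0.27 + 2.95j, "g": 0.42 + 2.34j, "b": 1.47 + 1.83j}
f0 = {ch: f0_from_ior(1.0 + 0.0j, n) for ch, n in n2.items()}
```

Once f0 is baked out per channel like this, only the scalar Schlick factor needs to be evaluated per light, mirroring the f0/cf0 split in the shader above.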