
# Chris_F

Member Since 04 Oct 2010

### #5042506 Fresnel equation

Posted by on 12 March 2013 - 05:18 PM

OK, I'm bumping this thread because I'm revisiting the Fresnel equation, this time using complex IOR values. I'm having a hard time converting this to complex numbers.

```glsl
float Fresnel(float CosThetaI, float n)
{
float CosThetaT = sqrt(max(0, 1 - (1 - CosThetaI * CosThetaI) / (n * n)));
float NCosThetaT = n * CosThetaT;
float NCosThetaI = n * CosThetaI;
float Rs = pow(abs((CosThetaI - NCosThetaT) / (CosThetaI + NCosThetaT)), 2);
float Rp = pow(abs((CosThetaT - NCosThetaI) / (CosThetaT + NCosThetaI)), 2);
return (Rs + Rp) / 2;
}
```

This is the basic formula, but I need to rewrite it so that it looks like this:

```glsl
vec3 Fresnel(float CosThetaI, vec3 n, vec3 k)
{
...
}
```

Where n and k make up the complex IOR (n + ki). I've taken a few stabs at it, but it's gotten me nowhere. Here is my train wreck of an attempt:

```glsl
vec3 Fresnel(float CosThetaI, vec3 n, vec3 k)
{
float temp = 1 - CosThetaI * CosThetaI;

vec3 NKSqr_real = n * n - k * k;
vec3 NKSqr_imag = n * k * 2;

vec3 temp2_real = (temp * NKSqr_real) / (NKSqr_real * NKSqr_real + NKSqr_imag * NKSqr_imag);
vec3 temp2_imag = -(temp * NKSqr_imag) / (NKSqr_real * NKSqr_real + NKSqr_imag * NKSqr_imag);

temp2_real = 1 - temp2_real;
temp2_imag = -temp2_imag;

vec3 CosThetaT_real = sqrt((temp2_real + sqrt(temp2_real * temp2_real + temp2_imag * temp2_imag)) / 2);
vec3 CosThetaT_imag = sign(temp2_imag) * sqrt((-temp2_real + sqrt(temp2_real * temp2_real + temp2_imag * temp2_imag)) / 2);

vec3 NCosThetaT_real = n * CosThetaT_real - k * CosThetaT_imag;
vec3 NCosThetaT_imag = k * CosThetaT_real + n * CosThetaT_imag;

vec3 NCosThetaI_real = n * CosThetaI;
vec3 NCosThetaI_imag = k * CosThetaI;

vec3 CosThetaI_minus_NCosThetaT_real = CosThetaI - NCosThetaT_real;
vec3 CosThetaI_minus_NCosThetaT_imag = -NCosThetaT_imag;

vec3 CosThetaI_plus_NCosThetaT_real = CosThetaI + NCosThetaT_real;
vec3 CosThetaI_plus_NCosThetaT_imag = NCosThetaT_imag;

vec3 a, b, c, d;

a = CosThetaI_minus_NCosThetaT_real;
b = CosThetaI_minus_NCosThetaT_imag;
c = CosThetaI_plus_NCosThetaT_real;
d = CosThetaI_plus_NCosThetaT_imag;

vec3 Rs_real = (a * c + b * d) / (c * c + d * d);
vec3 Rs_imag = (b * c - a * d) / (c * c + d * d);

vec3 Rs = sqrt(Rs_real * Rs_real + Rs_imag * Rs_imag);
Rs = Rs * Rs;

vec3 CosThetaT_minus_NCosThetaI_real = CosThetaT_real - NCosThetaI_real;
vec3 CosThetaT_minus_NCosThetaI_imag = CosThetaT_imag - NCosThetaI_imag;

vec3 CosThetaT_plus_NCosThetaI_real = CosThetaT_real + NCosThetaI_real;
vec3 CosThetaT_plus_NCosThetaI_imag = CosThetaT_imag + NCosThetaI_imag;

a = CosThetaT_minus_NCosThetaI_real;
b = CosThetaT_minus_NCosThetaI_imag;
c = CosThetaT_plus_NCosThetaI_real;
d = CosThetaT_plus_NCosThetaI_imag;

vec3 Rp_real = (a * c + b * d) / (c * c + d * d);
vec3 Rp_imag = (b * c - a * d) / (c * c + d * d);

vec3 Rp = sqrt(Rp_real * Rp_real + Rp_imag * Rp_imag);
Rp = Rp * Rp;

return (Rs + Rp) / 2;
}
```

It would be so much easier if HLSL/GLSL had first-class support for complex values.

EDIT:

Never mind. I managed to come up with this.

```glsl
vec2 CADD(vec2 a, vec2 b) { return a + b; }
vec2 CSUB(vec2 a, vec2 b) { return a - b; }
vec2 CMUL(vec2 a, vec2 b) { return vec2(a.x * b.x - a.y * b.y, a.y * b.x + a.x * b.y); }
vec2 CDIV(vec2 a, vec2 b) { return vec2((a.x * b.x + a.y * b.y) / (b.x * b.x + b.y * b.y), (a.y * b.x - a.x * b.y) / (b.x * b.x + b.y * b.y)); }
float CABS(vec2 a) { return sqrt(a.x * a.x + a.y * a.y); }
vec2 CSQRT(vec2 a) { return vec2(sqrt((a.x + sqrt(a.x * a.x + a.y * a.y)) / 2), sign(a.y) * sqrt((-a.x + sqrt(a.x * a.x + a.y * a.y)) / 2)); }

float _Fresnel(float _CosThetaI, vec2 n)
{
vec2 CosThetaI = vec2(_CosThetaI, 0);
vec2 CosThetaT = CSQRT(CSUB(vec2(1.0, 0), CDIV(CSUB(vec2(1.0, 0), CMUL(CosThetaI, CosThetaI)), CMUL(n, n))));
vec2 NCosThetaI = CMUL(n, CosThetaI);
vec2 NCosThetaT = CMUL(n, CosThetaT);
float Rs = pow(CABS(CDIV(CSUB(CosThetaI, NCosThetaT), CADD(CosThetaI, NCosThetaT))), 2);
float Rp = pow(CABS(CDIV(CSUB(CosThetaT, NCosThetaI), CADD(CosThetaT, NCosThetaI))), 2);
return (Rs + Rp) / 2;
}

vec3 Fresnel(float CosThetaI, vec3 n, vec3 k)
{
return vec3(
_Fresnel(CosThetaI, vec2(n.r, k.r)),
_Fresnel(CosThetaI, vec2(n.g, k.g)),
_Fresnel(CosThetaI, vec2(n.b, k.b))
);
}
```
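As a cross-check of the GLSL helpers above, the same computation can be done on the CPU in a language with native complex numbers. A minimal Python sketch (the function name is mine; `cmath.sqrt` takes the principal branch, which matches what `CSQRT` computes):

```python
import cmath

def fresnel_conductor(cos_theta_i, n, k):
    """Unpolarized Fresnel reflectance for a complex IOR (n + k*i)."""
    n_hat = complex(n, k)
    # Complex form of Snell's law; principal-branch square root.
    cos_theta_t = cmath.sqrt(1 - (1 - cos_theta_i ** 2) / (n_hat * n_hat))
    rs = (cos_theta_i - n_hat * cos_theta_t) / (cos_theta_i + n_hat * cos_theta_t)
    rp = (cos_theta_t - n_hat * cos_theta_i) / (cos_theta_t + n_hat * cos_theta_i)
    return (abs(rs) ** 2 + abs(rp) ** 2) / 2
```

With k = 0 this degenerates to the dielectric case, e.g. F = ((n-1)/(n+1))² = 0.04 at normal incidence for n = 1.5, which makes it easy to verify the shader against known values.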

### #5039318 Baking Ambient Occlusion maps and dynamic lighting...

Posted by on 04 March 2013 - 09:25 PM

If you keep the ambient occlusion separate from other things (e.g. diffuse) and only use it in the ambient portion of your lighting calculation, then it will look just fine.
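Schematically (a sketch with made-up names, not anyone's actual shader): the AO term attenuates only the ambient/indirect contribution, while direct lighting is left alone.

```python
def shade(albedo, ambient, direct, ao):
    """AO darkens only the ambient term; direct light passes through."""
    return albedo * (ambient * ao + direct)
```

So a fully occluded point (ao = 0) still receives its direct lighting, which is exactly why baked AO and dynamic lights can coexist.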

### #5038326 Your preferred or desired BRDF?

Posted by on 02 March 2013 - 02:28 AM

> I've been having an issue with different Cook-Torrance geometry terms though - most of these BRDFs are modulated by NdotV in some way, with the assumption that this can't be negative, else the fragment wouldn't be visible. However, in actual game scenes, this assumption doesn't hold! Simply picture a cube whose normals were generated without hard edges / with every face in the same smoothing group (or alternatively, picture a sphere that's been LOD'ed super-aggressively into a cube - same thing). In this case there's a huge number of fragments where NdotV will be negative, but simply cutting off the lighting for these fragments looks really unnatural.
> To get around these unnatural cut-offs in my game scenes, I've simply scaled/biased NdotV (and NdotL, to maintain reciprocity) into the 0-1 range right before doing the specular calculations, which produces a "wrapped geometry term" instead of one that becomes zero at the horizon...
> Has anyone else dealt with this issue?

Kelemen Szirmay-Kalos! Kelemen Szirmay-Kalos! Kelemen Szirmay-Kalos!

Okay, okay, I'll try and add some useful content later on. But seriously, it's designed to be a Cook-Torrance geometry term that doesn't suck. It succeeds.

The Kelemen Szirmay-Kalos visibility approximation is indeed very handy. My minimalistic Cook-Torrance uses it because it's dirt cheap and gives good results, but it suffers from the same shortcoming as the original Cook-Torrance geometry term: it does not take the roughness of the surface into account at all.
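For reference, the Kelemen Szirmay-Kalos trick replaces the whole G / (NdotL * NdotV) factor of Cook-Torrance with 1 / |L + V|², which for unit-length L and V is identical to 1 / (4 * LdotH²). A quick numeric check of that identity (Python, with arbitrarily chosen directions):

```python
import math

def normalize(v):
    s = math.sqrt(sum(x * x for x in v))
    return tuple(x / s for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

L = normalize((0.3, 0.5, 0.8))
V = normalize((-0.2, 0.1, 0.9))
LplusV = tuple(l + v for l, v in zip(L, V))
H = normalize(LplusV)

# Kelemen Szirmay-Kalos visibility, two equivalent forms:
ksk_a = 1.0 / dot(LplusV, LplusV)          # 1 / |L+V|^2
ksk_b = 1.0 / (4.0 * dot(L, H) ** 2)       # 1 / (4 * LdotH^2)
```

That is why it's so cheap: no roughness, no per-direction masking, just one dot product you already have.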

I'm still not sure about the way retroreflection is being handled. It seems to me that most natural materials display very little in the way of retroreflection, and mostly at grazing angles. This is captured by Oren-Nayar and GGX. Objects with very high levels of retroreflection are synthetic and consist of macroscopic corner reflectors, or similar. It would be nice if these could be modeled more accurately.

Path tracing of a flat surface made up of corner reflectors. L = V = ~45°

### #5036943 Universal OpenGL Version

Posted by on 26 February 2013 - 07:06 PM

There is a tradeoff between features and audience size. Raising the minimum system requirements gives you greater capabilities but may shrink your audience. What is more important to you, graphics fidelity or the broadest possible audience? If it's the former, go with OpenGL 4.3; if it's the latter, go with OpenGL 1.1; if it's somewhere in between... nobody can tell you what's best for your game. Are you making a FarmVille or a Crysis? What features do you feel you need to reach your artistic goals? Picking the minimum spec that gives you what you need is probably the best option.

### #5036603 Your preferred or desired BRDF?

Posted by on 25 February 2013 - 10:58 PM

> Wow, thanks! I notice that the math for that distribution is exactly equal to your previous GGX distribution when the two roughness parameters are equal too... does the original GGX paper define this aniso version?
>
> I'm still going to need some kind of retro-reflection hack (or an alternative physical BRDF) in my game so I can boost the effect right up for certain bits of paint and signage and... actual retro-reflector devices (like you put on your bicycle). You're right that there is a bit inherently in this BRDF, but it's mostly only at grazing angles, which is lost to N.L.
> A macro-scale retro-reflector like you put on your bike -- a collection of 45º angled "V"-shaped mirrored facets -- will direct almost all of the incoming light back towards the incident ray when lit from overhead, but performs worse at glancing angles, and it's this kind of behaviour that I'd ideally like to be able to model.

Here is my own hack. I think it works similarly to yours. The assumption is that the retroreflectiveness decreases at glancing angles.

```
analytic

::begin parameters
color Diffuse 1 0 0
color Specular 1 1 1
float DiffuseScale 0 1 0.5
float SpecularScale 0 0.999 .028
float RoughnessX 0.005 2 0.2
float RoughnessY 0.005 2 0.2
float RetroReflection 0 1 0
bool isotropic 1
::end parameters

float saturate(float x) { return clamp(x,0,1); }

vec3 BRDF( vec3 L, vec3 V, vec3 N, vec3 X, vec3 Y )
{
float PI = 3.1415926535897932;
vec3 Kd = Diffuse * DiffuseScale;
vec3 Ks = Specular * SpecularScale;

float ax = RoughnessX;
float ay = (isotropic) ? RoughnessX : RoughnessY;

vec3 H = normalize(L + V);
float NdotL = saturate(dot(N, L));
float NdotV = dot(N, V);
float NdotH = dot(N, H);
float LdotH = dot(L, H);
float LdotV = dot(L, V);
float HdotX = dot(H, X);
float HdotY = dot(H, Y);

float ax_2 = ax * ax;
float ay_2 = ay * ay;
float a_2 = (ax_2 + ay_2) / 2;
float NdotL_2 = NdotL * NdotL;
float NdotV_2 = NdotV * NdotV;
float NdotH_2 = NdotH * NdotH;
float LdotV_2 = LdotV * LdotV;
float HdotX_2 = HdotX * HdotX;
float HdotY_2 = HdotY * HdotY;
float OneMinusNdotL_2 = 1.0 - NdotL_2;
float OneMinusNdotV_2 = 1.0 - NdotV_2;

vec3 Fd = 1.0 - Ks;

float gamma = saturate(dot(V - N * NdotV, L - N * NdotL));
float A = 1.0 - 0.5 * (a_2 / (a_2 + 0.33));
float B = 0.45 * (a_2 / (a_2 + 0.09));
float C = sqrt(OneMinusNdotL_2 * OneMinusNdotV_2) / max(NdotL, NdotV);
float OrenNayar = A + B * gamma * C;

vec3 Rd = (Kd / PI) * Fd * OrenNayar;

float GGX_forward = 1.0 / (PI * ax * ay * pow(HdotX_2 / ax_2 + HdotY_2 / ay_2 + NdotH_2, 2.0));
float GGX_retro = a_2 / (PI * pow(LdotV_2 * (a_2 - 1.0) + 1.0, 2.0));

float G1_1 = 2.0 / (1.0 + sqrt(1.0 + a_2 * (OneMinusNdotL_2 / NdotL_2)));
float G1_2 = 2.0 / (1.0 + sqrt(1.0 + a_2 * (OneMinusNdotV_2 / NdotV_2)));
float G_Smith = G1_1 * G1_2;

float G_Retro = NdotV_2 * NdotL;

float DG = mix(GGX_forward * G_Smith, GGX_retro * G_Retro, RetroReflection);

vec3 Fs = Ks + Fd * exp(-6 * LdotH);

vec3 Rs = (DG * Fs) / (4 * NdotV * NdotL);

return Rd + Rs;
}

```

I hope to maybe figure out how to model retroreflection in a more physically accurate way, and to maybe explore if the Smith G can be tailored for the anisotropic version of the distribution.

### #5036404 Your preferred or desired BRDF?

Posted by on 25 February 2013 - 12:50 PM

If you are interested in a good overview of the semi-standard lighting models, take a look in the Lighting section of Programming Vertex...

Sorry for the intrusion on this thread. I have a question about the "cook_torrance" shader shown at that link.

```hlsl
float NdotH = saturate( dot( normal, half_vector ) );

...
if( ROUGHNESS_LOOK_UP == roughness_mode )
{
// texture coordinate is:
float2 tc = { NdotH, roughness_value };

// Remap the NdotH value to be 0.0-1.0
tc.x += 1.0f;
tc.x /= 2.0f;

// look up the coefficient from the texture:
roughness = texRoughness.Sample( sampRoughness, tc );
}
```

See the author's comments in the code. Is this a bug? saturate() already clamps the value to the 0.0-1.0 range, doesn't it?

This is indeed unnecessary, and it wouldn't be the first time I've seen a mistake or oversight on gpwiki. In any case, I think you can do a lot better than a Beckmann lookup texture: the Beckmann distribution is not expensive to calculate, and modern GPUs are limited by memory bandwidth, not instruction throughput. Lookup textures only make sense when they can replace a lot of expensive instructions.
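Evaluating the Beckmann NDF directly is just an exp(), a division, and a few multiplies. A sketch of the evaluation (Python for illustration; m is the roughness parameter):

```python
import math

def beckmann_d(n_dot_h, m):
    """Beckmann microfacet distribution D(h) for roughness m."""
    nh2 = n_dot_h * n_dot_h
    return math.exp((nh2 - 1.0) / (m * m * nh2)) / (math.pi * m * m * nh2 * nh2)
```

At NdotH = 1 this peaks at 1/(pi * m²), and it falls off rapidly as the half vector tilts away from the normal; there is nothing in it worth burning a texture fetch on.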

### #5036399 Your preferred or desired BRDF?

Posted by on 25 February 2013 - 12:29 PM

> The features that I think I need so far are: non-Lambertian diffuse, IOR/F(0º)/spec-mask, anisotropic roughness, metal/non-metal, retro-reflectiveness and translucency.
>
> I took Chris_F's BRDF containing Cook-Torrance/Schlick/GGX/Smith and Oren-Nayar, and re-implemented it with hacked support for anisotropy (based roughly on Ashikhmin-Shirley) and retroreflectivity.
>
> If both roughness factors are equal (or if the isotropic bool is true), then the distribution should be the same as GGX; otherwise it behaves a bit like Ashikhmin-Shirley. The distribution is no longer properly normalized when using anisotropic roughness, though.
>
> The retro-reflectivity is a complete hack and won't be energy conserving. When the retro-reflectivity factor is set to 0.5, you get two specular lobes -- a regular reflected one, and one reflected back at the light source -- without any attempt to split the energy between them. At 0 you just get the regular specular lobe, and at 1 you only get the retro-reflected one.
>
> BRDF Explorer file for anyone interested: http://pastebin.com/6ZpQGgpP
>
> Thanks again for sending me on a weekend BRDF exploration quest, Chris and Promit

Actually, it's a lot easier to convert it to anisotropic than that.

```
analytic

::begin parameters
color Diffuse 1 0 0
color Specular 1 1 1
float DiffuseScale 0 1 0.5
float SpecularScale 0 0.999 .028
float RoughnessX 0.005 2 0.2
float RoughnessY 0.005 2 0.2
bool isotropic 1
::end parameters

float saturate(float x) { return clamp(x,0,1); }

vec3 BRDF( vec3 L, vec3 V, vec3 N, vec3 X, vec3 Y )
{
float PI = 3.1415926535897932;
vec3 Kd = Diffuse * DiffuseScale;
vec3 Ks = Specular * SpecularScale;

float ax = RoughnessX;
float ay = (isotropic) ? RoughnessX : RoughnessY;

vec3 H = normalize(L + V);
float NdotL = saturate(dot(N, L));
float NdotV = dot(N, V);
float NdotH = dot(N, H);
float LdotH = dot(L, H);
float HdotX = dot(H, X);
float HdotY = dot(H, Y);

float ax_2 = ax * ax;
float ay_2 = ay * ay;
float a_2 = (ax_2 + ay_2) / 2;
float NdotL_2 = NdotL * NdotL;
float NdotV_2 = NdotV * NdotV;
float NdotH_2 = NdotH * NdotH;
float HdotX_2 = HdotX * HdotX;
float HdotY_2 = HdotY * HdotY;
float OneMinusNdotL_2 = 1.0 - NdotL_2;
float OneMinusNdotV_2 = 1.0 - NdotV_2;

vec3 Fd = 1.0 - Ks;

float gamma = saturate(dot(V - N * NdotV, L - N * NdotL));
float A = 1.0 - 0.5 * (a_2 / (a_2 + 0.33));
float B = 0.45 * (a_2 / (a_2 + 0.09));
float C = sqrt(OneMinusNdotL_2 * OneMinusNdotV_2) / max(NdotL, NdotV);
float OrenNayar = A + B * gamma * C;

vec3 Rd = (Kd / PI) * Fd * OrenNayar;

float D = 1.0 / (PI * ax * ay * pow(HdotX_2 / ax_2 + HdotY_2 / ay_2 + NdotH_2, 2.0));

vec3 Fs = Ks + Fd * exp(-6 * LdotH);

float G1_1 = 2.0 / (1.0 + sqrt(1.0 + a_2 * (OneMinusNdotL_2 / NdotL_2)));
float G1_2 = 2.0 / (1.0 + sqrt(1.0 + a_2 * (OneMinusNdotV_2 / NdotV_2)));
float G = G1_1 * G1_2;

vec3 Rs = (D * Fs * G) / (4 * NdotV * NdotL);

return Rd + Rs;
}

```

I left out the retro-reflection hack because this BRDF actually already exhibits a lot of retro-reflection. If you go to Image Slice in BRDF Explorer and look at the bottom edge, that is the retro part. This is probably a lot more physically plausible as far as retro-reflections go.

### #5036049 Your preferred or desired BRDF?

Posted by on 24 February 2013 - 04:20 AM

If you want to compare fresnel approximations you can use this:

```
analytic

::begin parameters
color Specular 1 1 1
float SpecularScale 0 0.999 .028
bool Schlick 0
::end parameters

vec3 Fresnel(float CosTheta, vec3 Ks)
{
vec3 n2 = (1.0 + sqrt(Ks)) / (1.0 - sqrt(Ks));
vec3 SinTheta = sqrt(1 - CosTheta * CosTheta);

vec3 SinThetaT = SinTheta / n2;
vec3 CosThetaT = sqrt(1 - SinThetaT * SinThetaT);

vec3 n2CosThetaT = n2 * CosThetaT;
vec3 n2CosTheta = n2 * CosTheta;

vec3 RsSqrt = (CosTheta - n2CosThetaT) / (CosTheta + n2CosThetaT);
vec3 Rs = RsSqrt * RsSqrt;

vec3 RpSqrt = (n2CosTheta - CosThetaT) / (n2CosTheta + CosThetaT);
vec3 Rp = RpSqrt * RpSqrt;

return (Rs + Rp) / 2;
}

vec3 BRDF( vec3 L, vec3 V, vec3 N, vec3 X, vec3 Y )
{
vec3 Ks = Specular * SpecularScale;

float NdotV = dot(N, V);

vec3 Full = Fresnel(NdotV, Ks);

vec3 Fs;

if(Schlick)
Fs = Ks + (1 - Ks) * pow(1.0 - NdotV, 5);
else
Fs = Ks + (1 - Ks) * exp(-6 * NdotV);

return abs(Full - Fs);
}

```

### #5036024 Your preferred or desired BRDF?

Posted by on 24 February 2013 - 02:04 AM

> I've taken to Cook-Torrance with the GGX distribution and Smith geometry factor (thanks CryZe) for specular, and the qualitative version of Oren-Nayar for diffuse.
>
> I was mucking about with this in the BRDF explorer, and the Fresnel factor didn't seem to be behaving right; even at front-on angles (L==V) there would always be a highlight, even when Ks was 0. I replaced your exp(-6 * LdotH) with pow(1-LdotH, 5) and it seems more correct now.
>
> To help me compare it with the other BRDFs that come with BRDF Explorer, I also divided everything by PI, which I'm not sure is correct, but it seemed to make it behave more like the other BRDFs, and I divided the final result by NdotL, so that I could let BRDF Explorer multiply by NdotL itself.
>
> http://pastebin.com/c36FtdX5

There's really not a lot of difference between the two Fresnel approximations. I graphed the two side-by-side here:

The one that uses exp() essentially kicks in slightly sooner and is more gradual. For materials with an IOR of ~1.4 (average dielectrics) it seems to be slightly closer to the full Fresnel equation, and I'm guessing it's not any more expensive to evaluate on modern GPUs.
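For anyone who wants to reproduce the comparison numerically, here is a Python sketch of the three curves. Note that the default SpecularScale of 0.028 corresponds to an IOR of about 1.4, since ((n-1)/(n+1))² ≈ 0.028 for n = 1.4:

```python
import math

def schlick(cos_theta, f0):
    """Schlick's approximation."""
    return f0 + (1 - f0) * (1 - cos_theta) ** 5

def exp_approx(cos_theta, f0):
    """The exp(-6x) variant used in the shaders in this thread."""
    return f0 + (1 - f0) * math.exp(-6 * cos_theta)

def full_fresnel(cos_theta, n):
    """Exact unpolarized Fresnel for a dielectric with real IOR n."""
    sin_t = math.sqrt(1 - cos_theta ** 2) / n
    cos_t = math.sqrt(1 - sin_t ** 2)
    rs = (cos_theta - n * cos_t) / (cos_theta + n * cos_t)
    rp = (cos_t - n * cos_theta) / (cos_t + n * cos_theta)
    return (rs * rs + rp * rp) / 2
```

Both approximations hit F0 at normal incidence and 1 at grazing; the interesting differences are in between, which is what the graph shows.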

As for PI and NdotL, I went ahead and rewrote the unoptimized version of the shader:

```
analytic

::begin parameters
color Diffuse 1 0 0
color Specular 1 1 1
float DiffuseScale 0 1 0.5
float SpecularScale 0 0.999 .028
float Roughness 0.005 2 0.2
::end parameters

vec3 BRDF( vec3 L, vec3 V, vec3 N, vec3 X, vec3 Y )
{
float PI = 3.14159265358979323846;
vec3 Kd = Diffuse * DiffuseScale;
vec3 Ks = Specular * SpecularScale;

vec3 H = normalize(L + V);
float NdotL = clamp(dot(N, L), 0, 1);
float NdotV = dot(N, V);
float NdotH = dot(N, H);
float LdotH = dot(L, H);

float a_2 = Roughness * Roughness;
float NdotL_2 = NdotL * NdotL;
float NdotV_2 = NdotV * NdotV;
float NdotH_2 = NdotH * NdotH;
float OneMinusNdotL_2 = 1.0 - NdotL_2;
float OneMinusNdotV_2 = 1.0 - NdotV_2;

vec3 Fd = 1.0 - Ks;

float gamma = clamp(dot(V - N * NdotV, L - N * NdotL), 0, 1);
float A = 1.0 - 0.5 * (a_2 / (a_2 + 0.33));
float B = 0.45 * (a_2 / (a_2 + 0.09));
float C = sqrt(OneMinusNdotL_2 * OneMinusNdotV_2) / max(NdotL, NdotV);

vec3 Rd = Kd / PI * Fd * (A + B * gamma * C);

float D = a_2 / (PI * pow(NdotH_2 * (a_2 - 1.0) + 1.0, 2.0));

vec3 Fs = Ks + Fd * exp(-6 * LdotH);

float G1_1 = 2.0 / (1.0 + sqrt(1.0 + a_2 * (OneMinusNdotL_2 / NdotL_2)));
float G1_2 = 2.0 / (1.0 + sqrt(1.0 + a_2 * (OneMinusNdotV_2 / NdotV_2)));
float G = G1_1 * G1_2;

vec3 Rs = (D * Fs * G) / (4 * NdotL * NdotV);

return (Rd + Rs) * NdotL; //remove NdotL and let BRDF Explorer handle that
}

```

You can see there is a factor of PI in the calculation of Rd. Kd over PI is essentially the Lambert BRDF, and the factor of PI is necessary for energy conservation. A factor of PI also shows up in the calculation of D; this is part of the normalization of the GGX distribution. When calculating Rs you see the familiar Cook-Torrance equation. Finally, Rd and Rs are summed and then multiplied by NdotL. This NdotL is not part of either the specular or diffuse BRDF, but of the lighting equation. The version I posted before is identical to this, only I removed terms that cancel out in order to get rid of unnecessary shader instructions. I also removed PI from both the diffuse and specular BRDFs, since it's not really necessary for video games; the only effect is that your lights appear to be PI times brighter.
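That factor of PI in Rd can be checked numerically: integrating the Lambert BRDF (1/PI) weighted by cos(theta) over the hemisphere must come out to exactly 1, meaning Kd/PI reflects exactly a fraction Kd of the incoming energy. A quick quadrature check (Python):

```python
import math

# Integrate (1/pi) * cos(theta) over the hemisphere. The solid-angle
# measure contributes sin(theta); the phi integral contributes 2*pi.
N = 100000
dtheta = (math.pi / 2) / N
total = 0.0
for i in range(N):
    theta = (i + 0.5) * dtheta  # midpoint rule
    total += (1.0 / math.pi) * math.cos(theta) * math.sin(theta) * dtheta
total *= 2.0 * math.pi
```

The result is 1 to within quadrature error, so without the 1/PI the surface would reflect PI times more energy than it receives.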

At least that is my current understanding. I'm still very new to the concepts behind lighting.

Edit: So I suppose it would make sense to remove the final NdotL, since this shader represents only the BRDF and not the final pixel color. Presumably BRDF Explorer multiplies the result by NdotL itself.

### #5035652 Your preferred or desired BRDF?

Posted by on 22 February 2013 - 08:18 PM

I've taken to Cook-Torrance with the GGX distribution and Smith geometry factor (thanks CryZe) for specular, and the qualitative version of Oren-Nayar for diffuse.

```
analytic

::begin parameters
color Diffuse 1 0 0
color Specular 1 1 1
float DiffuseScale 0 1 0.5
float SpecularScale 0 0.999 .028
float Roughness 0.005 2 0.2
::end parameters

vec3 BRDF( vec3 L, vec3 V, vec3 N, vec3 X, vec3 Y )
{
vec3 Kd = Diffuse * DiffuseScale;
vec3 Ks = Specular * SpecularScale;

vec3 H = normalize(L + V);
float NdotL = clamp(dot(N, L), 0, 1);
float NdotV = dot(N, V);
float NdotH = dot(N, H);
float LdotH = dot(L, H);

float a_2 = Roughness * Roughness;
float NdotL_2 = NdotL * NdotL;
float NdotV_2 = NdotV * NdotV;
float NdotH_2 = NdotH * NdotH;
float OneMinusNdotL_2 = 1.0 - NdotL_2;
float OneMinusNdotV_2 = 1.0 - NdotV_2;

vec3 Fd = 1.0 - Ks;

float gamma = clamp(dot(V - N * NdotV, L - N * NdotL), 0, 1);
float A = 1.0 - 0.5 * (a_2 / (a_2 + 0.33));
float B = 0.45 * (a_2 / (a_2 + 0.09));
float C = sqrt(OneMinusNdotL_2 * OneMinusNdotV_2) / max(NdotL, NdotV);

vec3 Rd = Kd * Fd * (A + B * gamma * C) * NdotL;

float D = NdotH_2 * (a_2 - 1.0) + 1.0;

vec3 Fs = Ks + Fd * exp(-6 * LdotH);

float G1_1 = 1.0 + sqrt(1.0 + a_2 * (OneMinusNdotL_2 / NdotL_2));
float G1_2 = 1.0 + sqrt(1.0 + a_2 * (OneMinusNdotV_2 / NdotV_2));
float G = G1_1 * G1_2;

vec3 Rs = (a_2 * Fs) / (D * D * G * NdotV);

return Rd + Rs;
}

```

### #5033567 standard cos/sin... still needed?

Posted by on 17 February 2013 - 06:01 PM

I think you are making an assumption on how the C library is implemented and what type of code the compiler will generate from it. Your compiler may already be using SSE2 instructions.

### #5030389 OpenGL and Mac: No D3D11 level functionality?

Posted by on 09 February 2013 - 09:19 AM

> they run 20x slower and the CPU utilization goes to max on one core - so they're implemented in software!

Apparently only single-threaded at that. The horror.

It's amazing that, as bad as the GPU driver situation on Linux is, the state of OpenGL on Linux is still light-years ahead of what Apple is offering.

### #5030270 OpenGL and Mac: No D3D11 level functionality?

Posted by on 08 February 2013 - 09:14 PM

That is the case. Apple likes to handle the OpenGL implementation themselves rather than let the graphics vendors do it. I believe it's because they want their software and hardware implementations to match exactly, and currently there is no software implementation of OpenGL 4.x. Apple doesn't really consider OS X a gaming platform, so driver support for high-end graphics cards and the latest graphics API isn't a priority for them. Besides, there's always Boot Camp or Linux if you really want OpenGL 4 on a Mac.

### #5030212 deferred rendering question(s)

Posted by on 08 February 2013 - 04:20 PM

Deferred lighting is probably more aptly named light pre-pass, since that name is less easily confused with deferred shading. There is no advantage to light pre-pass over deferred shading, except when you want to bring deferred rendering to a platform that doesn't support MRT or simply doesn't have enough framebuffer memory for a full G-buffer. In fact, light pre-pass will limit you somewhat.

Deferred shadowing is an orthogonal technique. You can use it even in a forward rendering engine, which I believe UE3 does even in DX9.

Posted by on 31 January 2013 - 10:20 PM

What's the best way to go about getting realistic metals? I haven't seen it mentioned much. Currently I am calculating my Fresnel term as a float3 with Schlick's approximation, using different zero-incidence values based on the reflectance values you can get from here: http://refractiveindex.info/

Is this a close enough approximation? Are there better ways of handling it? The colors seem about right to me, and the specular highlight turns white at glancing angles, which seems right.
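For a metal, the zero-incidence reflectance follows directly from the tabulated n and k per color channel: F0 = ((n-1)² + k²) / ((n+1)² + k²). A Python sketch (the gold numbers below are illustrative values of the sort such tables list for green wavelengths, not authoritative data):

```python
def f0_from_complex_ior(n, k):
    """Normal-incidence reflectance for a complex IOR n + k*i."""
    return ((n - 1) ** 2 + k ** 2) / ((n + 1) ** 2 + k ** 2)

# A dielectric (k = 0) with n = 1.5 gives the familiar 4%:
glass = f0_from_complex_ior(1.5, 0.0)

# Illustrative gold-ish values around green wavelengths:
gold_green = f0_from_complex_ior(0.42, 2.35)
```

Doing this once per channel gives you the tinted float3 F0 that Schlick's approximation then extrapolates toward white at grazing angles, which matches the behavior described above.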
