PBR specular BRDF
I know this approach isn't new or anything, but I don't understand why we still need to compute a specular highlight.
For metallic materials we linearly interpolate between the environment map's mip levels based on a roughness value; for dielectric materials we simply blend in the albedo, still using the environment map for reflections, albeit just the brighter parts of it.
I may have misunderstood PBR...?
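For context, the mip-interpolation approach described above can be sketched like this (a minimal sketch in plain Python; the environment map is reduced to one scalar sample per mip level along the reflection vector, a made-up stand-in for a real textureLod cubemap fetch):

```python
def sample_prefiltered(mip_samples, roughness):
    # mip_samples: one value per mip level along the reflection vector,
    # mip 0 sharpest, last mip blurriest (stand-ins for cubemap fetches)
    lod = roughness * (len(mip_samples) - 1)
    lo = int(lod)
    hi = min(lo + 1, len(mip_samples) - 1)
    t = lod - lo
    return mip_samples[lo] * (1.0 - t) + mip_samples[hi] * t

# roughness 0.5 lands halfway between mips 1 and 2:
val = sample_prefiltered([1.0, 0.5, 0.25, 0.125], 0.5)
```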
So if I understand what you're asking, you're wanting to know why you don't just calculate the specular reflection vector, grab the texel of the environment map that corresponds, and use that as the specular value?
There's a couple of reasons.
First of all, BRDFs describe more than just reflections and blurriness; they also include normalization factors. Just grabbing different mipmap levels won't account for, say, the geometry term; those computations must be done separately.
Also, from a more path-tracing perspective, you have to take into account everything else that a microfacet model describes, like grazing-angle reflections. These effects can't be captured simply by following a reflection vector.
Anyways, hope this helped clear some stuff up!
That was the crux of the question, yes - thanks. After thinking about it more, a static environment map with lights baked into it wouldn't work too well with dynamic lights either, I guess.
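To make the "normalization factors" / geometry-term point concrete, here's a minimal sketch of a Cook-Torrance style specular term (plain Python rather than shader code; GGX distribution, one common Schlick-GGX geometry approximation, Schlick Fresnel; the dot products passed in below are made-up values, not taken from the thread). The G and F factors depend on the view and light directions, which a roughness-indexed mip lookup alone can't reproduce:

```python
import math

def ggx_specular(n_dot_l, n_dot_v, n_dot_h, v_dot_h, roughness, f0):
    a = roughness * roughness
    a2 = a * a

    # D: GGX / Trowbridge-Reitz normal distribution (this is the part a
    # roughness-indexed mip chain approximates)
    d_denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    d = a2 / (math.pi * d_denom * d_denom)

    # G: Schlick-GGX shadowing/masking term, one common choice of k
    k = a / 2.0
    g = (n_dot_v / (n_dot_v * (1.0 - k) + k)) * \
        (n_dot_l / (n_dot_l * (1.0 - k) + k))

    # F: Schlick Fresnel approximation
    f = f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5

    # Cook-Torrance normalization denominator
    return (d * g * f) / (4.0 * n_dot_l * n_dot_v)

# Same roughness (so the same mip level), very different results once the
# view goes grazing, because G and F depend on the view direction:
head_on = ggx_specular(0.9, 0.9, 0.95, 0.9, 0.4, 0.04)
grazing = ggx_specular(0.3, 0.1, 0.95, 0.3, 0.4, 0.04)
```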
Are there any higher level papers anywhere that help explain the principles behind it for people not so blessed in the maths department?
Also, what did you mean by normalisation factors?
1. Computing analytical specular is generally going to be higher quality than anything you get from a cubemap. The current techniques commonly used for IBL specular have some heavy approximations. For instance, you don't get the "stretched highlights" look that you're supposed to get from microfacet BRDFs, since cubemaps don't store enough information for full view-dependent calculations. You also can end up with a lot of locality issues due to the fact that your cubemaps are generated at sparse locations throughout your scene. This leads to a lack of proper occlusion, and poor parallax. If you can represent your lighting source analytically, you can use the full BRDF and get correct behavior.
2. If you handle the light separately, then the light can move or change intensity.
3. If you handle the light separately, then you can generate shadow maps to give you dynamic occlusion.
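Points 2 and 3 come down to the light being ordinary per-frame data rather than something baked into a cubemap. A minimal sketch, assuming a punctual light with Lambert diffuse (plain Python; all names and values here are made up for illustration):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def direct_diffuse(surf_pos, normal, light_pos, intensity):
    # The light is plain per-frame data: move light_pos or change
    # intensity and the result follows, with nothing to re-bake.
    to_light = tuple(l - s for l, s in zip(light_pos, surf_pos))
    dist2 = dot(to_light, to_light)
    inv_len = 1.0 / math.sqrt(dist2)
    n_dot_l = max(0.0, dot(normal, tuple(c * inv_len for c in to_light)))
    return intensity * n_dot_l / dist2  # inverse-square falloff

near = direct_diffuse((0, 0, 0), (0, 1, 0), (0, 2, 0), 10.0)  # light close
far = direct_diffuse((0, 0, 0), (0, 1, 0), (0, 4, 0), 10.0)   # same light, moved
```

A shadow-map lookup would slot in as one more factor on the returned value, which is exactly what a pre-baked environment map can't give you.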
I've been tinkering with specular highlights for some time now and I'm quite confused by an issue I'm having. I've got my BRDF looking quite nice, but at grazing angles on my sphere the flat specular highlight 'disc' I'd expect to see is angled incorrectly, even though the highlight's position is correct. If I change how I calculate my viewing vector in the shader, the specular reflection no longer lines up with where the light is, but the 'disc' is correct.
In all the attached photos, the light source points directly down (0, -1, 0).
This photo shows the view vector computed correctly and a nice rough specular highlight:
https://imgur.com/a/4XCH2#3YIFWjY
This photo shows the view vector computed correctly and a nice shiny specular highlight:
https://imgur.com/a/4XCH2#qqYTbni
This photo shows the view vector computed correctly and an incorrect 'disc' at a grazing angle:
https://imgur.com/a/4XCH2#XjU4eXE
If I change the way I compute my viewing vector by basically switching the - for a +, i.e. normalize(camPos - pixPos), I get a nice flattened disc, but obviously the specular highlight doesn't reflect in the right position - in the photo you can see it's in the wrong place:
https://imgur.com/a/4XCH2#XjU4eXE
Without posting the shader: should regular Blinn specular reflections be visually correct at grazing angles? It's almost as if the disc is rotated 90 degrees around the object's normal (in this case the sphere's) at the point of the specular reflection. I'm not doing anything particularly unusual, just playing around with the falloff a little with some homemade calcs, but nothing that should 'rotate' the disc.
My math isn't fantastic unfortunately...
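For reference, the usual sign conventions can be sketched like this (plain Python; the names camPos/pixPos mirror the post, the positions are made-up values). Both vectors should point away from the surface: V toward the camera, L toward the light. Flipping either sign gives a half vector, and therefore a highlight, in the wrong place:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def half_vector(camPos, pixPos, light_dir):
    # V points FROM the shaded point TO the camera...
    v = normalize(tuple(c - p for c, p in zip(camPos, pixPos)))
    # ...and L points FROM the shaded point TO the light, so a light
    # "facing down" (0, -1, 0) gives L = (0, 1, 0)
    l = normalize(tuple(-d for d in light_dir))
    return normalize(tuple(a + b for a, b in zip(v, l)))

# Camera straight ahead of the point, light facing down as in the screenshots:
h = half_vector((0.0, 0.0, 5.0), (0.0, 0.0, 0.0), (0.0, -1.0, 0.0))
# H ends up halfway between view and light: (0, 0.707..., 0.707...)
```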
Phong always produces circular highlights, which doesn't match the real world.
Blinn-Phong produces circular highlights at steep angles and elliptical highlights at glancing angles, which is actually much closer to the real world.
Think of lights reflecting on choppy water, or traffic lights on a wet road -- they're not circular dots on the water/road, they're usually extremely stretched (under typical viewing conditions, where you're looking at them at a glancing angle).
Blinn-Phong's stretched highlights are an important feature! All modern microfacet BRDFs do the same thing.
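To make the stretching concrete, here's a minimal numeric sketch (plain Python; a flat plane with normal (0, 0, 1), and the camera height, light direction, and exponent are made-up values). Evaluating Blinn specular a small step along the view/light azimuth versus the same step sideways shows the highlight falls off far faster sideways, i.e. it is stretched toward the viewer:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def blinn_spec(p, cam, l_dir, shininess):
    v = normalize(tuple(c - q for c, q in zip(cam, p)))  # surface -> camera
    h = normalize(add(v, l_dir))                         # Blinn half vector
    return max(0.0, h[2]) ** shininess                   # N = (0, 0, 1): flat plane

cam = (0.0, 0.0, 0.3)               # camera low over the plane -> grazing view
l_dir = normalize((2.0, 0.0, 0.5))  # low light, pointing from surface to light
center = (1.2, 0.0, 0.0)            # at this point H lines up with N exactly

along = blinn_spec(add(center, (0.05, 0.0, 0.0)), cam, l_dir, 100.0)
across = blinn_spec(add(center, (0.0, 0.05, 0.0)), cam, l_dir, 100.0)
# 'along' stays near the peak while 'across' drops off sharply:
# the highlight is stretched along the view/light azimuth.
```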
Yes, that's what I'd expect, but at grazing angles I would imagine that in the real world the specular reflection would elongate into an ellipse, with the longer side of the ellipse stretching around the outside of the sphere as you're looking at it (as in #4), not stretching from your eye into the screen (as in #3).
Here's a shot of what I mean:
http://imgur.com/AgzZRB2
The specular highlight in this screenshot is starting to elongate around the sphere rather than from front to back (difficult to explain); this looks right to me.
I'm wondering if my screenshots are incorrectly numbered now! :) just to be sure, this is the one I think looks correct: http://imgur.com/ckL6wnS
Problem is, the highlight shape is correct but it's in the wrong place in relation to the light source.
since cubemaps don't store enough information for full view-dependent calculations.
My area has never been math, but I intuit that this could be overcome by a 2nd cubemap, could it not?
The first tap into a cubemap could hold data that would change how you tap the 2nd, allowing for the stretching etc. you described.
I leave it to someone more knowledgeable of the math behind the whole process to figure out what the 1st texture stores and how it affects your read from the original texture.
L. Spiro