PBR specular BRDF

I've been doing research on PBR and, not wanting to piggyback on the other recent PBR thread, I was wondering if there's a reason why we still need a specular BRDF. If all light comes from what surrounds an object that needs lighting, can we not just include the light that would provide the specular highlight(s) in the environment map? And then, in order to achieve roughness, we simply have blurred versions of the environment map in the mipmap levels? I don't see why we need the specular computation; it would come for free with the reflection of the light, wouldn't it?

I know this approach isn't new or anything, but I don't understand why we still need to compute a specular highlight.

For metallic materials we use a linear interpolation of the environment map mip levels based on a roughness value, and for dielectric materials we simply blend in the albedo, still using the environment map for reflections, albeit just the brighter parts of it.
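For concreteness, here's a minimal GLSL sketch of the approach described above (uEnvMap, uMaxMip, and the 4% dielectric reflectance are illustrative assumptions, not from the thread):

// Hypothetical sketch of the mip-blurred environment map idea above.
// uEnvMap is assumed to be a cubemap whose mips hold pre-blurred copies.
uniform samplerCube uEnvMap;
uniform float uMaxMip;       // index of the blurriest mip

vec3 iblOnlyShade(vec3 N, vec3 V, vec3 albedo, float roughness, float metallic)
{
    vec3 R = reflect(-V, N);                   // specular lookup direction
    float mip = roughness * uMaxMip;           // rougher -> blurrier mip
    vec3 env = textureLod(uEnvMap, R, mip).rgb;

    // Metals reflect the environment tinted by their albedo;
    // dielectrics mostly show diffuse albedo plus a weak reflection
    // (a crude blend, as described above).
    vec3 metal = env * albedo;
    vec3 dielectric = mix(albedo, env, 0.04);  // ~4% reflectance at normal incidence
    return mix(dielectric, metal, metallic);
}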

I may have misunderstood PBR...?
So if I understand what you're asking, you want to know why you can't just calculate the specular reflection vector, grab the corresponding texel of the environment map, and use that as the specular value?

There are a couple of reasons.

First of all, BRDFs describe more than just reflection direction and blurriness; they also include normalization factors that keep the reflected energy physically plausible. Just grabbing different mipmap levels won't account for, say, the geometry term; those computations must be done separately.

Also, from a more path-tracing perspective, you have to take into account all the other things that a microfacet model describes, like increased reflectance at grazing angles. These effects can't be captured as easily as just grabbing a reflection vector.
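To make those terms concrete, here's a minimal GLSL sketch of a standard Cook-Torrance specular BRDF with GGX distribution, Smith geometry, and Schlick Fresnel terms (standard textbook formulas, not code from this thread); the a²/π factor in D is exactly the kind of normalization factor mentioned above:

// Sketch of a Cook-Torrance specular BRDF with GGX terms.
// alpha = roughness * roughness.
float D_GGX(float NdotH, float alpha)
{
    float a2 = alpha * alpha;
    float d = NdotH * NdotH * (a2 - 1.0) + 1.0;
    return a2 / (3.14159265 * d * d);        // normalization keeps energy bounded
}

float G_SmithGGX(float NdotV, float NdotL, float alpha)
{
    float k = alpha * 0.5;                   // one common approximation
    float gv = NdotV / (NdotV * (1.0 - k) + k);
    float gl = NdotL / (NdotL * (1.0 - k) + k);
    return gv * gl;                          // masking-shadowing of microfacets
}

vec3 F_Schlick(float VdotH, vec3 F0)
{
    return F0 + (1.0 - F0) * pow(1.0 - VdotH, 5.0);  // reflectance rises at grazing angles
}

vec3 specularBRDF(vec3 N, vec3 V, vec3 L, vec3 F0, float alpha)
{
    vec3 H = normalize(V + L);
    float NdotV = max(dot(N, V), 1e-4);
    float NdotL = max(dot(N, L), 0.0);
    vec3 F = F_Schlick(max(dot(V, H), 0.0), F0);
    float D = D_GGX(max(dot(N, H), 0.0), alpha);
    float G = G_SmithGGX(NdotV, NdotL, alpha);
    return D * G * F / (4.0 * NdotV * NdotL + 1e-4);
}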

Anyways, hope this helped clear some stuff up!


"Hell, there's more evidence that we are just living in a frequency wave that flows in harmonic balance creating the universe and all its existence." ~ GDchat


That was the crux of the question, yes - thanks. After thinking about it more, I guess a static environment map with lights baked into it wouldn't work too well with dynamic lights either.

Are there any higher-level papers anywhere that help explain the principles behind it, for people not so blessed in the maths department?

Also, what did you mean by normalisation factors?
So it sounds like you're asking why you would calculate the specular contribution from analytical light sources, when you could just include them in a pre-integrated environment map that's used for IBL specular. There are three main reasons for this (points 2 and 3 are sketched in code after the list):

1. Computing analytical specular is generally going to be higher quality than anything you get from a cubemap. The current techniques commonly used for IBL specular have some heavy approximations. For instance, you don't get the "stretched highlights" look that you're supposed to get from microfacet BRDFs, since cubemaps don't store enough information for full view-dependent calculations. You can also end up with a lot of locality issues, since your cubemaps are generated at sparse locations throughout your scene. This leads to a lack of proper occlusion, and poor parallax. If you can represent your lighting source analytically, you can use the full BRDF and get correct behavior.

2. If you handle the light separately, then the light can move or change intensity.

3. If you handle the light separately, then you can generate shadow maps to give you dynamic occlusion.
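Here's a minimal sketch of points 2 and 3 (uniform names are illustrative): because the light is analytical, its position and intensity can change every frame, and a shadow-map test supplies dynamic occlusion. It reuses the specularBRDF function from the earlier sketch.

// Sketch: per-frame analytical light evaluation with shadow-map occlusion.
// uLightPos/uLightColor can change every frame; uShadowMap gives dynamic occlusion.
uniform vec3 uLightPos;
uniform vec3 uLightColor;          // radiant intensity, can be animated
uniform sampler2DShadow uShadowMap;
uniform mat4 uLightViewProj;

vec3 shadeAnalytic(vec3 P, vec3 N, vec3 V, vec3 F0, float alpha)
{
    vec3 toLight = uLightPos - P;
    float dist2 = dot(toLight, toLight);
    vec3 L = toLight * inversesqrt(dist2);
    float NdotL = max(dot(N, L), 0.0);

    // Project P into light space for the depth-comparison shadow test.
    vec4 lp = uLightViewProj * vec4(P, 1.0);
    float shadow = textureProj(uShadowMap, lp);   // 0 = occluded, 1 = lit

    vec3 brdf = specularBRDF(N, V, L, F0, alpha); // from the earlier sketch
    return brdf * uLightColor * (NdotL / dist2) * shadow;
}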
Thanks both, everything's much clearer now

I may have misunderstood PBR...?

Yeah most of your post is about IBL :)

I've been tinkering with specular highlights for some time now and I'm quite confused by an issue I'm having. I've got my BRDF looking quite nice, but at grazing angles on my sphere the flat specular highlight 'disc' I'd expect to see is angled incorrectly, even though the highlight's position is correct. If I change how I calculate my view vector in the shader, the specular reflection no longer lines up with where the light is, but the 'disc' is correct.

In all the attached photos, the light source is facing directly down (0, -1, 0).

This photo shows the view vector computed correctly and a nice rough specular highlight:

https://imgur.com/a/4XCH2#3YIFWjY

This photo shows the view vector computed correctly and a nice shiny specular highlight:

https://imgur.com/a/4XCH2#qqYTbni

This photo shows the view vector computed correctly and an incorrect 'disc' at a grazing angle:

https://imgur.com/a/4XCH2#XjU4eXE

If I change the way I compute my viewing angle by basically switching the - for a +, i.e., normalize(camPos - pixPos), I get a nice flattened disc, but obviously the specular highlight doesn't reflect in the right position - in the photo you can see it's in the wrong position:

https://imgur.com/a/4XCH2#XjU4eXE

Without posting the shader, should regular Blinn specular reflections be visually correct at grazing angles? It's almost like the disc is rotated 90 degrees around the object's normal (in this case the sphere's) at the point of the specular reflection. I'm not doing anything particularly different, just playing around with the falloff a little with some homemade calcs, but nothing that would 'rotate' the disc.

My math isn't fantastic unfortunately...
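Without the shader it's hard to say where the bug is, but for reference, here's the usual vector setup for Blinn-Phong (camPos and pixPos as in the post; lightPos, N, and shininess are assumed inputs, and everything must be in the same space, typically world space):

// Assumed inputs: camPos, lightPos (world space), pixPos and N from the vertex shader.
vec3 V = normalize(camPos - pixPos);    // surface point -> camera
vec3 L = normalize(lightPos - pixPos);  // surface point -> light; for the directional
                                        // light above, L = vec3(0.0, 1.0, 0.0)
vec3 H = normalize(V + L);              // half vector
float spec = pow(max(dot(N, H), 0.0), shininess);

A sign flip in V changes H, and with it both the position and the orientation of the highlight, which matches the symptom described.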

I'm not sure what the problem with pic #3 is. Pic #4 is obviously wrong.

Phong always produces circular highlights, which doesn't match the real world.
Blinn-Phong produces circular highlights at steep angles and elliptical highlights at glancing angles, which is actually much closer to the real world.

Think of lights reflecting on choppy water, or traffic lights on a wet road -- they're not circular dots on the water/road, they're usually extremely stretched (under typical viewing conditions, where you're looking at them at a glancing angle).
Blinn-Phong's stretched highlights are an important feature! All modern microfacet BRDFs will do the same thing.
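For comparison, a sketch of the two specular terms side by side, using the same V, L, N, and shininess as in the earlier snippet:

// Phong: reflect the light about the normal, compare with V -> circular highlights.
vec3 R = reflect(-L, N);
float phongSpec = pow(max(dot(R, V), 0.0), shininess);

// Blinn-Phong: compare N with the half vector -> highlights stretch at glancing angles.
vec3 H = normalize(V + L);
float blinnSpec = pow(max(dot(N, H), 0.0), shininess);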

Blinn-Phong produces circular highlights at steep angles and elliptical highlights at glancing angles, which is actually much closer to the real world.

Yes, that's what I'd expect, but at grazing angles I would imagine that in the real world the specular reflection would elongate into an ellipse, with the longer side of the ellipse stretching around the outside of the sphere as you're looking at it (as in #4), not stretching from your eye into the screen (as in #3).

Here's a shot of what I mean:
http://imgur.com/AgzZRB2

The specular highlight in this screenshot is starting to elongate around the sphere rather than from front to back (difficult to explain); this looks right to me.

I'm wondering if my screenshots are incorrectly numbered now! :) just to be sure, this is the one I think looks correct: http://imgur.com/ckL6wnS

Problem is, the highlight shape is correct but it's in the wrong place in relation to the light source.

since cubemaps don't store enough information for full view-dependent calculations.

My area has never been math, but I intuit that this could be overcome by a 2nd cubemap, could it not?
The first tap into a cubemap could hold data that would change how you tap the 2nd, allowing for the stretching etc. you described.

I leave it to someone more knowledgeable of the math behind the whole process to figure out what the 1st texture stores and how it affects your read from the original texture.
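For what it's worth, the widely used split-sum approximation (Karis, SIGGRAPH 2013, as used in UE4) is quite close to this idea: a pre-filtered cubemap indexed by roughness, plus a second pre-integrated 2D lookup texture indexed by (N·V, roughness) that restores the Fresnel and geometry behaviour, though it still doesn't recover the stretched highlights. A sketch, with illustrative texture names:

// Split-sum IBL sketch: prefiltered radiance cubemap + 2D BRDF integration LUT.
uniform samplerCube uPrefilteredEnv;  // mips pre-convolved per roughness level
uniform sampler2D uBrdfLut;           // x: scale for F0, y: bias
uniform float uMaxMip;

vec3 iblSpecular(vec3 N, vec3 V, vec3 F0, float roughness)
{
    vec3 R = reflect(-V, N);
    float NdotV = max(dot(N, V), 0.0);
    vec3 prefiltered = textureLod(uPrefilteredEnv, R, roughness * uMaxMip).rgb;
    vec2 ab = texture(uBrdfLut, vec2(NdotV, roughness)).rg;
    return prefiltered * (F0 * ab.x + ab.y);  // the 2nd tap shapes how the env tap is used
}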


L. Spiro


