RobMaddison

PBR specular BRDF


I've been doing research on PBR and, not wanting to piggyback the other recent PBR thread, I was wondering whether there's a reason why we still need a specular BRDF. If all light comes from what surrounds the object being lit, can't we just bake the light that would produce the specular highlight(s) into the environment map? And then to achieve roughness we simply store blurred versions of the environment map in the mipmap levels? I don't see why we need the specular computation; it would come for free with the reflection of the light, wouldn't it?

I know this approach isn't new or anything but I don't understand why we still need to compute a specular highlight.

For metallic materials we use a linear interpolation of the environment map mip levels based on a roughness value; for dielectric materials, we simply blend in the albedo, still using the environment map for reflections, albeit just the brighter parts of it.
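To make the idea concrete, here's a rough sketch in Python of the roughness-to-mip interpolation described above. The radiance values standing in for a prefiltered cubemap are made up for illustration, and `sample_env`/`specular_from_env` are hypothetical names; in a real shader this would be a `textureLod`-style fetch.

```python
import math

# Stand-in radiance values for a prefiltered environment cubemap,
# one per mip level, increasingly blurred. Made-up numbers.
MIP_RADIANCE = [1.0, 0.8, 0.6, 0.45, 0.35]

def sample_env(mip_level):
    # Stand-in for textureLod(envMap, reflectDir, mip_level).
    return MIP_RADIANCE[min(int(mip_level), len(MIP_RADIANCE) - 1)]

def specular_from_env(roughness, num_mips=5):
    # Map roughness in [0, 1] to a fractional mip level, then lerp
    # between the two nearest prefiltered (blurred) mips.
    mip = roughness * (num_mips - 1)
    lo, hi = math.floor(mip), math.ceil(mip)
    t = mip - lo
    return (1.0 - t) * sample_env(lo) + t * sample_env(hi)
```

Rougher surfaces land on blurrier mips, so the returned "specular" dims and spreads exactly as the prefiltering dictates.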

I may have misunderstood PBR...?


So if I understand what you're asking, you want to know why you can't just calculate the specular reflection vector, grab the corresponding texel of the environment map, and use that as the specular value?
There are a couple of reasons.
First of all, BRDFs describe more than just reflections and blurriness. They also include normalization factors. Just grabbing different mipmap levels won't account for, say, the geometry term; those computations must be done separately.
Also, from a more path-tracing perspective, you have to take in all the other things a microfacet model describes, like grazing-angle reflections. These effects can't be captured as easily as just following a reflection vector.
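For illustration, here's a minimal Python sketch of a Cook-Torrance-style microfacet specular term with a GGX distribution, Smith geometry term, and Schlick Fresnel. This is one common formulation rather than *the* one; the dot products are assumed precomputed and clamped to be positive.

```python
import math

def ggx_ndf(n_dot_h, alpha):
    # GGX / Trowbridge-Reitz normal distribution term D.
    a2 = alpha * alpha
    d = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * d * d)

def smith_g1(n_dot_x, alpha):
    # One side of the Smith masking/shadowing (geometry) term,
    # using the common k = alpha / 2 remapping.
    k = alpha / 2.0
    return n_dot_x / (n_dot_x * (1.0 - k) + k)

def fresnel_schlick(v_dot_h, f0):
    # Schlick's approximation: reflectance climbs toward 1 at grazing angles.
    return f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5

def cook_torrance_specular(n_dot_l, n_dot_v, n_dot_h, v_dot_h, alpha, f0):
    D = ggx_ndf(n_dot_h, alpha)
    G = smith_g1(n_dot_l, alpha) * smith_g1(n_dot_v, alpha)
    F = fresnel_schlick(v_dot_h, f0)
    return (D * G * F) / max(4.0 * n_dot_l * n_dot_v, 1e-6)
```

Note how D, G, and F each depend on the view and light directions separately; none of them can be folded into a single prefiltered cubemap lookup.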
Anyways, hope this helped clear some stuff up!

That was the crux of the question, yes - thanks. After thinking more about it, a static environment map with baked-in lights wouldn't work too well with dynamic lights either, I guess.

Are there any higher-level papers anywhere that help explain the principles behind it for people not so blessed in the maths department?

Also, what did you mean by normalisation factors?


I've been tinkering with specular highlights for some time now and I'm quite confused by an issue I'm having. My BRDF looks quite nice, but at grazing angles on my sphere, the flat specular highlight 'disc' I'd expect to see is angled incorrectly even though the highlight's position is correct. If I change how I calculate the view vector in the shader, the specular reflection no longer lines up with where the light is, but the 'disc' is correct.

 

In all the attached photos, the light source is facing directly down (0, -1, 0).

 

This photo shows the view vector computed correctly and a nice rough specular highlight:

https://imgur.com/a/4XCH2#3YIFWjY

 

This photo shows the view vector computed correctly and a nice shiny specular highlight:

https://imgur.com/a/4XCH2#qqYTbni

 

This photo shows the view vector computed correctly and an incorrect 'disc' at a grazing angle:

https://imgur.com/a/4XCH2#XjU4eXE

 

If I change the way I compute my view vector by switching the - for a +, i.e., normalize(camPos - pixPos), I get a nice flattened disc, but the specular highlight no longer reflects in the right position - you can see it's wrong in this photo:

https://imgur.com/a/4XCH2#XjU4eXE

 

Without posting the shader, should regular Blinn specular reflections look visually correct at grazing angles? It's almost as if the disc is rotated 90 degrees around the object's normal (in this case the sphere's) at the point of the specular reflection. I'm not doing anything particularly different, just playing around with the falloff a little with some homemade calculations, but nothing that would 'rotate' the disc.
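For reference, here's a minimal Python sketch of a Blinn specular term using the usual conventions (view vector from the surface toward the camera, light vector opposing the light's facing direction). The names `cam_pos`/`pix_pos` echo the shader variables mentioned above, but the code itself is hypothetical:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def blinn_specular(cam_pos, pix_pos, light_dir, normal, shininess):
    # View vector points FROM the surface TOWARD the camera.
    V = normalize(cam_pos - pix_pos)
    # Light vector points from the surface toward the light, i.e. the
    # negation of the direction the light is facing.
    L = normalize(-light_dir)
    H = normalize(V + L)  # Blinn half-vector
    return max(float(np.dot(normal, H)), 0.0) ** shininess
```

With the light facing (0, -1, 0) and the camera directly above a point whose normal is (0, 1, 0), V, L, and H all line up with the normal and the term peaks at 1; flipping either convention moves the peak off the mirror direction.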

 

My math isn't fantastic unfortunately...

I'm not sure what the problem with pic #3 is. Pic #4 is obviously wrong.

Phong always produces circular highlights, which doesn't match the real world.
Blinn-Phong produces circular highlights at steep angles and elliptical highlights at glancing angles, which is actually much closer to the real world.

Think of lights reflecting on choppy water, or traffic lights on a wet road -- they're not circular dots on the water/road, they're usually extremely stretched (under typical viewing conditions, where you're looking at them at a glancing angle).
Blinn-Phong's stretched highlights are an important feature! All modern microfacet BRDFs do the same thing.
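A quick numerical sketch of that stretching, assuming a plain Blinn-Phong lobe: in a grazing mirror setup, perturbing the view direction within the plane of incidence (toward or away from the horizon) barely dims the highlight, while the same angular perturbation sideways kills it, which is exactly the elongation toward the horizon you see on water.

```python
import math
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def blinn_intensity(L, V, N, power):
    H = normalize(L + V)
    return max(float(np.dot(N, H)), 0.0) ** power

# Grazing mirror setup: light and eye both 80 degrees from the normal.
a = math.radians(80.0)
N = np.array([0.0, 1.0, 0.0])
L = np.array([-math.sin(a), math.cos(a), 0.0])
V = np.array([ math.sin(a), math.cos(a), 0.0])

delta = math.radians(5.0)
# Perturb the view within the plane of incidence (toward the horizon)...
V_along = np.array([math.sin(a + delta), math.cos(a + delta), 0.0])
# ...and sideways (azimuthally around the normal), same 5-degree tilt.
V_side = np.array([math.sin(a) * math.cos(delta), math.cos(a),
                   math.sin(a) * math.sin(delta)])

along = blinn_intensity(L, V_along, N, 64)
side = blinn_intensity(L, V_side, N, 64)
# 'along' stays close to the peak; 'side' falls off sharply, so the
# highlight is stretched along the view direction.
```

Raising the specular power narrows the whole lobe, but the anisotropy between the two directions remains.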


Blinn-Phong produces circular highlights at steep angles and elliptical highlights at glancing angles, which is actually much closer to the real world.

Yes, that's what I'd expect, but at grazing angles I would imagine that in the real world the specular reflection would elongate into an ellipse, with the longer side of the ellipse stretching around the outside of the sphere as you're looking at it (as in #4), not stretching from your eye into the screen (as in #3).

Here's a shot of what I mean:
http://imgur.com/AgzZRB2

The specular highlight in this screenshot is starting to elongate around the sphere rather than from front to back (difficult to explain), and this looks right to me.

I'm wondering if my screenshots are incorrectly numbered now! :) just to be sure, this is the one I think looks correct: http://imgur.com/ckL6wnS

Problem is, the highlight shape is correct but it's in the wrong place relative to the light source.


since cubemaps don't store enough information for full view-dependent calculations.

Math has never been my area, but I intuit that this could be overcome with a second cubemap, could it not?
The first tap into a cubemap could hold data that changes how you tap the second, allowing for the stretching etc. you described.

I leave it to someone more knowledgeable about the math behind the whole process to figure out what the first texture stores and how it affects the read from the original texture.


L. Spiro

