

n00body

Member Since 20 Oct 2006
Offline Last Active May 26 2014 06:44 PM
-----

#5122507 [SOLVED] HDR PBR, exceeded FP16 range

Posted by n00body on 09 January 2014 - 10:00 PM

Background

I've been developing a physically-based shader framework for Unity3D. I am using the normalized Blinn-Phong NDF and an approximation to the Schlick visibility function, with an effective specular power range of [4, 8192] for direct illumination. I have also developed a translucent shader that uses premultiplied alpha to make only the diffuse translucent while preserving the specular intensity based on Fresnel.
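
For reference, the direct specular term looks roughly like the following sketch (the function names and the gloss-to-power remap are illustrative, not my exact code):

// Minimal sketch of the direct-lighting specular term described above:
// normalized Blinn-Phong NDF plus Schlick Fresnel. Names and the
// gloss-to-power remap are assumptions, not the actual framework code.
float SpecPowerFromGloss(float gloss)
{
    // Assumed remap of a [0,1] gloss value onto the stated [4, 8192] power range.
    return exp2(lerp(2.0, 13.0, gloss)); // 2^2 = 4, 2^13 = 8192
}

float3 DirectSpecular(float3 N, float3 V, float3 L, float3 specColor, float gloss)
{
    float3 H = normalize(L + V);
    float NdotH = saturate(dot(N, H));
    float NdotL = saturate(dot(N, L));
    float LdotH = saturate(dot(L, H));

    float n = SpecPowerFromGloss(gloss);

    // Normalized Blinn-Phong distribution: (n + 2) / (2 * pi) * (N.H)^n.
    // At n = 8192 this peaks at roughly 1300, which is how a ~1.0 intensity
    // light can push a tight highlight far above typical LDR values.
    float D = (n + 2.0) / (2.0 * 3.14159265) * pow(NdotH, n);

    // Schlick's Fresnel approximation.
    float3 F = specColor + (1.0 - specColor) * pow(1.0 - LdotH, 5.0);

    // Visibility term omitted here; the framework uses an approximation to
    // Schlick's visibility function, which mostly rescales this result.
    return D * F * NdotL;
}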

 

For all my testing, I am doing everything in Linear HDR mode which affords me an FP16 render target for my camera.

 

Situation

So this is a highly contrived scenario, but my team's artist managed to make it happen. Basically he has a scene with a directional light whose intensity is effectively 1.0 (0.5 input for Unity) shining on a glass bottle surrounding a smooth metallic liquid. As a result, the two substances' highlights overlapped, and their combined intensity seems to have exceeded the range of the FP16 render target. This resulted in weird artifacts where the highest-intensity color component went to black while the other two just looked really bright (see example image below).

 
[Attached image: ExceededPrescisionOfHDR.jpg]

 

Upon further testing, I found I could remove the artifact by making the surface rougher, thus reducing the intensity of the highlight. However, I still saw this visual error even for relatively rough overlapping materials.
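
For clarity, the blend in question is the standard premultiplied-alpha "over" operation (Blend One OneMinusSrcAlpha in Unity terms), sketched below with assumed names. The glass's specular is written at full strength while the metal's highlight behind it is only attenuated by (1 - alpha), so the two bright highlights simply sum in the FP16 target:

// Sketch of the fixed-function blend the hardware performs for the
// glass-over-metal case described above (names are illustrative).
float3 BlendOver(float3 srcPremultiplied, float srcAlpha, float3 dst)
{
    // The translucent surface's specular passes through unscaled by alpha,
    // while whatever is already in the render target is dimmed by (1 - a).
    // When both contain tight, bright highlights, the sum can be far larger
    // than either contribution alone.
    return srcPremultiplied + dst * (1.0 - srcAlpha);
}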

 

Questions

1.) Is there any way to prevent this from happening programmatically without having to clamp the light values to an upper limit or otherwise harm the visual quality?

2.) Is it just something that falls to the artist to avoid doing?

3.) Even so, this means that I either can't have multiple overlapping translucent objects, or I have to be careful about what objects pass behind them. Am I missing something here?

4.) Just for future reference, what is the actual upper limit value of FP16?

 

Thanks for any help you can provide.




#4393962 Why does normal mapping require Tangents and Binormals?

Posted by n00body on 01 February 2009 - 06:19 PM

Normal maps are stored in tangent space so that they can be remapped to any surface. If they were stored in world space, they would be invalid if the model moved or rotated. If they were stored in object space, they would be invalid if the model deformed at all (say, for skeletal animation). So the only way to allow normal maps to map to any surface, or to allow the geometry to be transformed/deformed, is to store them in tangent space.
However, to actually use them, you need to convert them from tangent space to a space relative to the geometry onto which they have been mapped. To do that, you need the Tangent, Normal, and Binormal of each vertex to build a matrix that can transform the tangent-space normals.
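
As a rough sketch (names are illustrative, and conventions vary between engines), the per-pixel transform looks something like this:

// Transform a normal sampled from a tangent-space normal map into world space.
// Assumes the interpolated basis vectors have already been brought into world space.
float3 TangentNormalToWorld(float3 tangentNormal,   // from the normal map, unpacked to [-1,1]
                            float3 worldTangent,
                            float3 worldBinormal,   // typically cross(normal, tangent) * handedness
                            float3 worldNormal)
{
    // Rows of the matrix are the tangent-space basis vectors expressed in world space.
    float3x3 tangentToWorld = float3x3(worldTangent, worldBinormal, worldNormal);
    return normalize(mul(tangentNormal, tangentToWorld));
}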

Does that help?

