n00body

Member Since 20 Oct 2006
Offline Last Active May 26 2014 06:44 PM

Topics I've Started

Toksvig AA and normal map precision issues

23 January 2014 - 08:23 PM

Background

I am working on a physically based shading system in Unity. Among the many components needed by this system is a way to compensate for specular aliasing. Our chosen method involves calculating Toksvig AA from our normal maps, pre-correcting our specular powers, and storing the result for use at runtime.
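
For reference, the pre-correction boils down to something like this (a minimal sketch in shader-style pseudocode; normalTexel and specPower are illustrative names, not our actual tool code):

// Averaged normal decoded from the mip-mapped ARGB32 texel (deliberately not renormalized).
float3 avgNormal = normalTexel.xyz * 2.0 - 1.0;
float  len       = length(avgNormal);                     // drops below 1 where normals diverge
// Toksvig factor: ft = |Na| / (|Na| + s * (1 - |Na|))
float  ft        = len / (len + specPower * (1.0 - len));
float  correctedPower = ft * specPower;                   // baked out for use at runtime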

 

Problem

We are dealing with ARGB32-precision normal maps as inputs, and are encountering the problem that some of our artists' tools don't quite output unit-length normals at this precision. Unfortunately, this has dramatic consequences for our pre-correction of specular powers, even for "flat" normals, so materials that are supposed to be smooth end up dramatically rougher.
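
To put rough numbers on why this hurts (purely illustrative): a stored normal that decodes to length 0.99 instead of 1.0 gives a Toksvig factor of 0.99 / (0.99 + 1024 * 0.01) ≈ 0.088 at a specular power of, say, 1024, so the baked power collapses to roughly 90 even though the surface is nominally flat.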

 

Questions

1.) Is there a way to deal with this besides just using higher precision storage for our normal maps? 

2.) One of our tactics for dealing with this is to offer the artist a bias factor that adjusts the calculated variance down slightly to compensate (see the sketch after these questions). Is this a valid solution, or will it break horribly down the line?

3.) Is there some other trick I could try that I have overlooked?
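
Regarding question 2, the bias I have in mind would look roughly like this (shader-style pseudocode; _VarianceBias is a hypothetical knob name, and len/specPower are the same illustrative names as in the sketch above):

// Variance implied by the shortened average normal (the usual Toksvig/vMF approximation),
// nudged down so nearly-unit-length "flat" normals don't pick up spurious roughness.
float variance = (1.0 - len) / len;
variance = max(0.0, variance - _VarianceBias);
float ft = 1.0 / (1.0 + specPower * variance);   // equivalent form of the Toksvig factor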

 

Thanks for any help you can provide.


[SOLVED] HDR PBR, exceeded FP16 range

09 January 2014 - 10:00 PM

Background

I've been developing a physically-based shader framework for Unity3D. I am using the Normalized Blinn-Phong NDF and an approximation to the Schlick visibility function, with an effective specular power range of [4,8192] for direct illumination. I have also developed a translucent shader that uses premultiplied alpha to make only the diffuse term translucent while preserving the specular intensity based on Fresnel.
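
The translucent output works roughly like this (a minimal sketch with illustrative variable names, assuming Blend One OneMinusSrcAlpha on the pass):

float  VdotH    = saturate(dot(viewDir, halfDir));
float3 fresnel  = f0 + (1.0 - f0) * pow(1.0 - VdotH, 5.0);   // Schlick Fresnel
float3 diffuse  = albedo * diffuseLight;
float3 specular = specularLight * fresnel;
// Only the diffuse term is attenuated by alpha; specular keeps its full intensity.
return float4(diffuse * alpha + specular, alpha);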

 

For all my testing, I am doing everything in Linear HDR mode which affords me an FP16 render target for my camera.

 

Situation

So this is a highly contrived scenario, but my team's artist managed to make it happen. Basically, he has a scene with a directional light whose intensity is effectively 1.0 (0.5 input for Unity) shining on a glass bottle surrounding a smooth metallic liquid. As a result, the two substances' highlights overlapped and their combined intensity seems to have exceeded the range of the FP16 render target. This resulted in weird artifacts where the highest-intensity color component went to black, while the other two just looked really bright (see example image below).

 
[Attached image: ExceededPrescisionOfHDR.jpg, showing the artifact]

 

Upon further testing, I found I could remove the artifact by making the surface rougher, thus reducing the intensity of the highlight. However, the visual error still appeared even for relatively rough overlapping materials.

 

Questions

1.) Is there any way to prevent this from happening programmatically without having to clamp the light values to an upper limit or otherwise harm the visual quality?

2.) Is it just something that falls to the artist to avoid doing?

3.) Even so, this means that I either can't have multiple overlapping translucent objects, or I have to be careful about which objects pass behind them. Am I missing something here?

4.) Just for future reference, what is the actual upper limit value of FP16?

 

Thanks for any help you can provide.


[SOLVED] For Physical BRDFs, why use such high specular powers (eg. 8192)?

20 December 2013 - 06:02 PM

Background

So I have been looking into physically-based BRDFs for some time now, and have observed a perplexing trend. The companies that post the papers always seem to use a ridiculously high specular power (eg. Treyarch for COD BO uses 8192 for lights, 2048 for environment maps) when using a Normalized Blinn Phong NDF. This seems strange to me, especially since I have read that these powers are for extremely rare, super-polished mirror surfaces. Also, it means that they have to store a high-res mip level in their prefiltered environment maps that will take up a lot of space but almost never get used.

 

Up until now, I have usually settled for a maximum spec power of 1024 for my analytical lights, and 256 for my environment maps. For my bloom, I have been using a threshold of 1.0 and additive blending (what Unity provides). 
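
In case it matters, the bloom step I am relying on amounts to something like this (an illustrative sketch, not Unity's exact implementation):

// Bright pass: keep only what exceeds the threshold (1.0 in my setup),
// blur the result, then add it back over the scene color.
float3 bright = max(sceneColor - bloomThreshold, 0.0);
// ... blur 'bright' into blurredBright ...
float3 final = sceneColor + blurredBright;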

 

Problem

Whenever I test these specular powers myself, it causes the resulting specular highlight intensity to shoot up so high that my bloom goes crazy. In particular, as my camera moves I get very distracting flashes of bright spots on sharp model edges that appear and disappear in a very annoying way. 
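
As a back-of-the-envelope check on why the intensities get so large (assuming the usual (n+2)/(2*pi) Blinn-Phong normalization, which may differ from what these papers use):

// Peak NDF value at NdotH == 1 for a given specular power n.
float BlinnPhongPeak(float n) { return (n + 2.0) / (2.0 * 3.14159265); }
// BlinnPhongPeak(1024) ~= 163, BlinnPhongPeak(8192) ~= 1304
// Either is already far above a bloom threshold of 1.0 before the light intensity is even applied.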

 

Questions

1.) What is the incentive for using such high powers?

2.) Am I doing my bloom threshold wrong, and should I move it up?

3.) Is this a consequence of the Bloom blending mode?

 

Any help would be most appreciated.


Order of post-processing passes in pipeline?

20 February 2011 - 10:41 AM

Background:
For my rendering pipeline, I am using encoded RGBA8 buffers wherever possible. Currently, I am considering this order for all the passes:
  • G-Buffer prepass
  • Light accumulation
  • SSAO
  • Material (output motion vectors with MRT)
  • Refractive (written to a separate buffer, then overwrites material buffer)
  • EdgeAA
  • Depth of Field
  • Motion Blur
  • Translucency (accumulated in a separate buffer, applies its own fog)
  • Fog (also applies translucency buffer)
  • Generate Bloom
  • Apply Bloom, Tonemapping, & Color Correction

Problems:

Right now, I have concerns about ordering post-passes that involve blurring (Motion Blur, Depth of Field, EdgeAA, etc) or that overwrite other parts of the buffer (volume lighting, bloom, translucency, etc).
  • Blur passes introducing artifacts into each other
    • ex. Motion Blur then Depth of Field blurring the streaks
  • Overwrite passes happening before blur passes
    • ex. Translucency incorrectly blurred
    • ex. Fog blurring artifacts with EdgeAA
  • Overwrite passes happening after blurring passes
    • ex. Translucency being sharp on blurred background
Questions:
Any insights from prior work would be most appreciated.
  • Can you comment on my current proposed ordering of passes? Any potential problems or artifacts it will introduce?
  • Can you offer your own proposed ordering? Any potential problems or artifacts it would introduce?
Any help you can offer would be most appreciated. Thanks. ;)

Prefiltered IBL + Non-linear mappings

26 November 2010 - 08:04 AM

Background:
For a while now, I have been experimenting with prefiltered Image-Based Lighting (IBL). In particular, I have recently gotten into prefiltering environment maps for multiple specular powers in the range [0,255] so that they can be looked up at runtime. Naturally, I ran into the problem that storing an environment map for every specular power would be prohibitively expensive.

Unfortunately, I found out that you can't simply cut down the number to a few evenly-spaced specular powers and then interpolate between them. Doing this introduces noticeably incorrect results at lower specular powers, since they have a much higher rate of change than higher specular powers. So I resorted to using a non-linear mapping of specular powers, using the power function 255^n. This allowed me to cut down the number of maps from 256 to 8 while preserving the range of specular powers.
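
Concretely, with 8 maps the spacing I ended up with looks like this (illustrative code; mapIndex and specularPower are placeholder names):

// Build time: map index i in [0,7] covers specular power 255^(i/7),
// so low powers get much denser coverage than an even spacing would give them.
float t     = mapIndex / 7.0;
float power = pow(255.0, t);             // 1, ~2.2, ~4.9, ..., 255
// Runtime: invert the mapping to get the normalized lookup coordinate.
float lookUp = log(specularPower) / log(255.0);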

However, I am now grappling with a limitation inherent to this approach.

Links:
[1] GPU Gems, Ch 18. Spatial BRDFs (Example 18-2 for IBL code I am using)

Problem(s):
My non-linear mapping 255^n only allows me to represent values in the range [1,255]. This means I can't store specular power 0, which I need in order to get diffuse lighting. Now, I am trying to figure out how to keep the benefits of the non-linear mapping, while getting the range that I want.

Currently, I am considering a trick like this:
// In IBL generation code: map the normalized coordinate SpecularPowerLerp in [0,1]
// to a specular power in [0,255].
SpecularPower = pow(255.0f + 1.0f, SpecularPowerLerp) - 1.0f;

// In runtime shader lookup code: invert the mapping to recover the coordinate in [0,1].
LookUpTexCoord = log(SpecularPower + 1.0f) / log(255.0f + 1.0f);


Questions:
1.) Will this trick produce "correct" results for the range [0,255]?
2.) Can anyone propose better alternatives that will solve my problem?
3.) What can I do to reduce runtime cost of calculating the lookup coordinates?

Thank you for any help you can provide. ;)
