
n00body

Member Since 20 Oct 2006
Offline Last Active Apr 14 2016 03:26 PM

#5229055 How to calculate Lumens?

Posted by n00body on 14 May 2015 - 07:31 PM

I've seen the term thrown around a lot lately while reading up on engines and their new PBR pipelines. Quite a few of them say they now specify their light intensities in lumens. I know it's a term referring to the output of a real-world light source, but that's the extent of my knowledge.

 

My questions are:

1.) How do they pertain to lights in shaders?

2.) Is there a specific formula for calculating the light intensity from them?

3.) Am I just over-thinking this, and is "lumens" just another name for the light intensity scale value?

 

Any help would be most appreciated. Thank you.
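For what it's worth, one common convention for punctual lights (an assumption on my part, not something stated in this thread) is to treat the lumen value as luminous flux and divide by the solid angle the light radiates into; for an omnidirectional point light that is the full sphere, 4π steradians, giving luminous intensity in candela. A minimal sketch:

```python
import math

def lumens_to_candela_point(lumens):
    """Luminous intensity (cd) of a point light radiating its flux
    uniformly over the full sphere (4*pi steradians)."""
    return lumens / (4.0 * math.pi)

# Example: an ~800 lm bulb treated as a point light
intensity = lumens_to_candela_point(800.0)
```

Spot lights would divide by their cone's solid angle instead, which is why the same lumen value looks brighter through a narrower cone.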




#5201186 [SOLVED] Detail mapping + Parallax = Texture Swimming?

Posted by n00body on 01 January 2015 - 05:03 PM

Additionally, I've found out you need to divide the offset by the tiling of the base map to avoid swimming on detail maps when you tile the base map. Just keeps getting more complicated. :(
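To illustrate the fix described above (the function and parameter names here are hypothetical, not from the actual shader):

```python
def parallax_uvs(uv, offset, base_tiling, detail_tiling):
    """Apply a parallax offset (computed in base-map UV space) to both the
    base and detail UVs. Scaling the offset by detail_tiling / base_tiling,
    i.e. dividing it by the base tiling, keeps the two maps aligned and
    avoids the "swimming" when the base map is tiled."""
    base_uv = uv * base_tiling + offset
    detail_uv = uv * detail_tiling + offset * (detail_tiling / base_tiling)
    return base_uv, detail_uv
```

Without the division, the offset is effectively applied at the detail map's scale, so the detail texture slides relative to the base texture as the view angle changes.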




#5178719 [SOLVED] Area Light, Representative Point, Blinn-Phong

Posted by n00body on 07 September 2014 - 01:05 PM

Okay, I've sort of solved it myself. Basically, I found a way to apply Beckmann roughness to Blinn-Phong. This modifies the normalization term to match what Epic described in their paper for GGX, so I suspect the same trick they used should now apply. For anyone who is interested, here is where I found my answer:
http://graphicrants.blogspot.com/2013/08/specular-brdf-reference.html
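From the reference linked above, the relation that lets Blinn-Phong take a Beckmann roughness α is the exponent mapping n = 2/α² − 2, paired with the normalized Blinn-Phong NDF (sketched here in plain Python rather than shader code):

```python
import math

def beckmann_alpha_to_spec_power(alpha):
    """Blinn-Phong exponent n equivalent to Beckmann roughness alpha
    (Karis, "Specular BRDF Reference"): n = 2 / alpha^2 - 2."""
    return 2.0 / (alpha * alpha) - 2.0

def blinn_phong_ndf(n_dot_h, n):
    """Normalized Blinn-Phong NDF: D(h) = (n + 2) / (2*pi) * (N.H)^n."""
    return (n + 2.0) / (2.0 * math.pi) * n_dot_h ** n
```

With that substitution, the normalization term becomes a function of α, which is what makes Epic's representative-point energy adjustment carry over.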




#5122507 [SOLVED] HDR PBR, exceeded FP16 range

Posted by n00body on 09 January 2014 - 10:00 PM

Background

I've been developing a physically-based shader framework for Unity3D. I am using the Normalized Blinn Phong NDF, and an approximation to the Schlick visibility function. I am using an effective specular power range of [4,8192] for direct illumination. I have also developed a translucent shader that uses premultiplied alpha to only make the diffuse translucent while preserving the specular intensity based on fresnel.

 

For all my testing, I am doing everything in Linear HDR mode which affords me an FP16 render target for my camera.

 

Situation

So this is a highly contrived scenario, but my team's artist managed to make it happen. Basically, he has a scene with a directional light whose intensity is effectively 1.0 (0.5 input in Unity) shining on a glass bottle surrounding a smooth metallic liquid. As a result, the two substances' highlights overlapped, and their combined intensity seems to have exceeded the range of the FP16 render target. This resulted in weird artifacts where the highest-intensity color component went to black, while the other two just looked really bright (see example image below).

 
[Attached image: ExceededPrescisionOfHDR.jpg]

 

Upon further testing, I found I could remove the artifact by making the surface rougher, thus reducing the intensity of the highlight. However, the visual error still appeared even for relatively rough overlapping materials.

 

Questions

1.) Is there any way to prevent this programmatically, without clamping the light values to an upper limit or otherwise harming the visual quality?

2.) Or is it just something that falls to the artist to avoid?

3.) Even so, this means I either can't have multiple overlapping translucent objects or must be careful about which objects pass behind them. Am I missing something here?

4.) Just for future reference, what is the actual upper limit value of FP16?

 

Thanks for any help you can provide.
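To answer question 4 concretely: the largest finite FP16 (half) value is 65504, which follows from the format's layout (10 mantissa bits, maximum normal exponent of 15):

```python
def fp16_max():
    """Largest finite half-precision float: a full 10-bit mantissa
    (2 - 2**-10) at the top normal exponent (2**15) = 65504."""
    return (2.0 - 2.0 ** -10) * 2.0 ** 15
```

Values written above this saturate to +inf in an FP16 target, and subsequent blending or post-processing on an infinity can produce NaNs, which many pipelines end up displaying as black; that would be consistent with the single-channel blackout described above, though I'm inferring the exact mechanism here rather than reporting it from the thread.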




#4672621 [SOLVED] Tonemapping Operators & Color Correction

Posted by n00body on 05 July 2010 - 09:38 AM

@InvalidPointer:
I have a specific reason for wanting to store raw RGB data in my light buffer(s). Basically, I was trying to apply SSAO the same way as CryEngine 3, where it does all ambient lighting, applies SSAO, and then does all direct lighting. This way, ambient lighting is correctly occluded, and direct lighting shows up in the darkened areas caused by AO.

This presents the problem of having enough precision in the light buffer(s), since I want to maintain compatibility with SM 3.0 hardware. That's why I have shied away from SM 4.0-specific float formats up to this point. Besides, I found that using sRGB on SM 4.0 hardware can alleviate most of the banding issues of low-precision formats (blog post). Though I suppose that if the solution is SM 4.0-only, I might as well use an fp10 format.

I suspect that if I am to continue using my Deferred Lighting renderer, I may have to use different rendering approaches between SM 3.0 & 4.0 GPUs to avoid artifacts. At this point, I will probably have to use one fp16 target for SM 3.0, and two fp10 targets for SM 4.0. So while it is annoying to lose correct specular lighting color, it is better than having banding artifacts.

On a side note, those are some nice results for your bloom effect. Any tips on using that technique, based on your experience implementing it?

@MJP:
Thanks for the info and your input. If I ever get serious about high-quality HDR & Tonemapping, I might give that book a look. ;)

@All:
At this point in time, I'm not looking to make my renderer filmic/photorealistic/etc. I just want to have a simple, robust tonemapping operator that preps values for color-correction, and doesn't need multiple passes to work. So for now, I think I will stick with the exponential operator, since it meets my needs and isn't horribly expensive.

Thanks for your input.


#4393962 Why does normal mapping require Tangents and Binormals?

Posted by n00body on 01 February 2009 - 06:19 PM

Normal maps are stored in tangent space so that they can be remapped to any surface. If they were stored in world space, they would be invalid as soon as the model moved or rotated. If they were stored in object space, they would be invalid if the model deformed at all (say, for skeletal animation). So the only way to allow normal maps to map to any surface, or to allow the geometry to be transformed/deformed, is to store them in tangent space.
However, to actually use them, you need to convert them from tangent space into a space relative to the geometry they have been mapped onto. To do that, you need the Tangent, Normal, and Binormal of each vertex to build a matrix that transforms the tangent-space normals.

Does that help?
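The transform above can be sketched like this, with tuples standing in for shader vectors (the names are illustrative, not from any particular engine):

```python
def tbn_transform(n_ts, tangent, binormal, normal):
    """Transform a tangent-space normal into the surface's space using the
    per-vertex Tangent/Binormal/Normal vectors as the basis (the columns
    of the TBN matrix)."""
    return tuple(
        n_ts[0] * tangent[i] + n_ts[1] * binormal[i] + n_ts[2] * normal[i]
        for i in range(3)
    )

# With the identity basis, a flat tangent-space normal (0,0,1)
# passes through unchanged:
flat = tbn_transform((0.0, 0.0, 1.0),
                     (1.0, 0.0, 0.0),
                     (0.0, 1.0, 0.0),
                     (0.0, 0.0, 1.0))
```

In a real vertex shader, the tangent and normal come from the mesh and the binormal is usually reconstructed as cross(normal, tangent) times a handedness sign stored in the tangent's w component.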

