
Vincent_M

Member Since 16 Jan 2007
Online Last Active Today, 12:00 AM

Posts I've Made

In Topic: Transformation Hierarchy

Yesterday, 11:50 PM

What are you expecting vs. what you get?
I would expect that the world shrinks on the screen if the camera scales up and vice-versa. If that is what you are getting then your results are not thrown off at all.

If that is not what you want to get (which is different from it being the correct result), normalize the world matrix’s first 3 rows before getting its inverse for the view matrix.


L. Spiro

After a closer look, it looks like this is working correctly. I originally thought that it was affecting my perspective matrix, but I tested it out in another scenario, and it worked fine. The only weird thing is that I achieve correct results when I only perform the inverse of my camera node's world-space matrix. If I do an inverse-transpose of that matrix, nothing draws. At least, the single quad of geometry I'm drawing in my scene doesn't display. I even searched for it with the FPS camera I've got set up with my gamepad, but no luck haha.
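For reference, here's roughly how I understand the relationship, as a GLSL-style sketch with made-up names (in practice I'd compute these on the CPU, but the math is the same): the view matrix is just the plain inverse of the camera node's world matrix, while the inverse-transpose is something I'd apply to normals, not to the camera.

// Sketch only: view matrix vs. normal matrix.
void buildCameraMatrices(mat4 cameraWorld, mat4 model,
                         out mat4 view, out mat3 normalMatrix)
{
    // View matrix: plain inverse of the camera's world-space transform.
    view = inverse(cameraWorld);

    // The inverse-transpose undoes non-uniform scale for an object's *normals*;
    // it isn't meant to be used as the view matrix.
    normalMatrix = transpose(inverse(mat3(model)));
}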

 

Anyway, I was adding onto my original post earlier, but things came up, so I couldn't save it. A huge question I've been trying to answer is how to get my object's position, orientation and scale back out of its final transform matrix. My Transform class does store position, rotation and scale, but those are only used to calculate its local transform matrix, which is then combined with the parent's final transform to produce the transform's final matrix for that frame.
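For what it's worth, here's a rough sketch of how I think the decomposition would go, assuming column-major matrices with no shear and no negative scale (all names are made up):

// Sketch: pull translation / scale / rotation back out of a TRS matrix.
// Assumes column-major layout, no shear, no negative or zero scale.
void decomposeTRS(mat4 m, out vec3 translation, out vec3 scale, out mat3 rotation)
{
    // Translation lives in the last column.
    translation = vec3(m[3]);

    // Scale is the length of each basis column.
    scale = vec3(length(vec3(m[0])),
                 length(vec3(m[1])),
                 length(vec3(m[2])));

    // Rotation is what remains after dividing the scale back out.
    rotation = mat3(vec3(m[0]) / scale.x,
                    vec3(m[1]) / scale.y,
                    vec3(m[2]) / scale.z);
}

Converting that rotation matrix to a quaternion or Euler angles would be a separate step, and I gather the whole thing breaks down if the hierarchy introduces shear.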


In Topic: Current-Gen Lighting

Yesterday, 05:30 PM

@Hodgman, after dissecting your post, I don't think I have as much of a grasp on lighting as I thought. Before we begin, I just want to clarify:

Gouraud Shading: calculating the diffuse component of light

Phong Shading: calculating the basic specular component of light

Blinn-Phong Shading: calculating the specular component of light, accounting for reflection and refraction (I've sketched the Phong vs. Blinn-Phong specular terms below)

Normalized Blinn-Phong Shading: Blinn-Phong shading with energy conservation
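To make sure I have the Phong vs. Blinn-Phong distinction straight, here's how I'd write the two specular terms in GLSL (just a sketch with made-up names; N, L and V are the usual unit normal, light and view vectors):

// Phong: specular from the reflection vector.
float phongSpec(vec3 N, vec3 L, vec3 V, float specPower)
{
    vec3 R = reflect(-L, N);                    // mirror the light direction about N
    return pow(max(dot(R, V), 0.0), specPower);
}

// Blinn-Phong: specular from the half-vector between L and V.
float blinnPhongSpec(vec3 N, vec3 L, vec3 V, float specPower)
{
    vec3 H = normalize(L + V);
    return pow(max(dot(N, H), 0.0), specPower);
}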

 


With PBR you can still have these two kinds of maps, or another popular choice is the "metalness" workflow. This workflow is based on the observation that most real (physical) dielectrics have monochrome specular masks, all with almost the same value (about 0.03-0.04)... so there's not much point in having a map for them -- just hardcode 0.04 for non-metals!
Metals on the other hand, need a coloured specular mask, but at the same time, they all have black diffuse colours!
So you end up with this neat memory saving, as well as a simple workflow -- 
specPower = roughnessTexture;
if( metal )
  specMask = colorTexture;
  diffuseColor = 0;
else
  specMask = 0.04;
  diffuseColor = colorTexture
 

I like the idea of the metal/dielectric workflow. It seems much simpler than the specular masks, and fewer maps are required. I remember seeing something online saying that metals really don't have any diffuse color, as illustrated above. Is specMask a vector that's treated as a color, where the RGB components are all set to 0.04 when the material is a dielectric?
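If I've understood the workflow, in shader terms it would look something like this (just a sketch with made-up names, using the hardcoded 0.04 you mentioned):

// Metalness workflow sketch: derive diffuse and specular colours from
// a base colour plus a metalness value (0 = dielectric, 1 = metal).
void metalnessToDiffuseSpec(vec3 baseColor, float metalness,
                            out vec3 diffuseColor, out vec3 specColor)
{
    // Dielectrics get a constant monochrome specular of ~0.04;
    // metals take their specular colour from the base colour.
    specColor = mix(vec3(0.04), baseColor, metalness);

    // Metals have (essentially) no diffuse colour.
    diffuseColor = mix(baseColor, vec3(0.0), metalness);
}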

 


NdotL is the core of every lighting algorithm, basically stating that if light hits a surface at an angle, then the light is being spread over a larger surface area, so it becomes darker.

I have to keep telling myself this as I keep thinking that PBR is a much more complex collection of concepts. N*L is where everything starts, but then more coefficients start to be applied to that, it seems.

 


This bit of math is actually part of the rendering equation (not the BRDF).
The lambertian BRDF actually is just "diffuseColor".
The rendering equation says that incoming light is "saturate(dot(N,L))".

What exactly is the BRDF? I know it stands for Bidirectional Reflectance Distribution Function, and I always thought it referred to the vector operations used to compute lighting, such as NdotL, reflection, refraction, etc., with the rendering equation being the combination of all of that. For example, in the ad-hoc lighting model of last gen, ambient lighting was a constant color/texture, diffuse lighting was a color/texture multiplied by NdotL, and specular lighting was NdotL multiplied by an exponent channel in the spec map, which was then multiplied by the spec map's RGB channels that served as the mask. If I understand this correctly, the lighting equation would be:

foreach light in scene:
     frag_color += ambientFactor + diffuseFactor + specularFactor;

Then, the BRDF would be the devil in the details of how ambientFactor, diffuseFactor and specularFactor were calculated:

ambientFactor = ambientColor;
diffuseFactor = diffuseColor * dot(n, l);
specularFactor = specularMask * dot(h, l) * dot(n, l);

This doesn't account for reflection/refraction, but the factors being calculated above would be the BRDFs. Is this correct?
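Or, restating the split as a sketch (made-up names; clamp(dot(N, L), 0.0, 1.0) being the rendering-equation part and the BRDF being whatever gets multiplied in per material):

// Sketch: the BRDF only describes the surface's response;
// NdotL and the light's intensity belong to the rendering equation.
vec3 lambertBRDF(vec3 diffuseColor)
{
    // Some conventions fold a 1/PI into this; the quote above uses just diffuseColor.
    return diffuseColor;
}

vec3 shadeOneLight(vec3 N, vec3 L, vec3 lightColor, vec3 diffuseColor)
{
    float NdotL = clamp(dot(N, L), 0.0, 1.0);   // rendering-equation term
    return lambertBRDF(diffuseColor) * lightColor * NdotL;
}

The loop over lights would then accumulate shadeOneLight() per light, the way my foreach pseudocode above does.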

 

You also talk about geometry visibility, and this is something I haven't gotten into yet. Is this linked with AO or occlusion querying? I haven't had the time to dive into these concepts just yet.

 


However, blinn-phong is not "energy conserving" -- with high specular power values, lots of energy just goes missing (is absorbed into the surface for no reason).
Normalized blinn-phong fixes this, so that all the energy is accounted for (an important feature of "PBR").
result = NdotL * pow( NdotH, specPower ) * specMask * ((specPower+1)/(2*Pi))

Was energy conservation common back in the ad-hoc days, or is this a new-ish take on specular lighting?
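Writing the quoted term out in shader form, just so I can see where the normalization factor sits (a sketch only):

const float PI = 3.14159265;

// Normalized Blinn-Phong specular, following the quoted formula:
// the (specPower + 1) / (2 * PI) factor keeps the reflected energy
// roughly constant as specPower grows.
float normalizedBlinnPhong(float NdotL, float NdotH, float specPower, float specMask)
{
    float norm = (specPower + 1.0) / (2.0 * PI);
    return NdotL * pow(max(NdotH, 0.0), specPower) * specMask * norm;
}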

 


Cook-Torrance has almost become like a BRDF framework with "plugins"  which takes the form:
result = nDotL * distribution * fresnel * geometry * visibility
 
e.g. with normalized blinn phong distribution, schlick's fresnel, and some common geometry/visibility formulas --
distribution = pow( NdotH, specPower ) * specMask * ((specPower+1)/(2*Pi))
fresnel = specMask + (1-specMask)*pow( 1-NdotV, 5 );
geometry = min( 1, min(2*NdotH*NdotV/VdotH, 2*NdotH*NdotL/VdotH) )
visibility = 1/(4*nDotV*nDotL)

A few years ago, I used to think specular lighting was just a shininess factor. I've since learned that specular is much more than that... it's really about the reflectance of light itself, not just "highlights". It also plays an important role in how reflections work with environment maps, right? Does the Cook-Torrance BRDF only deal with specular lighting? As I learn more about PBR, it seems like specular is taking on a larger role than just highlights. In UE4's paper on shading, it certainly seems like it! They also mention GGX.
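To check my understanding of the "framework with plugins" idea, here's my attempt at assembling the quoted pieces into one function (just a sketch, nothing I've verified; I folded specMask into the fresnel term only, since it looked like it would otherwise be counted twice -- is that right?):

const float PI = 3.14159265;

// Cook-Torrance-style specular built from the quoted "plugins":
// normalized Blinn-Phong distribution, Schlick fresnel, and the
// common geometry/visibility terms. The dot products would need
// clamping away from zero in practice to avoid divides by zero.
vec3 cookTorranceSpec(float NdotL, float NdotV, float NdotH, float VdotH,
                      float specPower, vec3 specMask)
{
    float distribution = pow(max(NdotH, 0.0), specPower)
                       * ((specPower + 1.0) / (2.0 * PI));
    vec3  fresnel      = specMask + (1.0 - specMask) * pow(1.0 - NdotV, 5.0);
    float geometry     = min(1.0, min(2.0 * NdotH * NdotV / VdotH,
                                      2.0 * NdotH * NdotL / VdotH));
    float visibility   = 1.0 / (4.0 * NdotV * NdotL);

    return NdotL * distribution * fresnel * geometry * visibility;
}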

 


I wrote a lengthy reply as well but had to reboot.

This actually happened to me twice today... I would have been happy to read it, but nonetheless, those articles come in handy. I remember seeing that Tri-Ace footage in either early 2013 or as far back as August 2012. After I wrote my response on Friday, I found this article on Unreal Engine 4's PBR model. I see Tri-Ace's former ad-hoc lighting model also considered geometry visibility. It looks like I'm really behind.


In Topic: Current-Gen Lighting

22 January 2015 - 06:22 PM

A bit off-topic ... but I watched the Batman trailer linked by OP and can't say I saw any actual "gameplay" in it ... just a lot of cut scenes (which did look nice).

I thought the same thing, but Unreal Engine 4 can apparently deliver graphics of this caliber in real-time on PCs. You'd need a beefy GTX-class graphics card from the last 2 - 3 years to make it happen at Full HD, but it's possible. It's also possible on consoles, but some have speculated that effects and texture sizes have been cranked down to achieve a decent real-time framerate.

 

 

Physically based rendering has been a household word for many years now.

No, they do not use “last-generation but higher-resolution textures”.  Read up on physically based rendering, albedo, shininess, roughness, specular reflectance, etc.

 

 

L. Spiro

AND 

 

 

but I wasn't sure if even current consoles were capable of that yet.

Even PS3/360/mobile games do PBR  these days... just with more approximations.

 

Use a nice BRDF (start with cook torrence / normalized blinn-phong), use IBL for ambient (pre-convolved with an approximation of your BRDF, so that you end up with ambient-diffuse and ambient-specular), use gamma-decoding on inputs (sRGB->linear when reading colour textures), render to a high-precision target (Float16, etc) and tone-map it to gamma-encoded 8bit (do linear->sRGB / linear->Gamma as the last step in tone-mapping).

 

Ideally you'll do Bloom/DOF/motion-blur before tone-mapping, but on older hardware you might do it after (to get better performance, but with worse quality).

 

I've been reading up on this quite a bit lately, and it looks like a lot of these things are related. From what I've read, PBR is a pretty generic term that encompasses pretty much all of these concepts. For example, Unity 5 and UE4's PBR solutions (both powered by Enlighten, so of course they'll appear similar) reduce the specular decision down to a binary value: a material is either a metal or a dielectric. As for maps, there's a base color texture that shouldn't have any lighting baked into it to portray any type of depth. There's also roughness, which determines how reflective/shiny an object is. A Unity 5 demo explained that it makes use of light probes placed within the scene, which I believe has something to do with IBL. Speaking of which, I've done some research in that area when I have the free time. My main resource is this paper.

 

It seems that with IBL I generate an HDR-quality (float32 or float16 RGB, so 48 to 96 bits per pixel), high-resolution cubemap (probably at least 1024x1024) of my scene. Then, I use my shader to displace that reflection map based on the position within my scene (not sure how that works). Since it's in HDR, the contrast ratio of the pixels will more closely mimic real-life lighting (50,000:1), and it'll require me to run a tone-mapping pass to bring it down to something a screen can display. I've read that the tone-mapped output format should be something like RGB10A2. I'm not sure where gamma decoding fits in there, and there are plenty of holes in my knowledge currently. I also need sleep. Badly.

 

Wait, I think the gamma-encode step is part of tone mapping (as Hodgman said above): the inputs get gamma-decoded to linear up front, and the linear->sRGB encode happens at the end of tone mapping, so that fits in later on in the post-processing pipeline.
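So if I've pieced the flow together correctly, it's roughly this (a sketch with made-up names; Reinhard is just the simplest tone-map operator I know of):

// Sketch of the gamma/HDR flow:
//  1) decode sRGB inputs to linear before lighting,
//  2) light and accumulate into a high-precision (e.g. Float16) target,
//  3) tone-map and gamma-encode as the very last step.
vec3 srgbToLinear(vec3 c) { return pow(c, vec3(2.2)); }        // approximate decode
vec3 linearToSrgb(vec3 c) { return pow(c, vec3(1.0 / 2.2)); }  // approximate encode

vec3 toneMapReinhard(vec3 hdrColor)
{
    vec3 mapped = hdrColor / (hdrColor + vec3(1.0));   // simple Reinhard operator
    return linearToSrgb(mapped);                       // gamma-encode last
}

From what I've read, if a colour texture is created with an sRGB format, the hardware does the decode automatically when sampling, so the manual pow(2.2) would only be for textures that aren't flagged that way.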

 


Use a nice BRDF (start with cook torrence / normalized blinn-phong)

Normalized Blinn-Phong is typical N * L lighting, right? And Cook-Torrance is where the eye-to-normal half-vector is used to compute last-gen's concept of specular lighting, right?

 

Anyway, it looks like Unity 5 and Unreal Engine 4 are both using Geomerics' Enlighten global illumination engine. Are they delivering something that would be pretty difficult for one person to develop on their own? I'm not talking about AAA engine-quality implementations. I'd be pretty happy to get some basic PBR working.

http://www.geomerics.com/wp-content/uploads/2014/03/Enlighten_Brochure.pdf


In Topic: Ambient Occlusion for Deforming Geometry

20 January 2015 - 12:26 PM

Thanks guys! I'm new to AO, and haven't even implemented it yet, so this helps a lot.

 


Alternatively, instead of pre-baking AO from animated objects you can compute it at runtime, so that the animation is taken into account. Some games do this by attaching a small number of ellipsoids to the character's bones to act as a very low-detail proxy of the character's volume. You can then very cheaply ray-trace against this array of ellipsoids to calculate occlusion.

Does this occur once during load-time? I could see this being beneficial, as it saves the artists time, and I can just have that resource available when I want the effect turned on. I'd only do this once though, right? If my model is animating, I'd never want to re-update my AO map, right?
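Actually, re-reading the quote, it sounds like this has to run every frame, since the proxies move with the bones. Here's a very rough sketch of occluding against a single sphere proxy; the formula is a cheap approximation I've seen around, not anything exact (all names made up):

// Very rough sketch: occlusion at a shaded point from one sphere proxy
// attached to a bone, evaluated every frame. Uses the approximation
// occlusion ~= (r^2 / d^2) * max(dot(N, dir), 0).
float sphereOcclusion(vec3 point, vec3 normal, vec3 sphereCenter, float sphereRadius)
{
    vec3  toSphere = sphereCenter - point;
    float dist     = length(toSphere);
    vec3  dir      = toSphere / dist;

    float coverage = (sphereRadius * sphereRadius) / (dist * dist);
    return clamp(dot(normal, dir), 0.0, 1.0) * min(coverage, 1.0);
}

I'm guessing the final occlusion would then be something like the product of (1 - sphereOcclusion) over all the proxies.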

 


I imagine that WoW and many other games are not prebaking AO but simply painting it into the diffuse/albedo maps by hand. That's something artists have done for decades. Centuries? Millenia?

When you say pre-baking, do you mean calculating the AO offline, then multiplying the result into the diffuse/albedo's color, therefore bypassing any type of dynamic AO calculation in the shader? That'd be like baking light maps into textured geometry before multi-texturing was possible.

 


I believe that a cleverly baked AO map enhances the appearance of animated characters more than the potential artifacts are annoying

Once I have AO working, I'm really looking forward to testing animated characters with AO enabled and disabled to see the difference in appearance. I'll probably just be generating the maps in Blender using some sort of plugin at the beginning.


In Topic: Multiplatform API For Webcams

19 January 2015 - 06:06 PM

Flash has access with no platform specialization, assuming user consent.

Java and Python also have access but require a bit of setup for each platform.

I'm actually working in C and C++. Sorry for being vague :) I added that detail in my original post.

 

 

If all you need is to grab the frames, OpenCV has a VideoCapture class and is multiplatform. (and has C, C++ and Python APIs)

It is very basic though.

But maybe you need some of the fancy image processing it contains too

This looks like something I could use! I'm downloading it, and going to test it out shortly.

