

Member Since 18 Sep 2012
Offline Last Active Mar 17 2016 07:51 AM

Posts I've Made

In Topic: Where is the cosine factor in extended LTE?

31 January 2016 - 07:25 AM


Honestly I do not understand whether you're asking why the LTE considers the camera, or why the LTE does not consider the camera.

Same for me, but I think a better example for the question is:
Taking a picture of a white wall, equally lit over its entire area, why are the corners of the picture darker than the center?

I've found this Wikipedia page about that: https://de.wikipedia.org/wiki/Cos4-Gesetz
But I don't know how to get the English version. (There is one about vignetting, but vignetting is the wrong term and has different causes.)


I think this cos4 law is the key to my questions.
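A quick numeric check of that law (a sketch, not pbrt code): the irradiance on the film falls off as cos^4 of the angle between the primary ray and the optical axis.

```python
import math

# Sketch of the cos^4 law: relative irradiance at a film point whose
# primary ray makes angle theta with the optical axis.
def cos4_falloff(theta):
    return math.cos(theta) ** 4

# A ray 30 degrees off-axis receives cos^4(30°) = 9/16 of the on-axis
# irradiance, i.e. roughly half; that is the darkening seen toward the
# corners of the white-wall picture.
for deg in (0, 15, 30, 45):
    print(deg, cos4_falloff(math.radians(deg)))
```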


My guess is that, to avoid the vignetting effect, the We factor (importance function) is proportional to the inverse of cos^4. And since the pdf of the primary ray contains cos^3, three of the four cosines get cancelled out. The remaining one in the denominator cancels with the cosine factor in the LTE, hidden in the G(v_0 - v_1) component.


I've checked the pbrt-v3 implementation; it works this way.
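The cancellation described above can be written out numerically. This is a sketch of my reading of the post, not pbrt-v3 source, assuming a pinhole camera with the film plane at unit distance and film area A:

```python
import math

# We          ∝ 1 / (A * cos^4 theta)  -- importance, counteracts vignetting
# pdf of ray  ∝ 1 / (A * cos^3 theta)  -- primary-ray pdf
# LTE cosine  =  cos theta             -- the cosine hidden in G(v_0 - v_1)
def estimator_weight(theta, film_area=1.0):
    we = 1.0 / (film_area * math.cos(theta) ** 4)
    pdf_ray = 1.0 / (film_area * math.cos(theta) ** 3)
    cos_lte = math.cos(theta)
    return we * cos_lte / pdf_ray  # all four cosines cancel

# The weight is 1 for every theta, so a uniformly lit wall renders
# uniformly bright, with no cos^4 darkening toward the corners.
```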


In Topic: Where is the cosine factor in extended LTE?

25 December 2015 - 07:38 PM


Take a real-world example: you are watching a movie that is displaying a uniform white image. Say the radiance of each ray is exactly 1. Obviously the rays that hit the center of the screen will reflect more light toward the viewer, while the ones that hit the edge of the screen will be a little darker, depending on the FOV of the projector. No matter how small the effect is, it should be there.

The edge receives less light because both the distance and the angle to the projector are larger than at the center.
The classic rendering equation describes that correctly, but I doubt that's comparable to the way a camera captures the image on film.
It might have to do with the optics of the lens.
Game devs typically use a vignette effect if they care at all, but most probably they don't care about physical correctness here.
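To make the "distance and angle" argument concrete, here is where the four cosines of the cos^4 law come from (a sketch, assuming a flat screen and a small on-axis projector at unit distance; the variable names are mine):

```python
import math

def irradiance_ratio(theta):
    # 1 cosine: the projector aperture is foreshortened by cos(theta)
    #           when seen from an off-axis screen point
    aperture = math.cos(theta)
    # 2 cosines: the off-axis point is at distance 1/cos(theta), and
    #            irradiance falls with the squared distance
    inv_dist_sq = math.cos(theta) ** 2
    # 1 cosine: light arrives at the screen at angle theta (Lambert)
    incidence = math.cos(theta)
    return aperture * inv_dist_sq * incidence  # = cos^4(theta)
```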

pbrt book, at page 760

What book?
What the hell is LTE?
And what is ray pdf? (Belongs to the other thread you started, but I could not resist)

What I mean is, you need to provide more information to get some answers ;)



Physically based rendering.

LTE stands for light transport equation, or rendering equation.

By ray PDF, I mean the probability density function value of a specific ray.


That's more of an offline rendering question than a game development one. :)


29 January 2014 - 01:06 AM

Thanks for all of the answers, they are very helpful.

In Topic: Perspective division in Vertex Shader? Is it allowed?

01 January 2014 - 08:58 PM

Thanks guys.


I think there are two issues if the perspective division is done in the VS.


 1. Some points can be behind the eye, which means the w component in clip space is negative. The output of the vertex shader is supposed to be in clip space. Of course you can do the perspective division in the VS, and mathematically doing it twice won't change anything. However, w will be 1 after a divide in the VS. With w equal to 1, the hardware is unable to reject any point behind the eye, which leads to incorrect rasterization.

 2. Since the w component of the vertex output is used to interpolate attributes, you can't change it arbitrarily. Otherwise attribute interpolation goes wrong; in other words, perspective correction won't work for the attributes.
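The first issue can be demonstrated with plain numbers (a sketch, not shader code; the clip test shown is the standard -w <= x, y, z <= w volume):

```python
def inside_clip_volume(x, y, z, w):
    # Standard clip test applied by the hardware before rasterization.
    return all(-w <= c <= w for c in (x, y, z))

# A hypothetical point behind the eye: its clip-space w is negative.
x, y, z, w = 0.1, 0.1, -0.5, -1.0
assert not inside_clip_volume(x, y, z, w)   # correctly rejected

# Doing the perspective divide in the VS forces w to 1 and destroys
# the sign information the clipper needs.
xd, yd, zd, wd = x / w, y / w, z / w, 1.0
assert inside_clip_volume(xd, yd, zd, wd)   # wrongly accepted
```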

In Topic: How does depth bias work in DX10 ?

27 August 2013 - 12:07 AM

It's weird.


1. The vertex with the maximum depth value in a primitive may be far away from the pixel being shaded. It may even be outside the render target.

2. r is an integer (23, 10 or 52, whatever). How does it relate to the maximum depth value in a primitive?
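For what it's worth, my reading of the D3D10 docs is this (a sketch; the helper names are mine, and r = 23, 10 or 52 is the mantissa-bit count of a 32-bit float, half, or double depth format): for floating-point depth buffers, DepthBias is scaled by 2^(e - r), where e is the exponent of the maximum z in the primitive. That product is roughly one representable depth step at that z, which is how an integer r ends up tied to the primitive's maximum depth.

```python
import math

def float_exponent(z):
    # IEEE-style exponent: z = m * 2^e with 1 <= |m| < 2.
    m, e = math.frexp(z)  # frexp yields 0.5 <= |m| < 1, so shift by one
    return e - 1

def biased_depth_offset(bias_units, slope_scaled, max_z, max_slope, r=23):
    # D3D10 float-depth formula (as I understand it):
    #   Bias = DepthBias * 2^(exponent(max z in primitive) - r)
    #          + SlopeScaledDepthBias * MaxDepthSlope
    return bias_units * 2.0 ** (float_exponent(max_z) - r) \
        + slope_scaled * max_slope

# One bias unit at max_z = 1.0 with r = 23 is 2^-23: one ulp of 1.0
# in a 32-bit float depth buffer.
```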