
David Neubelt

Member Since 01 Mar 2007
Offline Last Active Jun 26 2014 04:16 PM

Posts I've Made

In Topic: View matrix - explanation of its elements

31 December 2013 - 01:40 PM

You can perform an optimized 4x4 matrix inverse by transposing the 3x3 portion and doing dot products to get the last row.

 

To expand,

 

If you have a transformation that is a rotation plus a translation, e.g. the 3D camera's position is t and its rotation is R, then its world transform is:

 

x' = Rx + t,  where R is a 3x3 matrix, x is a 3x1 matrix (column vector), and t is a 3x1 matrix (column vector). The output x' is your transformed vector.

 

To get a view matrix you want to bring all vertices into the frame of reference of the camera, so the camera sits at the origin looking down one of the Cartesian axes. You simply need to invert the above equation and solve for x.

 

x' = Rx + t

x' - t = Rx

R^-1(x' - t) = (R^-1)R x

R^T(x' - t) = x            // because a rotation matrix is orthogonal, its transpose equals its inverse

x = R^T x' - R^T t

 

So to invert a 4x4 camera transformation matrix efficiently, you transpose the upper 3x3 block of the matrix, and for the translation part you negate your translation vector and transform it by the transposed rotation matrix (i.e. the new translation is -R^T t).
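The derivation above can be sketched in a few lines of Python (not from the original post; a minimal illustration using nested lists in row-major layout):

```python
def invert_rigid(R, t):
    """Invert x' = R x + t for a 3x3 rotation R (row-major nested lists)
    and a length-3 translation t. Returns (R_inv, t_inv) such that
    x = R_inv x' + t_inv, using R_inv = R^T and t_inv = -R^T t."""
    # Transpose the 3x3 rotation block.
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    # New translation: negate t and transform by the transposed rotation.
    t_inv = [-sum(Rt[i][k] * t[k] for k in range(3)) for i in range(3)]
    return Rt, t_inv
```

This avoids a general 4x4 inverse entirely: three dot products and a transpose, which is why it is the standard way to build a view matrix from a camera's world transform.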

 

-= Dave


In Topic: Spherical Worlds using Spherical Coordinates

08 October 2013 - 02:27 PM

Geodesic grids work with minimal distortion. They are built by starting with an icosahedron and subdividing its faces.

 

http://kiwi.atmos.colostate.edu/BUGS/geodesic/text.html
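The subdivision idea can be sketched in Python (not from the original post; the vertex and face tables are the standard golden-ratio icosahedron layout, and each subdivision splits every triangle into four, pushing new vertices out to the unit sphere):

```python
import math

def icosphere(subdivisions):
    """Geodesic sphere: start from an icosahedron, repeatedly split each
    triangle into four, and project new vertices onto the unit sphere.
    Returns (vertices, faces); vertices are unit-length (x, y, z) tuples."""
    t = (1.0 + math.sqrt(5.0)) / 2.0  # golden ratio

    def normalize(v):
        n = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
        return (v[0] / n, v[1] / n, v[2] / n)

    verts = [normalize(v) for v in [
        (-1, t, 0), (1, t, 0), (-1, -t, 0), (1, -t, 0),
        (0, -1, t), (0, 1, t), (0, -1, -t), (0, 1, -t),
        (t, 0, -1), (t, 0, 1), (-t, 0, -1), (-t, 0, 1)]]
    faces = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]

    for _ in range(subdivisions):
        midpoint = {}  # edge -> new vertex index, so shared edges split once

        def mid(a, b):
            key = (min(a, b), max(a, b))
            if key not in midpoint:
                va, vb = verts[a], verts[b]
                verts.append(normalize(((va[0] + vb[0]) / 2,
                                        (va[1] + vb[1]) / 2,
                                        (va[2] + vb[2]) / 2)))
                midpoint[key] = len(verts) - 1
            return midpoint[key]

        new_faces = []
        for a, b, c in faces:
            ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
            new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
        faces = new_faces
    return verts, faces
```

Because every triangle comes from the same regular seed, cell sizes stay nearly uniform across the sphere, which is the low-distortion property the post is referring to.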


In Topic: Deferred Shading Different Light Models

19 September 2013 - 08:54 PM

We use GGX for iso/anisotropic materials at Ready At Dawn.

 

You can see the details in the "Core Shading Model" section of http://blog.selfshadow.com/publications/s2013-shading-course/rad/s2013_pbs_rad_notes.pdf

 

-= Dave


In Topic: Gaussian filter confusion

30 August 2013 - 02:11 PM

Hmm, I'd imagine the RTR version is a typo, because typically you don't want to scale: you want the kernel normalized so it integrates to 1 over the domain -inf to inf. That's what the term out in front is for. I don't have the book on me at the moment to see in context what they are talking about.
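For a discrete filter kernel the same requirement shows up as the weights summing to 1, which you usually enforce by dividing by the sum rather than by the analytic 1/(sigma*sqrt(2*pi)) factor (a minimal sketch, not from the original post):

```python
import math

def gaussian_weights(radius, sigma):
    """Discrete 1D Gaussian kernel of size 2*radius+1, normalized so the
    weights sum to 1 -- the discrete analogue of the factor out front that
    makes the continuous Gaussian integrate to 1 over (-inf, inf)."""
    w = [math.exp(-(x * x) / (2.0 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    s = sum(w)  # renormalize: truncation to a finite radius loses some tail mass
    return [v / s for v in w]
```

Without that normalization, filtering a constant image would brighten or darken it, which is the scaling you don't want.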

 

-= Dave


In Topic: Radiometry and BRDFs

20 August 2013 - 10:40 PM

In other words: we are compensating for physically implausible assumptions (a perfect pinhole) with more physical nonsense (infinitely high sensor sensitivity), which still leads to implausible results (infinitely high measurements in the case of perfect reflections).

 
They aren't implausible assumptions, otherwise we wouldn't make them to begin with! Hopefully no infinities creep into our answer; otherwise we would get infinite flux values, and that's not very useful.
 
Our end goal is to obtain plausible results that are close to the ground truth. The only reason we use simplifying assumptions and simpler models is that we want simpler calculations, and we know that with our simplified models we get results that are very close to the ground truth.
 
An example is the general form of the radiation transfer equation (it describes how much flux is transferred between two surface elements). The general form requires double integrals over both surfaces, which is computationally expensive. Sometimes it's good enough to approximate the two surfaces as point sources and ask how much energy is transferred between two point sources. These approximations are valid and come to the right solution, for our purposes, if the two points are far enough apart and the surfaces are small enough.
Since the point-to-point radiation transfer equation gives us the same answer, within our acceptable tolerance, and with no integrals, we are happy to use it. Additionally, with some mathematical footwork you can show that the point-to-point transfer equation is derived directly from the definitions of irradiance, intensity, and solid angle, so it is mathematically sound with its feet firmly planted in the ground of physical plausibility.
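The point-to-point approximation can be sketched in Python (not from the original post; patch positions, normals, and areas are hypothetical, and the formula assumed is flux ≈ L * cos(theta1) * cos(theta2) * A1 * A2 / r^2 for small patches):

```python
import math

def point_to_point_flux(L, p1, n1, A1, p2, n2, A2):
    """Approximate flux transferred from small patch 1 to small patch 2 by
    treating each as a point: L * cos(t1) * cos(t2) * A1 * A2 / r^2.
    Valid when both patches are small relative to the distance r between
    them. p1/p2 are centers, n1/n2 unit normals, A1/A2 areas, L the
    radiance leaving patch 1 toward patch 2."""
    d = [p2[i] - p1[i] for i in range(3)]
    r2 = sum(c * c for c in d)
    r = math.sqrt(r2)
    w = [c / r for c in d]                       # unit direction, patch 1 -> patch 2
    cos1 = sum(n1[i] * w[i] for i in range(3))   # cos(theta1) at patch 1
    cos2 = -sum(n2[i] * w[i] for i in range(3))  # cos(theta2) at patch 2
    if cos1 <= 0.0 or cos2 <= 0.0:
        return 0.0                               # patches face away from each other
    return L * cos1 * cos2 * A1 * A2 / r2
```

No integrals anywhere: the double surface integral has collapsed to one evaluation, which is exactly the trade being described above.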
 
In the same vein, it's OK to use a pinhole model; if you do it correctly and make some assumptions about your scene, light, and camera, then the result should be very similar to what you'd get with an aperture, integrating over sensor area, over time, and over all wavelengths.
 
For example, you could write a simple raytracer that did a Monte Carlo integration at every pixel with a very small aperture, for one spherical area light very far away from the scene, and it would come very close to a rasterizer that used a point light source and a pinhole camera.
 
Hope that makes sense.
 
-= Dave
