# David Neubelt


1. ## Questions on Baked GI Spherical Harmonics

For the figure on page 18, it was generated with SH9 baked into light-map texels. However, your approach with a grid will work as well if it's dense enough. The occlusion will come naturally and there are no additional techniques you need to apply, because at every position in shadow the cubemap will receive less light. This works for both diffuse and ambient. I suspect the reason you aren't seeing any occlusion is that you are adding an ambient term that artificially flattens out the scene.

If there is a bright light on the opposite side of the probe then yes. Ringing can occur when you have a bright light on one side, which causes that side of the probe to generate really high values. The opposite side of the probe, facing away from the light, then needs a really high negative value to counteract them. Those negative values are the cause of the ringing.
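The ringing is easy to see numerically. Below is a small illustrative sketch (not from the post): project a very bright, narrow light from +z into the 9 real SH basis functions, then evaluate the reconstruction in other directions. Perpendicular to the light, the low-order reconstruction goes negative, which is the ringing described above.

```python
import numpy as np

# Real spherical-harmonic basis, bands 0-2 (the 9 coefficients of "SH9").
def sh9(d):
    x, y, z = d
    return np.array([
        0.282095,                 # Y(0, 0)
        0.488603 * y,             # Y(1,-1)
        0.488603 * z,             # Y(1, 0)
        0.488603 * x,             # Y(1, 1)
        1.092548 * x * y,         # Y(2,-2)
        1.092548 * y * z,         # Y(2,-1)
        0.315392 * (3*z*z - 1),   # Y(2, 0)
        1.092548 * x * z,         # Y(2, 1)
        0.546274 * (x*x - y*y),   # Y(2, 2)
    ])

# "Bake" a delta-like bright light from +z: coefficients are L * Y(d_light).
L = 100.0
coeffs = L * sh9(np.array([0.0, 0.0, 1.0]))

def reconstruct(d):
    return coeffs @ sh9(d)

print(reconstruct([0, 0, 1]))   # toward the light: large and positive
print(reconstruct([1, 0, 0]))   # 90 degrees away: negative -> ringing
```

The brighter the light, the larger those negative lobes get, which is why the artifact shows up with a strong light on one side of the probe.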
2. ## The Order 1886: Spherical Gaussian Lightmaps

1) To uniformly distribute the basis vectors - you can choose any distribution you want, but that is just the one we went with.

2) Yes, the free parameters you want are the sharpness and amplitude. There really is no rotation; you just use the light's direction vector.

-= Dave
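For concreteness, here is a minimal sketch of a single spherical Gaussian lobe in the usual form; `sg_eval` and the sample numbers are illustrative, not code from the talk. The axis is one of the uniformly distributed basis directions (or, as above, just the light's direction), and sharpness and amplitude are the two free parameters.

```python
import numpy as np

# One spherical Gaussian (SG) lobe:
#   G(v) = amplitude * exp(sharpness * (dot(axis, v) - 1))
# It peaks at the axis direction and falls off smoothly; sharpness
# controls the lobe width, amplitude its height.
def sg_eval(v, axis, sharpness, amplitude):
    return amplitude * np.exp(sharpness * (np.dot(axis, v) - 1.0))

axis = np.array([0.0, 0.0, 1.0])
print(sg_eval(axis, axis, 8.0, 2.0))                        # on-axis: the amplitude
print(sg_eval(np.array([1.0, 0.0, 0.0]), axis, 8.0, 2.0))   # off-axis: nearly zero
```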
3. ## View matrix - explanation of its elements

To expand: if you have a transformation that is a rotation and a translation, e.g. the 3D camera's position is t and its rotation is R, then its world transform is

x' = Rx + t,

where R is a 3x3 matrix, and x and t are 3x1 matrices (column vectors). The output is x', your transformed vector.

To get a view matrix you want to bring all vertices into the frame of reference of the camera, so the camera sits at the origin looking down one of the Cartesian axes. You simply need to invert the above equation and solve for x:

x' = Rx + t
x' - t = Rx
R⁻¹(x' - t) = (R⁻¹)Rx
Rᵀ(x' - t) = x            // a rotation matrix is orthogonal, so its transpose equals its inverse
x = Rᵀx' - Rᵀt

So to invert a 4x4 camera transformation matrix efficiently, you transpose the upper 3x3 block of the matrix, and for the translation part you negate and transform your translation vector by the transposed rotation matrix.

-= Dave
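The recipe in the last paragraph can be sketched directly (the function name and sample camera are illustrative):

```python
import numpy as np

def view_from_camera(world):
    """Invert a rigid 4x4 camera transform [R|t] without a general inverse:
    transpose the 3x3 rotation block and rotate the negated translation."""
    R = world[:3, :3]
    t = world[:3, 3]
    view = np.eye(4)
    view[:3, :3] = R.T          # R^-1 = R^T for a rotation matrix
    view[:3, 3] = -R.T @ t      # x = R^T x' - R^T t
    return view

# Example: a camera rotated 30 degrees about Y, sitting at (1, 2, 3).
c, s = np.cos(np.radians(30)), np.sin(np.radians(30))
world = np.array([[ c, 0, s, 1],
                  [ 0, 1, 0, 2],
                  [-s, 0, c, 3],
                  [ 0, 0, 0, 1]], dtype=float)
print(np.allclose(view_from_camera(world) @ world, np.eye(4)))  # True
```

Note this shortcut is only valid when the upper 3x3 block really is a pure rotation (no scale or shear); otherwise you need a general inverse.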
4. ## Spherical Worlds using Spherical Coordinates

Geodesic grids work with minimal distortion. They work by starting with an icosahedron and subdividing.   http://kiwi.atmos.colostate.edu/BUGS/geodesic/text.html
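The subdivision step is simple enough to sketch; this is an illustrative fragment, not code from the linked page. Each triangle splits into four, and the new edge midpoints are renormalized back onto the unit sphere.

```python
import numpy as np

def subdivide(tris):
    """One level of geodesic subdivision: split each spherical triangle
    into four and push the edge midpoints back onto the unit sphere."""
    out = []
    for a, b, c in tris:
        ab = (a + b) / np.linalg.norm(a + b)
        bc = (b + c) / np.linalg.norm(b + c)
        ca = (c + a) / np.linalg.norm(c + a)
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

# Demo on a single octant triangle; a real geodesic grid starts from the
# 20 faces of an icosahedron and subdivides each one the same way.
tri = (np.eye(3)[0], np.eye(3)[1], np.eye(3)[2])
level1 = subdivide([tri])
print(len(level1))             # 4 triangles
print(len(subdivide(level1)))  # 16 triangles
```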

6. ## Gaussian filter confusion

Hmm, I'd imagine the RTR one is a typo, because typically you don't want to scale; you want the filter to be normalized to 1 when you integrate over the domain -∞ to ∞. That's what the term out in front is for. I don't have the book on me at the moment to see in context what they are talking about.

-= Dave
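In practice, for a discrete filter kernel the continuous normalizer doesn't matter anyway: you sample the exponential and renormalize the weights to sum to 1. A small illustrative sketch:

```python
import numpy as np

# The 1/(sigma*sqrt(2*pi)) factor normalizes the *continuous* Gaussian so
# it integrates to 1 over (-inf, inf). For a discrete kernel, sample the
# exponential and renormalize the weights -- no extra scale is needed.
def gaussian_kernel(radius, sigma):
    x = np.arange(-radius, radius + 1, dtype=float)
    w = np.exp(-x**2 / (2.0 * sigma**2))
    return w / w.sum()

k = gaussian_kernel(3, 1.5)
print(k.sum())   # 1.0 (up to float rounding), so filtering preserves brightness
```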

They aren't implausible assumptions, otherwise we wouldn't make them to begin with! Hopefully no infinities creep into our answer, otherwise we would get flux of infinite value, and that's not very useful.

Our end goal is to obtain plausible results that are close to the ground truth. The only reason we use simplifying assumptions and simpler models is that we want simpler calculations. We know that with our simplified models we get results that are very close to the ground truth.

An example is the general form of the radiation transfer equation (it describes how much flux is transferred between two surface elements). The general form requires double integrals over both surfaces, which is computationally expensive. Sometimes it's good enough to approximate the two surfaces as point sources and ask how much energy is transferred between two points. These approximations will be valid and come to the right solution, for our purposes, if the two points are far enough apart and the surfaces are small enough. Since the point-to-point radiation transfer equation gives us the same answer, within our acceptable tolerance, and with no integrals, we are happy to use it. Additionally, with some mathematical footwork you can show that the point-to-point transfer equation is derived directly from the definitions of irradiance, intensity, and solid angle, so it is mathematically sound with its feet firmly planted in the ground of physical plausibility.

In the same vein, it's OK to use a pinhole model, and if you do it correctly and make some assumptions about your scene, light, and camera, then the result should be very similar to what you'd get with an aperture, integrating over sensor area, over time, and over all wavelengths.

For example, you could write a simple raytracer that did a Monte Carlo integration at every pixel with a very small aperture, for one spherical area light very far away from the scene, and it would come very close to a rasterizer that used a point light source and a pinhole camera.

Hope that makes sense.

-= Dave
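The "far enough apart, small enough" claim about the point-to-point approximation is easy to check numerically. Below is an illustrative sketch (my own numbers, not from the post): two small parallel coaxial patches, comparing the point-to-point flux against a Monte Carlo estimate of the full double-area integral.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 1.0              # constant emitter radiance
side = 0.1           # two parallel 0.1 x 0.1 patches
A1 = A2 = side * side
r = 10.0             # separation, large relative to the patch size

# Point-to-point approximation: Phi ~= L * A1 * A2 * cos1 * cos2 / r^2
# (both cosines are 1 for coaxial parallel patches).
phi_point = L * A1 * A2 / r**2

# Monte Carlo estimate of the full double-area integral
#   Phi = int_A1 int_A2 L * cos1 * cos2 / d^2 dA2 dA1
n = 200_000
p1 = rng.uniform(-side/2, side/2, (n, 2))   # points on patch 1 (z = 0)
p2 = rng.uniform(-side/2, side/2, (n, 2))   # points on patch 2 (z = r)
d2 = ((p2 - p1)**2).sum(axis=1) + r * r
cos1 = cos2 = r / np.sqrt(d2)               # both normals point along z
phi_full = (L * cos1 * cos2 / d2).mean() * A1 * A2

print(phi_point, phi_full)   # nearly identical at this separation
```

Shrink `r` toward the patch size and the two numbers start to diverge, which is exactly the regime where the point approximation stops being valid.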

Sorry for the late reply, I've been busy.

L_o(θ_i, φ_i + π) = L_i(θ_i, φ_i)

The above should be a little more clear. We want to develop a BRDF such that the intensity of the light coming in from one direction, L_i, is equal to the light going out in the direction of reflection.

Yah, sorry, you want to set up the Dirac delta function so that the angle away from the normal axis is equal and the other (azimuthal) angle is equal but rotated 180 degrees.

Right, as you point out, you will want to integrate over the visible hemisphere (aperture), sensor area, and time (equivalent to shutter speed), which beyond just controlling exposure will give you phenomena such as motion blur and depth of field. If instead of integrating RGB you additionally integrate over wavelengths, then you can get other phenomena such as chromatic aberration.

-= Dave

The fundamental radiation transfer equation deals with integrating out all of the differential terms to get flux arriving at a sensor (integrating over the areas of the surfaces). When you have an image plane, it will have sensors with finite area that detect, even in the case of a pinhole camera, radiation coming from a finite surface area in the scene. In your diagrams you are simply point-sampling a continuous signal of that flux.

The easiest way is to work backwards. You know you want to reflect out what is coming in, so you construct a BRDF to match those conditions. In this case, you want a BRDF that reflects all energy in the mirror direction, and to be valid it must maintain energy conservation and reciprocity. The first condition requires the BRDF to be zero everywhere except where

θ_o = θ_i  and  φ_o = φ_i + π,

the second condition, conserving energy, requires that

∫_Ω f(ω_i, ω_o) cos θ_o dω_o = 1,

thus f must equal

f = δ(cos θ_i − cos θ_o) δ(φ_i + π − φ_o) / cos θ_i,

and the third condition (reciprocity) is easily verified.
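The energy-conservation condition can be sanity-checked numerically. This sketch (my own, with illustrative parameters) replaces each Dirac delta with a narrow normalized Gaussian and integrates the BRDF times cos θ_o over the hemisphere using dω = d(cos θ_o) dφ_o:

```python
import numpy as np

# Check energy conservation of the mirror BRDF
#   f = delta(cos_i - cos_o) * delta(phi_i + pi - phi_o) / cos_i
# by replacing each Dirac delta with a narrow normalized Gaussian and
# integrating f * cos_o over the hemisphere.
def ndelta(x, eps=1e-3):
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

cos_i, phi_i = 0.5, np.pi / 4    # a sample incoming direction

cos_o = np.linspace(0.0, 1.0, 200_001)
phi_o = np.linspace(0.0, 2 * np.pi, 200_001)

# The integrand is separable, so integrate each factor independently.
int_cos = np.sum(ndelta(cos_i - cos_o) * cos_o) * (cos_o[1] - cos_o[0])
int_phi = np.sum(ndelta(phi_i + np.pi - phi_o)) * (phi_o[1] - phi_o[0])
total = int_cos * int_phi / cos_i
print(total)   # ~1.0: all incoming energy leaves in the mirror direction
```

The cos θ_i in the denominator is exactly what cancels the cos θ_o picked up by the delta, which is why the integral comes out to 1 for any incoming direction.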
11. ## SIGGRAPH 2013 Master Thread

What are you guys doing for indirect specular?

Specular probes; there are details in the course notes.

That only makes sense if you let the emitter area increase without bound, but physical devices are bounded in area, so it doesn't make sense to think in those terms.

-= Dave

If the solid angle is held constant, then the area on the left wall will increase as the angle becomes more oblique.
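A quick illustrative check of that relation (my own numbers): a patch subtending a fixed solid angle ω at distance r, hit at incidence angle θ from the wall's normal, has area A = ω r² / cos θ, which blows up as θ approaches grazing.

```python
import numpy as np

# Area on the wall subtending a fixed solid angle omega from distance r,
# at incidence angle theta from the wall normal: A = omega * r^2 / cos(theta).
omega, r = 1e-3, 1.0
for deg in (0, 45, 80):
    theta = np.radians(deg)
    print(deg, omega * r**2 / np.cos(theta))   # area grows with obliquity
```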