
# David Neubelt

Member Since 01 Mar 2007
Last Active Jan 31 2016 01:29 AM

### #5268943 Questions on Baked GI Spherical Harmonics

Posted on 02 January 2016 - 11:11 PM

The figure on page 18 was generated with SH9 baked into light-map texels. However, your approach with a grid will work as well if it's dense enough. The occlusion will come naturally, and there are no additional techniques you need to apply, because at every position in shadow the cubemap will receive less light.

This works for both diffuse and ambient.

I suspect the reason you aren't seeing any occlusion is that you are adding an ambient term that artificially flattens out the scene.

I've also noticed these weird artifacts on my light probes. Is this "ringing"? Or am I just really messing up the projection step?

If there is a bright light on the opposite side of the probe then yes. Ringing can occur when you have a bright light on one side, which causes that side of the probe to generate very high values. The opposite side of the probe, facing away from the light, then needs a very large negative value to counteract them.

The negative values are the cause of the ringing.
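To see where those negative lobes come from, here's a small sketch (my own illustration, not from the original post): project a single bright directional light into 9-coefficient SH and evaluate the reconstruction away from the light direction. Partway toward the opposite side, the reconstructed value dips negative.

```python
import math

# Real SH basis for bands l = 0..2 (9 coefficients), evaluated at a unit direction.
def sh9(x, y, z):
    return [
        0.282095,                        # l=0
        0.488603 * y,                    # l=1
        0.488603 * z,
        0.488603 * x,
        1.092548 * x * y,                # l=2
        1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z,
        0.546274 * (x * x - y * y),
    ]

# Project a directional (delta) light of intensity `power` from +z into SH:
# the coefficients are just power * Y_i(light_dir).
power = 100.0
coeffs = [power * b for b in sh9(0.0, 0.0, 1.0)]

def reconstruct(x, y, z):
    return sum(c * b for c, b in zip(coeffs, sh9(x, y, z)))

# Toward the light the reconstruction is strongly positive...
front = reconstruct(0.0, 0.0, 1.0)
# ...but 120 degrees away it swings negative: that's the ringing.
theta = math.radians(120.0)
back = reconstruct(math.sin(theta), 0.0, math.cos(theta))
```

With the light at +z, `front` comes out strongly positive while `back` is negative, which is exactly the dark halo you see on a probe opposite a bright light.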

### #5248126 The Order 1886: Spherical Gaussian Lightmaps

Posted on 21 August 2015 - 03:10 PM

1) To uniformly distribute the basis vectors. You can choose any distribution you want; that is just the one we went with.

2) Yes, the free parameters you want are the sharpness and amplitude.

There really is no rotation; you just use the light's direction vector.
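As a sketch of what those free parameters mean (my own example, using the common spherical-Gaussian form; the exact parameterization in the course notes may differ): the lobe axis is simply set to the light's direction, and amplitude and sharpness are the fit parameters.

```python
import math

def sg_eval(axis, sharpness, amplitude, v):
    """Evaluate a spherical Gaussian lobe at unit direction v:
    G(v) = amplitude * exp(sharpness * (dot(axis, v) - 1))."""
    d = sum(a * b for a, b in zip(axis, v))
    return amplitude * math.exp(sharpness * (d - 1.0))

# The lobe axis is just the light direction: no rotation needed.
light_dir = (0.0, 0.0, 1.0)
amplitude = 2.0   # free parameter
sharpness = 8.0   # free parameter: higher = tighter lobe

peak = sg_eval(light_dir, sharpness, amplitude, (0.0, 0.0, 1.0))
side = sg_eval(light_dir, sharpness, amplitude, (1.0, 0.0, 0.0))
```

At the axis the lobe evaluates to its amplitude, and it falls off exponentially with angular distance at a rate set by the sharpness.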

-= Dave

### #5120358 View matrix - explanation of its elements

Posted on 31 December 2013 - 01:40 PM

You can perform an optimized 4x4 matrix inverse by transposing the 3x3 portion and doing the dot products to get the last row.

To expand,

If you have a transformation that is a rotation and a translation, e.g. the 3D camera's position is t and its rotation is R, then its world transform is

x' = Rx + t, where R is a 3x3 matrix, x is a 3x1 matrix (column vector) and t is a 3x1 matrix (column vector). The output is x', your transformed vector.

To get a view matrix you want to bring all vertices into the frame of reference of the camera, so the camera sits at the origin looking down one of the Cartesian axes. You simply need to invert the above equation and solve for x.

x' = Rx + t

x' - t = Rx

R^-1(x' - t) = (R^-1)Rx

R^T(x' - t) = x            // because a rotation matrix is orthogonal, its transpose equals its inverse

x = R^T x' - R^T t

So to invert a 4x4 camera transformation matrix efficiently, you will want to transpose the upper 3x3 block of the matrix, and for the translation part you will want to negate and transform your translation vector by the transposed rotation matrix.
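A minimal sketch of that inverse (my own illustration, assuming row-major 4x4 matrices with the translation stored in the fourth column):

```python
import math

def mat_mul(a, b):
    """Multiply two 4x4 row-major matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def invert_rigid(m):
    """Invert a rigid transform (rotation R + translation t) without a
    general matrix inverse: transpose the 3x3 block, then set the new
    translation to -R^T * t."""
    r = [[m[j][i] for j in range(3)] for i in range(3)]                  # R^T
    t = [-sum(r[i][k] * m[k][3] for k in range(3)) for i in range(3)]   # -R^T t
    return [r[0] + [t[0]],
            r[1] + [t[1]],
            r[2] + [t[2]],
            [0.0, 0.0, 0.0, 1.0]]

# Camera world transform: rotate 90 degrees about z, translate to (1, 2, 3).
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
world = [[c,  -s,  0.0, 1.0],
         [s,   c,  0.0, 2.0],
         [0.0, 0.0, 1.0, 3.0],
         [0.0, 0.0, 0.0, 1.0]]

view = invert_rigid(world)
# view * world should be (numerically) the identity.
ident = mat_mul(view, world)
```

Multiplying the view matrix back against the world transform recovers the identity, which confirms the transpose-plus-negated-translation trick.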

-= Dave

Posted on 20 August 2013 - 10:40 PM

In other words: we are compensating for physically implausible assumptions (a perfect pinhole) with more physical nonsense (infinitely high sensor sensitivity), which still leads to implausible results (infinitely high measurements in the case of perfect reflections).

They aren't implausible assumptions, otherwise we wouldn't make them to begin with! Hopefully no infinities creep into our answer; otherwise we would get flux of infinite value, and that's not very useful.

Our end goal is to obtain plausible results that are close to the ground truth. The only reason we use simplifying assumptions and simpler models is that we want simpler calculations. We know that with our simplified models we get results that are very close to the ground truth.

An example is to look at the general form of the radiation transfer equation (it describes how much flux is transferred between two surface elements). The general form requires double integrals over both surfaces, which is computationally expensive. Sometimes it's good enough to say that we can approximate the result as two point sources and ask how much energy is transferred between two point sources. These approximations will be valid and come to the right solution, for our purposes, if the two points are far enough away and the surfaces are small enough.

Since the point-to-point radiation transfer equation gave us the same answer, within our acceptable tolerance, and with no integrals, we are happy to use it. Additionally, with some mathematical footwork you can show that the point-to-point transfer equation is derived directly from the definitions of irradiance, intensity and solid angle, so it is mathematically sound with its feet firmly planted in the ground of physical plausibility.
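As a rough numerical illustration of that point (mine, not from the original post): integrate dΦ = L cosθ1 cosθ2 dA1 dA2 / r² by Monte Carlo over two small parallel facing patches, and compare against the point-to-point approximation Φ ≈ L A1 A2 / d².

```python
import random

random.seed(1)

L = 5.0        # radiance of the emitting patch (arbitrary units)
side = 0.01    # patch side length: small compared to the separation
d = 1.0        # separation between the two parallel, facing patches
area = side * side

# Monte Carlo estimate of the double integral of
# L * cos(theta1) * cos(theta2) / r^2 over both patch areas.
n = 2000
total = 0.0
for _ in range(n):
    x1 = random.uniform(-side / 2, side / 2)
    y1 = random.uniform(-side / 2, side / 2)
    x2 = random.uniform(-side / 2, side / 2)
    y2 = random.uniform(-side / 2, side / 2)
    r2 = (x2 - x1) ** 2 + (y2 - y1) ** 2 + d * d
    # For facing parallel patches, cos(theta1) = cos(theta2) = d / r.
    cos1_cos2 = (d * d) / r2
    total += L * cos1_cos2 / r2
flux_mc = total / n * area * area

# Point-to-point approximation: treat each patch as a point.
flux_point = L * area * area / (d * d)
```

Because the patches are tiny relative to their separation, the integrand is nearly constant and the two answers agree to a fraction of a percent; shrink `d` toward `side` and the approximation starts to break down, exactly as described above.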

In the same vein, it's OK to use a pinhole model, and if you do it correctly and make some assumptions about your scene, light and camera, then the result should be very similar to if you had an aperture and integrated over sensor area, over time and over all wavelengths.

For example, you could write a simple raytracer that did a Monte Carlo integration at every pixel with a very small aperture, for one spherical area light very far away from the scene, and it would come very close to a rasterizer that used a point light source and a pinhole camera.

Hope that makes sense.

-= Dave

Posted on 20 August 2013 - 03:00 AM

Sorry for the late reply I've been busy.

In the energy conservation equation, what does rho represent? Wikipedia says that this equation should be <= 1. Is rho some sort of reflection percentage?

L_o(θ_i, φ_i + π) = L_i(θ_i, φ_i)

The above should be a little clearer. We want to develop a BRDF such that the light coming in from one direction, L_i, is equal to the light going out in the direction of reflection.

In the final BRDF, should the second delta function be [link]? The sign of pi doesn't matter, if I'm thinking about this correctly.

Yeah, sorry. You want to set up the Dirac delta functions so that the angle away from the normal axis is equal, and the other angle is equal but rotated 180 degrees.

Then I think of how a film camera works, you have a photographic film that gets darker when photons hit it, thus making the final image brighter (because the film is a negative)

Right, as you point out, you will want to integrate over the visible hemisphere (aperture), sensor area, and time (equivalent to shutter speed), which beyond just controlling exposure will give you phenomena such as motion blur and depth of field. If instead of integrating RGB you additionally integrate over the wavelengths, then you can get other phenomena such as chromatic aberration.

-= Dave

Posted on 14 August 2013 - 02:45 AM

I suppose this is another area where approximations are tripping me up, as there is no pinhole that will only let one ray through, nor any perfect lens that can ensure that every point on the retina receives light from just one direction. In reality, there is always a tiny solid angle around that ideal direction from which the point on the retina receives light.

The fundamental radiation transfer equation deals with integrating out all of the differential terms to get the flux arriving at a sensor (integrating over the areas of the surfaces). When you have an image plane it will have sensors with finite area that will detect, even in the case of a pinhole camera, radiation coming from a finite surface area in the scene.

In your diagrams you are simply point-sampling a continuous signal of that flux.

Posted on 14 August 2013 - 02:40 AM

However, I know that this function can't return 1 in the required direction, because the units would be wrong. The ELcosθ term is in W·m^-2, and the BRDF needs to return something in units of sr^-1. If I return 1, then the end result is that the calculated radiance is the same magnitude as the irradiance, even though the units are different, which seems too easy. Can anyone help me figure out the right way to think about this problem so that I can figure out what this BRDF should look like?

The easiest way is to work backwards. You know you want to reflect out what is coming in, so you construct a BRDF to match those conditions. In this case, you want a BRDF that reflects all energy in the mirror direction, and to be valid it must maintain energy conservation and reciprocity. The first condition requires the BRDF to be zero everywhere except where,

θ_o = θ_i and φ_o = φ_i + π

and the second condition to conserve energy will require that,

∫_Ω f(θ_i, φ_i; θ_o, φ_o) cos θ_o dω_o = 1

thus, f must equal

f(θ_i, φ_i; θ_o, φ_o) = δ(cos θ_i - cos θ_o) δ(φ_o - φ_i - π) / cos θ_i

and the third condition (reciprocity) is easily verified, since on the deltas' support the expression is symmetric in the incoming and outgoing directions.
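Written out, the energy-conservation check for a mirror BRDF of this form is the standard textbook one (my notation, reconstructing what the lost forum link presumably showed):

```latex
% Mirror BRDF:
%   f_r(\omega_i,\omega_o)
%     = \frac{\delta(\cos\theta_i-\cos\theta_o)\,\delta(\phi_o-\phi_i-\pi)}{\cos\theta_i}
\int_{\Omega} f_r(\omega_i,\omega_o)\cos\theta_o \,\mathrm{d}\omega_o
  = \int_0^{2\pi}\!\!\int_0^1
      \frac{\delta(\cos\theta_i-\cos\theta_o)\,\delta(\phi_o-\phi_i-\pi)}{\cos\theta_i}\,
      \cos\theta_o \,\mathrm{d}(\cos\theta_o)\,\mathrm{d}\phi_o
  = \frac{\cos\theta_i}{\cos\theta_i} = 1
```

The two deltas collapse the hemisphere integral onto the single mirror direction, and the 1/cosθ_i factor is exactly what cancels the cosθ_o in the integrand.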

P.S. Sorry, I tried to use LaTeX but the forum blocked me from using LaTeX images. I tried using the equation tags but the LaTeX wouldn't parse, so I just posted the link instead.

Posted on 26 July 2013 - 09:38 PM

Yeah it's pretty nice.

Like most modern GPUs, there's plenty of ALU available while you're waiting for reads from memory, and complex BRDFs are a pretty straightforward way to take advantage of that.

What are you guys doing for indirect specular?

There are details in the course notes, but in short: specular probes.

Posted on 08 May 2013 - 02:06 AM

That only makes sense if you let the emitter area grow without bound, but physical devices are bounded in area, so it doesn't make sense to think in those terms.

-= Dave

### #5054742 Spherical harmonics in pixel shader

Posted on 18 April 2013 - 06:25 PM

As an aside, the whole 'dot product' notation here is actually mathematically correct, but really confusing for beginners since most people tend to associate it with directions and angles. There are actually no 'directions' involved in SH, since wah waaah wahh wahh wahh frequency domain. You're just multiplying the coefficients and adding the results as you go. Pick apart a dot product, and, hey, there's the same operations.

The basis functions are parameterized in spherical coordinates where the parameters represent a direction.

Personally, I like to think along the same lines that Ashaman73 does as it gives me an easy mental framework to reason about SH with.

I think it's very intuitive to think of SH as a set of lobes in a fixed set of directions. Each basis function adds another set of directions that you can project your signal onto. The directions represent the input angles to the basis function that produce the global maxima of the polynomial. The spacing between the maxima represents the resolution of the signal you can represent.

-= Dave

### #5033296 GPU particles trajectory and external effects

Posted on 17 February 2013 - 02:12 AM

I was looking at the UE4 pdf for ideas on how to get trajectories and external effects (winds, explosions) to affect my particles (which I currently run on a compute shader): http://www.unrealengine.com/files/misc/The_Technology_Behind_the_Elemental_Demo_16x9_(2).pdf In the Particles section they say they use "Particle Curves to modulate particle attributes* (1D lookup)", so I guess they somehow store the trajectory in a 1D texture? How would one go about encoding the trajectory data in a texture? The second thing I didn't quite get is about the Vector Fields: "Additional forces from Vector Fields* (3D lookup)". I'm completely lost here: are they storing all vector fields in the current scene in some volume texture and then, in the compute shader, checking each particle for interactions with all "Vector Fields" in the volume? They also mention "Affect all particle systems globally or a single system", so there must be some indexing going on.

If you need a global external force to affect your particles then just pass it as a uniform to your compute shader. If you need localized effects then you can have an array of N effectors that you pass to the shader. If you want complex and localized external forces then you have to store them in a 3D grid.
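A CPU-side sketch of that per-particle force accumulation (my own mock-up of what the compute shader would do; the effector format and linear falloff are illustrative assumptions, and the 3D-grid case is omitted for brevity):

```python
# Global uniform force plus an array of local effectors,
# accumulated per particle, mirroring the compute-shader inputs above.

gravity = (0.0, -9.8, 0.0)  # global force, passed as a "uniform"

# Local effectors: (position, radius, strength) with a simple linear falloff.
effectors = [((0.0, 0.0, 0.0), 2.0, 5.0)]

def effector_force(p, pos, radius, strength):
    dx = [p[i] - pos[i] for i in range(3)]
    dist = sum(c * c for c in dx) ** 0.5
    if dist >= radius or dist == 0.0:
        return (0.0, 0.0, 0.0)
    falloff = strength * (1.0 - dist / radius)  # push outward, fading with distance
    return tuple(c / dist * falloff for c in dx)

def total_force(p):
    f = list(gravity)
    for pos, radius, strength in effectors:
        e = effector_force(p, pos, radius, strength)
        f = [a + b for a, b in zip(f, e)]
    return tuple(f)

f = total_force((1.0, 0.0, 0.0))
```

A particle one unit from the effector picks up half the effector's strength pushing it outward, on top of the global gravity term.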

The curves they are talking about allow for time-varying attributes (e.g. color over the life of the particle, or speed over the life of the particle).

Finally, the vector fields are divergence-free (there is no set of vectors that will make the particle get stuck at a point). Most 3D packages that support particle simulation use an approximation called curl noise, which maintains the divergence-free property without having to resort to a 3D fluid simulation. However, they also support more complicated simulations that can export these fields. These vector fields can either be generated directly in your engine or imported from popular 3D packages.
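The reason curl noise stays divergence-free is that it takes the curl of a potential; in 2D, v = (∂ψ/∂y, -∂ψ/∂x) has zero divergence for any smooth ψ. A tiny numerical check (my own sketch, with an arbitrary smooth potential standing in for the noise function):

```python
import math

def psi(x, y):
    """Scalar potential; real curl noise would use a smooth noise function here."""
    return math.sin(x) * math.cos(2.0 * y)

H = 1e-4  # finite-difference step

def velocity(x, y):
    """2D 'curl' of psi: v = (dpsi/dy, -dpsi/dx). Divergence-free by construction."""
    dpdx = (psi(x + H, y) - psi(x - H, y)) / (2 * H)
    dpdy = (psi(x, y + H) - psi(x, y - H)) / (2 * H)
    return (dpdy, -dpdx)

def divergence(x, y):
    dvx_dx = (velocity(x + H, y)[0] - velocity(x - H, y)[0]) / (2 * H)
    dvy_dy = (velocity(x, y + H)[1] - velocity(x, y - H)[1]) / (2 * H)
    return dvx_dx + dvy_dy

div = divergence(0.7, -0.3)
```

The mixed second derivatives cancel, so the measured divergence is zero up to finite-difference error; that's why particles advected through such a field never pile up at a point.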

No matter what source you use, it just controls the motion of the particle at a point in space. In particular, it seems that UE4 allows it to control either the velocity or the acceleration.

-= Dave

### #5031709 Approximation of Normals in Screen Space

Posted on 13 February 2013 - 01:01 AM

If you have two images from different viewpoints then you can reconstruct depth, and from depth you can derive normal information.

If you have one image and you can determine the light direction (via shadows), then you can determine normals via (n·l)·p = I, where p is the albedo and I is the intensity of the image.

If you have multiple images from one (the same) viewpoint and multiple known light directions, then you can reconstruct the normals using the previous technique and a least-squares regression. Additionally, you can use expectation maximization or other non-linear solvers. The term to search for is photometric stereo.
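The least-squares step can be sketched like this (my own toy example; the light directions, intensities and solver are all illustrative): with several known lights, each observation gives one row of a linear system in g = albedo · n, which the normal equations solve directly.

```python
def normalize(v):
    n = sum(c * c for c in v) ** 0.5
    return [c / n for c in v]

# Ground truth we'll try to recover: surface normal and albedo.
true_n = normalize([1.0, 2.0, 3.0])
true_albedo = 0.8

# Four known light directions (all front-lighting the surface here).
lights = [normalize(l) for l in
          ([1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [-1.0, 0.5, 1.0], [0.2, -0.3, 1.0])]

# Observed intensities from the Lambertian model I = albedo * dot(n, l).
I = [true_albedo * sum(a * b for a, b in zip(true_n, l)) for l in lights]

# Least squares for g = albedo * n: solve (A^T A) g = A^T I.
ata = [[sum(l[i] * l[j] for l in lights) for j in range(3)] for i in range(3)]
atb = [sum(l[i] * v for l, v in zip(lights, I)) for i in range(3)]

# Small Gauss-Jordan elimination for the 3x3 system.
m = [row[:] + [b] for row, b in zip(ata, atb)]
for col in range(3):
    piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
    m[col], m[piv] = m[piv], m[col]
    for r in range(3):
        if r != col:
            fac = m[r][col] / m[col][col]
            m[r] = [a - fac * b for a, b in zip(m[r], m[col])]
g = [m[r][3] / m[r][r] for r in range(3)]

# The magnitude of g is the albedo; its direction is the normal.
albedo = sum(c * c for c in g) ** 0.5
n = [c / albedo for c in g]
```

With noise-free data this recovers the normal and albedo exactly; with real images the over-determined system (more lights than unknowns) is what makes the estimate robust.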

Lastly, if you can have user input and one image and have the user pick highlights then you can try to determine the light direction and then reconstruct normals.

-= Dave

### #4964978 Generating spheres with uniform vertex density

Posted on 31 July 2012 - 02:05 PM

This gives decent results for renderable geometry.

http://mathproofs.blogspot.com/2005/07/mapping-cube-to-sphere.html
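If I remember the mapping from that post correctly, each point (x, y, z) on the surface of the [-1, 1]^3 cube maps onto the unit sphere as below; treat the exact formula as an assumption and check it against the link:

```python
import math

def cube_to_sphere(x, y, z):
    """Map a point on the surface of the [-1, 1]^3 cube onto the unit sphere.
    This spreads vertices much more evenly than simply normalizing."""
    sx = x * math.sqrt(1.0 - y * y / 2.0 - z * z / 2.0 + y * y * z * z / 3.0)
    sy = y * math.sqrt(1.0 - z * z / 2.0 - x * x / 2.0 + z * z * x * x / 3.0)
    sz = z * math.sqrt(1.0 - x * x / 2.0 - y * y / 2.0 + x * x * y * y / 3.0)
    return (sx, sy, sz)

# A corner, an edge midpoint and a general face point all land on the unit sphere.
pts = [cube_to_sphere(1.0, 1.0, 1.0),
       cube_to_sphere(1.0, 1.0, 0.0),
       cube_to_sphere(1.0, 0.25, -0.5)]
```

Starting from a subdivided cube and pushing its vertices through this mapping avoids the pole clustering you get from a latitude/longitude sphere.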

### #4918812 Wavelet Radiance Transport for Interactive Indirect Lighting

Posted on 03 March 2012 - 03:23 AM

Are you saying calculate it like it's normal radiosity?

No - I was just trying to give an example of how you can use wavelets with radiosity.

When you say they are storing coefficients in angular dimensions, are you talking about a spherical harmonic?

Spherical harmonics are one set of basis functions; Haar is another, as are the H-basis, Fourier, etc.

I think you need to read the prerequisites for that paper. Read Hierarchical Radiosity (Pat Hanrahan) first, then Wavelet Radiosity; I believe Schröder has a good exposition on it. When you're done with that you should read Wavelet Radiance and then finally this paper. By the way, there is more recent research in this area: http://www.tml.tkk.fi/~jaakko/meshless-tr/ .

In a hierarchical radiosity scheme you adaptively refine the mesh until subdividing the mesh doesn't give you better results. For example, if the lighting is changing slowly over a surface then you can approximate the radiance at sub-patches with the parent patch's radiance. That's called pushing, and of course if you use the children patches to estimate radiance at the parent it's called pulling. Wavelets are a set of hierarchical basis functions and have a very similar method for push/pull (pyramid up and pyramid down).

Determining if you should subdivide more is called the 'oracle' in radiosity literature.

Of course, in HR or CR (classical radiosity) they are solving for radiosity [W/m^2], which is to say: how much light is at each patch? You can apply the same ideas to solving for radiance, which is [W/(m^2 sr)]. If you look at the equation for solving radiance it is parameterized by location in space and direction (e.g. the light distribution is based on location and angle). Note that radiance L1(i1, o1, x1) -> L2(i1, o1, x2) says the radiance from one position and one input angle is output to another; these are the sender and receiver links that the literature refers to.

They apply wavelets to the spatial and angular domain itself to link together radiance.

Anyway, it's tough to jump into a technique that is a refinement of older techniques without first understanding where they came from.

-= Dave

### #4918727 Wavelet Radiance Transport for Interactive Indirect Lighting

Posted on 02 March 2012 - 04:49 PM

I haven't implemented this technique but I get the basic ideas of the paper.

The mathematics for the basis functions are wavelets, specifically using the Haar basis. If you were to do classic radiosity then you'd say that, for each patch, all other patches contribute some amount of influence. The influences would be the coefficients, or a row in a matrix. In classic radiosity you can do a Gauss-Seidel solve and get your solution.

The problem is that the matrix ends up being very sparse, with a lot of zero coefficients that don't contribute to the final radiance. They store coefficients for the spatial dimension and angular dimension between patches in the Haar basis, and the refinement oracle adds more coefficients only if it needs to.
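To get a feel for why the Haar basis makes this sparse, here's a little standalone demo (mine, not from the paper): transform a smooth "slowly varying lighting" signal into the Haar basis, drop the small detail coefficients the way an oracle would decline to refine them, and reconstruct with only a small error.

```python
import math

def haar_forward(v):
    """Full 1D orthonormal Haar transform (length must be a power of two)."""
    v = list(v)
    n = len(v)
    while n > 1:
        half = n // 2
        avg = [(v[2 * i] + v[2 * i + 1]) / math.sqrt(2) for i in range(half)]
        det = [(v[2 * i] - v[2 * i + 1]) / math.sqrt(2) for i in range(half)]
        v[:n] = avg + det
        n = half
    return v

def haar_inverse(v):
    v = list(v)
    n = 1
    while n < len(v):
        avg, det = v[:n], v[n:2 * n]
        out = []
        for a, d in zip(avg, det):
            out += [(a + d) / math.sqrt(2), (a - d) / math.sqrt(2)]
        v[:2 * n] = out
        n *= 2
    return v

# A smooth signal, like slowly varying lighting over a surface.
signal = [math.sin(2 * math.pi * i / 64) for i in range(64)]

coeffs = haar_forward(signal)
# Oracle-style thresholding: keep only the significant coefficients.
kept = [c if abs(c) > 0.1 else 0.0 for c in coeffs]
nonzero = sum(1 for c in kept if c != 0.0)

recon = haar_inverse(kept)
err = max(abs(a - b) for a, b in zip(signal, recon))
```

Most of the fine-scale detail coefficients are negligible for a smooth signal, so well under half the coefficients survive while the reconstruction stays close; that's the same effect that leaves the wavelet transport matrix mostly empty.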

Anyway, that's a hand-wavy overview. It's been about three years since I read the paper, so if you have any more specific questions, ask and I'll go back and read the paper and go into more detail.

If you are serious about implementing this paper then there is a good book called Ripples in Mathematics that talks about wavelets and is a pretty easy introduction to the subject.

-= Dave
