
David Neubelt

Member Since 01 Mar 2007

#5120358 View matrix - explanation of its elements

Posted by David Neubelt on 31 December 2013 - 01:40 PM

 you can perform an optimized 4x4 matrix inverse by transposing the 3x3 portion and doing the dot products to get the last row.

 

To expand,

 

If you have a transformation that is a rotation and a translation, e.g. the 3D camera's position is t and its rotation is R, then its world transform is,

 

x' = Rx + t,  where R is a 3x3 matrix, x is a 3x1 matrix (column vector) and t is a 3x1 matrix (column vector). The output is x', your transformed vector.

 

To get a view matrix you want to bring all vertices into the frame of reference of the camera, so the camera sits at the origin looking down one of the Cartesian axes. You simply need to invert the above equation and solve for x.

 

x' = Rx + t

x' - t = Rx

R^-1(x' - t) = (R^-1)Rx

R^T(x' - t) = x            // because a rotation matrix is orthogonal, its transpose equals its inverse

x = R^T x' - R^T t

 

So to invert a 4x4 camera transformation matrix efficiently, you want to transpose the upper 3x3 block of the matrix, and for the translation part you want to transform your translation vector by the transposed rotation matrix and negate the result.
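
For concreteness, here's a minimal sketch of that inverse in C++ (my own illustration, assuming column-vector convention with the translation in the last column and a hypothetical Mat4 type):

// Invert a rigid transform M = [R | t] (rotation + translation only),
// using x = R^T x' - R^T t from the derivation above.
struct Mat4 { float m[4][4]; };   // m[row][column]

Mat4 InvertRigidTransform(const Mat4& a)
{
    Mat4 r = {};

    // Transpose the upper 3x3 rotation block.
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r.m[i][j] = a.m[j][i];

    // New translation is -R^T * t: these are the dot products mentioned above.
    for (int i = 0; i < 3; ++i)
        r.m[i][3] = -(r.m[i][0] * a.m[0][3] +
                      r.m[i][1] * a.m[1][3] +
                      r.m[i][2] * a.m[2][3]);

    r.m[3][3] = 1.0f;   // bottom row stays (0, 0, 0, 1)
    return r;
}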

 

-= Dave




#5087737 Radiometry and BRDFs

Posted by David Neubelt on 20 August 2013 - 10:40 PM

In other words: we are compensating for physically implausible assumptions (a perfect pinhole) with more physical nonsense (infinitely high sensor sensitivity), which still leads to implausible results (infinitely high measurements in the case of perfect reflections).

 
They aren't implausible assumptions, otherwise we wouldn't make them to begin with! Hopefully no infinities creep into our answer; otherwise we would get infinite flux values, and that's not very useful.
 
Our end goal is to obtain  plausible results that are close to the ground truth. The only reason we use simplifying assumptions and simpler models is that we want simpler calculations. We know that with our simplified models we get results that are very close to the ground truth.
 
An example is to look at the general form of the radiation transfer equation (it describes how much flux is transferred between two surface elements). The general form requires double integrals over both surfaces, which is computationally expensive. Sometimes it's good enough to approximate the two surfaces as point sources and ask how much energy is transferred between two points. These approximations will be valid and come to the right solution, for our purposes, if the two points are far enough away and the surfaces are small enough.
Since the point-to-point radiation transfer equation gives us the same answer, within our acceptable tolerance, and with no integrals, we are happy to use it. Additionally, with some mathematical footwork you can show that the point-to-point transfer equation is derived directly from the definitions of irradiance, intensity and solid angle, so it's mathematically sound with its feet firmly planted in the ground of physical plausibility.
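
To make that concrete, here is a rough sketch of the two forms in standard radiometric notation (my own summary, not something from this thread):

General form (double integral over both surface areas):
    Φ₁→₂ = ∫_{A1} ∫_{A2} L · cosθ₁ cosθ₂ / r² dA₂ dA₁

Point-to-point approximation (small surfaces, large separation r):
    Φ₁→₂ ≈ (L A₁ cosθ₁) · (A₂ cosθ₂ / r²) = I · Ω

where I is the intensity of the first surface treated as a point source and Ω is the solid angle the second surface subtends; the two factors come straight from the definitions of intensity and solid angle.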
 
In the same vein, it's OK to use a pinhole model, and if you do it correctly and make some assumptions about your scene, light and camera, the result should be very similar to what you would get with a real aperture, integrating over sensor area, time and all wavelengths.
 
For example, you could write a simple raytracer that did a Monte Carlo integration at every pixel with a very small aperture, for one spherical area light very far away from the scene, and it would come very close to a rasterizer that used a point light source and a pinhole camera.
 
Hope that makes sense.
 
-= Dave



#5087522 Radiometry and BRDFs

Posted by David Neubelt on 20 August 2013 - 03:00 AM

Sorry for the late reply I've been busy.

 

In the energy conservation equation, what does rho represent? Wikipedia says that this equation should be <= 1. Is rho some sort of reflection percentage?

 

ρ(ω_i) = ∫_Ω f_r(ω_i, ω_r) cosθ_r dω_r ≤ 1

 

The above should be a little more clear. We want to develop a BRDF such that the light coming in from one direction, L_i, is equal to the light going out in the direction of reflection.

 

In the final BRDF, should the second delta function be [link]? The sign of pi doesn't matter, if I'm thinking about this correctly.

 

Yeah, sorry, you want to set up the Dirac delta functions so that the angle away from the normal axis is the same, and the other (azimuthal) angle is the same but rotated 180 degrees.

 

Then I think of how a film camera works: you have photographic film that gets darker when photons hit it, thus making the final image brighter (because the film is a negative).

 

Right, as you point out, you will want to integrate over the visible hemisphere (aperture), sensor area, and time (equivalent to shutter speed), which beyond just controlling exposure will give you phenomena such as motion blur and depth of field. If instead of integrating RGB you additionally integrate over wavelengths, you can get other phenomena such as chromatic aberration.
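
Schematically, that full camera integral looks something like the standard measurement equation (my own sketch in generic notation, with W being the sensor's response):

    Φ_pixel = ∫_shutter ∫_wavelengths ∫_pixel area ∫_aperture W(x, ω, λ) · L(x, ω, λ, t) · cosθ dω dA dλ dt

Dropping the time, area and wavelength integrals (and shrinking the aperture to a pinhole) is what collapses this down to the single radiance sample per pixel that a simple renderer evaluates.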

 

-= Dave

 




#5085768 Radiometry and BRDFs

Posted by David Neubelt on 14 August 2013 - 02:45 AM

I suppose this is another area where approximations are tripping me up, as there is no pinhole that will only let one ray through, nor any perfect lens that can ensure that every point on the retina receives light from just one direction. In reality, there is always a tiny solid angle around that ideal direction from which the point on the retina receives light.

 

The fundamental radiation transfer equation deals with integrating out all of the differential terms to get the flux arriving at a sensor (integrating over the areas of the surfaces). When you have an image plane, its sensors have finite area and will detect, even in the case of a pinhole camera, radiation coming from a finite surface area in the scene.

 

In your diagrams you are simply point sampling a continuous signal of that flux.




#5085765 Radiometry and BRDFs

Posted by David Neubelt on 14 August 2013 - 02:40 AM

However, I know that this function can't return 1 in the required direction, because the units would be wrong. The E_L cosθ term is in W·m⁻², and the BRDF needs to return something in units of sr⁻¹. If I return 1, then the end result is that the calculated radiance is the same magnitude as the irradiance, even though the units are different, which seems too easy. Can anyone help me figure out the right way to think about this problem so that I can figure out what this BRDF should look like?

 

The easiest way is to work backwards. You know you want to reflect out what is coming in, so you construct a BRDF to match those conditions. In this case, you want a BRDF that reflects all energy in the mirror direction, and to be valid it must maintain energy conservation and reciprocity. The first condition requires the BRDF to be zero everywhere except where,

 

θ_r = θ_i   and   φ_r = φ_i ± π

 

and the second condition to conserve energy will require that,

 

∫_Ω f_r(ω_i, ω_r) cosθ_r dω_r = 1

 
thus, f must equal

f_r(ω_i, ω_r) = δ(cosθ_r − cosθ_i) δ(φ_r − φ_i ± π) / cosθ_i

 
and the third condition is easily verified.
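
For completeness, here's a quick check (my own working, not from the original post) that plugging this BRDF into the reflection integral returns exactly the incoming radiance, using dω = d(cosθ) dφ:

    L_o(θ_r, φ_r) = ∫_Ω f_r(ω_i, ω_r) L_i(ω_i) cosθ_i dω_i
                  = ∫∫ [δ(cosθ_i − cosθ_r) δ(φ_r − φ_i ± π) / cosθ_i] L_i(θ_i, φ_i) cosθ_i d(cosθ_i) dφ_i
                  = L_i(θ_r, φ_r ± π)

The cosθ_i in the BRDF's denominator cancels the cosθ_i in the integrand, which is exactly why the units work out to sr⁻¹ and the outgoing radiance matches the incoming radiance.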
 
P.S. Sorry, I tried to use LaTeX but the forum blocked me from using LaTeX images. I tried using the equation tags but the LaTeX wouldn't parse, so I just posted the link instead.



#5080901 SIGGRAPH 2013 Master Thread

Posted by David Neubelt on 26 July 2013 - 09:38 PM

 

Yeah it's pretty nice.

Like most modern GPUs, there's plenty of ALU available while you're waiting for reads from memory, and complex BRDFs are a pretty straightforward way to take advantage of that.

 

What are you guys doing for indirect specular?

 

 

There are details in the course notes, but in short: specular probes.




#5060226 Radiance (radiometry) clarification

Posted by David Neubelt on 08 May 2013 - 02:06 AM

That only makes sense if you let the emitter area increase without bound, but physical devices are bounded in area, so it doesn't make sense to think in those terms.

 

-= Dave




#5054742 Spherical harmonics in pixel shader

Posted by David Neubelt on 18 April 2013 - 06:25 PM

As an aside, the whole 'dot product' notation here is actually mathematically correct, but really confusing for beginners since most people tend to associate it with directions and angles. There are actually no 'directions' involved in SH, since wah waaah wahh wahh wahh frequency domain. You're just multiplying the coefficients and adding the results as you go. Pick apart a dot product, and, hey, there's the same operations.

 

The basis functions are parameterized in spherical coordinates where the parameters represent a direction.

 

Personally, I like to think along the same lines that Ashaman73 does as it gives me an easy mental framework to reason about SH with.

 

I think it's very intuitive to think of SH as a set of lobes in a fixed set of directions. Each basis function adds another set of directions that you can project your signal onto. The directions represent the input angles at which the basis functions reach their global maxima. The spacing between the maxima represents the resolution of the signal you can represent.
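
As an illustration of the "dot product of coefficients" view, here's a minimal sketch (my own example, assuming 3-band real SH, a normalized direction (x, y, z), and the usual real SH normalization constants):

// Reconstruct a signal stored in 9 SH coefficients by "dotting" the
// coefficient vector with the basis functions evaluated at a direction.
float EvalSH9(const float c[9], float x, float y, float z)
{
    float b[9];
    b[0] = 0.282095f;                           // l = 0
    b[1] = 0.488603f * y;                       // l = 1
    b[2] = 0.488603f * z;
    b[3] = 0.488603f * x;
    b[4] = 1.092548f * x * y;                   // l = 2
    b[5] = 1.092548f * y * z;
    b[6] = 0.315392f * (3.0f * z * z - 1.0f);
    b[7] = 1.092548f * x * z;
    b[8] = 0.546274f * (x * x - y * y);

    float result = 0.0f;
    for (int i = 0; i < 9; ++i)                 // the "dot product" of coefficients
        result += c[i] * b[i];
    return result;
}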

 

-= Dave




#5033296 GPU particles trajectoriy and external effects

Posted by David Neubelt on 17 February 2013 - 02:12 AM

I was looking at the UE4 pdf for ideas on how to get trajectories and external effects (winds, explosions) to affect my particles (which I currently run on a compute shader): http://www.unrealengine.com/files/misc/The_Technology_Behind_the_Elemental_Demo_16x9_(2).pdf In the Particles section they say they use "Particle Curves to modulate particle attributes (1D lookup)", so I guess they somehow store the trajectory in a 1D texture? How would one go about encoding the trajectory data in a texture? The second thing I didn't quite get is about the Vector Fields: "Additional forces from Vector Fields (3D lookup)". I'm completely lost here; are they storing all vector fields in the current scene in some volume texture and then, in the compute shader, checking each particle for interactions with all Vector Fields in the volume? They also mention "Affect all particle systems globally or a single system", so there must be some indexing going on.

 

If you need external forces to affect your particles then just pass them as uniforms to your compute shader. If you need localized effects then you can have an array of N effectors that you pass to the shader. If you want complex and localized external forces then you have to store them in a 3D grid.

 

The curves they are talking about are there to allow for time-varying attributes (e.g. color over the life of the particle, or speed over the life of the particle).

 

Finally, the vector fields are divergence-free vector fields (there is no set of vectors that will make a particle get stuck at a point). Most 3D packages that support particle simulation use an approximation called curl noise, which maintains the divergence-free property without having to resort to a 3D fluid simulation. However, they also support more complicated simulations that can export these fields. These vector fields can either be generated directly in your engine or imported from popular 3D packages.
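
For reference, here's a rough sketch of the curl-noise idea (my own toy example, not UE4's implementation; the potential function is a placeholder you would swap for real Perlin/simplex noise):

#include <cmath>

struct Vec3 { float x, y, z; };

// Placeholder smooth vector potential; replace with three independent noise channels.
static Vec3 potential(float x, float y, float z)
{
    return { std::sin(y * 1.7f) + std::cos(z * 2.3f),
             std::sin(z * 1.3f) + std::cos(x * 2.9f),
             std::sin(x * 2.1f) + std::cos(y * 1.1f) };
}

// v = curl(psi) by central differences. Because v is a curl, div(v) = 0,
// so particles advected by it never collect at sinks.
Vec3 curlNoise(float x, float y, float z)
{
    const float e = 1e-3f;
    Vec3 dx1 = potential(x + e, y, z), dx0 = potential(x - e, y, z);
    Vec3 dy1 = potential(x, y + e, z), dy0 = potential(x, y - e, z);
    Vec3 dz1 = potential(x, y, z + e), dz0 = potential(x, y, z - e);

    return { ((dy1.z - dy0.z) - (dz1.y - dz0.y)) / (2.0f * e),
             ((dz1.x - dz0.x) - (dx1.z - dx0.z)) / (2.0f * e),
             ((dx1.y - dx0.y) - (dy1.x - dy0.x)) / (2.0f * e) };
}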

 

No matter what source you use, it just controls the motion of the particle at a point in space. In particular, it seems that UE4 allows it to control either the velocity or the acceleration.

 

-= Dave




#5031709 Approximation of Normals in Screen Space

Posted by David Neubelt on 13 February 2013 - 01:01 AM

Typically, you have more information than one image.

 

If you have two images from different viewpoints then you can reconstruct depth, and from depth you can take derivatives to recover normal information.
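
As a sketch of that depth-to-normal step (my own example, assuming an interior pixel of a simple depth image and central differences; a real implementation would handle borders and the camera projection):

#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 NormalFromDepth(const float* depth, int width, int x, int y)
{
    auto d = [&](int xi, int yi) { return depth[yi * width + xi]; };

    float dzdx = 0.5f * (d(x + 1, y) - d(x - 1, y));   // horizontal depth gradient
    float dzdy = 0.5f * (d(x, y + 1) - d(x, y - 1));   // vertical depth gradient

    Vec3 n = { -dzdx, -dzdy, 1.0f };                   // heightfield normal, up to scale
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}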

 

If you have one image and you can determine the light direction (via shadows) then you can estimate normals via n·l·ρ = I (ρ is the albedo and I is the intensity of the image).

 

If you have multiple images from one (the same) viewpoint and multiple known light directions then you can reconstruct the normals using the previous relation and a least-squares regression. Additionally, you can use expectation maximization or other non-linear solvers. The term to search for is photometric stereo.
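
In equation form (the classic photometric-stereo setup, sketched in my own notation): for k images with known light directions l_k and per-pixel intensities I_k,

    I_k = ρ (n · l_k)

Stacking the light directions into a matrix L and the intensities into a vector I gives I = L g with g = ρ n, so per pixel

    g = (Lᵀ L)⁻¹ Lᵀ I,    ρ = |g|,    n = g / |g|

With exactly three non-coplanar lights, L is square and the least-squares solve reduces to a single 3x3 inverse.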

 

Lastly, if you can have user input and one image and have the user pick highlights then you can try to determine the light direction and then reconstruct normals.

 

-= Dave




#4964978 Generating spheres with unifrom vertex density

Posted by David Neubelt on 31 July 2012 - 02:05 PM

This gives decent results for renderable geometry.

http://mathproofs.blogspot.com/2005/07/mapping-cube-to-sphere.html
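
For reference, the mapping that page describes looks like this (the formula is from the linked article; the code wrapper is my own sketch):

#include <cmath>

struct Vec3 { float x, y, z; };

// Map a point on the unit cube (components in [-1, 1]) to the unit sphere.
// Compared to plain normalization, this spreads the vertices more evenly.
Vec3 CubeToSphere(const Vec3& p)
{
    float x2 = p.x * p.x, y2 = p.y * p.y, z2 = p.z * p.z;
    return {
        p.x * std::sqrt(1.0f - y2 * 0.5f - z2 * 0.5f + y2 * z2 / 3.0f),
        p.y * std::sqrt(1.0f - z2 * 0.5f - x2 * 0.5f + z2 * x2 / 3.0f),
        p.z * std::sqrt(1.0f - x2 * 0.5f - y2 * 0.5f + x2 * y2 / 3.0f)
    };
}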


#4918812 Wavelet Radiance Transport for Interactive Indirect Lighting

Posted by David Neubelt on 03 March 2012 - 03:23 AM

Are you saying calculate it like it's normal radiosity?

No - I was just trying to give an example of how you can use wavelets with radiosity.

When you say they are storing coefficients in angular dimensions, are you talking about a spherical harmonic?

Spherical harmonics are one set of basis functions and Haar is another; so are the H-basis, Fourier, etc.

I think you need to read the prerequisites for that paper. Read Hierarchical Radiosity (Pat Hanrahan) first, then Wavelet Radiosity; I believe Schröder has a good exposition on it. When you're done with that you should read Wavelet Radiance and then finally this paper. By the way, there is more recent research in this area: http://www.tml.tkk.fi/~jaakko/meshless-tr/ .

In a hierarchical radiosity scheme you adaptively refine the mesh until subdividing it doesn't give you better results. For example, if the lighting is changing slowly over the surface then you can approximate the radiance at sub-patches with the parent patch's radiance. That's called pushing, and of course if you use the child patches to estimate the radiance at the parent it's called pulling. Wavelets are a set of hierarchical basis functions and have a very similar push/pull method (pyramid up and pyramid down).
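
To make the push/pull analogy concrete, here's a tiny 1D Haar sketch (my own toy example, not code from any of these papers):

#include <vector>

// One level of a Haar pyramid: each parent is the average of a child pair
// ("pulling" up the hierarchy), and the detail coefficient stores what was
// lost so the children can be reconstructed exactly when "pushing" back down
// (child0 = coarse + detail, child1 = coarse - detail).
void HaarLevel(const std::vector<float>& fine,
               std::vector<float>& coarse,
               std::vector<float>& detail)
{
    coarse.clear();
    detail.clear();
    for (size_t i = 0; i + 1 < fine.size(); i += 2)
    {
        coarse.push_back(0.5f * (fine[i] + fine[i + 1]));
        detail.push_back(0.5f * (fine[i] - fine[i + 1]));
    }
}

If the signal is smooth (lighting changing slowly over the surface), the detail coefficients are near zero and can be dropped, which is the same intuition as approximating sub-patches with the parent patch.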

Determining if you should subdivide more is called the 'oracle' in radiosity literature.

Of course, in HR or CR (classical radiosity) they are solving for radiosity [W/m^2], i.e. how much light there is at each patch. You can apply the same ideas to solving for radiance, which is [W/(m^2 sr)]. If you look at the equation for radiance it is parameterized by location in space and direction (e.g. the light distribution depends on location and angle). Note that writing L1(i1, o1, x1) -> L2(i1, o1, x2) says that radiance leaving one location at one angle arrives at another location; these are the sender and receiver links that the literature refers to.

They apply wavelets to the spatial and angular domains themselves to link together radiance.

Anyway, it's tough to jump into a technique that is a refinement of older techniques without first understanding where they came from.

-= Dave


#4918727 Wavelet Radiance Transport for Interactive Indirect Lighting

Posted by David Neubelt on 02 March 2012 - 04:49 PM

I haven't implemented this technique but I get the basic ideas of the paper.

The basis functions are wavelets, specifically the Haar basis. If you were to do classic radiosity then you'd say that, for each patch, all other patches contribute some amount of influence. The influences would be the coefficients, i.e. a row in a matrix. In classic radiosity you can do a Gauss-Seidel solve and get your solution.
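
As a point of reference, the classic solve mentioned above looks roughly like this (a toy sketch, not code from the paper; F holds the form factors, and emission/reflectance are per-patch inputs):

#include <vector>

// Iterate B_i = E_i + rho_i * sum_j F_ij * B_j, Gauss-Seidel style (updated
// values are reused within the same sweep). The wavelet approach exists
// because storing and traversing the full F matrix is exactly what you want to avoid.
void SolveRadiosity(const std::vector<std::vector<float>>& F,
                    const std::vector<float>& emission,
                    const std::vector<float>& reflectance,
                    std::vector<float>& B,
                    int iterations)
{
    const size_t n = B.size();
    for (int it = 0; it < iterations; ++it)
    {
        for (size_t i = 0; i < n; ++i)
        {
            float gathered = 0.0f;
            for (size_t j = 0; j < n; ++j)
                gathered += F[i][j] * B[j];    // light arriving from every other patch
            B[i] = emission[i] + reflectance[i] * gathered;
        }
    }
}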

The problem is that the matrix ends up being very sparse, with a lot of zero coefficients that don't contribute to the final radiance. They store coefficients for the spatial and angular dimensions between patches in the Haar basis, and the refinement oracle adds more coefficients only if it needs to.

Anyway, that's a hand-wavy overview. It's been about three years since I read the paper, so if you have any more specific questions ask and I'll go back, read the paper, and go into more detail.

If you are serious about implementing this paper then there is a good book called Ripples in Mathematics that talks about wavelets and is a pretty easy introduction to the subject.

-= Dave


#4890856 [Release] DXTn Compression Algorithm

Posted by David Neubelt on 05 December 2011 - 04:08 PM

Nice read!

I would love to see MJP's updated Error Statistics after YogurtEmperor's fixes to the algorithm.

Also, have you guys tried to compute the optimal/global minimum for this problem by brute-forcing through all relevant possibilities for the compression? (minus any obvious pruning of the search space to make it feasible to brute force) Having the global minimum SNR-wise in MJP's statistics would be a nice addition, to be able to estimate how far these algorithms go from the optimal. Of course that doesn't necessarily give the best perceptual result, but having a good raw number for the lower bound is useful when estimating how far it's possible to improve.


If you're targeting next-gen then why would you still use DXT? Why not use BC7 - it has much higher quality and, most importantly, you don't see the hue shift that is so common with DXT.

-= Dave





#4808872 Did I screw up by going to full sail?

Posted by David Neubelt on 10 May 2011 - 01:54 AM

Here is my hard advice.

Stop posting on the internet, buy a good book, and study. In particular, I recommend 'The Calculus Lifesaver'; you can teach yourself all the basics of calculus without ever talking to a teacher. When you get stuck on a concept, ask for help on a website like Math Stack Exchange. If you're still having trouble, hire a tutor or buy another book.

College is hard, and if you go to another college it will only get harder, so you need to pick when you want to start studying hard. Drop out now and procrastinate on when you want to start trying, or hit the books now and get it over with.

-= Dave

