As an aside, the whole 'dot product' notation here is mathematically correct, but really confusing for beginners since most people tend to associate it with directions and angles. There are actually no 'directions' involved in SH, since wah waaah wahh wahh wahh frequency domain. You're just multiplying the coefficients and adding the results as you go. Pick apart a dot product and, hey, there are the same operations.
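To make that concrete, here's a tiny sketch (the coefficient values are made up for illustration): two functions stored as SH coefficient vectors, where "integrating their product" is literally just a multiply-and-add over coefficients.

```python
# Hypothetical SH coefficient vectors (4 coefficients = 2 bands). Values are
# made up; the point is only the shape of the computation.
light = [1.0, 0.5, -0.25, 0.1]     # SH projection of an incoming light signal
transfer = [0.8, 0.2, 0.0, -0.4]   # SH projection of a transfer function

# The SH "dot product": multiply matching coefficients and sum. No angles,
# no directions -- just coefficient-wise multiply-add.
result = sum(l * t for l, t in zip(light, transfer))
print(result)  # ≈ 0.86
```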
The basis functions are parameterized in spherical coordinates where the parameters represent a direction.
Personally, I like to think along the same lines that Ashaman73 does as it gives me an easy mental framework to reason about SH with.
I think it's very intuitive to think of SH as a set of lobes in a fixed set of directions. Each basis function adds another set of directions that you can project your signal onto. The directions represent the input angles to the basis function that produce the global maxima of the polynomial. The spacing between the maxima represents the resolution of the signal you can represent.
I was looking at the UE4 pdf for ideas on how to get trajectories and external effects (winds, explosions) to affect my particles (which I currently run on a compute shader): http://www.unrealengine.com/files/misc/The_Technology_Behind_the_Elemental_Demo_16x9_(2).pdf

In the Particles section they say they use "Particle Curves to modulate particle attributes (1D lookup)", so I guess they somehow store the trajectory in a 1D texture? How would one go about encoding the trajectory data in a texture?

The second thing I didn't quite get is about the Vector Fields: "Additional forces from Vector Fields (3D lookup)". I'm completely lost here. Are they storing all vector fields in the current scene in some volume texture and then, in the compute shader, checking each particle for interactions with all "Vector Fields" in the volume? They also mention "Affect all particle systems globally or a single system", so there must be some indexing going on.
If you need external forces to affect your particles then just pass them as uniforms to your compute shader. If you need localized effects then you can have an array of N effectors that you pass to the shader. If you want complex and localized external forces then you have to store them in a 3D grid.
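A CPU-side sketch of the "array of N effectors" idea (all names and the falloff are made up; in a real engine this loop would run in the compute shader with the effector array passed in a uniform/constant buffer):

```python
# Each effector is (x, y, z, radius, strength). Particles inside the radius
# get pushed away from the effector, with strength falling off by distance
# squared. This is purely illustrative, not UE4's actual formulation.

def accumulate_force(particle_pos, effectors):
    fx, fy, fz = 0.0, 0.0, 0.0
    for (ex, ey, ez, radius, strength) in effectors:
        dx = particle_pos[0] - ex
        dy = particle_pos[1] - ey
        dz = particle_pos[2] - ez
        dist2 = dx * dx + dy * dy + dz * dz
        if 0.0 < dist2 < radius * radius:
            inv = strength / dist2          # push away, weaker with distance
            fx += dx * inv
            fy += dy * inv
            fz += dz * inv
    return (fx, fy, fz)

effectors = [(0.0, 0.0, 0.0, 10.0, 2.0)]   # one "explosion" at the origin
force = accumulate_force((1.0, 0.0, 0.0), effectors)
print(force)  # → (2.0, 0.0, 0.0): pushed along +x
```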
The curves they are talking about are there to allow for time-varying attributes (e.g. color over the life of the particle, or speed over the life of the particle).
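A sketch of such a curve lookup: bake the attribute into a table keyed by normalized particle life (0 = birth, 1 = death) and sample it with linear interpolation, which is the CPU analogue of a 1D texture fetch. The curve values here are invented for illustration.

```python
def sample_curve(table, t):
    """Linearly interpolate `table` at normalized time t in [0, 1].
    This mimics a filtered 1D texture lookup."""
    t = min(max(t, 0.0), 1.0)
    f = t * (len(table) - 1)
    i = int(f)
    if i >= len(table) - 1:
        return table[-1]
    frac = f - i
    return table[i] * (1.0 - frac) + table[i + 1] * frac

# Made-up "size over life" curve: grow fast, then shrink back to nothing.
size_over_life = [0.0, 1.0, 0.5, 0.0]
print(sample_curve(size_over_life, 0.5))  # → 0.75, halfway between entries 1 and 2
```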
Finally, the vector fields are vector fields that are divergence free (there is no set of vectors that will make the particle get stuck at a point). Most 3D packages that support particle simulation use an approximation called curl noise, which maintains the divergence-free property without having to resort to a 3D fluid simulation. However, they also support more complicated simulations that can export these fields. These vector fields can either be generated directly in your engine or imported from popular 3D packages.
No matter what source you use, it just controls the motion of the particle at a point in space. In particular, it seems that UE4 allows it to control either the velocity or the acceleration.
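Here's a minimal 2D curl-noise sketch, assuming a smooth analytic potential as a stand-in for a real noise function. Taking the curl of a scalar potential gives a velocity field whose divergence is zero by construction, so particles advected by it never pile up at a sink:

```python
import math

def potential(x, y):
    # Stand-in for Perlin/simplex noise; any smooth scalar field works here.
    return math.sin(x) * math.cos(y)

def curl_velocity(x, y, h=1e-5):
    # v = (d psi / dy, -d psi / dx), estimated with central differences.
    dpsi_dy = (potential(x, y + h) - potential(x, y - h)) / (2 * h)
    dpsi_dx = (potential(x + h, y) - potential(x - h, y)) / (2 * h)
    return (dpsi_dy, -dpsi_dx)

def divergence(x, y, h=1e-4):
    # Numerical check: d vx / dx + d vy / dy should be ~0 everywhere.
    vx1, _ = curl_velocity(x + h, y)
    vx0, _ = curl_velocity(x - h, y)
    _, vy1 = curl_velocity(x, y + h)
    _, vy0 = curl_velocity(x, y - h)
    return (vx1 - vx0) / (2 * h) + (vy1 - vy0) / (2 * h)

print(abs(divergence(0.3, 0.7)) < 1e-4)  # → True: the field is divergence free
```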
Typically, you have more information than one image.
If you have two images from different viewpoints then you can reconstruct depth, and from depth you can differentiate to get normal information.
If you have one image and you can determine the light direction (via shadows) then you can determine normals via n.l * p = I (where p is the albedo and I is the intensity of the image).
If you have multiple images from one (the same) viewpoint and multiple known light directions then you can reconstruct the normals using the previous equation and a least-squares regression. Additionally, you can use expectation maximization or other non-linear solvers. The term to search for is photometric stereo.
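A single-pixel sketch of the n.l * p = I idea (with exactly three lights the system is square; with more lights you'd do the least-squares fit mentioned above). Solving for g = p * n gives the albedo as p = |g| and the normal as n = g / |g|. The light directions and "true" surface below are made up for illustration:

```python
import math

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination (no pivoting needed
    for the well-conditioned light matrix used here)."""
    for i in range(3):
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            for k in range(i, 3):
                A[j][k] -= f * A[i][k]
            b[j] -= f * b[i]
    x = [0.0, 0.0, 0.0]
    for i in range(2, -1, -1):
        x[i] = (b[i] - sum(A[i][k] * x[k] for k in range(i + 1, 3))) / A[i][i]
    return x

s = 1.0 / math.sqrt(2.0)
L = [[s, 0.0, s],        # three known unit light directions
     [0.0, s, s],
     [0.0, 0.0, 1.0]]
true_g = [0.0, 0.0, 0.8]  # "ground truth": albedo 0.8, normal straight up

# Simulated Lambertian observations: I_k = l_k . (p * n)
I = [sum(L[r][c] * true_g[c] for c in range(3)) for r in range(3)]

g = solve3([row[:] for row in L], I[:])
p = math.sqrt(sum(v * v for v in g))  # recovered albedo
n = [v / p for v in g]                # recovered unit normal
print(p, n)  # ≈ 0.8 and ≈ [0, 0, 1]
```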
Lastly, if you have one image plus user input, you can have the user pick out highlights; from those you can try to determine the light direction and then reconstruct normals.
Are you saying calculate it like it's normal radiosity?
No - I was just trying to give an example of how you can use wavelets with radiosity.
When you say they are storing coefficients in angular dimensions, are you talking about a spherical harmonic?
Spherical harmonics are one set of basis functions and Haar is another; there are also the H-basis, Fourier, etc.
I think you need to read the pre-reqs for that paper. Read Hierarchical Radiosity (Pat Hanrahan) first, then Wavelet Radiosity; I believe Schröder has a good exposition on it. When you're done with that you should read Wavelet Radiance and then finally this paper. By the way, there is more recent research in this area: http://www.tml.tkk.fi/~jaakko/meshless-tr/ .
In a hierarchical radiosity scheme you adaptively refine the mesh until subdividing it doesn't give you better results. For example, if the lighting is changing slowly over the surface then you can approximate the radiance at sub-patches with the parent patch's radiance. That's called pushing, and of course if you use the children patches to estimate the radiance at the parent it's called pulling. Wavelets are a set of hierarchical basis functions and have a very similar method for push/pull (pyramid up and pyramid down).
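A toy 1D sketch of push/pull on a two-level patch hierarchy (the radiosity values are made up; real implementations are area-weighted, which I've simplified to a plain average here):

```python
def pull(children):
    """Pull: approximate the parent's radiosity as the average of its
    children (simplified -- a real solver weights by patch area)."""
    return sum(children) / len(children)

def push(parent_value, n_children):
    """Push: children that were never refined just inherit the parent's
    radiosity."""
    return [parent_value] * n_children

leaf = [1.0, 3.0, 2.0, 2.0]                      # radiosity on four leaf patches
parents = [pull(leaf[0:2]), pull(leaf[2:4])]     # pull one level up
root = pull(parents)                             # pull to the root
print(root)           # → 2.0, the average over the whole surface
print(push(root, 2))  # → [2.0, 2.0]: unrefined children reuse the parent value
```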
Determining if you should subdivide more is called the 'oracle' in radiosity literature.
Of course, in HR or CR (classical radiosity) they are solving radiosity [W/m^2], which asks how much light arrives at each patch. You can apply the same ideas to solving radiance, which is [W/(m^2 sr)]. If you look at the equation for radiance it is parameterized by location in space and direction (i.e. the light distribution depends on location and angle). Note that radiance L1(i1, o1, x1) -> L2(i1, o1, x2) says the radiance from one location and one input angle is output to another; these are the sender and receiver links that the literature refers to.
They apply wavelets to the spatial and angular domain itself to link together radiance.
Anyway, it's tough to jump into a technique that is a refinement of older techniques without first understanding where they came from.
I haven't implemented this technique but I get the basic ideas of the paper.
The mathematics for the basis functions are wavelets, specifically using the Haar basis. If you were to do classic radiosity then you'd say that for each patch, every other patch contributes some amount of influence. The influences would be the coefficients, or a row in a matrix. In classic radiosity you can do a Gauss-Seidel solve and get your solution.
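For the classic case, a minimal two-patch sketch of that Gauss-Seidel solve of B = E + rho * F * B (emission, reflectance, and form factors are made-up illustrative numbers):

```python
E   = [1.0, 0.0]        # patch 0 emits light, patch 1 does not
rho = [0.5, 0.8]        # diffuse reflectances
F   = [[0.0, 0.4],      # F[i][j]: form factor from patch i to patch j
       [0.4, 0.0]]

B = E[:]                # initial guess: emission only
for _ in range(50):     # Gauss-Seidel sweeps, updating B[i] in place
    for i in range(2):
        gathered = sum(F[i][j] * B[j] for j in range(2))
        B[i] = E[i] + rho[i] * gathered

print(B)  # patch 1 ends up lit purely by light bounced off patch 0
```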
The point is that in the Haar basis the matrix ends up being very sparse, with a lot of near-zero coefficients that don't contribute to the final radiance. They store coefficients for the spatial dimension and angular dimension between patches in the Haar basis, and the refinement oracle adds more coefficients only if it needs to.
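A tiny sketch of why the Haar basis buys that sparsity: a slowly varying signal transforms into mostly zero detail coefficients, which can simply be dropped. This mirrors the oracle declining to refine smooth regions. The signal values are made up.

```python
def haar_step(signal):
    """One level of the (unnormalized) Haar transform: pairwise averages
    plus pairwise details. Zero details mean the signal is locally flat."""
    avgs = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    dets = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avgs, dets

# Slowly varying "lighting" across eight patches.
smooth = [1.0, 1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0]
avgs, dets = haar_step(smooth)
print(dets)  # → [0.0, 0.0, 0.0, 0.0]: every detail coefficient vanishes
```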
Anyway, that's a hand-wavy overview. It's been about three years since I read the paper, so if you have any more specific questions, ask and I'll go back, reread the paper, and go into more detail.
If you are serious about implementing this paper then there is a good book called Ripples in Mathematics that talks about wavelets and is a pretty easy introduction to the subject.
I would love to see MJP's updated Error Statistics after YogurtEmperor's fixes to the algorithm.
Also, have you guys tried to compute the optimal/global minimum for this problem by brute-forcing through all relevant possibilities for the compression? (minus any obvious pruning of the search space to make it feasible to brute force) Having the global minimum SNR-wise in MJP's statistics would be a nice addition, to be able to estimate how far these algorithms go from the optimal. Of course that doesn't necessarily give the best perceptual result, but having a good raw number for the lower bound is useful when estimating how far it's possible to improve.
If you're targeting next-gen then why would you still use DXT? Why not use BC7? It has much higher quality and, most importantly, you don't see the hue shift that is so common with DXT.
Stop posting on the internet, buy a good book, and study. In particular, I recommend 'The Calculus Lifesaver'; you can teach yourself all the basics of calculus without ever talking to a teacher. When you get stuck on a concept, ask for help on a website like Math Stack Exchange. If you're still having trouble then hire a tutor or buy another book.
College is hard, and if you go to another college it will only get harder, so you need to pick when you want to start studying hard. Drop out now and procrastinate on when you want to start trying, or hit the books now and get it over with.