In other words: we are compensating for physically implausible assumptions (a perfect pinhole) with more physical nonsense (infinitely high sensor sensitivity), which still leads to implausible results (infinitely high measurements in the case of perfect reflections).
David Neubelt (Member Since 01 Mar 2007)
Posted by David Neubelt on 20 August 2013 - 10:40 PM
Posted by David Neubelt on 20 August 2013 - 03:00 AM
Sorry for the late reply; I've been busy.
In the energy conservation equation, what does rho represent? Wikipedia says that this equation should be <= 1. Is rho some sort of reflection percentage?
The above should be a little clearer. We want to develop a BRDF such that the intensity of the light coming in from one direction, L_i, is equal to the light going out in the direction of mirror reflection.
In the final BRDF, should the second delta function be [link]? The sign of pi doesn't matter, if I'm thinking about this correctly.
Yeah, sorry. You want to set up the Dirac delta functions so that the angle away from the normal axis is equal, and the azimuthal angle is equal but rotated 180 degrees.
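In symbols, the ideal-mirror BRDF that satisfies this is usually written with the deltas selecting the mirrored direction,

$$\theta_o = \theta_i, \qquad \phi_o = \phi_i \pm \pi,$$

giving a specular BRDF of the form

$$f_r(\theta_i,\phi_i;\,\theta_o,\phi_o) \;=\; \frac{\delta(\cos\theta_i - \cos\theta_o)\,\delta\!\left(\phi_o - (\phi_i \pm \pi)\right)}{\cos\theta_i}.$$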
Then I think of how a film camera works: you have photographic film that gets darker as photons hit it, thus making the final image brighter (because the film is a negative).
Right. As you point out, you will want to integrate over the visible hemisphere (aperture), the sensor area, and time (equivalent to shutter speed), which, beyond just controlling exposure, will give you phenomena such as motion blur and depth of field. If, instead of integrating RGB, you additionally integrate over the wavelengths, then you can get other phenomena such as chromatic aberration.
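As a sketch of that idea (everything here is hypothetical; `scene_radiance` stands in for a real renderer), a pixel's value can be estimated by Monte Carlo sampling over its finite area and the shutter interval, which produces motion blur for free:

```python
import random

# Hypothetical scene: a bright dot moving across the image plane
# during the shutter interval (all names here are illustrative).
def scene_radiance(x, y, t):
    cx = 0.2 + 0.6 * t  # dot center slides from x=0.2 to x=0.8 over the exposure
    return 1.0 if abs(x - cx) < 0.05 and abs(y - 0.5) < 0.05 else 0.0

def pixel_value(px, py, pixel_size, shutter, samples=256):
    """Estimate a pixel's response by Monte Carlo integration over the
    pixel's finite area and the shutter interval."""
    total = 0.0
    for _ in range(samples):
        x = px + random.random() * pixel_size  # jitter within the pixel area
        y = py + random.random() * pixel_size
        t = random.random() * shutter          # jitter within the shutter time
        total += scene_radiance(x, y, t / shutter)
    return total / samples
```

A pixel the moving dot sweeps through comes out partially exposed (a value strictly between 0 and 1), which is exactly the motion-blur streak.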
Posted by David Neubelt on 14 August 2013 - 02:45 AM
I suppose this is another area where approximations are tripping me up, as there is no pinhole that will only let one ray through, nor any perfect lens that can ensure that every point on the retina receives light from just one direction. In reality, there is always a tiny solid angle around that ideal direction from which the point on the retina receives light.
The fundamental radiation transfer equation deals with integrating out all of the differential terms to get the flux arriving at a sensor (integrating over the areas of the surfaces). When you have an image plane, it will have sensors with finite area that will detect, even in the case of a pinhole camera, radiation coming from a finite surface area in the scene.
In your diagrams you are simply point sampling a continuous signal of that flux.
Posted by David Neubelt on 14 August 2013 - 02:40 AM
However, I know that this function can't return 1 in the required direction, because the units would be wrong. The E_L cos θ term is in [W/m^2], and the BRDF needs to return something in units of [1/sr]. If I return 1, then the end result is that the calculated radiance is the same magnitude as the irradiance, even though the units are different, which seems too easy. Can anyone help me figure out the right way to think about this problem so that I can figure out what this BRDF should look like?
The easiest way is to work backwards. You know you want to reflect out what is coming in, so you construct a BRDF to match those conditions. In this case, you want a BRDF that reflects all energy in the mirror direction, and to be valid it must maintain energy conservation and reciprocity. The first condition requires the BRDF to be zero everywhere except where,
and the second condition, energy conservation, requires that,
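The standard form of that energy-conservation constraint is

$$\int_{\Omega} f_r(\omega_i \to \omega_o)\,\cos\theta_o \; d\omega_o \;\le\; 1 \quad \text{for all } \omega_i,$$

and normalizing the mirror BRDF's delta by $1/\cos\theta_i$ makes this hold with equality, i.e. all incoming energy is reflected.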
Posted by David Neubelt on 26 July 2013 - 09:38 PM
Yeah it's pretty nice.
Like most modern GPU's there's plenty of ALU available while you're waiting for reads from memory, and complex BRDF's are a pretty straightforward way to take advantage of that.
What are you guys doing for indirect specular?
There are details in the course notes but specular probes.
Posted by David Neubelt on 08 May 2013 - 02:06 AM
That only makes sense if you let the emitter area increase without bound, but physical devices are bounded in area, so it doesn't make sense to think in those terms.
Posted by David Neubelt on 18 April 2013 - 06:25 PM
As an aside, the whole 'dot product' notation here is actually mathematically correct, but really confusing for beginners since most people tend to associate it with directions and angles. There are actually no 'directions' involved in SH, since wah waaah wahh wahh wahh frequency domain. You're just multiplying the coefficients and adding the results as you go. Pick apart a dot product, and, hey, there's the same operations.
The basis functions are parameterized in spherical coordinates where the parameters represent a direction.
Personally, I like to think along the same lines that Ashaman73 does as it gives me an easy mental framework to reason about SH with.
I think it's very intuitive to think of SH as a set of lobes in a fixed set of directions. Each basis function adds another set of directions that you can project your signal onto. The directions represent the input angles to the basis function that produce the global maxima of the polynomial. The spacing between the maxima represents the resolution of the signal you can represent.
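As a concrete illustration of the "dot product" point (a small sketch with made-up coefficient values and a simple lat-long quadrature): with the band-0/band-1 real SH basis, reconstruction is literally a dot product with the basis values, and orthonormality means the spherical integral of a product of two SH-represented functions collapses to the plain dot product of their coefficient vectors.

```python
import math

# Real SH basis, bands 0-1, evaluated at a unit direction d = (x, y, z).
def sh_basis(d):
    x, y, z = d
    return [0.282095,          # Y_0^0
            0.488603 * y,      # Y_1^-1
            0.488603 * z,      # Y_1^0
            0.488603 * x]      # Y_1^1

def sh_eval(coeffs, d):
    # Reconstruction is just a dot product with the basis values.
    return sum(c * b for c, b in zip(coeffs, sh_basis(d)))

def integrate_product(f_coeffs, g_coeffs, n=200):
    """Numerically integrate f(d)*g(d) over the sphere with a lat-long
    midpoint quadrature; orthonormality of the basis makes this equal
    the dot product of the coefficient vectors."""
    total = 0.0
    for i in range(n):
        theta = math.pi * (i + 0.5) / n
        for j in range(2 * n):
            phi = 2.0 * math.pi * (j + 0.5) / (2 * n)
            d = (math.sin(theta) * math.cos(phi),
                 math.sin(theta) * math.sin(phi),
                 math.cos(theta))
            w = math.sin(theta) * (math.pi / n) * (math.pi / n)
            total += sh_eval(f_coeffs, d) * sh_eval(g_coeffs, d) * w
    return total
```

For any two coefficient vectors, `integrate_product(f, g)` matches `sum(a*b for a, b in zip(f, g))` up to quadrature error, which is the whole reason the "dot product" shorthand works.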
Posted by David Neubelt on 17 February 2013 - 02:12 AM
I was looking at the UE4 pdf for ideas on how to get trajectories and external effects (wind, explosions) to affect my particles (which I currently run on a compute shader): http://www.unrealengine.com/files/misc/The_Technology_Behind_the_Elemental_Demo_16x9_(2).pdf

In the Particles section they say they use "Particle Curves to modulate particle attributes* (1D lookup)", so I guess they somehow store the trajectory in a 1D texture? How would one go about encoding the trajectory data in a texture? The second thing I didn't quite get is about the Vector Fields: "Additional forces from Vector Fields* (3D lookup)". I'm completely lost here. Are they storing all vector fields in the current scene in some volume texture and then, in the compute shader, checking each particle for interactions with all "Vector Fields" in the volume? They also mention "Affect all particle systems globally or a single system", so there must be some indexing going on.
If you need a global external force to affect your particles, then just pass it as a uniform to your compute shader. If you need localized effects, then you can have an array of N effectors that you pass to the shader. If you want complex and localized external forces, then you have to store them in a 3D grid.
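A CPU sketch of what that 3D-grid lookup amounts to (names and layout are illustrative; on the GPU this is just a filtered volume-texture fetch), assuming a grid of at least 2x2x2 vectors stored as `grid[z][y][x]`:

```python
def lerp(a, b, t):
    # Componentwise linear interpolation of two 3-vectors.
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

def sample_field(grid, p):
    """Trilinear lookup into grid[z][y][x] -> (vx, vy, vz); p is a point
    in normalized [0,1]^3 coordinates, like a 3D texture fetch."""
    nz, ny, nx = len(grid), len(grid[0]), len(grid[0][0])
    # Map to cell coordinates, clamping so the +1 neighbor stays valid.
    fx = min(max(p[0] * (nx - 1), 0.0), nx - 1 - 1e-9)
    fy = min(max(p[1] * (ny - 1), 0.0), ny - 1 - 1e-9)
    fz = min(max(p[2] * (nz - 1), 0.0), nz - 1 - 1e-9)
    x0, y0, z0 = int(fx), int(fy), int(fz)
    tx, ty, tz = fx - x0, fy - y0, fz - z0
    c00 = lerp(grid[z0][y0][x0],     grid[z0][y0][x0+1],     tx)
    c10 = lerp(grid[z0][y0+1][x0],   grid[z0][y0+1][x0+1],   tx)
    c01 = lerp(grid[z0+1][y0][x0],   grid[z0+1][y0][x0+1],   tx)
    c11 = lerp(grid[z0+1][y0+1][x0], grid[z0+1][y0+1][x0+1], tx)
    return lerp(lerp(c00, c10, ty), lerp(c01, c11, ty), tz)
```

Each particle would sample the field at its own position and add the result to its velocity or acceleration.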
The curves they are talking about allow for time-varying attributes (e.g. color over the life of the particle, or speed over the life of the particle).
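A sketch of what such a 1D curve lookup could look like on the CPU (the baked `curve` list stands in for the 1D texture, sampled by normalized particle age with linear filtering):

```python
def sample_curve(curve, age, lifetime):
    """Sample a baked 1D attribute curve (a list of keyframe values)
    by normalized particle age, with linear filtering -- the CPU
    analogue of a 1D texture lookup."""
    t = min(max(age / lifetime, 0.0), 1.0) * (len(curve) - 1)
    i = int(t)
    if i >= len(curve) - 1:
        return curve[-1]
    f = t - i
    return curve[i] * (1.0 - f) + curve[i + 1] * f
```

You would bake one such curve per attribute (size, color channel, speed) and evaluate it each frame from the particle's age.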
Finally, the vector fields are divergence-free (there is no set of vectors that will make a particle get stuck at a point). Most 3D packages that support particle simulation use an approximation called curl noise, which maintains the divergence-free property without having to resort to a 3D fluid simulation. However, they also support more complicated simulations that can export these fields. These vector fields can either be generated directly in your engine or imported from popular 3D packages.
No matter what source you use it just controls the motion of the particle at a point in space. In particular, it seems that UE4 allows it to control either the velocity or acceleration.
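A sketch of the curl-noise idea (the `potential` here is a stand-in trig function, not real noise): take a scalar potential and define the velocity as its curl. In 2D that is v = (dpsi/dy, -dpsi/dx), which is divergence-free by construction, so particles swirl instead of piling up at sinks.

```python
import math

def potential(x, y):
    # Stand-in "noise" potential; a real implementation would use
    # Perlin or simplex noise here.
    return math.sin(1.3 * x) * math.cos(1.7 * y)

def curl_velocity(x, y, eps=1e-4):
    """2D curl noise: v = (dpsi/dy, -dpsi/dx), via central differences.
    The resulting field has zero divergence, so there are no sinks."""
    dpdx = (potential(x + eps, y) - potential(x - eps, y)) / (2 * eps)
    dpdy = (potential(x, y + eps) - potential(x, y - eps)) / (2 * eps)
    return (dpdy, -dpdx)
```

The divergence of this field is zero analytically (mixed partials cancel), which you can verify numerically at any sample point.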
Posted by David Neubelt on 13 February 2013 - 01:01 AM
Typically, you have more information than one image.
If you have two images from different viewpoints, then you can reconstruct depth, and from depth you can take gradients to get normal information.
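A sketch of recovering normals from a depth map by central differencing (unit pixel spacing assumed; names are illustrative):

```python
import math

def normals_from_depth(depth):
    """Estimate per-pixel normals from a depth map (a list of rows,
    at least 2x2, unit pixel spacing) by central differencing:
    n ~ normalize((-dz/dx, -dz/dy, 1))."""
    h, w = len(depth), len(depth[0])
    normals = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # One-sided differences at the borders, central elsewhere.
            x0, x1 = max(x - 1, 0), min(x + 1, w - 1)
            y0, y1 = max(y - 1, 0), min(y + 1, h - 1)
            dzdx = (depth[y][x1] - depth[y][x0]) / (x1 - x0)
            dzdy = (depth[y1][x] - depth[y0][x]) / (y1 - y0)
            n = math.sqrt(dzdx * dzdx + dzdy * dzdy + 1.0)
            normals[y][x] = (-dzdx / n, -dzdy / n, 1.0 / n)
    return normals
```

A flat depth map yields straight-up normals, and a linear ramp tilts them against the slope, as you'd expect.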
If you have one image and you can determine the light direction (via shadows), then you can determine normals via (n·l)·p = I (p is albedo and I is the intensity of the image).
If you have multiple images from one (the same) viewpoint and multiple known light directions, then you can reconstruct the normals using the previous relation and a least-squares regression. Additionally, you can use expectation maximization or other non-linear solvers. The term to search for is photometric stereo.
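A minimal sketch of the three-light case (the light directions and intensities below are made up for the demo, and the point is assumed to be lit by every light; real photometric stereo uses more lights and a full least-squares solve):

```python
import math

def solve3(A, b):
    """Solve a 3x3 linear system with Cramer's rule (fine for this
    tiny, well-conditioned demo)."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    xs = []
    for col in range(3):
        Ac = [row[:] for row in A]
        for r in range(3):
            Ac[r][col] = b[r]
        xs.append(det(Ac) / d)
    return xs

def photometric_stereo(lights, intensities):
    """Given three known light directions (rows of L) and the pixel
    intensity under each, solve L g = I for g = albedo * normal, then
    split g into albedo (its length) and normal (its direction)."""
    g = solve3(lights, intensities)
    albedo = math.sqrt(sum(c * c for c in g))
    normal = [c / albedo for c in g]
    return albedo, normal
```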
Lastly, if you have user input and one image, you can have the user pick highlights, then try to determine the light direction and reconstruct the normals.
Posted by David Neubelt on 31 July 2012 - 02:05 PM
Posted by David Neubelt on 03 March 2012 - 03:23 AM
No - I was just trying to give an example of how you can use wavelets with radiosity.
Are you saying calculate it like it's normal radiosity?
Spherical harmonics are one set of basis functions and Haar is another; there's also H-basis, Fourier, etc.
When you say they are storing coefficients in angular dimensions, are you talking about a spherical harmonic?
I think you need to read the pre-reqs for that paper. Read Hierarchical Radiosity (Pat Hanrahan) first, then Wavelet Radiosity; I believe Schröder has a good exposition on it. When you're done with that, you should read Wavelet Radiance and then, finally, this paper. By the way, there is more recent research in this area: http://www.tml.tkk.fi/~jaakko/meshless-tr/ .
In a hierarchical radiosity scheme you adaptively refine the mesh until subdividing it doesn't give you better results. For example, if the lighting is changing slowly over a surface, then you can approximate the radiance at sub-patches with the parent patch's radiance. That's called pushing, and of course if you use the child patches to estimate radiance at the parent, it's called pulling. Wavelets are a set of hierarchical basis functions and have a very similar method for push/pull (pyramid up and pyramid down).
Determining if you should subdivide more is called the 'oracle' in radiosity literature.
Of course, in HR or CR (classical radiosity) they are solving for radiosity [W/m^2], i.e. the amount of light at each patch. You can apply the same ideas to solving for radiance, which is [W/(m^2 sr)]. If you look at the equation for radiance, it is parameterized by location in space and direction (e.g. the light distribution depends on location and angle). Note that L1(i1, o1, x1) -> L2(i1, o1, x2) says the radiance from one position and one input angle is output to another; these are the sender and receiver links that the literature refers to.
They apply wavelets to the spatial and angular domain itself to link together radiance.
Anyway, it's tough to jump into a technique that is a refinement of older techniques without first understanding where they came from.
Posted by David Neubelt on 02 March 2012 - 04:49 PM
The mathematics for the basis functions are wavelets, specifically the Haar basis. If you were to do classic radiosity, then you'd say that for each patch, all other patches contribute some amount of influence. The influences would be the coefficients of a row in a matrix. In classic radiosity you can do a Gauss-Seidel solve and get your solution.
The problem is that the matrix ends up having a lot of near-zero coefficients that contribute almost nothing to the final radiance. They store coefficients for the spatial dimension and angular dimension between patches in the Haar basis, and the refinement oracle adds more coefficients only if it needs to.
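A tiny illustration of why the Haar basis yields that sparsity (a pure-Python sketch of one full 1D Haar transform over a power-of-two signal; the 2D surface and angular cases work the same way level by level):

```python
def haar_forward(signal):
    """One full 1D Haar wavelet transform (length must be a power of
    two).  Piecewise-smooth signals end up with most detail
    coefficients at or near zero -- the sparsity wavelet radiosity
    exploits when it drops negligible coefficients."""
    s = list(signal)
    out = []
    while len(s) > 1:
        avgs = [(s[2 * i] + s[2 * i + 1]) / 2.0 for i in range(len(s) // 2)]
        dets = [(s[2 * i] - s[2 * i + 1]) / 2.0 for i in range(len(s) // 2)]
        out = dets + out  # prepend this level's detail coefficients
        s = avgs
    return s + out        # overall average first, then details
```

A step signal like `[1,1,1,1,5,5,5,5]` transforms to just two non-zero coefficients (the average and one detail), whereas the raw signal had eight; that's the compression the refinement oracle is exploiting.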
Anyway, that's a hand-wavy overview. It's been about three years since I read the paper, so if you have any more specific questions, ask and I'll go back, reread the paper, and go into more detail.
If you are serious about implementing this paper, then there is a good book called Ripples in Mathematics that covers wavelets and is a pretty easy introduction to the subject.
Posted by David Neubelt on 05 December 2011 - 04:08 PM
I would love to see MJP's updated Error Statistics after YogurtEmperor's fixes to the algorithm.
Also, have you guys tried to compute the optimal/global minimum for this problem by brute-forcing through all relevant possibilities for the compression (minus any obvious pruning of the search space to make it feasible)? Having the global minimum, SNR-wise, in MJP's statistics would be a nice addition, to be able to estimate how far these algorithms are from optimal. Of course that doesn't necessarily give the best perceptual result, but having a good raw number for the lower bound is useful when estimating how far it's possible to improve.
If you're targeting next-gen, then why would you still use DXT? Why not use BC7? It has much higher quality and, most importantly, you don't see the hue shift that is so common with DXT.
Posted by David Neubelt on 10 May 2011 - 01:54 AM
Stop posting on the internet, buy a good book, and study. In particular, I recommend The Calculus Lifesaver; you can teach yourself all the basics of calculus without ever talking to a teacher. When you get stuck on a concept, ask for help on a website like Math Stack Exchange. If you're still having trouble, then hire a tutor or buy another book.
College is hard, and if you go to another college it will only get harder, so you need to pick the time you want to start studying hard. Drop out now and procrastinate on when you want to start trying, or hit the books now and get it over with.