
David Neubelt
Member
Questions on Baked GI Spherical Harmonics
David Neubelt replied to Epaenetus's topic in Graphics and GPU Programming
For the figure on page 18, it was generated with SH9 baked into lightmap texels. However, your approach with a grid will work as well if it's dense enough.

The occlusion will come naturally and there are no additional techniques you need to apply, because at every position in shadow the cubemap will receive less light. This works for both diffuse and ambient. I suspect the reason you aren't seeing any occlusion is that you are adding an ambient term, which artificially flattens out the scene.

If there is a bright light on the opposite side of the probe, then yes. Ringing can occur when you have a bright light on one side, which causes that side of the probe to generate really high numbers. The side of the probe facing away from the light then needs a really high negative number to counteract it. Those negative numbers are the cause of the ringing.
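To see that ringing numerically, here is a small sketch (plain Python, band-2 real SH basis hard-coded; the function names are illustrative, not from any engine). It projects a bright directional (delta) light into SH9 and evaluates the reconstruction around the sphere; the reconstructed value dips negative on the side facing away from the light, which is exactly the negative lobe described above.

```python
import math

def sh9_basis(x, y, z):
    """Real spherical harmonic basis up to band 2 at unit direction (x, y, z)."""
    return [
        0.282095,
        0.488603 * y,
        0.488603 * z,
        0.488603 * x,
        1.092548 * x * y,
        1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z,
        0.546274 * (x * x - y * y),
    ]

# Project a directional (delta) light at +z into SH9: the coefficients
# are just the basis evaluated at the light direction.
light = sh9_basis(0.0, 0.0, 1.0)

def reconstruct(x, y, z):
    """Evaluate the SH9 reconstruction of the light at direction (x, y, z)."""
    return sum(c * b for c, b in zip(light, sh9_basis(x, y, z)))

# Sweep directions in the x-z plane, from towards the light to away from it.
# The reconstruction goes negative around 90-100 degrees off the light axis.
for deg in (0, 90, 102, 180):
    t = math.radians(deg)
    print(f"{deg:3d} deg: {reconstruct(math.sin(t), 0.0, math.cos(t)):+.3f}")
```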
The Order 1886: Spherical Gaussian Lightmaps
David Neubelt replied to yoshi_lol's topic in Graphics and GPU Programming
1) To uniformly distribute the basis vectors you can choose any distribution you want; that is just the one we went with. 2) Yes, the free parameters you want are the sharpness and amplitude. There really is no rotation; it just uses the light's direction vector. = Dave
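For anyone following along, a spherical Gaussian lobe is usually written G(v) = amplitude * exp(sharpness * (dot(v, axis) - 1)), which makes the two free parameters mentioned above explicit. A minimal sketch (plain Python; parameter values are made up for illustration):

```python
import math

def sg_eval(v, axis, sharpness, amplitude):
    """Evaluate a spherical Gaussian lobe at unit direction v.
    axis: unit lobe direction; sharpness and amplitude are the
    two free parameters discussed above."""
    d = sum(a * b for a, b in zip(v, axis))
    return amplitude * math.exp(sharpness * (d - 1.0))

# The lobe axis is taken straight from the light direction (no rotation needed).
light_dir = (0.0, 0.0, 1.0)
print(sg_eval((0.0, 0.0, 1.0), light_dir, 8.0, 1.0))  # peak, on-axis
print(sg_eval((1.0, 0.0, 0.0), light_dir, 8.0, 1.0))  # 90 degrees off-axis
```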


View matrix - explanation of its elements
David Neubelt replied to savail's topic in Graphics and GPU Programming
To expand: if you have a transformation that is a rotation and a translation, e.g. the 3D camera's position is t and its rotation is R, then its world transform is

x' = Rx + t,

where R is a 3x3 matrix, x is a 3x1 matrix (column vector) and t is a 3x1 matrix (column vector). The output x' is your transformed vector.

To get a view matrix you want to bring all vertices into the frame of reference of the camera, so the camera sits at the origin looking down one of the Cartesian axes. You simply need to invert the above equation and solve for x:

x' = Rx + t
x' - t = Rx
R^-1 (x' - t) = (R^-1) Rx
R^T (x' - t) = x   // because a rotation matrix is orthogonal, its transpose equals its inverse
x = R^T x' - R^T t

So to invert a 4x4 camera transformation matrix efficiently, you will want to transpose the upper 3x3 block of the matrix, and for the translation part you will want to negate and transform your translation vector by the transposed rotation matrix. = Dave
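The derivation above can be checked with a round trip (plain Python, a hypothetical camera rotated 90 degrees about the y axis; helper names are made up for the sketch):

```python
import math

def mat3_transpose(R):
    return [[R[j][i] for j in range(3)] for i in range(3)]

def mat3_mul_vec(R, v):
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

# Hypothetical camera: rotation R (90 degrees about y) and position t.
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
R = [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]
t = [1.0, 2.0, 3.0]

# View transform per the derivation: rotation R^T, translation -R^T t.
Rt = mat3_transpose(R)
view_t = [-v for v in mat3_mul_vec(Rt, t)]

# Round trip: x -> world (Rx + t) -> view (R^T x' - R^T t) recovers x.
x = [0.5, -1.0, 2.0]
world = [a + b for a, b in zip(mat3_mul_vec(R, x), t)]
back = [a + b for a, b in zip(mat3_mul_vec(Rt, world), view_t)]
print(back)  # ~ [0.5, -1.0, 2.0]
```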
Spherical Worlds using Spherical Coordinates
David Neubelt replied to studentTeacher's topic in Graphics and GPU Programming
Geodesic grids have minimal distortion. They are built by starting with an icosahedron and subdividing. http://kiwi.atmos.colostate.edu/BUGS/geodesic/text.html
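The subdivision step is simple: split each triangle into four by its edge midpoints, then push the midpoints back onto the unit sphere. A minimal sketch (plain Python; it starts from a single octant triangle to keep it short, whereas a real geodesic grid starts from the 20 faces of an icosahedron):

```python
import math

def normalize(p):
    n = math.sqrt(sum(c * c for c in p))
    return tuple(c / n for c in p)

def subdivide(tris):
    """One geodesic subdivision step: each triangle becomes four, with
    the new edge-midpoint vertices re-projected onto the unit sphere."""
    out = []
    for a, b, c in tris:
        ab = normalize([(x + y) / 2 for x, y in zip(a, b)])
        bc = normalize([(x + y) / 2 for x, y in zip(b, c)])
        ca = normalize([(x + y) / 2 for x, y in zip(c, a)])
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

# Toy start: one octant triangle instead of the full icosahedron.
tris = [(normalize((1, 0, 0)), normalize((0, 1, 0)), normalize((0, 0, 1)))]
for _ in range(3):
    tris = subdivide(tris)
print(len(tris))  # 4^3 = 64 triangles, all vertices on the unit sphere
```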
Deferred Shading Different Light Models
David Neubelt replied to Quat's topic in Graphics and GPU Programming
We use GGX for isotropic and anisotropic materials at Ready At Dawn. You can see the details in the section "Core Shading Model": http://blog.selfshadow.com/publications/s2013-shading-course/rad/s2013_pbs_rad_notes.pdf = Dave
Hmm, I'd imagine the RTR term is a typo, because typically you don't want to scale it; you want it normalized to 1 when you integrate over the domain -inf to inf. That's what the term out in front is for. I don't have the book on me at the moment to see in context what they are talking about. = Dave
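A quick numeric check of that point: the 1/(sigma * sqrt(2 pi)) factor out front is exactly what makes the Gaussian integrate to 1 over (-inf, inf). A sketch (plain Python; the value of sigma is arbitrary, and a wide finite interval stands in for the infinite domain):

```python
import math

sigma = 1.5
norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))  # the "term out in front"

# Midpoint-rule integration over +/- 8 sigma, wide enough that the
# truncated tails are negligible.
n, lo, hi = 200000, -12.0, 12.0
dx = (hi - lo) / n
total = sum(
    norm * math.exp(-((lo + (i + 0.5) * dx) ** 2) / (2.0 * sigma * sigma)) * dx
    for i in range(n)
)
print(round(total, 6))  # ~1.0
```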

They aren't implausible assumptions, otherwise we wouldn't make them to begin with! Hopefully, no infinities creep into our answer, otherwise we would get infinite flux values and that's not very useful. Our end goal is to obtain plausible results that are close to the ground truth. The only reason we use simplifying assumptions and simpler models is that we want simpler calculations. We know that with our simplified models we get results that are very close to the ground truth.

An example is to look at the general form of the radiation transfer equation (it describes how much flux is transferred between two surface elements). The general form requires double integrals over both surfaces, which is computationally expensive. Sometimes it's good enough to approximate the result as two point sources and ask how much energy is transferred between two points. These approximations will be valid and come to the right solution, for our purposes, if the two points are far enough away and the surfaces are small enough. Since the point-to-point radiation transfer equation gives us the same answer, within our acceptable tolerance, and with no integrals, we are happy to use it. Additionally, with some mathematical footwork you can show that the point-to-point transfer equation is derived directly from the definitions of irradiance, intensity and solid angle, so it is mathematically sound with its feet firmly planted in the ground of physical plausibility.

In the same vein, it's OK to use a pinhole model, and if you do it correctly and make some assumptions about your scene, light and camera, then the result should be very similar to if you had an aperture and integrated over sensor area, over time and over all wavelengths.
For example, you could write a simple raytracer that did a Monte Carlo integration at every pixel with a very small aperture, for one spherical area light very far away from the scene, and it would come very close to a rasterizer that used a point light source and a pinhole camera. Hope that makes sense. = Dave
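The point-source argument above can be checked numerically: Monte Carlo the double-integral form of the transfer between two small parallel patches, and compare it against the point-to-point approximation Phi = L * A1 * A2 / d^2. As the separation grows relative to the patch size the two agree; up close, the point approximation overestimates. A sketch (plain Python; the patch size, radiance and distances are made-up illustration values):

```python
import math, random

random.seed(1)

L = 1.0    # emitter radiance (assumed constant over the patch)
s = 0.1    # side length of each square patch
A = s * s  # patch area

def transfer_mc(d, samples=100000):
    """Monte Carlo estimate of the general (double-integral) transfer
    between two parallel patches a distance d apart, facing each other."""
    acc = 0.0
    for _ in range(samples):
        # One random point on each patch (patches lie in parallel planes).
        x1, y1 = (random.random() - 0.5) * s, (random.random() - 0.5) * s
        x2, y2 = (random.random() - 0.5) * s, (random.random() - 0.5) * s
        dx, dy = x2 - x1, y2 - y1
        r2 = dx * dx + dy * dy + d * d
        cos1 = cos2 = d / math.sqrt(r2)  # both normals along the separation axis
        acc += cos1 * cos2 / r2
    return L * A * A * acc / samples

def transfer_point(d):
    """Point-to-point approximation: each patch collapsed to a point."""
    return L * A * A / (d * d)

for d in (0.2, 1.0, 5.0):
    print(d, transfer_mc(d), transfer_point(d))
```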

Sorry for the late reply, I've been busy. 1) The above should be a little more clear: we want to develop a BRDF where the intensity of the light coming in from one direction, L_i, is equal to the light going out in the direction of reflection. Yeah, sorry, you want to set up the Dirac delta function so that the angle away from the normal axis is equivalent and the other angle is equivalent but rotated 180 degrees. Right, as you point out, you will want to integrate over the visible hemisphere (aperture), sensor area, and time (equivalent to shutter speed), which beyond just controlling exposure will give you phenomena such as motion blur and depth of field. If, instead of integrating RGB, you additionally integrate over the wavelengths, then you can get other phenomena such as chromatic aberration. = Dave

The fundamental radiation transfer equation deals with integrating out all of the differential terms to get flux arriving at a sensor (integrating over the areas of the surfaces). When you have an image plane it will have sensors with finite area that detect, even in the case of a pinhole camera, radiation coming from a finite surface area in the scene. In your diagrams you are simply point-sampling a continuous signal of that flux.

The easiest way is to work backwards. You know you want to reflect out what is coming in, so you construct a BRDF to match those conditions. In this case, you want a BRDF that reflects all energy in the mirror direction, and to be valid it must maintain energy conservation and reciprocity. The first condition requires the BRDF to be zero everywhere except where (1) holds, the second condition, to conserve energy, requires that (2) holds, thus f must equal (3), and the third condition is easily verified. P.S. Sorry, I tried to use LaTeX but the forum blocked me from using LaTeX images. I tried using the equation tags but the LaTeX wouldn't parse, so I just posted the link instead.
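The equation images (1)-(3) referenced above no longer resolve. As a reconstruction of what the post describes (my reading of the standard mirror BRDF, not the author's original images):

```latex
% (1) zero everywhere except the mirror direction:
f_r(\omega_i,\omega_r) = 0 \quad \text{unless} \quad
  \theta_r = \theta_i,\ \phi_r = \phi_i \pm \pi
% (2) energy conservation:
\int_{\Omega} f_r(\omega_i,\omega_r)\,\cos\theta_r\,\mathrm{d}\omega_r = 1
% (3) hence:
f_r(\omega_i,\omega_r) =
  \frac{\delta(\cos\theta_i - \cos\theta_r)\,\delta(\phi_i - \phi_r \pm \pi)}
       {\cos\theta_i}
```

Reciprocity (the third condition) follows because on the support of the deltas cos(theta_i) = cos(theta_r), so swapping the incoming and outgoing directions leaves f unchanged.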

What are you guys doing for indirect specular? Specular probes; there are details in the course notes.

Radiance (radiometry) clarification
David Neubelt replied to skytiger's topic in Graphics and GPU Programming
Skytiger, let me clarify my post above. I'm not disagreeing with your math that radiance is higher as the emission area grows larger while the solid angle stays constant. However, we can't think of radiance physically in those terms.

Imagine a scenario where you have a spectroradiometer and you are calibrating a TV. On the TV you have a black screen with a small red square in the center emitting red light. If you point the spectroradiometer directly at the red square so the red square fills the view of your measuring device, then you will get the full radiance. Now, if you start angling the gun relative to the TV so the red square still fills the view, then the spectroradiometer will see more of the red square (because the area the gun sees is now larger) and the radiance will go higher. This would be expected (assuming the TV emits photons equivalently in all directions). However, if you go past 45 degrees to, say, 70 degrees, then suddenly not all of the red square, and some of the black part of the screen, is seen by the spectroradiometer because the angle is so oblique, and the radiance will start to decrease. If I were able to jam the spectroradiometer into the TV so it's parallel to the red square, the radiance would fall to 0 because it sees none of the red square.

What happens in our example is that, as the gun becomes more oblique to the TV, the solid angle subtended by our red surface element decreases, the flux decreases, the projected area decreases, and the radiance decreases to zero.
Radiance (radiometry) clarification
David Neubelt replied to skytiger's topic in Graphics and GPU Programming
That only makes sense if you let the emitter area increase unbounded, but physical devices are bounded in area, so it doesn't make sense to think in those terms. = Dave
Radiance (radiometry) clarification
David Neubelt replied to skytiger's topic in Graphics and GPU Programming
If the solid angle is constant then the area on the left wall will increase as the angle becomes more oblique.
Radiance (radiometry) clarification
David Neubelt replied to skytiger's topic in Graphics and GPU Programming
As the angle becomes more oblique to the wall on the left, the solid angle the detector subtends will decrease. The solid angle reaches its maximum when the detector and emitter are parallel.
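For a small detector, the subtended solid angle is well approximated by Omega = A * cos(theta) / r^2, which makes the falloff with obliquity in this thread easy to see. A sketch (plain Python; the detector area and distance are made-up illustration values):

```python
import math

def solid_angle(area, r, theta_deg):
    """Small-detector approximation of the solid angle subtended by a
    detector of the given area at distance r, tilted theta_deg from face-on
    (theta = 0 means detector and emitter are parallel: the maximum)."""
    return area * math.cos(math.radians(theta_deg)) / (r * r)

# Solid angle shrinks as the detector tilts away from face-on.
for theta in (0, 30, 60, 89):
    print(theta, solid_angle(0.01, 2.0, theta))
```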
