

Member Since 07 Mar 2007
Offline Last Active Yesterday, 07:32 AM

Posts I've Made

In Topic: How do triangle strips improve cache coherency?

04 November 2014 - 12:43 PM

With an index buffer you also get cache coherency; the hardware can just store the most recently accessed indices, and if one of them comes up again, it doesn't need to retransform the vertex.  Of course you need to order your vertices to more optimally enable this, but in the best case adding indices can get you significantly better vertex reuse than strips or fans (memory usage isn't everything).


I think this is a really important point, and here is a good link for more information about it:


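To make the cache idea concrete, here's a quick Python sketch. The FIFO cache model, cache size, and test mesh are all made up for illustration (real post-transform caches vary by hardware), but it shows how indexed rendering lets shared vertices skip retransformation:

```python
from collections import deque

def transforms_needed(indices, cache_size=16):
    """Count vertex transforms for an indexed mesh with a simple FIFO
    post-transform cache of the given size (a toy hardware model)."""
    cache = deque(maxlen=cache_size)  # oldest entries fall out automatically
    transforms = 0
    for idx in indices:
        if idx not in cache:
            transforms += 1           # cache miss: vertex must be (re)transformed
            cache.append(idx)
    return transforms

# A 2x2 grid of quads (3x3 = 9 vertices) as indexed triangles, row by row.
# Interior vertices are shared by several triangles, so a decent ordering
# lets the cache reuse most of the transforms.
indices = []
for row in range(2):
    for col in range(2):
        a = row * 3 + col
        indices += [a, a + 3, a + 1,  a + 1, a + 3, a + 4]

print(transforms_needed(indices))      # 9: every vertex transformed exactly once
print(transforms_needed(indices, 2))   # 12: a tiny cache forces retransforms
```

With 24 indices but only 9 transforms in the good case, you can see why vertex ordering matters more than raw memory savings.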

In Topic: A radiometry question for those of you who own Real Time Rendering, 3rd Edition

31 October 2014 - 08:30 AM

I'm not that far ahead of you, so someone else might have a better idea than I do. I work with realtime graphics (OpenGL) but not games per se, and my company's graphics needs are very modest compared to those of AAA game companies. I have been trying to push things like HDR and PBR, but that would require an R&D outlay that my company is not prepared to spend, so I find myself trying to fit those things in during my spare time. Unfortunately, I am also in school full-time as a physics student, so my time for side projects is nil. </life story>


My best guess is that it would be good to practice your calculus so that you can understand the theory a little better. I still think that understanding what an integral does is more important than understanding how to actually do one. If you're reading a paper on radiometry and you see an integral, you want to be able to reason about what quantity the integral will produce. You'd be surprised how much you can learn and understand simply by thinking about the units. "Aah. This is a mass density function, so if I integrate it over this volume, I'll get the mass of that volume." Being able to actually do the integral is less important. There may even be areas in graphics that involve differential equations, but once again, I think that understanding why you need to solve the equation, and what you get out of it, is more important than actually being able to solve the equation itself.
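You can even do that unit bookkeeping mechanically. Here's a toy sketch (the exponent-dict representation is just something I made up for illustration) showing that integrating multiplies the integrand's units by the measure's units:

```python
# Toy dimensional analysis: a unit is a dict of base-unit exponents.
# Integrating f over a measure multiplies their units, so a mass
# density (kg * m^-3) integrated over a volume (m^3) must yield kg.

def multiply_units(a, b):
    """Combine two units by adding exponents (dropping zero exponents)."""
    out = dict(a)
    for base, exp in b.items():
        out[base] = out.get(base, 0) + exp
        if out[base] == 0:
            del out[base]
    return out

density = {"kg": 1, "m": -3}            # mass density, kg / m^3
volume  = {"m": 3}                      # volume element dV, m^3
print(multiply_units(density, volume))  # {'kg': 1} -- a mass

radiance    = {"W": 1, "m": -2, "sr": -1}  # W / (m^2 * sr)
solid_angle = {"sr": 1}                    # solid angle element dOmega
print(multiply_units(radiance, solid_angle))  # {'W': 1, 'm': -2} -- an irradiance
```

Same trick as "density times volume gives mass": integrate radiance over solid angle and the units tell you that you must get an irradiance.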


I would also guess that Linear Algebra is a lot more directly applicable. It's not just the fact that you're multiplying matrices and vectors; a lot of concepts from Linear Algebra (vector spaces, function spaces, inner products, orthogonality) carry over into other things, like Fourier Analysis (which you might run into if you want to do ocean rendering, edge detection, or any graphics technique that requires a low- or high-pass filter or a convolution).
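For instance, a discrete convolution is just a sliding inner product, which is exactly where the Linear Algebra shows up. A quick sketch (the signal and kernel values are made up):

```python
def convolve(signal, kernel):
    """Discrete 1-D convolution ('valid' region only): each output sample
    is an inner product of the kernel with a window of the signal."""
    k = len(kernel)
    return [sum(kernel[j] * signal[i + j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A symmetric [1, 2, 1] kernel acts as a simple smoothing (low-pass)
# filter: it spreads the spike below out over its neighbours.
signal = [0, 0, 0, 9, 0, 0, 0]
print(convolve(signal, [1, 2, 1]))   # [0, 9, 18, 9, 0]
```

Each output value is the inner product of the kernel with one window, so filtering a whole image is nothing more than a big, structured pile of inner products.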


Good luck!

In Topic: General Programmer Salary

30 October 2014 - 09:58 AM

Well, I don't want to derail the thread any further, but I do want to thank Tom, Quat, and stupid_programmer for all of your advice. Very helpful, and I appreciate it.

In Topic: A radiometry question for those of you who own Real Time Rendering, 3rd Edition

30 October 2014 - 09:54 AM

I think one of the reasons that radiance is confusing is that it is used in a few different situations, interchangeably, and people don't often specify when they're switching from one to the other. I think the important thing to remember is that we want to quantify the "amount of light" that is contained in a single ray. That's all. The ray could be drawn from a light source to a surface, or from a surface to the camera pixel, or whatever. So, you'll see both in various diagrams. The information I posted concerns what is meant by "amount of light" -- if you look at the physics of it, and do the dimensional analysis, you see that the quantity of light present in a ray has to be an energy flux density of some kind. In this case, a flux density with respect to both area and solid angle (W/(m²·sr) in SI units).


Concerning the calculus, I do think it's a good idea to brush up, but I don't think that the actual methods of integration and differentiation will be as important as just understanding conceptually what integration and differentiation are, and that they are inverses of one another, in much the same way that division is the inverse of multiplication. It's not like we're doing the full calculus anyway. We're not actually performing a triple integral (over the area of the pixel, the hemisphere of incoming rays at each point, and the exposure time) in order to get the actual photon energy hitting the pixel. We're simply assuming that the radiance doesn't change much within those limits, and thus the radiance is assumed to be directly proportional to the energy collected by the pixel while the "shutter" was open. The tone mapping step decides the constant of proportionality.


By analogy, it's as if you made the assumption that the density of a ball is uniform throughout, and so instead of integrating over the ball's density function over the volume of the ball, you just sample the density function in the middle of the ball and multiply that by the ball's volume. It's an approximation, but if the ball's density is pretty much uniform throughout, then it's a very good approximation.
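Here's that ball approximation in code. The density function is made up to be nearly uniform, and I'm integrating with spherical shells (a plain Riemann sum) just to have something to compare against:

```python
import math

def shell_mass(rho, R, n=100_000):
    """Mass of a ball of radius R via spherical shells: a Riemann sum of
    rho(r) * 4*pi*r^2 * dr, sampling rho at each shell's midpoint."""
    dr = R / n
    return sum(rho((i + 0.5) * dr) * 4 * math.pi * ((i + 0.5) * dr) ** 2 * dr
               for i in range(n))

R = 1.0
rho = lambda r: 1.0 + 0.01 * r        # density varies only slightly with radius

exact_ish = shell_mass(rho, R)                     # fine Riemann sum, ~4.2202
uniform   = rho(0.0) * (4 / 3) * math.pi * R ** 3  # centre sample times volume, ~4.1888

print(exact_ish, uniform)
print(abs(exact_ish - uniform) / exact_ish)        # under 1% relative error
```

Because the density is almost uniform, "one sample times the volume" lands within a percent of the full integral, which is exactly the kind of shortcut the rendering approximation takes.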


The calculus that you see in these books and papers is there for the sake of being precise while explaining the theory, but the stuff that is actually coded is only an approximation of the theory.

In Topic: A radiometry question for those of you who own Real Time Rendering, 3rd Edition

29 October 2014 - 11:01 AM

Radiance is sort of an abstract quantity, but I think it's not so bad if you think about it in terms of its dimensions. When rendering, we like to think of light in terms of geometrical optics. So, instead of light being waves with a continuous spread of energy, it is treated as discrete rays that shoot straight from the surface you're rendering to the pixel element on the screen. This takes some doing, however, because in reality, light is a continuous wave (classically speaking -- no intention of modeling things at the quantum level here).

So how do you turn a continuous quantity like an EM wave into a discrete ray? By analogy, consider mass. As a human, you have a certain mass. However, that mass is not distributed evenly in your body. Some tissue is more dense than other tissue; for instance, bone is more dense than muscle. Let's say you knew the mass density function for your body. That is, if someone gives you a coordinate (x,y,z) that is inside your body, you can plug it into the function and the result will be the mass density at that coordinate. How would you calculate the total mass of your body with this function? Well, you would split the volume up into a bunch of tiny cubes, sample the density function at the center (say) of each cube, multiply that density by the volume of the cube to get the mass of that cube, and then add up the masses of all the tiny cubes. The tinier the cubes, the more of them you'll have to use, but this will make your mass calculation more accurate. Where integral calculus comes into play is that it tells you the mass you get in the limiting case where the cubes are infinitely tiny and there are infinitely many of them. In my opinion, it's easier to reason about it as "a zillion tiny cubes" and just remember that the only difference with integral calculus is that you get an exact answer rather than an approximation.
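In code, the tiny-cubes idea looks like this. The "body" is just a unit cube with a made-up density so we have an exact answer (1 + 1/3) to converge toward:

```python
def total_mass(density, n):
    """Riemann sum: split the unit cube into n^3 tiny cubes, sample the
    density at each cube's centre, multiply by the cube's volume, add up."""
    h = 1.0 / n                      # tiny cube edge length
    dv = h ** 3                      # tiny cube volume
    mass = 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                x, y, z = (i + 0.5) * h, (j + 0.5) * h, (k + 0.5) * h
                mass += density(x, y, z) * dv
    return mass

# Density increasing with height squared; the exact integral over the
# unit cube is 1 + 1/3, and the sum approaches it as the cubes shrink.
rho = lambda x, y, z: 1.0 + z ** 2

for n in (2, 4, 16):
    print(n, total_mass(rho, n))   # ~1.3125, ~1.3281, ~1.3330 -> 1.3333...
```

More, smaller cubes means a better answer; the integral is the limit where the cubes are infinitely tiny.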

So consider a surface that you're rendering. It is reflecting a certain amount of light that it has received from a light source. Unlike the mass example, we want to think of the light in terms of energy. The surface as a whole is reflecting a certain amount of energy every second, which we call the energy flux (measured in Watts, also known as Joules/sec). However, we don't really care what the entire surface is doing. We just want the energy density along a specific ray. So, let's break the surface down into tiny little area elements (squares) and figure out how much flux is coming from each tiny area element; we only care about the area element that is under our pixel. That gives us a flux density per unit area, which is called Irradiance (or Exitance, depending on the situation). So now we know the energy flux density being emitted from the area under our pixel. But wait! Not all of that energy is moving toward our pixel. That little surface element is emitting energy in all directions, and we only want to know how much energy is moving in the specific direction of our pixel. So, we need to further break down that Irradiance according to direction, to find out how much of it is being emitted along each direction (a.k.a. infinitesimal solid angle). This gives us a flux density with respect to both area and solid angle, known as Radiance.
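To tie the chain together numerically: going the other way, integrating radiance over the hemisphere of directions gives irradiance, and integrating that over area gives flux. Here's a sketch with a constant radiance (all the numbers are made up, and I'm using the standard cosine-weighted hemisphere integral, for which the exact answer is pi times the radiance):

```python
import math

def irradiance(L, n_theta=400, n_phi=400):
    """E = integral over the hemisphere of L * cos(theta) dOmega, with
    dOmega = sin(theta) dtheta dphi in spherical coordinates, summed
    numerically at midpoints. L is a constant radiance here."""
    d_theta = (math.pi / 2) / n_theta
    d_phi = (2 * math.pi) / n_phi
    E = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * d_theta
        for j in range(n_phi):
            E += L * math.cos(theta) * math.sin(theta) * d_theta * d_phi
    return E

L = 2.0                       # radiance, W / (m^2 * sr)
E = irradiance(L)             # irradiance, W / m^2
area = 1e-4                   # patch area under the pixel, m^2 (made up)
flux = E * area               # flux from the patch, W

print(E, math.pi * L)         # numeric sum vs exact pi*L: both ~6.2832
```

Each integration "uses up" one of the densities: solid angle first (radiance to irradiance), then area (irradiance to flux), which is the dimensional-analysis story in reverse.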