patw

Members
  • Content count: 73
Community Reputation: 223 Neutral

About patw

  • Rank: Member
  1. Billboard polygon with a "light ray" texture on it. Fade the polygon to invisible with distance. (See the sketch for #1 after this list.)
  2. That is not at all the purpose of the DSF buffer. The DSF (discontinuity sensitive filtering) buffer lets you use a lower-resolution light-buffer/g-buffer than the back-buffer. It also allows for up to 4 layers of transparency, by doing something akin to interlacing in order to render translucency. While it seems as though this information could be gathered via depth/normal discontinuities, you will find in practice that this has a high rate of error. I strongly recommend re-reading the paper. The DSF buffer is critical to the technique.
  3. Particles

     - Generate a unit vector in a random direction.
     - Multiply it by the radius of the sphere.
     - If you want the particle to be generated anywhere within the sphere (as opposed to just on the surface), also multiply by a random number in the range 0..1.
     - Take the direction unit vector, multiply it by your desired speed parameters (plus some randomness), and you have a velocity.

     (See the sketch for #3 after this list.)
  4. Yeah, the memory transfer is always the killer. Without a trivial blend, the memory transfer costs effectively eliminate the benefit of pre-pass over multi-buffer, and the cost scales up noticeably as the number of lights increases, which is the very problem deferred lighting tries to solve.
  5. One slight clarification: it does not let you store specular color, but it does allow you to reconstruct specular more accurately than RGB storage. With RGB storage you usually approximate luminance by extracting it from the RGB color, but this saturates luminance at 1.0. With Luv, luminance is actually stored, so it allows you to blend lights in HDR since, for the purposes of lights (in Lambertian shading), luminance is: N dot L * Attenuation.

     You are correct about the blend. The blend is most accurate when blending lights of similar luminance, since this value gets clamped (otherwise the color would extend beyond the two source colors, and that would also be bad). The ideal way to do this, in my opinion, is not to LERP but to use some kind of curve (probably something based on e^x) to blend the light colors. I would also like the function to be adjustable via parameters, to give artists more control over the way lights blend. Ultimately I settled on a lerp because it was cheap. (See the sketch for #5 after this list.)

     Some day it would be cool to do Luv, and maybe the blend function, on SPUs, but I decided Luv was too expensive on the Xbox 360. (For pre-pass on SPUs, check out "Parallelized Light Pre-Pass Rendering with the Cell Broadband Engine" by Steven Tovey and Stephen McAuley, in GPU Pro.)
  6. If you think about it, bilinear filtering does not invalidate the data of the normal map. It does, however, produce a vector which is no longer unit length. The vector is still valid and should still point in a sensible direction (this depends on the specific values/UVs), but you need to re-normalize it before using it in lighting equations. (See the snippet for #6 after this list.)
  7. In matrix multiplication, A * B != B * A in general. For an orthonormal (rotation) matrix R, however, the inverse is the transpose, R^(-1) == R^T, and multiplying a vector on the opposite side is equivalent to multiplying by the transpose: v * R == R^T * v. You can save yourself a matrix inversion by switching your multiplication order, provided you know the input data is going to be kosher (which it should be, for the rotation part of a view matrix; the translation still has to be rotated and negated separately). (See the sketch for #7 after this list.)
  8. Inferred lighting is a specialization of pre-pass lighting which expands the data stored in the G-buffer to allow for transparency, etc. Beyond this, it is performed exactly like pre-pass lighting. n00body: I have poked at a few effects using post-processed light buffers, but I haven't come up with anything I'm ready to write about yet. One of the main issues with post-processing the light buffer alone is that you have no albedo, and this can drastically change the values for which one would use the light buffer. So it really depends on what you are doing, because you may simply not have the data you need. Sorry, that doesn't really answer the question.
  9. I would hold off until it is clear what Mac will support. We're still waiting on most GL 2.0 stuff on that platform, so you've got a bit of time before 3.x stuff is relevant.
  10. Also suggest taking a look here: https://mollyrocket.com/forums/viewforum.php?f=21
  11. A few years back I was doing an NVIDIA demo for the GDC launch of the FX cards (shader 2.0! woo! we were excited) and was trying to get proper shader support into the material system. At one point I excitedly called the marketing guy over and exclaimed, "It's a shader!" to which he said, "It's...a black triangle." I replied that I could make it red with just a few keystrokes. He was still not impressed.
  12. Your pre-pass pixel shader should output the linear depth to the color channel(s), not to the depth output. The fragment's depth output is not altered; it remains the perspective depth that comes through from the position in the vertex shader. This is why you do not lose hi-z. (See the sketch for #12 after this list.)
  13. zedz, if you are not already doing so with your deferred rendering, I recommend reserving a stencil bit for "opaque"; the shot you show has a ton of alpha-blended/clipped pixels because very little of the viewport is occupied by opaque objects. (See the sketch for #13 after this list.)
  14. The hardware will use a Z-buffer if you tell it to. You will need to create and bind that target yourself. You want this z-buffer because otherwise you would need to submit perfectly sorted polygons in order to avoid errors in your shadow map. You could also enable blending and use the MAX blend function, but I don't recommend it for this :) (See the sketch for #14 after this list.)
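
Sketch for #1 (billboard fade): a minimal C++ take on the distance fade, assuming a linear falloff between two hypothetical distances, fadeStart and fadeEnd; any monotonic falloff works as well.

    #include <algorithm>

    // Alpha for a "light ray" billboard as a function of camera distance:
    // fully opaque at fadeStart or closer, invisible at fadeEnd or beyond.
    float billboardFadeAlpha(float distanceToCamera, float fadeStart, float fadeEnd) {
        float t = (distanceToCamera - fadeStart) / (fadeEnd - fadeStart);
        return 1.0f - std::clamp(t, 0.0f, 1.0f);
    }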
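
Sketch for #3 (particles): the recipe above as runnable C++. The rejection-sampling loop for the random direction is one choice among several; note that scaling by a plain 0..1 random number clusters particles toward the center, so the sketch uses a cube root where uniform density through the volume is wanted.

    #include <cmath>
    #include <random>

    struct Vec3 { float x, y, z; };

    // Unit vector in a uniformly random direction (rejection sampling in the unit cube).
    Vec3 randomUnitVector(std::mt19937& rng) {
        std::uniform_real_distribution<float> dist(-1.0f, 1.0f);
        for (;;) {
            Vec3 v{dist(rng), dist(rng), dist(rng)};
            float len2 = v.x * v.x + v.y * v.y + v.z * v.z;
            if (len2 > 1e-6f && len2 <= 1.0f) {
                float inv = 1.0f / std::sqrt(len2);
                return {v.x * inv, v.y * inv, v.z * inv};
            }
        }
    }

    // Spawn position on (or in) a sphere of the given radius, plus a velocity
    // along the spawn direction scaled by a randomized speed.
    void spawnParticle(std::mt19937& rng, float radius, float minSpeed, float maxSpeed,
                       bool fillVolume, Vec3& outPos, Vec3& outVel) {
        std::uniform_real_distribution<float> unit(0.0f, 1.0f);
        Vec3 dir = randomUnitVector(rng);
        float r = radius;
        if (fillVolume)
            r *= std::cbrt(unit(rng)); // cbrt keeps density uniform; a plain 0..1 scale biases inward
        float speed = minSpeed + (maxSpeed - minSpeed) * unit(rng);
        outPos = {dir.x * r, dir.y * r, dir.z * r};
        outVel = {dir.x * speed, dir.y * speed, dir.z * speed};
    }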
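
Sketch for #5 (Luv light blending): the luminance math, assuming a light buffer that stores chromaticity (u, v) plus unclamped luminance. The lerp weight below (new luminance over total, clamped) is a guess at a reasonable cheap blend, not the shipped function described in the post.

    #include <algorithm>

    struct LuvLight { float u, v, luminance; };  // chromaticity + HDR luminance

    // Lambertian luminance contribution of one light: N dot L * attenuation,
    // scaled by the light's intensity. Nothing here saturates at 1.0.
    float lambertLuminance(float nDotL, float attenuation, float lightIntensity) {
        return std::max(nDotL, 0.0f) * attenuation * lightIntensity;
    }

    // Accumulate a light into the buffer: luminance adds (the point of Luv
    // storage), chromaticity is lerped by clamped relative luminance.
    LuvLight blendLight(const LuvLight& dest, const LuvLight& incoming) {
        float total = dest.luminance + incoming.luminance;
        float w = (total > 0.0f) ? std::clamp(incoming.luminance / total, 0.0f, 1.0f) : 0.0f;
        return { dest.u + (incoming.u - dest.u) * w,
                 dest.v + (incoming.v - dest.v) * w,
                 total };
    }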
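
Snippet for #6 (normal map filtering): the re-normalization step after a bilinear fetch, assuming the sample has already been decoded from bytes to the [-1, 1] range.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // A bilinearly filtered normal still points in a sensible direction but is
    // shorter than unit length; renormalize it before any lighting math.
    Vec3 renormalize(Vec3 n) {
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        return {n.x / len, n.y / len, n.z / len};
    }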
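
Sketch for #7 (rigid inverse): the concrete form of the trick. Because the rotation part of a camera transform is orthonormal, its inverse is its transpose, so a view matrix can be built without a general matrix inverse. The row-major layout here is an assumption.

    #include <cstddef>

    // 3x3 rotation plus translation: a rigid transform.
    struct Rigid {
        float r[3][3];
        float t[3];
    };

    // Invert a camera world transform into a view transform: transpose the
    // rotation (R^T == R^-1 for orthonormal R), then rotate and negate the
    // translation.
    Rigid invertRigid(const Rigid& m) {
        Rigid out{};
        for (std::size_t i = 0; i < 3; ++i)
            for (std::size_t j = 0; j < 3; ++j)
                out.r[i][j] = m.r[j][i];
        for (std::size_t i = 0; i < 3; ++i)
            out.t[i] = -(out.r[i][0] * m.t[0] + out.r[i][1] * m.t[1] + out.r[i][2] * m.t[2]);
        return out;
    }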
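
Sketch for #12 (linear depth): the distinction in miniature, with C++ standing in for shader code and hypothetical names.

    // What the pre-pass pixel shader writes to its COLOR channel: view-space z
    // rescaled to [0, 1]. The hardware depth output (perspective z/w from the
    // vertex-shader position) is left untouched, so hi-z culling keeps working.
    float linearDepthForColorChannel(float viewSpaceZ, float farClip) {
        return viewSpaceZ / farClip;
    }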
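
Sketch for #13 (opaque stencil bit): one way to reserve and use the bit, shown with OpenGL; the bit choice and pass structure are assumptions.

    #include <GL/gl.h>

    enum { STENCIL_OPAQUE_BIT = 0x01 };  // the reserved "opaque" bit

    // Before drawing opaque geometry: every fragment that passes tags its
    // pixel as opaque in the stencil buffer.
    void beginOpaquePass(void) {
        glEnable(GL_STENCIL_TEST);
        glStencilMask(STENCIL_OPAQUE_BIT);
        glStencilFunc(GL_ALWAYS, STENCIL_OPAQUE_BIT, STENCIL_OPAQUE_BIT);
        glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    }

    // Before full-screen lighting passes: only touch pixels marked opaque.
    void beginOpaqueOnlyLighting(void) {
        glStencilMask(0x00);  // stencil becomes read-only
        glStencilFunc(GL_EQUAL, STENCIL_OPAQUE_BIT, STENCIL_OPAQUE_BIT);
    }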
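
Sketch for #14 (shadow map depth target): creating and binding a depth-only target, shown with OpenGL 3.x-style framebuffer objects; assumes a context and a loader such as GLEW are already initialized.

    #include <GL/glew.h>  // any loader exposing the FBO entry points will do

    // Returns the depth texture and writes the framebuffer handle to *outFbo.
    GLuint createShadowDepthTarget(int size, GLuint* outFbo) {
        GLuint tex = 0, fbo = 0;

        // Depth texture the shadow pass renders into; the hardware z-test keeps
        // the nearest depth per texel, so no polygon sorting is required.
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, size, size, 0,
                     GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        // Depth-only framebuffer: no color output for a plain shadow map.
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, tex, 0);
        glDrawBuffer(GL_NONE);
        glReadBuffer(GL_NONE);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);

        *outFbo = fbo;
        return tex;
    }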