KRIGSSVIN

Members
  • Content count: 77

Community Reputation

172 Neutral

About KRIGSSVIN

  • Rank: Member
  1. Hardware-to-linear depth conversion is a simple "one division, one add" operation; see the sketch below.
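     A minimal GLSL sketch of that idea, assuming a standard perspective projection and a hardware depth value z in [0,1]; the constants a and b would be precomputed on the CPU from the near/far planes, and all names here are illustrative:

       // One subtract and one divide per sample. For a D3D-style [0,1]
       // depth range, a = far*near/(far-near) and b = far/(far-near).
       float linearizeDepth(float z, float a, float b)
       {
           return a / (b - z);  // eye-space depth
       }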
  2. Quote: Original post by jcabeleira
     "If each color component requires 4 SH coefficients, how can I encode this into a single RGBA volume texture? Perhaps I'll need a different volume texture for each color component?"
     You need one volume texture per color channel; a sampling sketch follows below.
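     A minimal GLSL sketch of reading such a setup back, assuming three RGBA volume textures that hold the 4 SH coefficients of the red, green and blue channels respectively (the sampler and function names are illustrative):

       uniform sampler3D lpvR, lpvG, lpvB;  // one SH volume per color channel

       vec3 sampleVolumes(vec3 cellCoord, vec4 shNormal)
       {
           // One dot product per channel against the SH-projected normal.
           return vec3(dot(texture(lpvR, cellCoord), shNormal),
                       dot(texture(lpvG, cellCoord), shNormal),
                       dot(texture(lpvB, cellCoord), shNormal));
       }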
  3. OpenGL Global Illumination techniques

    If the scene were not very big and complex, I'd use the Irradiance Volumes technique, because it supports indirect occlusion and thus produces more realistic lighting.
  4. OpenGL Global Illumination techniques

    Must GI be fully dynamic or may it be pre-computed?
  5. It is clearly stated there that they either use R2VB or do point-rendering into the volume using vertex texture fetch (a sketch of the latter is below). There is an existing discussion: http://www.gamedev.net/community/forums/topic.asp?topic_id=555536 You can write about your experience implementing it there.
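     A minimal vertex-shader sketch of the vertex-texture-fetch injection path, assuming one point is drawn per RSM texel and a texture holding world-space positions; selecting the destination volume slice (e.g. via gl_Layer in a geometry shader) is omitted, and all names are illustrative:

       uniform sampler2D rsmPosition;          // world-space positions from the RSM
       uniform vec3 volumeMin, volumeInvSize;  // bounds of the propagation volume
       in vec2 rsmTexel;                        // one point per RSM texel

       void main()
       {
           // Fetch this VPL's world position in the vertex shader.
           vec3 worldPos = textureLod(rsmPosition, rsmTexel, 0.0).xyz;
           // Map it into the volume's [0,1]^3 cell space.
           vec3 cell = (worldPos - volumeMin) * volumeInvSize;
           gl_Position = vec4(cell.xy * 2.0 - 1.0, 0.0, 1.0);
       }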
  6. So, Ganga, have you made any progress?
  7. Hello, I'm here at last.
     About size: when setting the intensity we must account for the relation between the VPL surfel area and the cell size in world space. For point lights... I will think about it.
     Now the algorithm. Let's compare what we're doing.
     First, propagation. I use the following scheme:
     - inject the initial radiance into MAIN (i.e. the initial volume)
     - iteration #1: propagate from MAIN to TEX0, then additively blend the result from TEX0 into MAIN
     - iteration #2: propagate from TEX0 to TEX1, then additively blend the result from TEX1 into MAIN
     - iteration #3: propagate from TEX1 to TEX0, then additively blend the result from TEX0 into MAIN
     So in the end we have the sum of all separate iterations plus the initial radiance distribution (a sketch of one propagation step is below).
     Second, rendering. Instead of just the projected normal, I integrate against a hemispherical cosine lobe generated around that normal, i.e.:
     // compute irradiance
     vec3 E = max(SH_Dot(sc, SH_ProjectHemisphere(-N)), 0.0);
     where N is the surface normal in world space. All cone and hemisphere generation functions are taken from Crytek's paper without any changes, as I am relatively new to spherical harmonics math.
     And tell me: what coefficients are you injecting into the volume for a point/directional light?
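     A minimal GLSL sketch of one propagation step for a single color channel, assuming 2-band SH per cell; SH_ProjectCone appears in this thread, while SH_ProjectDirection (the direction-to-SH projection written out under reply 9 below) and everything else here is an illustrative assumption, not the poster's actual code:

       vec4 SH_ProjectCone(vec3 dir, float halfAngle);  // from Crytek's paper
       vec4 SH_ProjectDirection(vec3 dir);              // see reply 9 below

       uniform sampler3D srcVolume;    // SH coefficients from the previous step
       uniform vec3 invVolumeSize;     // 1 / volume resolution per axis

       const vec3 dirs[6] = vec3[6](vec3( 1,0,0), vec3(-1,0,0),
                                    vec3(0, 1,0), vec3(0,-1,0),
                                    vec3(0,0, 1), vec3(0,0,-1));

       vec4 propagate(vec3 cell)
       {
           vec4 result = vec4(0.0);
           for (int i = 0; i < 6; ++i)
           {
               // Radiance stored in the neighbour behind direction i.
               vec4 neighbour = texture(srcVolume, cell - dirs[i] * invVolumeSize);
               // Flux that neighbour sends along direction i...
               float flux = max(dot(neighbour, SH_ProjectDirection(dirs[i])), 0.0);
               // ...re-emitted into this cell as a 90-degree cone.
               result += flux * SH_ProjectCone(dirs[i], 1.5707963);
           }
           return result;
       }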
  8. Quote: "Since the size of a propagation volume cell changes, e.g. from 1 unit to 2 units, the amount of calculated radiance would be 2^3 times more with the same light..."
     Well, this is an issue; I must think about it. I'll write in the topic later, I don't have time at the moment.
  9. Quote: Original post by Ganga
     "It shows that, quite strangely, to gain the correct propagated radiance direction, I have to use SHProjectCone(offsets * float(-1,1,-1), PiOver2) instead of SHProjectCone(offsets, PiOver2). And during rendering, using dot(float4(1.0f, faceNormal.yzx), sampledSHCoeffs) for each color creates the right lighting. I think that may be caused by the different coordinate systems of the SH functions I used for calculating and rendering."
     Let's assume we have a vector N(x; y; z). Then its projection onto the SH basis for the first two bands is:
     c.x = 0.5 / sqrt(pi);
     c.y = -N.y * 0.5 / sqrt(pi) * sqrt(3);
     c.z =  N.z * 0.5 / sqrt(pi) * sqrt(3);
     c.w = -N.x * 0.5 / sqrt(pi) * sqrt(3);
     This is constant across all implementations, so the coordinate system doesn't matter. You use N.yzx; pay attention to your offsets multiplier (-1; 1; -1) and look at my equation again.
     Question to you: do you scale your point light injection coefficients (0.282; 0; 0; 0) by some normalization factor, not counting light intensity? All coefficients (point light, hemispherical light injection, normal) must be normalized to work correctly, and this seems to be the place where I'm quite stuck at the moment.
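     The same projection as a self-contained GLSL helper; the constants are 0.5/sqrt(pi) and 0.5*sqrt(3)/sqrt(pi), and the function name is the illustrative one assumed in the sketch under reply 7 above:

       vec4 SH_ProjectDirection(vec3 N)
       {
           const float Y00 = 0.282094792;  // 0.5 / sqrt(pi)
           const float Y1  = 0.488602512;  // 0.5 * sqrt(3) / sqrt(pi)
           return vec4(Y00, -Y1 * N.y, Y1 * N.z, -Y1 * N.x);
       }

     Note that the point light injection value (0.282; 0; 0; 0) quoted above is just the constant band Y00 of this projection.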
  10. Ganga, which coordinate system do you use? Mine is (forward; left; up).
  11. Thanks for rehosting it. So you injected (0.282; 0; 0; 0), and this picture was made without the propagation phase, which is the part you're having trouble with, isn't it?
  12. Please put it on imageshack.us; this site asks for a login. I'm working on this too and I'd be glad to compare notes.
  13. Forward rendering limitations?

    Martin is right: when you use stippling, every fourth sample represents FOUR fragments of an alpha surface, each of which can have any alpha value. Read more about inferred lighting to fully understand what I mean; a sketch of the stipple pattern is below.
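     A minimal fragment-shader sketch of the stipple idea from inferred lighting, assuming a 2x2 stipple pattern and a per-layer index in [0,3]; the uniform name and slot layout are illustrative:

       uniform int layerIndex;  // which of the four stipple slots this surface owns

       void stippleDiscard()
       {
           // Position of this fragment within its 2x2 stipple block.
           ivec2 p = ivec2(gl_FragCoord.xy) & 1;
           // Keep only the fragments belonging to this layer's slot.
           if (p.x + p.y * 2 != layerIndex)
               discard;
       }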