About mauro78

  1. Well, almost clear... working on this... hope to post results soon... also exploring "shooting" rays from the probe to get incoming radiance. I got the key point after reading the paper "Stupid SH Tricks": for my goal the best solution would be to compute the incoming radiance from a VPL over all possible directions on the sphere. This is just an integral over a spherical function, and of course I was aware that storing it directly is impossible for a real-time solution. So, reading the paper, I had a sort of enlightenment: SH lets me express that integral using the SH basis and samples of the original function, where the basis simply encodes the sampling direction. Of course there is a degree-of-detail choice here: second-order SH is just a coarse approximation, while higher-order SH approximates the spherical function better. This will allow me to reconstruct the original (approximated) function later, in the shading phase... very similar to a Fourier series in the 1D case. Back to work now... EDIT: the bad news is that I'll probably not go through Light Propagation Volumes but stick with Irradiance Volumes, mostly because LPV requires DX11 hardware to be implemented efficiently (even if volume textures can be handled in DX9 using some tricky code).
  2. I've read more on SH this weekend and I'm even more convinced that it's the right solution for my goal. To summarize: right now with RSM (and the associated virtual lights) I'm doing this: for every shaded pixel, compute the "irradiance" between the pixel and the i-th virtual light. This is very slow, so a potential solution using SH is: place some probes in the scene and, for every probe, compute the incoming radiance and store it using SH. Then at runtime the shading pass becomes: for every shaded pixel, compute the interpolated irradiance by reading the nearest probes and evaluating their SH with the pixel normal. BTW... I'm struggling with storing and reading back the irradiance in SH :-(
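A minimal C++ sketch of that shading-pass idea (all names here are mine, not from any engine): trilinearly blend the order-2 SH coefficients of the eight probes surrounding the shaded point, then dot the blended coefficients with the SH basis evaluated in the pixel-normal direction.

```cpp
#include <array>
#include <cmath>

using SH4 = std::array<float, 4>; // order-2 SH: 1 band-0 + 3 band-1 coefficients

// Order-2 SH basis evaluated for a unit direction (the pixel normal).
SH4 shBasis(float x, float y, float z) {
    return {  0.282095f,        // Y_0^0  = 0.5*sqrt(1/pi)
             -0.488603f * y,    // Y_1^-1 = -sqrt(3/(4pi)) * y
              0.488603f * z,    // Y_1^0  =  sqrt(3/(4pi)) * z
             -0.488603f * x };  // Y_1^1  = -sqrt(3/(4pi)) * x
}

// Trilinear blend of the 8 probes around the shaded point;
// (wx, wy, wz) are the fractional grid coordinates in [0,1]^3.
SH4 lerpProbes(const SH4 probes[8], float wx, float wy, float wz) {
    SH4 out{};
    for (int i = 0; i < 8; ++i) {
        float w = ((i & 1) ? wx : 1.0f - wx) *
                  ((i & 2) ? wy : 1.0f - wy) *
                  ((i & 4) ? wz : 1.0f - wz);
        for (int k = 0; k < 4; ++k) out[k] += w * probes[i][k];
    }
    return out;
}

// Reconstruct the stored radiance in the normal direction:
// dot the coefficient vector with the basis.
float shEval(const SH4& c, float nx, float ny, float nz) {
    SH4 b = shBasis(nx, ny, nz);
    return c[0]*b[0] + c[1]*b[1] + c[2]*b[2] + c[3]*b[3];
}
```

In a real engine the same math would run per pixel in the deferred shading pass, with one RGB triple per coefficient.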
  3. For an emitting normal N, that SH derivation is the order-2 truncation of the polynomial form...
  4. Ok, first, thanks for the response, and yes, I need to dig into the SH realm better... but let's make it simple, Stefano (well, not so simple): given a unit normal vector n = (x, y, z), I can get the second-order SH coefficients c = (c0, c1, c2, c3), where:

  c0 = sqrt(pi) / 2
  c1 = -sqrt(pi/3) * y
  c2 = sqrt(pi/3) * z
  c3 = -sqrt(pi/3) * x

  So for every VPL (virtual point light) I'll have a normal n and an associated set of SH coefficients c. The question is: how can I go back and compute the reflected radiance from those second-order SH coefficients? Thanks, Mauro
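"Going back" is just evaluating the SH basis in the direction of interest and dotting it with the coefficients. A minimal C++ sketch of that (the listed c0..c3 are the order-2 SH projection of the clamped-cosine lobe around n, so reconstruction at d = n yields 3/4 rather than 1; that loss is the order-2 truncation):

```cpp
#include <array>
#include <cmath>

using SH4 = std::array<float, 4>;

// The coefficients quoted above: order-2 SH projection of the
// clamped-cosine lobe max(dot(n, d), 0) around the unit normal n.
SH4 cosineLobeSH(float nx, float ny, float nz) {
    const float pi = 3.14159265f;
    return {  std::sqrt(pi) / 2.0f,
             -std::sqrt(pi / 3.0f) * ny,
              std::sqrt(pi / 3.0f) * nz,
             -std::sqrt(pi / 3.0f) * nx };
}

// Order-2 SH basis in unit direction (x, y, z).
SH4 shBasis(float x, float y, float z) {
    return { 0.282095f, -0.488603f * y, 0.488603f * z, -0.488603f * x };
}

// Reconstruction: the approximated function value in direction d.
float shEval(const SH4& c, float dx, float dy, float dz) {
    SH4 b = shBasis(dx, dy, dz);
    return c[0]*b[0] + c[1]*b[1] + c[2]*b[2] + c[3]*b[3];
}
```

Evaluating at d = n gives 0.25 + 0.5 = 0.75, and in the opposite direction -0.25; the slightly negative back lobe is a known artifact of low-order SH.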
  5. Based on my experience you can add GI to your engine... but two goals are difficult to achieve: 1. Even though global illumination stays in the low-frequency domain, making it convincing and temporally coherent is hard. 2. Making it fast is even harder. Just my 2 cents. P.S. It is worth it, since it will "open" your brain :-)
  6. Ok, I need to clarify this with some background information: my first attempt at GI in the engine used RSM (reflective shadow maps); as you know, the basic idea is: 1. Render an RSM for every light in the scene. 2. Gather VPLs (virtual point lights) from every RSM and use those VPLs as secondary light sources:

  float3 Bounce2(float3 surfacePos, float3 normal, float3 worldSpace, float3 worldNormal, float3 Flux)
  {
      float distBias = 0.1f;
      float3 R = worldSpace - surfacePos;              // vector from the shaded point to the VPL
      float l2 = dot(R, R);                            // squared distance
      R *= rsqrt(l2);                                  // normalize R
      float lR = 1.0f / (distBias + l2 * 3.141592);    // biased 1/(pi*d^2) falloff
      float cosThetaI = max(0.1, dot(worldNormal, -R));
      float cosThetaJ = max(0.0, dot(normal, R));
      float Fij = cosThetaI * cosThetaJ * lR;          // geometric (form-factor-like) term
      return Flux * Fij;
  }

  The above code computes how much light reaches a world-space point from the (world-space) secondary sources. Clearly this approach has many drawbacks... slow, slow, slow (even using interpolation methods). This leans towards the SH approach, where my goal is to use the above Bounce function to collect incoming light from the secondary sources and store it in the "probes" using SH. So my question is: how do I store the incoming light from the VPLs into the sample probes using SH?
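The question above can be sketched as follows (a minimal C++ sketch under my own assumptions, with scalar flux instead of RGB for brevity): treat each VPL as a delta light as seen from the probe, compute its attenuated radiance with a transfer term like Bounce2's, and accumulate radiance times the SH basis evaluated in the VPL's direction.

```cpp
#include <array>
#include <cmath>

using SH4 = std::array<float, 4>;

struct VPL { float px, py, pz; float flux; }; // position + scalar flux (hypothetical struct)

SH4 shBasis(float x, float y, float z) {
    return { 0.282095f, -0.488603f * y, 0.488603f * z, -0.488603f * x };
}

// Project the incoming radiance from a set of VPLs onto a probe's SH.
// A delta light from direction w with radiance L projects to L * Y_i(w),
// so the probe's coefficients are just the sum over all VPLs.
SH4 projectVPLs(float probeX, float probeY, float probeZ,
                const VPL* vpls, int count) {
    SH4 c{};
    for (int i = 0; i < count; ++i) {
        float rx = vpls[i].px - probeX;
        float ry = vpls[i].py - probeY;
        float rz = vpls[i].pz - probeZ;
        float d2 = rx*rx + ry*ry + rz*rz;
        float inv = 1.0f / std::sqrt(d2);
        rx *= inv; ry *= inv; rz *= inv;              // direction probe -> VPL
        float radiance = vpls[i].flux / (0.1f + d2);  // biased falloff, as in Bounce2
        SH4 b = shBasis(rx, ry, rz);
        for (int k = 0; k < 4; ++k) c[k] += radiance * b[k];
    }
    return c;
}
```

The cosine terms of Bounce2 are deliberately left out here: the probe stores *incoming* radiance over the sphere, and the receiver's cosine is applied later, when the SH is evaluated against the pixel normal.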
  7. Really, mate... I never thought about this... but I do think cubemaps are not so real-time friendly... I hope I'm wrong. And also, most important: can they be used to store irradiance for indoor scenes, or are they better suited for outdoor scenes? Thx
  8. "Most video cards can render the scene into 6 128x128 textures pretty damn quick, given that most video cards can render complex scenes at resolutions of 1280x720 and up without struggling. You would then also perform the integration of the cubemap in a shader as well, instead of pulling the rendered textures down from the GPU and performing it on the CPU. This yields a table of colours in a very small texture (usually a 9-pixel-wide texture) which can then be used in your shaders at runtime. You don't need to recreate your SH probes every frame either, so the cost is quite small. The Stupid SH Tricks paper also has functions for projecting directional, point and spot lights directly onto the SH coefficient table; however, the cubemap integration is much more ideal as you can trivially incorporate area and sky lights." Thanks for the clarification... very helpful!
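The cubemap integration mentioned above boils down to a weighted sum: each texel contributes its radiance times the SH basis in its direction, times the solid angle it subtends. A minimal C++ sketch of just the solid-angle part (using the standard approximation dω ≈ du·dv / (u² + v² + 1)^(3/2) for a texel whose centre projects to (u, v) on the face plane; the six faces together must sum to 4π, which makes a handy sanity check):

```cpp
#include <cmath>

// Approximate solid angle of one cubemap texel whose centre projects to
// (u, v) in [-1, 1] on the face plane; duv = 2 / faceSize is the texel side.
float texelSolidAngle(float u, float v, float duv) {
    float r2 = u*u + v*v + 1.0f;
    return (duv * duv) / (r2 * std::sqrt(r2)); // du*dv / (u^2+v^2+1)^(3/2)
}

// Sum the solid angles over one face; by symmetry each face subtends 4*pi/6.
float faceSolidAngle(int faceSize) {
    float duv = 2.0f / faceSize;
    float sum = 0.0f;
    for (int ty = 0; ty < faceSize; ++ty)
        for (int tx = 0; tx < faceSize; ++tx) {
            float u = -1.0f + (tx + 0.5f) * duv; // texel centre on the face plane
            float v = -1.0f + (ty + 0.5f) * duv;
            sum += texelSolidAngle(u, v, duv);
        }
    return sum;
}
```

In the full integration you would multiply each texel's radiance by this weight and by the SH basis evaluated at the normalized texel direction (e.g. normalize(u, v, 1) for the +Z face), accumulating into the coefficient table.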
  9. Thanks for the quick reply, MJP. After another night reading specific papers (and your response...):

  Assumption #1: In my mind a "light probe" is a point X in space where I want to store the incoming radiance from a set of VPLs (virtual point lights). To store this incoming radiance, the rendering equation L(X, W) must somehow be solved; evaluating it tells us how much radiance arrives at point X from direction W. The rendering equation is complex, and one way to solve it is Monte Carlo integration.

  Assumption #2: Spherical harmonics can be used to store the incoming radiance at X and approximate the rendering equation.

  What I don't get :-( You wrote: "To compute SH coefficients, you have to be able to integrate your function. In your case the function is the radiance (incoming lighting) at a given point for a given direction. The common way to do this is to first render a cubemap at the given point, and then integrate the values in the cubemap texels against the SH basis functions. Rendering the cubemap is easy: just render in the 6 primary directions and apply lighting and shaders to your geometry the way that you normally would. For performing the integration, there's some pseudocode in Peter-Pike Sloan's 'Stupid SH Tricks' paper. Another way to do this would be to use a ray tracer, and shoot random rays out from the sample point and perform Monte Carlo integration."

  This is what I don't understand from your reply: is it necessary to render the cubemaps? I understand that SH will help me "compress" the cubemaps (because storing them for, say, a 32x32x32 grid of probes would require too much memory) and then look them up via SH... but wouldn't it be too expensive to create those cubemaps in real time? I'm using RSM to store the virtual point lights of the scene... is it possible to store incoming radiance into SH without using cubemaps?

  Thanks for a future reply.

  EDIT: "Another way to do this would be to use a ray tracer, and shoot random rays out from the sample point and perform Monte Carlo integration." This is close to my goal: I have a probe point in space and I have n VPLs... I want to shoot rays over the sphere (hemisphere) around the point and compute the form factor between the point and the n VPLs. But storing that information requires a lot of memory... so SH comes in to help...
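The Monte Carlo route mentioned above, in sketch form (a minimal C++ example; the radiance function `f` is a placeholder for whatever the rays would return): draw uniform directions on the sphere and average f(ω)·Y_i(ω), scaled by the sphere's area 4π.

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <functional>
#include <random>

using SH4 = std::array<float, 4>;

SH4 shBasis(float x, float y, float z) {
    return { 0.282095f, -0.488603f * y, 0.488603f * z, -0.488603f * x };
}

// Monte Carlo SH projection: c_i = (4*pi / N) * sum_k f(w_k) * Y_i(w_k),
// with the directions w_k drawn uniformly on the unit sphere.
SH4 projectMC(const std::function<float(float, float, float)>& f,
              int samples, unsigned seed = 1234) {
    const float pi = 3.14159265f;
    std::mt19937 rng(seed);
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    SH4 c{};
    for (int k = 0; k < samples; ++k) {
        float z   = 1.0f - 2.0f * uni(rng);           // uniform sphere sampling
        float phi = 2.0f * pi * uni(rng);
        float r   = std::sqrt(std::max(0.0f, 1.0f - z*z));
        float x = r * std::cos(phi), y = r * std::sin(phi);
        float fv = f(x, y, z);                        // "shoot a ray", get radiance
        SH4 b = shBasis(x, y, z);
        for (int i = 0; i < 4; ++i) c[i] += fv * b[i];
    }
    for (int i = 0; i < 4; ++i) c[i] *= 4.0f * pi / samples;
    return c;
}
```

For a constant function f = 1, the projection converges to c0 = 4π · Y_0^0 ≈ 3.545 with the band-1 coefficients near zero, which is an easy way to validate the sampler.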
  10. Hi all, I've just started reading the classical papers on SH (the Green and Peter-Pike Sloan papers, to be precise). Our renderer currently just uses reflective shadow maps (RSM) to evaluate VPLs and attempt a global illumination solution. The problem is that "naive" RSM just can't be used in real-time applications, so I've started looking at more advanced techniques like LPV and Radiance Hints. So far I understand that SH is the basis of those techniques... I'll try to explain my objectives:

  Goal 1: We have a 3D scene with n lights and a 64x64x64 grid over the whole scene. Every "cell" is a light probe, and I want to use SH to store the incoming illumination from the n lights.

  Goal 2: Use the 64x64x64 "probe" texture to interpolate GI for the scene in a final (deferred) render pass.

  Problem: I'm really struggling to understand how SH works and how it can help me store the incoming radiance from the scene lights. Can someone help me with that and point me in the right direction? Thanks in advance
  11. You're right, I just figured this out late in the evening :-) I'll probably try it in the next few days... thx
  12. So do you think they eventually copy all the slices into a volume texture (using a pseudo-memcpy)? And then use this "composed" volume texture in the shader? I guess they won't pass every slice as a shader variable... and instead use some sort of volume texture... am I wrong? EDIT: so that's their approach... a 32x32x32 texture on DX9? No problem! Just use a 1024x32 texture and use x % 32 to compute the correct Z...
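The DX9 trick above, written out as index math (a minimal C++ sketch; function names are mine): the 32 depth slices are laid side by side in a 1024x32 texture, so the modulo recovers the in-slice x coordinate and the integer division recovers the slice index z.

```cpp
// Map a 32x32x32 volume coordinate to a texel in a 1024x32 "unrolled"
// 2D texture: slice z occupies columns [z*32, z*32 + 31].
void volumeToAtlas(int x, int y, int z, int& px, int& py) {
    px = z * 32 + x;
    py = y;
}

// And back: px % 32 recovers the in-slice x, px / 32 the slice index z.
void atlasToVolume(int px, int py, int& x, int& y, int& z) {
    x = px % 32;
    z = px / 32;
    y = py;
}
```

In the shader the same math runs on normalized UVs, e.g. u = (z*32 + x + 0.5)/1024 and v = (y + 0.5)/32; bilinear filtering then needs care at slice borders so samples don't bleed between adjacent slices.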
  13. So every 2D texture can be seen as one "depth" slice of the volume, right?
  14. Thanks for the replies, guys... I probably won't implement this on DX9 and will go straight to DX11 for this feature (eventually). Too tricky and time-consuming, considering that most cards nowadays support DX10/DX11. Thanks
  15. Hi all, I've just completed basic support for global illumination in our proprietary DX9-based engine. RSM is used to compute the first bounce of indirect illumination, an interpolation scheme computes the indirect illumination for the whole scene, and the final scene lighting is then composed from the direct + indirect contributions. The next step for us is to use one of the following techniques to increase performance and quality: Light Propagation Volumes, or Radiance Hints. So here comes my question: is it possible, and how, to render to a volume texture in DX9? I mean, say I want to sample the RSM and store the SH in a 3D grid using a volume texture... is that possible in DX9, or should we go for DX10/DX11? The goal is to reuse that computed volume texture to interpolate the (indirect) light into the final scene... If yes, can someone point me in the right direction (link to a website)? Thanks in advance, Mauro