zoret

About zoret

  • Rank: Member
  1. Wouldn't an orthographic projection have been simpler?
  2. a question about VSM.

    Quote:Original post by AndyTX
    This is the same as normal shadow maps except that you need to consider receivers as well as occluders.

    Why do we need to include receivers in the shadow-map frustum for VSM only?
  3. Quote:Original post by AndyTX
        Quote:Original post by zoret
        What exactly is the DX9 hw-accel that allows using shadows from all the splits in only one pass?
    Compute the split index in the fragment shader and use dynamic branching to project and look up from the correct shadow map. In D3D10 you can use a texture array.
        Quote:Original post by zoret
        My idea was (not yet tried) to use only one big shadow map for all the splits (like the technique used by virtual depth cube shadow textures for omni-directional lights).
    That will work too, but as you note it won't really help much except that you don't need dynamic branching (you can index a matrix array to do the projection). Still, this type of branching is extremely coherent and thus should be quite efficient even on older cards (unless you're targeting SM2.0 hardware).

    Yes, but dynamic branching is quite expensive on "some" platforms, and it could be useful to have only one shadow map (because on "certain" platforms only the first render target is optimised by the hardware). Still, I'm happy to see that the idea is correct: the performance gain could be significant, since I won't have to render the full scene for each split with different hardware clipping planes.

    On a different note, AndyTX, it seems you have succeeded in merging PSSM with variance shadow maps. I tried this last year without success: I had big blur-continuity problems at the split edges, which was clearly not satisfying for us. How did you resolve this problem?
  4. Hi guys, I'm a bit surprised that nobody has asked for more information about this point!

        Quote:Original post by FanZhang
        2) How to alleviate the performance drop caused by multiple rendering passes? For this issue, in our Gems 3 paper, we thoroughly discussed this issue. For the split scheme PSSM(m) (the frustum is split into m parts), the number of rendering passes for 1) without hardware-acceleration, 2) with DX9-level HW-accel. and 3) with DX10-level HW-accel. are 2m, m+1 and 1+1 respectively. In particular, in comparison with the standard shadow mapping approach, we reduce ALL extra rendering passes in our DX10 implementation. For more details, see the upcoming book GPU Gems 3 and the accompanying source codes.

    In my case, I implemented PSSM for directional-light shadows last year (in our game engine), and I'm clearly happy with this technique except for this multiple-render-passes problem.

    What exactly is the DX9 hw-accel that allows using shadows from all the splits in only one pass? I have to use the hardware clipping planes for each split, so if a mesh (for example the ground) intersects all the splits, it gets rendered once into each split's shadow map (with hardware clipping planes enabled), so of course the performance isn't perfect. What exactly is this DX9 hardware optimisation?

    My idea (not yet tried) is to use only one big shadow map for all the splits (like the technique used by virtual depth cube shadow textures for omni-directional lights). Imagine that with 4 splits you pack four 1024x1024 shadow maps into one big 2048x2048 shadow map. When you render the scene into each shadow-map split, you just use a custom SetViewport so that only the chunk allocated to that split is updated; then, when you render the scene with the big shadow map bound, you "just" have to choose the correct shadow projection matrix in order to look up the right shadow-map chunk. The difficulty seems to be finding the projection matrix corresponding to the pixel depth without dynamic branching, but that seems possible. Maybe this resembles your proposal? (A host-side sketch of this atlas idea is included below.)
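    A minimal host-side sketch of this atlas idea, assuming D3D9; RenderSceneDepth, the render-target surface and the cropped split matrices are hypothetical placeholders, not part of the original posts:

        #include <d3d9.h>
        #include <d3dx9.h>

        // Render four 1024x1024 splits into quadrants of one 2048x2048 shadow map,
        // switching only the viewport between splits.
        void RenderSplitAtlas(IDirect3DDevice9* device,
                              IDirect3DSurface9* bigShadowMapSurface,
                              const D3DXMATRIX splitViewProj[4],
                              void (*RenderSceneDepth)(IDirect3DDevice9*, const D3DXMATRIX&))
        {
            const DWORD kSplitRes = 1024;

            device->SetRenderTarget(0, bigShadowMapSurface);

            for (DWORD i = 0; i < 4; ++i)
            {
                D3DVIEWPORT9 vp;
                vp.X      = (i % 2) * kSplitRes;   // atlas column
                vp.Y      = (i / 2) * kSplitRes;   // atlas row
                vp.Width  = kSplitRes;
                vp.Height = kSplitRes;
                vp.MinZ   = 0.0f;
                vp.MaxZ   = 1.0f;
                device->SetViewport(&vp);

                // D3D9 Clear is restricted to the current viewport, so this only
                // clears this split's quadrant of the atlas (and the Z-buffer there).
                device->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 0xFFFFFFFF, 1.0f, 0);

                // splitViewProj[i] is the light view-projection cropped to split i.
                RenderSceneDepth(device, splitViewProj[i]);
            }
        }

    At lookup time the pixel shader then only needs the matching matrix (or offset/scale) per split, indexed from a constant array as described above.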
  5. It seems to be an article from ShaderX5. What is the basic idea behind this new concept? Thanks.
  6. Quote:Original post by B_old
        Quote:Original post by zoret
        there's a big error in your pixel shader because you destroy the y of your shadowIndirectCoord
    Heh, that was just a typo while I posted... I will try the thing with the LightProjParams. Does this mean that I cannot render the shadowmap with different near/far values for each (virtual) cube face? *** Source Snippet Removed ***

    Indeed, you can't have different projection values for each face.

        Quote:Original post by B_old
        I just wanted to mention again that if I use texCUBE() with a model-space vector, rotation will not be reflected properly by the shadowmap. Any idea?

    Why not? It should work in model space just as well as in world space or light space; you just have to be careful to use the same space consistently.
  7. There's a big error in your pixel shader: you destroy the y of your shadowIndirectCoord.

        float4 shadowIndirectCoord = texCUBE(indirectionCubeMap, input.lightVec);
        shadowIndirectCoord.y = 1.0f;   // I'm pretty sure this is the problem!
        shadowIndirectCoord.z = 1.0f;
        float shadow = tex2Dproj(shadowMap, shadowIndirectCoord).r;

    If your depth map appears projected onto everything, it's just because you don't recompute a correct Z for the pixel. Depending on the rendering API you use, the pixel shader should look something like this:

        float3 p = pixelPos - lightPos;                 // in the same space (world or local, as you prefer)
        float3 pAbs = abs(p);
        float  MA = max(max(pAbs.x, pAbs.y), pAbs.z);   // distance along the cube's major axis

        float2 redirectedUV = texCUBE(indirectionMap, p).xy;

        float4 shadowUV;
        shadowUV.x = redirectedUV.x;
        shadowUV.y = redirectedUV.y;
        // recompute a correct projected Z to compare against the depth map
        shadowUV.z = (-1.0f / MA) * LightProjParams.x + LightProjParams.y;
        shadowUV.w = 1.0f;

        float shadow = tex2Dproj(shadowMap, shadowUV).r;

    with these projection parameters (this simply rebuilds the same z/w the depth map stored, from the distance MA along the major axis):

        LightProj.SetPerspectiveFov(FOV, AspectRatio(1), Near, Far)
        LightProjParams.y = Far / (Far - Near)
        LightProjParams.x = Near * LightProjParams.y
  8. OK, it works. I just had to take care of the Y inversion between OpenGL and Direct3D9 when rendering into a render target (handled in the indirection cube map).
  9. Arggh, I had added a glScissor after the glViewport... I just forgot to enable the scissor test! ;) Now I'm going to try it in my engine. Thank you very much. (A minimal sketch of the fix follows below.)
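    A minimal sketch of that fix, reusing the fb/texWidth/texHeight names from the SDK sample quoted in the post further down; the point is simply that with an FBO bound, glClear still clears the whole attachment unless GL_SCISSOR_TEST is enabled to restrict it:

        #include <GL/glew.h>   // or the SDK's own extension loader (assumed here)

        void ClearFboQuadrants(GLuint fb, int texWidth, int texHeight)
        {
            glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
            glEnable(GL_SCISSOR_TEST);                           // the missing piece

            // first quadrant
            glViewport(0, 0, texWidth / 2, texHeight / 2);
            glScissor (0, 0, texWidth / 2, texHeight / 2);
            glClearColor(1.0f, 0.0f, 0.0f, 0.0f);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // now limited to this quadrant
            // ... draw the first view here ...

            // second quadrant
            glViewport(texWidth / 2, texHeight / 2, texWidth / 2, texHeight / 2);
            glScissor (texWidth / 2, texHeight / 2, texWidth / 2, texHeight / 2);
            glClearColor(0.0f, 0.0f, 1.0f, 0.0f);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            // ... draw the second view here ...

            glDisable(GL_SCISSOR_TEST);
            glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
        }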
  10. Hi, I hope the topic title is clear enough! I already have a Direct3D9 implementation of this really nice shadow technique for omni-directional lights, and I need an OpenGL implementation. For now I've successfully implemented the cube-map indirection process; the next step is just to be able to render into a specific viewport of a render target, and this is where I'm stuck! :( I can't get glViewport to work correctly when I'm rendering into an FBO. Any ideas? Thanks.
  11. Hi, I'm desperately trying to render into a render target (FBO) with two different viewports. Unfortunately, when I clear the second viewport it clears everything! So I decided to modify the simple_framebuffer_object sample from the NVIDIA SDK (9.5), but I get exactly the same result! Is there really a problem? You can replace the display function in the SDK sample with this one and test it if you want:

        void display()
        {
            // render to the render target texture first (lower-left quadrant)
            glBindTexture(texTarget, 0);
            glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
            {
                glPushAttrib(GL_VIEWPORT_BIT);
                glViewport(0, 0, texWidth/2, texHeight/2);
                glScissor(0, 0, texWidth/2, texHeight/2);   // note: GL_SCISSOR_TEST is never enabled

                glMatrixMode(GL_MODELVIEW);
                glPushMatrix();
                glLoadIdentity();
                glTranslatef(0.0, 0.0, -1.5);
                glRotatef(teapot_rot, 0.0, 1.0, 0.0);

                glClearColor(18, 0, 0, 0);
                glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

                glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, renderProgram);
                glEnable(GL_FRAGMENT_PROGRAM_ARB);

                glColor3f(0.0, 1.0, 0.0);
                glutWireTeapot(0.5f);

                glPopMatrix();
                glPopAttrib();
            }
            glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

            // render to the render target texture again (upper-right quadrant)
            glBindTexture(texTarget, 0);
            glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
            {
                glPushAttrib(GL_VIEWPORT_BIT);
                glViewport(texWidth/2, texHeight/2, texWidth/2, texHeight/2);
                glScissor(texWidth/2, texHeight/2, texWidth/2, texHeight/2);

                glClearColor(0, 0, 18, 0);
                glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

                glMatrixMode(GL_MODELVIEW);
                glPushMatrix();
                glLoadIdentity();
                glTranslatef(0.0, 0.0, -1.5);
                glRotatef(teapot_rot, 0.0, 1.0, 0.0);

                glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, renderProgram);
                glEnable(GL_FRAGMENT_PROGRAM_ARB);

                glColor3f(0.0, 1.0, 0.0);
                glutWireTeapot(0.5f);

                glPopMatrix();
                glPopAttrib();
            }
            glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

            // now render to the screen using the texture...
            glClearColor(0.2, 0.2, 0.2, 0.0);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            glMatrixMode(GL_MODELVIEW);
            glPushMatrix();
            glLoadIdentity();
            object.apply_transform();

            // draw textured quad
            glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, textureProgram);
            glEnable(GL_FRAGMENT_PROGRAM_ARB);
            glBindTexture(texTarget, tex);
            glEnable(texTarget);

            glColor3f(1.0, 1.0, 1.0);
            glBegin(GL_QUADS);
            {
                glTexCoord2f(0, 0);                 glVertex2f(-1, -1);
                glTexCoord2f(maxCoordS, 0);         glVertex2f( 1, -1);
                glTexCoord2f(maxCoordS, maxCoordT); glVertex2f( 1,  1);
                glTexCoord2f(0, maxCoordT);         glVertex2f(-1,  1);
            }
            glEnd();

            glPopMatrix();
            glDisable(GL_FRAGMENT_PROGRAM_ARB);

            glutSwapBuffers();
        }

    Any ideas? Thank you.
  12. Yep, in fact it also worked with the GL_HILO16_NV format. Thanks anyway.
  13. Hi, I need to create a two-channel (16-bit) cube texture, equivalent to the D3D format D3DFMT_G16R16. Any ideas how to do that? Do I need to use a specific extension? FYI, it's for my OpenGL implementation of virtual shadow depth cube textures (which works on D3D9 only for now). Thank you for any help. (An allocation sketch is included below.)
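    A hedged allocation sketch for the GL_HILO16_NV route mentioned in the reply above (NV_texture_shader); 'size' and 'faceData' are placeholders, and on newer hardware GL_RG16 from ARB_texture_rg would be the more portable choice:

        #include <GL/glew.h>   // assumed extension loader; GL_HILO*_NV come from NV_texture_shader

        // Allocate a two-channel 16-bit cube map, roughly the GL counterpart of D3DFMT_G16R16.
        GLuint CreateTwoChannel16CubeMap(int size, const unsigned short* faceData[6])
        {
            GLuint tex = 0;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_CUBE_MAP, tex);
            glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

            for (int face = 0; face < 6; ++face)
            {
                glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0,
                             GL_HILO16_NV,                    // 2 x 16-bit channels (NVIDIA-specific)
                             size, size, 0,
                             GL_HILO_NV, GL_UNSIGNED_SHORT,   // matching external format/type
                             faceData ? faceData[face] : NULL);
            }
            return tex;
        }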
  14. Hi B_old. First, do you already have a point-shadow implementation using a classic cube render target (a float render target, of course; just store the distance from each vertex to the light divided by the light radius)? This concept is very simple to implement once you already have all the basics of point shadows: you just add this famous indirection cube map so that you can use a classic depth map (and therefore get very nice soft shadows with PCF...).

    The indirection cube map can have exactly the size of your cube render target. For example, if your cube render target is 512x512 per face (so 512x512x6), you can use an indirection cube map of 512x512 per face. As you need to store the UV used to sample the depth map, you have to use a format with two float channels (16 bits is enough). If your card supports bilinear interpolation on float textures, you can use a much smaller texture (16x16 per face).

    Here is the fragment shader for the final lighting phase (shadow-map lookup):

        float3 vModelSpaceVertex2LightVector = vModelSpaceVertex - vModelSpaceLightPos;
        float2 vRedirectedUV = texCUBE(indirectionCubeMap, vModelSpaceVertex2LightVector).xy;
        float  KShadow = tex2D(virtualshadowDepthMap, vRedirectedUV);   // depth comparison omitted here; see the tex2Dproj version earlier on this page

    With a classic implementation you would have something like this:

        float3 vModelSpaceVertex2LightVector = vModelSpaceVertex - vModelSpaceLightPos;
        float  DistToLightInShadowMap = texCUBE(shadowDistanceToLightMap, vModelSpaceVertex2LightVector);
        float  KShadow;
        if (DistToLightInShadowMap < currentDistanceToLight)
            KShadow = 1.0f;
        else
            KShadow = 0.0f;

    Of course there is no bilinear filtering in this last implementation (the shadow is 0 or 1!).

    The matrix to use is the view matrix computed and used to render the positive-Z face; in fact it depends on which space you choose. Again, you should first try to get a classic implementation working: an R32F cube render target, one classic pass per face with the correct view matrix, storing the distance from the vertex to the light. Always reuse the same Z-buffer (and don't forget to clear it at the beginning of each face). Then, for your lighting/shadowing pass, use the positive-Z view matrix and compare each distance with the values stored in the cube render target. After that you can try to optimise your implementation. Hope it helps. (A host-side sketch of this classic cube pass follows below.)
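    A hedged host-side sketch of that classic cube pass, assuming D3D9; RenderSceneDistance and the resources passed in are hypothetical placeholders, and the face look/up vectors follow the usual D3D cube-map face order:

        #include <d3d9.h>
        #include <d3dx9.h>

        // Render distance-to-light / radius into each face of an R32F cube render
        // target, reusing a single depth-stencil surface for all six faces.
        void RenderPointShadowCube(IDirect3DDevice9* device,
                                   IDirect3DCubeTexture9* cubeRT,
                                   IDirect3DSurface9* sharedDepthStencil,
                                   const D3DXVECTOR3& lightPos, float lightRadius,
                                   void (*RenderSceneDistance)(IDirect3DDevice9*,
                                                               const D3DXMATRIX& viewProj,
                                                               const D3DXVECTOR3& lightPos,
                                                               float lightRadius))
        {
            static const D3DXVECTOR3 lookDir[6] = {
                D3DXVECTOR3( 1, 0, 0), D3DXVECTOR3(-1, 0, 0),
                D3DXVECTOR3( 0, 1, 0), D3DXVECTOR3( 0,-1, 0),
                D3DXVECTOR3( 0, 0, 1), D3DXVECTOR3( 0, 0,-1) };
            static const D3DXVECTOR3 upDir[6] = {
                D3DXVECTOR3( 0, 1, 0), D3DXVECTOR3( 0, 1, 0),
                D3DXVECTOR3( 0, 0,-1), D3DXVECTOR3( 0, 0, 1),
                D3DXVECTOR3( 0, 1, 0), D3DXVECTOR3( 0, 1, 0) };

            D3DXMATRIX proj;
            D3DXMatrixPerspectiveFovLH(&proj, D3DX_PI * 0.5f, 1.0f, 0.1f, lightRadius);

            device->SetDepthStencilSurface(sharedDepthStencil);

            for (UINT face = 0; face < 6; ++face)
            {
                IDirect3DSurface9* faceSurface = NULL;
                cubeRT->GetCubeMapSurface((D3DCUBEMAP_FACES)face, 0, &faceSurface);
                device->SetRenderTarget(0, faceSurface);
                // clear the colour target and the shared Z-buffer before each face
                device->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 0xFFFFFFFF, 1.0f, 0);

                D3DXMATRIX view;
                D3DXVECTOR3 target = lightPos + lookDir[face];
                D3DXMatrixLookAtLH(&view, &lightPos, &target, &upDir[face]);

                RenderSceneDistance(device, view * proj, lightPos, lightRadius);

                faceSurface->Release();
            }
        }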
  15. I have a working implementation of this technique on Direct3D9; if you want some help, don't hesitate to ask me.

    The indirection cube map is used to transform a cube-map-space UV (a 3D vector) into a depth-map-space UV (a 2D vector). For example, your depth map could be laid out like this:

        |--------|
        |+X|+Y|+Z|
        |--------|
        |-X|-Y|-Z|
        |--------|

    and the +X face of your indirection cube map will simply give you the UV coordinates corresponding to the +X region of the depth map; repeat that for each face.

    For each view of your point light, instead of rendering the scene into a cube render target, you render it into the depth map (size of the depth map = width*height*6), using a viewport and scale so that you write into the right face of the depth map.

    The indirection cube map doesn't necessarily need to have the same size as your depth map; you just have to take care of the face borders (render with a FOV greater than 90 degrees and account for that when you create the indirection cube map).

    Is it clear enough? Point-light shadows will be much simpler with D3D10! :) (A sketch of how such an indirection cube map could be filled follows below.)
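    A hedged sketch of how such an indirection cube map could be filled, assuming the 3x2 layout shown above, the same per-face resolution as the depth map, and a two-channel 16-bit texel; it ignores the >90-degree FOV border handling and the API-specific Y flip mentioned earlier on this page, and all names are placeholders:

        #include <vector>

        struct IndirectionTexel { unsigned short u, v; };

        // Fill one face of the indirection cube map: each texel stores the UV of the
        // matching texel of that face inside the big depth map.
        void FillIndirectionFace(std::vector<IndirectionTexel>& dst, int faceSize, int faceIndex)
        {
            // Atlas cell (column, row) per cube face in +X,-X,+Y,-Y,+Z,-Z order,
            // matching the layout above (+X +Y +Z on the top row, -X -Y -Z below).
            static const int cell[6][2] = { {0,0}, {0,1}, {1,0}, {1,1}, {2,0}, {2,1} };
            const float cellW = 1.0f / 3.0f;
            const float cellH = 1.0f / 2.0f;
            const float u0 = cell[faceIndex][0] * cellW;
            const float v0 = cell[faceIndex][1] * cellH;

            dst.resize(faceSize * faceSize);
            for (int y = 0; y < faceSize; ++y)
                for (int x = 0; x < faceSize; ++x)
                {
                    const float u = u0 + cellW * ((x + 0.5f) / faceSize);
                    const float v = v0 + cellH * ((y + 0.5f) / faceSize);
                    dst[y * faceSize + x].u = (unsigned short)(u * 65535.0f + 0.5f);
                    dst[y * faceSize + x].v = (unsigned short)(v * 65535.0f + 0.5f);
                }
        }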