grunt123

Member
  • Content count

    120
  • Joined

  • Last visited

Community Reputation

280 Neutral

About grunt123

  • Rank
    Member
  1. Thanks for the reply. But that PDF is about a global fog effect, which is different from what the Frostbite volume tech is doing. Frostbite renders local/individual fog volumes, so it can handle any number of fog volumes that are authored independently by artists (and each fog volume can have its own noise texture, too). If this were a global fog effect (as used in AC4 in that PDF), I would agree that a low-res texture would suffice... but for localized volume rendering? I'm not so sure. The Frostbite tech does use a temporal reprojection trick, but does that help much? I implemented a simple version of the Frostbite approach just to render local volumes, and rendering into a low-res (froxel-sized) buffer produced really low-quality volumes. (A rough sketch of the reprojection I have in mind is at the end of this list.)
  2. http://www.frostbite.com/2015/08/physically-based-unified-volumetric-rendering-in-frostbite/   I'm specifically looking at the paper above. It uses the so-called 'froxel' technique to do volumetric rendering. I think I understand the overall idea, but I just can't figure out how to voxelize the participating media. They use a 'VBuffer'; is this a 3D texture with the same dimensions as the main froxel buffer? If so, it seems way too low resolution for representing the media density. I'd appreciate any help on this. (A sketch of how I currently picture the voxelization pass is after this list.)
  3. Hi all, I've read a couple of presentations on volume rendering (Frostbite volumetric rendering, AC4 volumetric fog). They use a 3D texture as intermediate storage that is later ray marched (so that you only ray march once for multiple volume entities). However, you need to 'voxelize' the volume entities into this intermediate 3D storage... can anyone explain how to achieve this? I think, since the target render target is a 3D texture, we need to render the volume entities into each depth slice of the 3D texture with an adjusted near/far plane... but I'm not sure. Thanks in advance. (A per-slice sketch of what I mean is at the end of this list.)
  4. Hi, I'm trying to implement directional shadows in a cube reflection map. I want to avoid having to re-render the reflected objects into a separate shadow map. Here is what I'm thinking:

    - render the cube reflection map along with a cube depth map
    - for each pixel in the reflection map (all cube faces), cast a ray toward the sun and use the cube depth map to check whether the ray is obstructed by an object
    - if it is obstructed, shade that pixel

    I'm not quite sure about step 2: is it possible to use the ray vector as the direction for sampling the cube depth texture? (A sketch of the ray march I have in mind is after this list.)
  5. Improving cascade shadow

    Thanks for the link. It seems to be a great source of material.
  6. Improving cascade shadow

    Yes, 3x3 PCF / soft shadows (blurring). But I'm still seeing 'chain-saw' edges, especially in the distant cascades... what kind of filtering can improve the situation? Maybe EVSM? (A sketch of the EVSM idea is at the end of this list.)
  7. Hi, I'm working on improving cascaded shadows for my project. The main problem is that for the distant cascades (I have 4 cascades) I see some kind of 'chain-saw' looking edge. I thought about implementing SDSM, but my project doesn't allow GPU-generated data to be used for the cascade bounds (we have two threads, one doing all the updates and the other doing all the rendering). Is there any other way of countering this kind of visual problem in cascaded shadows?
  8. ... without using CopyResource/CopySubresourceRegion? I need to copy depth/stencil from one depth target to another by blitting a screen-aligned rectangle (via a shader). I'm able to copy the depth info, but somehow the stencil info is lost (I'm guessing it's probably because the source shader resource view is the R24_X8 type...). Any ideas? (A sketch of the two-SRV setup I'm considering is after this list.)
  9. Hi, I'm trying to update some old shader code. When I try to use a uniform bool shader argument, I find that the shader function is not getting the correct value. So, for example:

        technique abc
        {
            pass p0
            {
                SetVertexShader( CompileShader( vs_4_0, vs(true) ) );
            }
        };

        vsout vs(vsin IN, uniform bool bCheck)
        {
            if (bCheck)
                ...
            else
                ...
        }

    It seems bCheck is always false, and I'm not sure what's preventing the bool value from being passed to the vs function. Any ideas? Thanks.
  10. Hi, I'm trying to do a manual viewport transformation in the vertex shader. First, I set an identity viewport by doing the following:

        D3D11_VIEWPORT vp;
        vp.Width = 2;
        vp.Height = -2;
        vp.TopLeftX = -1;
        vp.TopLeftY = 1;
        vp.MaxDepth = 1.0f;
        vp.MinDepth = 0.0f;
        g_grcCurrentContext->RSSetViewports( 1, &vp );

    Then, in the vertex shader, after doing the world/view/projection transformation, I do the following (given that I want to render into a 1024x1024 viewport with maxz = 1.0 and minz = 0.0):

        OUT.pos.x = OUT.pos.x * 512.0f + 512.0f * OUT.pos.w;
        OUT.pos.y = OUT.pos.y * -512.0f + 512.0f * OUT.pos.w;

    I can't think of anything else I need to do to get this working... I'd appreciate any help, thanks. (The general form of the transform I'm using is sketched after this list.)
  11. I'm in the process of converting an existing project into a stereo 3D game. I'm using a deferred renderer, and the directional (cascaded) shadow is done during the lighting phase. The problem is that the depth perception of the shadow looks wrong. I'm guessing this is caused by the fact that everything before the lighting phase (the G-buffer phase) gets stereoized while nothing in the lighting phase gets stereoized. I read their doc but am still not quite sure how to solve this problem. Any ideas? Thanks in advance.
  12. I'm not exactly sure how to build a PCF shadow filter with the Gather function properly. If I do a depth compare with each of the four components returned by the function (and then add them up and divide by the filter size), I get aliasing artifacts. I've been trying to do a bilinear interpolation on the four components returned by the function and then do the depth compare, but that wasn't very successful (still visual glitches). Is there something I'm missing? (I know you can use SampleCmp or GatherCmp (shader model 5.0 only)... but I want to know if there's any way to do a PCF filter with the plain Gather function.) (A sketch of the compare-then-interpolate approach is at the end of this list.)
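
Sketch for #1 (temporal reprojection of the froxel buffer): a minimal illustration of the reprojection trick only, not the Frostbite implementation. The resource names (VBufferHistory, LinearClamp), the constant-buffer layout, and the exponential slice distribution are all assumptions.

    // Blend the freshly injected froxel value with last frame's value, reprojected
    // through the previous frame's matrices. Meant to run inside the VBuffer fill pass.
    Texture3D<float4> VBufferHistory : register(t1);   // last frame's VBuffer
    SamplerState      LinearClamp    : register(s0);

    cbuffer ReprojectionConstants : register(b1)
    {
        float4x4 PrevViewProj;   // previous frame's view-projection
        float4x4 PrevView;       // previous frame's view matrix
        float    NearZ;
        float    FarZ;
        float    HistoryBlend;   // e.g. 0.9
    };

    float4 TemporalBlend(float4 current, float3 froxelWorldPos)
    {
        // Reproject the froxel's world-space position into last frame's clip space.
        float4 prevClip = mul(PrevViewProj, float4(froxelWorldPos, 1.0));
        float3 prevNdc  = prevClip.xyz / prevClip.w;
        float2 uv       = prevNdc.xy * float2(0.5, -0.5) + 0.5;

        // Slice coordinate from last frame's linear view depth, using the same
        // exponential slice distribution as the fill pass (an assumption here).
        float viewZ  = mul(PrevView, float4(froxelWorldPos, 1.0)).z;
        float sliceW = log(viewZ / NearZ) / log(FarZ / NearZ);

        // Reject history that falls outside the froxel grid.
        if (any(uv < 0.0) || any(uv > 1.0) || sliceW < 0.0 || sliceW > 1.0)
            return current;

        float4 history = VBufferHistory.SampleLevel(LinearClamp, float3(uv, sliceW), 0);
        return lerp(current, history, HistoryBlend);
    }

Even at low resolution this lets jittered samples accumulate over several frames, which is presumably a big part of why the low-res buffer looks acceptable in the paper; it does not raise the resolution itself.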
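
Sketch for #2 (filling the VBuffer): a minimal compute-shader version of how I'd picture injecting participating media into a froxel-sized 3D texture. The FogVolume struct (analytic spheres), the (scattering.rgb, extinction.a) layout, the exponential slice distribution, and all the names are my assumptions, not necessarily what the paper does.

    struct FogVolume                       // artist-authored sphere volume (assumption)
    {
        float3 center;
        float  radius;
        float3 scatteringColor;
        float  density;
    };

    StructuredBuffer<FogVolume> FogVolumes : register(t0);
    RWTexture3D<float4>         VBuffer    : register(u0);   // rgb = scattering, a = extinction

    cbuffer VolumeConstants : register(b0)
    {
        float4x4 InvViewProj;   // clip -> world
        float3   GridDim;       // froxel grid size, e.g. 160 x 90 x 64
        uint     NumVolumes;
        float    NearZ;
        float    FarZ;
    };

    [numthreads(4, 4, 4)]
    void InjectCS(uint3 id : SV_DispatchThreadID)
    {
        if (any(id >= (uint3) GridDim))
            return;

        // Froxel center in NDC xy.
        float2 uv  = (id.xy + 0.5) / GridDim.xy;
        float2 ndc = float2(uv.x * 2.0 - 1.0, 1.0 - uv.y * 2.0);

        // World-space points on the near and far plane for this froxel column.
        float4 nearH  = mul(InvViewProj, float4(ndc, 0.0, 1.0));
        float4 farH   = mul(InvViewProj, float4(ndc, 1.0, 1.0));
        float3 nearWS = nearH.xyz / nearH.w;
        float3 farWS  = farH.xyz / farH.w;

        // Exponential distribution of slices in view depth (assumption); view depth
        // is linear along the segment, so we can lerp between the two endpoints.
        float  viewZ    = NearZ * pow(FarZ / NearZ, (id.z + 0.5) / GridDim.z);
        float  t        = (viewZ - NearZ) / (FarZ - NearZ);
        float3 worldPos = lerp(nearWS, farWS, t);

        // Accumulate every fog volume covering this froxel.
        float3 scattering = 0.0;
        float  extinction = 0.0;
        for (uint i = 0; i < NumVolumes; ++i)
        {
            FogVolume v = FogVolumes[i];
            float falloff = saturate(1.0 - length(worldPos - v.center) / v.radius);
            scattering += v.scatteringColor * v.density * falloff;
            extinction += v.density * falloff;
        }

        VBuffer[id] = float4(scattering, extinction);
    }

In this scheme the VBuffer really is froxel-resolution; the density is evaluated analytically per froxel (optionally modulated by a noise texture sampled at worldPos), so nothing high-resolution is ever rasterized into it.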
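
Sketch for #3 (per-slice voxelization): the variant described in the post, where each depth slice of the 3D texture is bound as a render target and the volume entities are rasterized into it with a full-screen pass and additive blending. The CPU side would loop over slices and bind each one (or a geometry shader with SV_RenderTargetArrayIndex can cover all slices in one draw). SliceNdcZ, the box-shaped volume, and the names are assumptions.

    // Per-slice pass: for the slice currently bound as the render target, reconstruct
    // the world-space position of each texel at that slice's depth and evaluate the volume.
    cbuffer SliceConstants : register(b0)
    {
        float4x4 InvViewProj;      // clip -> world
        float    SliceNdcZ;        // this slice's depth in NDC [0,1], derived from its view depth
        float3   BoxMin;           // the fog volume's world-space AABB (assumption)
        float3   BoxMax;
        float3   ScatteringColor;
        float    Density;
    };

    float4 VoxelizeSlicePS(float4 svPos : SV_Position,
                           float2 uv    : TEXCOORD0) : SV_Target
    {
        // Texel -> NDC xy, combined with the slice's NDC z, then unprojected to world space.
        float2 ndcXY = float2(uv.x * 2.0 - 1.0, 1.0 - uv.y * 2.0);
        float4 posH  = mul(InvViewProj, float4(ndcXY, SliceNdcZ, 1.0));
        float3 posWS = posH.xyz / posH.w;

        // Inside/outside test against the volume entity (a box here; spheres, noise
        // textures, etc. plug in the same way).
        bool  inside = all(posWS >= BoxMin) && all(posWS <= BoxMax);
        float d      = inside ? Density : 0.0;

        // Additive blending accumulates multiple volume entities into the same slice.
        return float4(ScatteringColor * d, d);
    }

The compute-shader route in the sketch for #2 ends up being the simpler way to do the same thing, since it avoids binding slices one by one.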
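
Sketch for #4 (ray marching the cube depth map toward the sun): a minimal version of step 2, assuming the cube depth map stores linear distance from the probe/cube center (if it stores projected depth, linearize it first). ProbeCenter, SunDir, the step count, and the bias are assumptions.

    TextureCube<float> CubeDepth  : register(t0);   // distance from the probe center (assumption)
    SamplerState       PointClamp : register(s0);

    cbuffer CubeShadowConstants : register(b0)
    {
        float3 ProbeCenter;      // position the cube maps were rendered from
        float3 SunDir;           // normalized direction toward the sun
        float  MaxRayDistance;
        float  DepthBias;
    };

    // worldPos = position of the reflection-map texel, reconstructed from the cube depth map.
    float CubeRayShadow(float3 worldPos)
    {
        const int StepCount = 16;
        float stepLen = MaxRayDistance / StepCount;

        for (int i = 1; i <= StepCount; ++i)
        {
            float3 samplePos = worldPos + SunDir * (stepLen * i);

            // The direction from the probe center is the cube lookup vector;
            // no reflection vector is involved.
            float3 dir        = samplePos - ProbeCenter;
            float  storedDist = CubeDepth.SampleLevel(PointClamp, dir, 0);

            // If the sample point lies behind the surface the probe sees in that
            // direction, treat the path to the sun as blocked.
            if (length(dir) > storedDist + DepthBias)
                return 0.0;   // shadowed
        }
        return 1.0;           // lit
    }

This is the same heuristic as screen-space ray-marched shadows, so it inherits the same failure cases: anything the probe cannot see cannot occlude, and thin occluders can be stepped over.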
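
Sketch for #6 (EVSM): the core of exponential variance shadow maps, i.e. the exponential depth warp plus the Chebyshev upper bound, which is what lets you blur/mip/MSAA the shadow map freely and soften hard stair-step edges. The warp constants and the 0.2 light-bleed value are the usual rule-of-thumb assumptions, not tuned numbers.

    // --- When converting the shadow map: store four warped moments, then blur/mip them ---
    static const float PosC = 40.0;   // warp exponents; ~40 is typical for fp32 targets (assumption)
    static const float NegC = 8.0;

    float2 WarpDepth(float depth01)                 // depth01 in [0,1]
    {
        float d = depth01 * 2.0 - 1.0;              // remap to [-1,1]
        return float2(exp(PosC * d), -exp(-NegC * d));
    }

    float4 StoreMoments(float depth01)
    {
        float2 w = WarpDepth(depth01);
        return float4(w, w * w);                    // (pos, neg, pos^2, neg^2)
    }

    // --- At lookup: Chebyshev upper bound on each warped component ---
    float ChebyshevUpperBound(float2 moments, float t, float minVariance)
    {
        float mean     = moments.x;
        float variance = max(moments.y - mean * mean, minVariance);
        float d        = t - mean;
        float pMax     = variance / (variance + d * d);
        // Light-bleed reduction: cut off the low tail of pMax.
        pMax = saturate((pMax - 0.2) / (1.0 - 0.2));
        return (t <= mean) ? 1.0 : pMax;
    }

    float EvsmVisibility(float4 moments, float receiverDepth01)
    {
        float2 w = WarpDepth(receiverDepth01);
        // A derivative-scaled minimum variance would go here; small constants as placeholders.
        float pPos = ChebyshevUpperBound(float2(moments.x, moments.z), w.x, 0.0001);
        float pNeg = ChebyshevUpperBound(float2(moments.y, moments.w), w.y, 0.0001);
        return min(pPos, pNeg);
    }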
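
Sketch for #8 (reading stencil in the blit): with an R24G8_TYPELESS depth resource you need two shader resource views, one created as DXGI_FORMAT_R24_UNORM_X8_TYPELESS for depth and one as DXGI_FORMAT_X24_TYPELESS_G8_UINT for stencil; the stencil value shows up in the green channel of a uint texture. The catch is the write side: a pixel shader can export depth through SV_Depth, but stencil can only be exported through SV_StencilRef, which needs D3D11.3 / shader model 5.1 class support; without that, a shader blit can move depth but not stencil. Resource names here are assumptions.

    Texture2D<float> SrcDepth   : register(t0);   // SRV with DXGI_FORMAT_R24_UNORM_X8_TYPELESS
    Texture2D<uint2> SrcStencil : register(t1);   // SRV with DXGI_FORMAT_X24_TYPELESS_G8_UINT

    struct BlitOutput
    {
        float depth      : SV_Depth;
        // Only available where SV_StencilRef is supported (check the feature option first):
        uint  stencilRef : SV_StencilRef;
    };

    BlitOutput CopyDepthStencilPS(float4 svPos : SV_Position)
    {
        int3 coord = int3(svPos.xy, 0);

        BlitOutput o;
        o.depth      = SrcDepth.Load(coord);       // 24-bit normalized depth
        o.stencilRef = SrcStencil.Load(coord).g;   // stencil lives in the green component
        return o;
    }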
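
Sketch for #10 (the general pre-divide viewport transform): the two 512 lines are the special case of the standard D3D viewport mapping with Width = Height = 1024 and TopLeftX = TopLeftY = 0. Writing it in the general form makes it easier to check that nothing else is missing; the constant terms are multiplied by w because this runs before the hardware divide. The vp* parameters would come from a constant buffer (assumption).

    // x_rt = (ndc.x *  0.5 + 0.5) * vpWidth  + vpTopLeftX
    // y_rt = (ndc.y * -0.5 + 0.5) * vpHeight + vpTopLeftY
    // z_rt =  ndc.z * (vpMaxZ - vpMinZ) + vpMinZ
    float4 ApplyViewport(float4 clipPos,
                         float vpTopLeftX, float vpTopLeftY,
                         float vpWidth,    float vpHeight,
                         float vpMinZ,     float vpMaxZ)
    {
        float4 p = clipPos;
        p.x = clipPos.x * (0.5f * vpWidth)   + (vpTopLeftX + 0.5f * vpWidth)  * clipPos.w;
        p.y = clipPos.y * (-0.5f * vpHeight) + (vpTopLeftY + 0.5f * vpHeight) * clipPos.w;
        p.z = clipPos.z * (vpMaxZ - vpMinZ)  + vpMinZ * clipPos.w;
        return p;   // w is left untouched; the hardware divide by w still happens afterwards
    }

With vpWidth = vpHeight = 1024, vpTopLeftX = vpTopLeftY = 0, vpMinZ = 0 and vpMaxZ = 1 this reduces exactly to the two lines in the post, so the remaining suspects are the viewport that is actually bound and whether the runtime accepts a negative viewport Height (worth checking with the debug layer).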
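
Sketch for #12 (PCF with plain Gather): the trick is to do the depth compare on the four gathered texels first and then bilinearly weight the four comparison results, rather than interpolating the raw depths. I'm assuming the documented Gather component order (w = top-left, z = top-right, x = bottom-left, y = bottom-right of the 2x2 footprint; worth double-checking) and a 'smaller depth = closer to the light' convention; ShadowMapSize and the bias are assumptions.

    Texture2D<float> ShadowMap  : register(t0);
    SamplerState     PointClamp : register(s0);

    cbuffer ShadowConstants : register(b0)
    {
        float2 ShadowMapSize;   // e.g. (2048, 2048)
        float  DepthBias;
    };

    // shadowUV      : projected shadow-map uv of the receiver
    // receiverDepth : receiver depth in the same [0,1] space as the shadow map
    float PcfGather2x2(float2 shadowUV, float receiverDepth)
    {
        // Same footprint selection as bilinear filtering: work in texel-center space.
        float2 p = shadowUV * ShadowMapSize - 0.5;
        float2 f = frac(p);
        // Gather at a uv that lands strictly inside the intended 2x2 quad.
        float2 gatherUV = (floor(p) + 1.0) / ShadowMapSize;

        float4 depths = ShadowMap.GatherRed(PointClamp, gatherUV);

        // Depth compare FIRST: 1 = lit, 0 = shadowed (step(edge, x) == x >= edge).
        float4 lit = step(receiverDepth - DepthBias, depths);

        // THEN bilinearly weight the comparison results.
        float top    = lerp(lit.w, lit.z, f.x);   // top-left / top-right
        float bottom = lerp(lit.x, lit.y, f.x);   // bottom-left / bottom-right
        return lerp(top, bottom, f.y);
    }

For a wider kernel, repeat this over a small grid of offsets (or switch to GatherCmp/SampleCmp on SM 5.0 as mentioned in the post); averaging hard 0/1 compares without these bilinear weights is exactly what produces the stair-step aliasing.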