
grunt123

Member
  • Content count

    122
  • Joined

  • Last visited

Community Reputation

280 Neutral

About grunt123

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Programming
  1. Hmm, OK. That means if I want to convert a DX11 shader that uses w (view-space depth) to a Vulkan shader (gl_FragCoord), I need to take 1/gl_FragCoord.w to get the correct result?
  2. Hi guys, I read on the internet that the w component of gl_FragCoord (OpenGL), as used in a pixel shader, is actually 1/w. But is the same true for DX11 SV_Position? I did some pixel debugging, and it looks like it's actually just w, not 1/w. Can anyone confirm this? Thanks.
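For reference, a minimal sketch of the difference discussed in the two posts above, consistent with what the pixel debugging showed: in D3D11, SV_Position.w in the pixel shader is the clip-space w (view-space depth under a standard perspective projection), while gl_FragCoord.w is its reciprocal. The shader below is a hypothetical illustration, not from either post.

```
// Hypothetical D3D11 pixel shader; SV_Position.w arrives as the clip-space w,
// which equals view-space depth under a standard perspective projection.
float4 main(float4 pos : SV_Position) : SV_Target
{
    float viewDepth = pos.w;
    // When porting to GLSL/Vulkan, gl_FragCoord.w is 1/w_clip, so the
    // equivalent line there would be:  float viewDepth = 1.0 / gl_FragCoord.w;
    return float4(viewDepth.xxx, 1.0);
}
```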
  3. Thx for the reply. But that PDF is about a global fog effect, which is different from what the Frostbite volume tech is doing. Frostbite renders local/individual fog volumes, so it can handle multiple fog volumes authored differently by artists (and each fog volume can have its own noise texture, too). If this were a global fog effect (as used in AC4 in that PDF), I would agree that a low-res texture would suffice... but for localized volume rendering, I'm not so sure. Frostbite's volume tech uses a temporal reprojection trick, but does that help much? I implemented a simple version of the Frostbite tech just to render local volumes, and rendering into a low-res (froxel-sized) buffer produced really low-quality volumes.
  4. http://www.frostbite.com/2015/08/physically-based-unified-volumetric-rendering-in-frostbite/   I'm specifically looking at the paper above. It uses a so-called 'froxel' technique to do volumetric rendering. I think I understand the overall idea, but I just can't figure out how to voxelize the participating media. They use a 'V-buffer': is this a 3D texture with the same dimensions as the main froxel buffer? If so, it seems far too low-resolution for rendering dense media. I'd appreciate any help on this.
  5. Hi all, I've read a couple of presentations on volume rendering (Frostbite volume rendering, AC4 volumetric fog). They use a 3D texture as intermediate storage that is later used for ray marching (so you can ray-march once for multiple volume entities). However, you need to 'voxelize' the volume entities into this intermediate 3D storage... can anyone explain how to achieve this? I think, since the target RT is a 3D texture, we need to render the volume entities into each depth slice of the 3D texture with adjusted near/far planes... but I'm not sure. Thanks in advance.
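One common reading of the froxel approach asked about above is that media "voxelization" is not a per-slice rasterization pass at all, but a compute pass that runs over every froxel, evaluates each analytic volume at the froxel's view-space position, and additively accumulates scattering and extinction into the V-buffer. A hedged sketch, assuming a single artist-placed sphere volume; all resource names and the froxel-to-view mapping are made up, not Frostbite's actual code:

```
// Compute-pass sketch of media voxelization into a froxel V-buffer.
RWTexture3D<float4> gVBuffer;        // rgb = scattering, a = extinction

cbuffer FroxelCB
{
    float3 gGridDim;                 // froxel grid dimensions (e.g. 160x90x64)
    float  gNear;
    float  gFar;                     // depth range covered by the grid
    float3 gSphereCenterVS;          // one artist-placed sphere volume
    float  gSphereRadius;
    float3 gAlbedo;
    float  gExtinction;
};

[numthreads(8, 8, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    // Froxel center in normalized grid coords; a real implementation would
    // unproject through the camera frustum (and distribute z exponentially).
    float3 uvw   = (id + 0.5) / gGridDim;
    float3 posVS = float3(uvw.xy * 2.0 - 1.0, lerp(gNear, gFar, uvw.z));

    // Evaluate the participating-media density at this froxel.
    float dist    = distance(posVS, gSphereCenterVS);
    float density = saturate(1.0 - dist / gSphereRadius);

    // Additive accumulation lets many local volumes share one V-buffer.
    gVBuffer[id] += float4(gAlbedo * gExtinction * density,
                           gExtinction * density);
}
```

Under this reading, the low resolution is tolerable for the global result because the subsequent ray march integrates along z and temporal reprojection jitters the sample positions; sharp-edged local volumes do still suffer, as observed in the post above.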
  6. Hi, I'm trying to implement directional shadows in a cube reflection map. I want to avoid having to re-render the reflected objects into a separate shadow map. Here is what I'm thinking: render the cube reflection map along with a cube depth map; then, for each pixel in the reflection map (the whole cube texture), cast a ray toward the sun and use the cube depth map to check whether the ray is obstructed by an object; if it is obstructed, shade the pixel. I'm not quite sure about step 2: is it possible to use the ray vector as a lookup direction into the cube depth texture?
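A sketch of step 2 as described above, under assumptions: the cube depth map is taken to store linear distance from the probe center, and all names are hypothetical. Rather than a single lookup with the ray vector itself, the ray toward the sun is marched in world space, and each sample's direction *from the probe center* is what indexes the cube depth map:

```
// Hedged sketch: march from the reflected pixel's position toward the sun,
// testing each sample against the cube depth map. Occluded if the stored
// distance along that direction is closer than the sample's distance.
TextureCube<float>  gCubeDepth;     // linear distance from probe center
SamplerState        gPointSampler;

bool IsSunOccluded(float3 posWS, float3 probeCenterWS, float3 sunDirWS)
{
    const int   kSteps   = 16;      // assumed march parameters
    const float kStepLen = 0.5;
    for (int i = 1; i <= kSteps; ++i)
    {
        float3 sampleWS = posWS + sunDirWS * (kStepLen * i);
        float3 dir      = sampleWS - probeCenterWS;
        float  dist     = length(dir);
        float  stored   = gCubeDepth.SampleLevel(gPointSampler, normalize(dir), 0);
        if (stored < dist - 0.01)   // small bias against self-shadowing
            return true;            // geometry between probe center and sample
    }
    return false;
}
```

The limitation to keep in mind is that the cube depth map only records the first surface seen from the probe center, so occluders hidden behind other geometry (from the probe's point of view) will be missed.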
  7. Improving cascade shadow

    Thanks for the link. It seems to be a great source of material.
  8. Improving cascade shadow

    Yes, 3x3 PCF / soft shadow (blurring). But I'm still seeing 'chain-saw' edges, especially in the distant cascades... What type of filtering could improve the situation? Maybe EVSM?
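For reference, a minimal EVSM sketch, i.e. the filtering option floated above: warp the depth exponentially, store two moments in a filterable (blurrable, mippable) target, and apply Chebyshev's inequality at lookup. This shows only the positive warp; full EVSM also stores a negative warp to reduce light bleeding. The warp constant is a typical value, not from the post.

```
// Minimal positive-warp EVSM sketch. kExp is an assumed warp constant.
static const float kExp = 40.0;

// At shadow-map render time: store warped depth and its square, then
// blur/mip the result; both moments filter linearly.
float2 StoreMoments(float depth01)
{
    float w = exp(kExp * depth01);
    return float2(w, w * w);
}

// At lookup time: Chebyshev upper bound on the lit probability.
float ShadowFactor(float2 moments, float receiverDepth01, float minVariance)
{
    float w        = exp(kExp * receiverDepth01);
    float variance = max(moments.y - moments.x * moments.x, minVariance);
    float d        = w - moments.x;
    return (d <= 0.0) ? 1.0 : saturate(variance / (variance + d * d));
}
```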
  9. Hi, I'm working on improving cascaded shadows for my project. The main problem is that in the distant cascades (I have 4 cascades) I see a kind of 'chain-saw'-looking edge. I thought about implementing SDSM, but my project doesn't allow GPU-generated data to be used for the cascade bounds (we have two threads: one doing all the updates and the other doing all the rendering). Is there any other way of countering this kind of visual problem in cascaded shadows?
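One purely CPU-side option that fits the two-thread constraint above is stabilizing each cascade: fit the cascade to a bounding sphere of the frustum slice (so its size doesn't change as the camera rotates) and snap its light-space origin to texel increments, which removes crawling/sawtooth motion along shadow edges as the camera moves. A sketch of the snapping math, written in HLSL-style syntax for illustration although it would run on the update thread; names are assumptions:

```
// Snap a cascade's light-space origin to whole-texel increments so shadow
// edges don't crawl as the camera moves. CPU-side logic in HLSL-style math.
float2 SnapCascadeOrigin(float2 lightSpaceMin, float cascadeWidth,
                         float shadowMapSize)
{
    float texelSize = cascadeWidth / shadowMapSize; // world units per texel
    return floor(lightSpaceMin / texelSize) * texelSize;
}
```

This helps edge crawl and stability rather than raw resolution; for static jaggedness in far cascades, tighter per-cascade bounds or better filtering remain the usual levers.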
  10. ... without using CopyResource/CopySubresourceRegion? I need to copy depth/stencil from one depth target to another by blitting a screen-aligned rectangle (via shader). I'm able to copy the depth info, but somehow the stencil info is lost. (I'm guessing it's probably because the source shader resource view is an R24_X8 type...) Any ideas?
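The guess above is on the right track: an R24_UNORM_X8_TYPELESS SRV exposes only the depth bits. Reading the stencil requires a second SRV of format DXGI_FORMAT_X24_TYPELESS_G8_UINT over the same R24G8_TYPELESS resource, with the stencil value arriving in the green channel. A sketch of the shader side, resource names assumed:

```
// Assumed setup: the source depth target was created R24G8_TYPELESS, with
// two SRVs over it -- one for depth, one for stencil.
Texture2D<float> gDepthSRV;    // DXGI_FORMAT_R24_UNORM_X8_TYPELESS
Texture2D<uint2> gStencilSRV;  // DXGI_FORMAT_X24_TYPELESS_G8_UINT

float LoadDepth(uint2 pixel)   { return gDepthSRV.Load(int3(pixel, 0)); }
uint  LoadStencil(uint2 pixel) { return gStencilSRV.Load(int3(pixel, 0)).g; }
```

Note the second half of the problem: a D3D11 pixel shader cannot write stencil (SV_StencilRef only exists on later feature levels), so even with the value read correctly, a blit can't output it directly. The usual workaround is one fullscreen pass per stencil bit, discarding pixels where the bit is clear and writing via a REPLACE stencil op with the matching reference value.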
  11. Hi, I'm trying to update some old shader code. When I use uniform bool shader arguments, the shader function doesn't receive the correct value. For example:

    technique abc { pass 0 { SetVertexShader( CompileShader( vs_4_0, vs(true) ) ); } };

    vsout vs(vsin IN, uniform bool bCheck) { if (bCheck) ... else ... }

    It seems bCheck is always false, and I'm not sure what's preventing the bool value from being passed to the vs function. Any ideas? Thanks.
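For comparison, here is the usual shape of this pattern in the effects framework, where each CompileShader call bakes the literal uniform argument into its own shader variant at compile time; the struct contents and the branch body are placeholders, not from the post. Laying it out this way (two passes, one per literal) can help isolate whether the compilation or the pass selection is at fault:

```
// Compile-time uniform arguments: each CompileShader call produces a
// separate variant with the literal baked in. Placeholder structs/branch.
struct vsin  { float4 pos : POSITION;    };
struct vsout { float4 pos : SV_Position; };

vsout vs(vsin IN, uniform bool bCheck)
{
    vsout OUT;
    OUT.pos = bCheck ? IN.pos : -IN.pos;   // placeholder branch
    return OUT;
}

technique10 abc
{
    pass P0 { SetVertexShader( CompileShader( vs_4_0, vs(true) ) );  }
    pass P1 { SetVertexShader( CompileShader( vs_4_0, vs(false) ) ); }
};
```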
  12. Hi, I'm trying to do a manual viewport transformation in the vertex shader. First, I set an identity viewport by doing the following:

    D3D11_VIEWPORT vp; vp.Width = 2; vp.Height = -2; vp.TopLeftX = -1; vp.TopLeftY = 1; vp.MaxDepth = 1.0f; vp.MinDepth = 0.0f; g_grcCurrentContext->RSSetViewports( 1, &vp );

    Then, in the vertex shader, after the world/view/projection transformation, I do the following (given that I want to render into a 1024x1024 viewport with maxZ 1.0 and minZ 0.0):

    OUT.pos.x = OUT.pos.x * 512.0f + 512.0f * OUT.pos.w; OUT.pos.y = OUT.pos.y * -512.0f + 512.0f * OUT.pos.w;

    I can't think of anything else I need to do to get this working... I'd appreciate any help, thx.
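For reference, the general per-axis form of that math for a width x height target, applied before the perspective divide (the constant offsets are multiplied by w so they survive the divide); the shader lines above match this for width = height = 1024. One thing worth double-checking separately is the viewport setup itself: D3D11 validates viewport Width/Height as non-negative, so vp.Height = -2 may be rejected (the debug layer should flag it), leaving a different viewport active than intended.

```
// General manual viewport transform for a width x height render target,
// applied to the clip-space position before the perspective divide.
float4 ApplyViewport(float4 clipPos, float width, float height)
{
    float4 o = clipPos;
    o.x = clipPos.x * ( width  * 0.5f) + (width  * 0.5f) * clipPos.w;
    o.y = clipPos.y * (-height * 0.5f) + (height * 0.5f) * clipPos.w;
    // z is already in [0,1] after a D3D projection; leave z and w untouched.
    return o;
}
```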