ingramb

  1. Removing self-shadow acne

This is my favorite reference regarding shadow maps: https://mynameismjp.wordpress.com/2013/09/10/shadow-maps/

     To improve self-shadowing, look into the various biasing techniques, such as receiver-plane bias (a sketch follows below). Variance-based filtering methods (VSM/EVSM/MSM) also tend to behave much better with shadow acne; they are more expensive and have artifacts of their own, but they can be a good choice.
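A minimal sketch of receiver-plane bias, assuming the projected shadow position (UV in xy, depth in z) is available in the pixel shader; the names here are illustrative, not from any particular codebase:

```hlsl
// Derive how the receiver's depth changes per unit of shadow-map UV,
// from the screen-space derivatives of the projected shadow position.
float2 ComputeReceiverPlaneDepthBias(float3 duvz_dx, float3 duvz_dy)
{
    float2 biasUV;
    biasUV.x = duvz_dy.y * duvz_dx.z - duvz_dx.y * duvz_dy.z;
    biasUV.y = duvz_dx.x * duvz_dy.z - duvz_dy.x * duvz_dx.z;
    biasUV *= 1.0f / (duvz_dx.x * duvz_dy.y - duvz_dx.y * duvz_dy.x);
    return biasUV;
}

// Usage in a PCF loop: bias the comparison depth per tap along the plane
// instead of using a single constant bias.
//   float2 planeBias = ComputeReceiverPlaneDepthBias(ddx(shadowPos), ddy(shadowPos));
//   float compareZ = shadowPos.z + dot(texelOffsetUV, planeBias);
```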
  2. Most forward+ implementations still require a z-prepass, at the very least to prevent overdraw. You can use this pass to compute SSAO (or whatever else you need) and then read the result in the forward lighting pass.

     Depending on what you need, it's also possible to write out a slim gbuffer (perhaps including vertex normals) during the z-prepass to enable more types of screen-space effects; see the sketch below.
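As a rough illustration of the slim-gbuffer idea (struct and function names are assumptions, not from a specific engine), the prepass pixel shader only packs a normal while the depth buffer fills as usual:

```hlsl
struct PrepassVSOut
{
    float4 position : SV_Position;
    float3 normalWS : NORMAL;   // interpolated vertex normal
};

// Depth is written by the ordinary depth test; this shader's only job is
// to emit data that screen-space passes (SSAO, etc.) can read later.
float4 PrepassPS(PrepassVSOut input) : SV_Target0
{
    // Pack the world-space normal from [-1, 1] into [0, 1] for an
    // R8G8B8A8 target.
    return float4(normalize(input.normalWS) * 0.5f + 0.5f, 0.0f);
}
```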
  3. That's very helpful, thank you.

     What kind of values did you end up needing for leak reduction? Is that a value you think makes sense to expose to artists per light, or did you get away with a hard-coded global value?
  4. Good general overview of many rendering techniques:
     http://www.amazon.com/Real-Time-Rendering-Third-Edition-Akenine-Moller/dp/1568814240

     Lots of good presentations about more advanced techniques:
     http://advances.realtimerendering.com/

     Free online books with some interesting techniques:
     http://http.developer.nvidia.com/GPUGems/gpugems_pref02.html
     http://http.developer.nvidia.com/GPUGems2/gpugems2_frontmatter.html
     https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_pref01.html

     Similar to the above, but more recent and not free:
     http://www.amazon.com/GPU-Pro-Advanced-Rendering-Techniques/dp/1568814720
     (I think there are 5 or 6 GPU Pro books now, with more on the way)
  5. I'm looking for an explanation and/or example of why using 4 components vs 2 components is important for EVSM (see the moment-generation sketch below). In my limited tests, 2-component 32-bit gives much nicer results overall than 4-component 16-bit. Dropping from 4 to 2 components does cause a bit more light leaking, but it doesn't seem bad in the test scenes I'm looking at, and it's still hugely improved over straight VSM. Going down to 16-bit (even with 4 components), however, causes pretty noticeable aliasing, and the required bias seems to result in peter-panning as well.

     I specifically remember reading a presentation from Ready at Dawn, where they had to fall back to 16-bit for memory reasons on The Order. Is there a good reason they chose that trade-off instead of dropping down to 2 components?
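For reference, a sketch of EVSM moment generation (the exponents are typical values, not authoritative). The 4-component variant stores moments of both a positive and a negative exponential warp; the 2-component variant keeps only the positive warp, which is exactly where the extra leak suppression is lost:

```hlsl
static const float PositiveExponent = 40.0f; // typical for 32-bit storage
static const float NegativeExponent = 5.0f;

// depth is the shadow-map depth remapped to [0, 1].
float4 ComputeMomentsEVSM4(float depth)
{
    float pos =  exp( PositiveExponent * depth);
    float neg = -exp(-NegativeExponent * depth);
    // (warped depth, warped depth squared) for each warp.
    return float4(pos, pos * pos, neg, neg * neg);
}

float2 ComputeMomentsEVSM2(float depth)
{
    float pos = exp(PositiveExponent * depth);
    return float2(pos, pos * pos);
}
```

One likely factor in the 16-bit trade-off: with fp16 storage the positive exponent has to be clamped to roughly ln(65504) / 2 ≈ 5.54 so the squared moment doesn't overflow, which weakens the warp substantially and drives up the required bias.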
  6. Awesome, thanks.

     Follow-up question: what about applying shadows in a deferred pass? In that case I'd be reconstructing world position from the depth buffer, then using that position to look up the shadow map. This mostly works, except the derivatives will be discontinuous at object edges in the depth buffer.

     Googling around, the only real solutions I could find were to clamp the maximum mip level to minimize the problem, or to remove mips altogether (a sketch of the clamp idea follows below). Is there a better technique for handling this case?
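One way to express that mip clamp entirely in the shader (function and parameter names invented for illustration) is to limit the gradients before a SampleGrad, so an edge discontinuity can't select an arbitrarily coarse mip:

```hlsl
float4 SampleShadowClampedGrad(Texture2D<float4> shadowMap,
                               SamplerState anisoSampler,
                               float2 shadowUV, float maxGrad)
{
    // Derivatives blow up across depth-buffer edges; bounding their
    // magnitude bounds the mip level the hardware can choose.
    float2 dx = clamp(ddx(shadowUV), -maxGrad, maxGrad);
    float2 dy = clamp(ddy(shadowUV), -maxGrad, maxGrad);
    return shadowMap.SampleGrad(anisoSampler, shadowUV, dx, dy);
}
```

The equivalent clamp can also be applied on the sampler side via the MaxLOD field of the sampler desc, which is closer to what those posts describe.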
  7. I'm experimenting with a forward+ rendering setup, where I loop over a list of lights to apply in the pixel shader. If a given light has shadows enabled, I sample that light's shadow map directly in the loop. I'd like to use EVSM shadow maps with anisotropic filtering, so I need the ddx/ddy of the texture sample position.

     I'm thinking I can compute ddx/ddy of the world position once, outside the lighting loop, and then transform those derivatives into shadow-map space using each light's shadow matrix (a sketch follows below). This seems like it would work, but it might be expensive. Are there any other or better tricks for getting approximate derivatives to use?
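A sketch of that transform, assuming an orthographic (directional-light) shadow matrix so the derivative transform is purely linear; the names are illustrative:

```hlsl
float4 SampleShadowAniso(Texture2D<float4> shadowMap, SamplerState anisoSampler,
                         float4x4 shadowMatrix, float3 posWS,
                         float3 posDX_WS, float3 posDY_WS)
{
    float3 shadowPos = mul(float4(posWS, 1.0f), shadowMatrix).xyz;
    // Derivatives are direction-like, so w = 0 drops the translation; a
    // perspective shadow matrix would also need the projective divide
    // accounted for.
    float2 uvDX = mul(float4(posDX_WS, 0.0f), shadowMatrix).xy;
    float2 uvDY = mul(float4(posDY_WS, 0.0f), shadowMatrix).xy;
    return shadowMap.SampleGrad(anisoSampler, shadowPos.xy, uvDX, uvDY);
}

// posDX_WS / posDY_WS would be ddx(posWS) / ddy(posWS), computed once
// outside the light loop.
```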
  8. No fixed function.

     I ended up just using LESSEQUAL for the main pass and running the alpha test again, at least on PC D3D9. I'd still be curious if anyone has any ideas.
  9. Thanks for the replies.

     I'm targeting D3D9 on the PC, so sadly no precise keyword. There's no trickery with changing primitive types. I've looked at the D3D bytecode and it appears to match, but I could be missing something.

     I'll keep looking.
  10. I'm doing deferred lighting, with a gbuffer pass followed by a main pass. For alpha-tested objects, I would like to do the alpha test in the gbuffer pass, and then skip it in the main pass by using a depth compare function of EQUAL. That way the main pass should only render on top of pixels that passed the alpha test during the gbuffer pass.

      The problem is that I can't get the depth values to match exactly between passes. It works sometimes, on some machines, in some situations, but not all the time. As far as I know, I am passing the exact same data for all verts and relevant shader parameters (matrices, etc). Each pass has a different vertex shader, but they compute the output position in exactly the same way (see the sketch below). Is there any way to make this work reliably in D3D9? Or will I just have to use LESSEQUAL in the main pass, with a possible depth bias?
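For what it's worth, the usual way to maximize the odds of bit-identical positions (no guarantee under D3D9, where there is no precise keyword; this helper is just an illustration) is to funnel both vertex shaders through one shared function, included from the same source, with the matrix bound to the same constant registers:

```hlsl
// Shared between the gbuffer VS and the main-pass VS (e.g. via #include),
// so the compiler sees exactly the same position expression in both.
float4 ComputeClipPos(float3 posOS, float4x4 worldViewProj)
{
    return mul(float4(posOS, 1.0f), worldViewProj);
}
```

Even then, the optimizer is free to schedule the math differently depending on the surrounding code, which may be why it only matches some of the time.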
  11. I have a typical system for rendering light probes: I render a cube map at the probe position and then project the cube map onto SH. I render the lightmapped world geometry into the cube map, which captures indirect lighting from the lights in the lightmap.

      I'd also like to capture direct lighting from the lightmap lights. To do this, I was thinking I'd render a sphere with the light's color/radius/position into the cube map for each lightmap light (a sketch follows below). Is this the right idea? Am I missing anything?
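A sketch of that emissive-sphere idea (constant and function names are assumptions): the sphere geometry carries the light's position and radius, and the pixel shader just outputs the light's radiance so the SH projection picks it up like any other cube-map texel:

```hlsl
cbuffer LightSphereConstants
{
    float3 LightColor;
    float  LightIntensity; // scale to tune against the light's falloff
};

float4 LightSpherePS() : SV_Target0
{
    return float4(LightColor * LightIntensity, 1.0f);
}
```

One thing to watch: the sphere's apparent size in the cube map determines how much energy the SH projection receives from it, so the radius and intensity need to be balanced against the light's falloff at the probe's distance.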
  12. I'm working with the shader debugger built into Visual Studio 2013. As I step through the shader program, the values shown for local variables seem very inconsistent. They appear correct when the program counter is directly after the statement in question, but as I step further, variables set on previous lines tend to show up as NaN. The shader itself seems to be working; it's just a display problem in the debugger. I'm compiling all my shaders with "D3DCOMPILE_DEBUG | D3DCOMPILE_PREFER_FLOW_CONTROL | D3DCOMPILE_SKIP_OPTIMIZATION". Is this just a limitation of the shader debugger, or am I missing something?
  13. The problem was setting 0 as the sample mask for OMSetBlendState. It's always the little things. Thanks for the suggestions, everyone.
  14. Yes. And not as far as I can tell; when I check the pixel history, I only see the ClearRenderTarget.

      When I run D3D sample apps through the graphics debugger, I do see the pixel shader stage. The pixel shader is set in the device context, and all the states in the device context look correct, as far as I can tell. When I check pixel history, I don't see my pixel shader anywhere, only the framebuffer clear.

      Anyway, thanks for the suggestions so far.
  15. I'm just starting out with D3D11, trying to port an existing OpenGL game. Using the graphics debugger built into VS2013, I can see that I'm passing correct geometry into the input assembler, and it appears that I'm getting valid output from the vertex shader. After that, it skips right to the output merger: the pixel shader isn't being run, despite being set.

      I'm not getting any errors or warnings from the debug runtime. I've disabled backface culling, and I have a valid viewport set. I'm feeling pretty stumped at this point. Does anybody have any idea what else I should look at?

      This is my vertex shader:

      This is my vertex shader output: