
ingramb

Members
  • Content count

    32
  • Joined

  • Last visited

Community Reputation

440 Neutral

About ingramb

  • Rank
    Member
  1. This is my favorite reference on shadow maps:   https://mynameismjp.wordpress.com/2013/09/10/shadow-maps/   To improve self-shadowing, check out the various biasing techniques, such as receiver plane bias.  Variance-based filtering methods (VSM/EVSM/MSM) also tend to handle shadow acne much better.  They are more expensive and have artifacts of their own, but they can be a good choice.
  2. Most forward+ implementations still require a z-prepass, if only to prevent overdraw.  You can use it to compute SSAO (or whatever else you need) and then read the result in the forward lighting pass.   Depending on what you need, it's also possible to write out a slim gbuffer (perhaps including vertex normals) during the z-prepass to enable more types of screen-space effects.
  3. That's very helpful, thank you.   What kind of values did you end up needing for leak reduction?  Is that a value that you think makes sense to expose to artists per light, or did you get away with a hard-coded global value?
  4. Good general overview of many rendering techniques: http://www.amazon.com/Real-Time-Rendering-Third-Edition-Akenine-Moller/dp/1568814240   Lots of good presentations about more advanced techniques: http://advances.realtimerendering.com/   Free online books with some interesting techniques: http://http.developer.nvidia.com/GPUGems/gpugems_pref02.html http://http.developer.nvidia.com/GPUGems2/gpugems2_frontmatter.html https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_pref01.html   Similar to above, but more recent and not free: http://www.amazon.com/GPU-Pro-Advanced-Rendering-Techniques/dp/1568814720 (I think there are 5 or 6 GPU Pro books now, with more on the way)
  5. I'm looking for an explanation and/or example of why using 4 components vs 2 components is important for EVSM.  In my limited tests, using two-component 32-bit gives much nicer results overall vs four-component 16-bit.  Dropping from 4 to 2 components does result in a bit more light leaking, but it doesn't seem bad in the test scenes I'm looking at.  It's still hugely improved vs straight VSM.  Going down to 16-bit, however (even with 4 components), causes pretty noticeable aliasing, and the required bias seems to result in peter-panning as well.   I remember specifically reading a presentation from Ready at Dawn, where they had to fall back to 16-bit for memory reasons on The Order.  Is there a good reason they chose this trade-off, vs dropping down to 2 components?
  6. Awesome, thanks.   Follow-up question: what about applying shadow in a deferred pass?  In this case, I would be reconstructing world position from the depth buffer, and then using that position to look up the shadow map.  This will mostly work, except the derivatives will be discontinuous on object edges in the depth buffer.   Googling around, the only real solutions I could find were to clamp the maximum mip level to minimize the problem, or to remove mips altogether.  Is there any better technique to handle this case?
  7. I'm experimenting with a forward+ rendering setup, where I loop over a list of lights to apply in the pixel shader.  If a given light has shadows enabled, I sample the shadow map for that light directly in the loop.  I'd like to use EVSM shadow maps with anisotropic filtering, so I need the ddx/ddy of the texture sample position.   I'm thinking I can compute ddx/ddy of the world position outside the lighting loop, and then transform these derivatives into shadow-map space using each light's shadow matrix.  This seems like it would work, but might be expensive.  Are there any other/better tricks for getting approximate derivatives to use?
  8. No fixed function.   I ended up just using LEQUAL for the main pass, and running alpha test again, at least on PC D3D9.  I'd still be curious if anyone has any ideas.
  9. Thanks for the replies.   I'm targeting D3D9 on the PC, so sadly no precise keyword.  No trickery with changing primitive types.  I've looked at the D3D bytecode, and it appears to match, but I could be missing something.   I'll keep looking.
  10. I'm doing deferred lighting, where I have a gbuffer pass, followed by a main pass.  For alpha tested objects, I would like to do the alpha test in the gbuffer pass, and then skip alpha test in the main pass by using a depth compare function of EQUAL.  This way the main pass should only render on top of pixels that passed the alpha test during the gbuffer pass.   The problem is that I can't get the depth values to match exactly between passes.  It works sometimes, on some machines, in some situations, but not all the time.  As far as I know, I am passing the exact same data for all verts and relevant shader parameters (matrices, etc).  Each pass has a different vertex shader, but they compute the output position in exactly the same way.  Is there any way to make this work reliably in D3D9?  Or will I just have to use LESSEQUAL in the main pass, with a possible depth bias?
  11. I have a typical system for rendering light probes, where I render a cube map at the probe position, and then project the cube map onto SH.  I render the lightmapped world geometry into the cube map, which captures indirect lighting from the lights in the lightmap.   I'd also like to capture direct lighting from the lightmap lights.  To do this, I was thinking I'd render a sphere with the light's color/radius/position in the cube map for each lightmap light.  Is this the right idea?  Am I missing anything?
  12. I'm working with the shader debugger built into Visual Studio 2013.  As I step through the shader program, the values shown for local variables seem very inconsistent.  They seem correct when the program counter is directly after the statement in question.  As I step further, variables set on previous lines tend to show up as NaN.  The shader itself seems to be working; it's just a display problem in the debugger.  I'm compiling all my shaders with "D3DCOMPILE_DEBUG | D3DCOMPILE_PREFER_FLOW_CONTROL | D3DCOMPILE_SKIP_OPTIMIZATION".  Is this just a limitation of the shader debugger, or am I missing something?
  13. Problem was setting 0 as the sample mask for OMSetBlendState.  It's always the little things.  Thanks for suggestions everyone.
  14. Yes. Not as far as I can tell.  When I check the pixel history, I only see the ClearRenderTarget. When I run D3D sample apps through the graphics debugger, I do see pixel shader stage. The pixel shader is set in the device context.  All the states in the device context look correct, as far as I can tell. When I check pixel history, I don't see my pixel shader anywhere.  Only the framebuffer clear.   Anyway, thanks for the suggestions so far.
  15. I'm just starting out with D3D11, trying to port an existing OpenGL game to D3D11.  Using the graphics debugger built into VS2013, I can see that I'm passing correct geometry into the input assembler, and it appears that I'm getting valid output from the vertex shader.  After that, it skips right to the output merger.  The pixel shader isn't being run, despite being set.   I'm not getting any errors or warnings from the debug runtime.  I've disabled backface culling.  I have a valid viewport set.  I'm feeling pretty stumped at this point.  Anybody have any idea of what else I should look at?   This is my vertex shader:   This is my vertex shader output: