


About knarkowicz


  1. Exposure is just a simple color multiplier applied before tonemapping. Auto exposure means automatically calculating this exposure value based on scene average luminance, a histogram, or something else, but it still doesn't change the tonemapping. It works the same way as a real-world camera: exposure is the manual/auto "brightness setting" and tonemapping is the analog film. Shameless plug: some time ago I wrote a lengthy post about auto-exposure in games: https://knarkowicz.wordpress.com/2016/01/09/automatic-exposure/ .
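A minimal sketch of the idea in the answer above (helper names are hypothetical): exposure is a plain multiplier applied in linear HDR space, and the tonemapping curve that follows is unchanged whether the exposure value is set manually or computed automatically.

```cpp
#include <cmath>

// Exposure expressed in EV (stops): +1 EV doubles scene brightness.
// Auto exposure only changes 'ev'; the tonemapper below stays untouched.
float ApplyExposure(float hdr, float ev)
{
    return hdr * std::exp2(ev);
}

// A simple Reinhard curve, applied after exposure
// (a stand-in for whatever tonemapper is actually in use).
float TonemapReinhard(float x)
{
    return x / (1.0f + x);
}
```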
  2. IMHO in this case (mip map generation) the difference between ^2.2 and proper sRGB is negligible, so I wouldn't worry about it. As for NVTT and mip map generation - just do a simple test. Provide some random data in the mips (e.g. one mip level red, the next green, etc.) and check whether NVTT outputs the same texture or generates new mips from the top level.
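For reference, the two transfer functions being compared above can be sketched as follows (exact piecewise sRGB decode vs. the pow(2.2) approximation); around the midtones the absolute difference is on the order of a few thousandths, which supports treating it as negligible for mip generation.

```cpp
#include <cmath>

// Exact piecewise sRGB decode (IEC 61966-2-1).
float SrgbToLinear(float c)
{
    return (c <= 0.04045f) ? c / 12.92f
                           : std::pow((c + 0.055f) / 1.055f, 2.4f);
}

// The common gamma 2.2 approximation.
float Gamma22ToLinear(float c)
{
    return std::pow(c, 2.2f);
}
```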
  3. I would recommend generating a second UV set just for lightmaps. This way you can fix light bleeding (cut UV charts at hard edges), fix light seams (stitch or use "seamless" UV mapping), control density, tightly pack generated charts, etc. If you decide to use an external renderer, keep in mind that it needs to support all your custom materials. This may be hard to do with something like 3dsMax. Additionally, it may be hard to extend it to use smarter lightmaps with normal map support (like SH or dir+ambient). Personally I would go with UVAtlas (+ brute-force UV chart packer and object chart bin packer) for UVs and either a custom GPU lightmapper or a custom CPU lightmapper using Embree for rendering. BTW some time ago I wrote a lengthy post about my lightmapping implementation (UV gen, packing, rendering) which may interest you: https://knarkowicz.wordpress.com/2014/07/20/lightmapping-in-anomaly-2-mobile/
  4. 1. Log/exp are used in order to calculate the geometric mean. Imagine a very dark scene with a single extremely bright pixel. Without the log you would get a very high average luminance. 2. It looks like your skybox simply isn't bright enough compared to the rest of the scene.
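The first point above can be sketched like this: the log-average (geometric mean) keeps a single extreme pixel from dominating the result the way it would with an arithmetic mean.

```cpp
#include <cmath>
#include <vector>

// Geometric mean of luminance via log/exp.
// 'eps' avoids log(0) on fully black pixels.
float LogAverageLuminance(const std::vector<float>& lum)
{
    const float eps = 1e-4f;
    double sum = 0.0;
    for (float l : lum)
        sum += std::log(eps + l);
    return static_cast<float>(std::exp(sum / lum.size()));
}
```

With 99 pixels at 0.01 and one pixel at 1000, the arithmetic mean lands near 10, while the log-average stays close to the dark values.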
  5. With a single pass you need to process every cubemap per pixel (unless you use a compute shader with tiles). With deferred you process only the required pixels per cubemap, but with some overhead. It can be faster or slower depending on the number of cubemaps.
  6. @IoreStefani you are 100% correct, sorry pseudomarvin for the misinformation. I just checked my own clipper and I'm also doing linear interpolation in homogeneous space.
  7. The guard-band's idea is that you reduce the amount of clipping to a minimum. You clip only if a triangle intersects the near plane or intersects the massively offset frustum sides. In your case it looks like you will be clipping more triangles, as you clip against non-offset frustum sides.   This is just an explanation of what a guard-band is. It's not crucial for a fast rasterizer, as clipping is a relatively rare operation. The crucial part is to do some clipping against the (offset or not) frustum sides, so your interpolation math won't explode.
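A minimal clip-space test for the idea above (D3D-style clip space and hypothetical names assumed): geometric clipping is only required when a triangle crosses the near plane or leaves frustum sides that have been pushed out by a guard factor.

```cpp
#include <cmath>

struct ClipVert { float x, y, z, w; };

// Returns true if the triangle must be geometrically clipped:
// it touches/crosses the near plane, or it leaves the sides of a
// frustum widened by 'guard' times (the guard-band).
bool NeedsClipping(const ClipVert v[3], float guard)
{
    for (int i = 0; i < 3; ++i)
    {
        if (v[i].w <= 0.0f || v[i].z < 0.0f)      // near plane
            return true;
        if (std::fabs(v[i].x) > guard * v[i].w || // offset left/right sides
            std::fabs(v[i].y) > guard * v[i].w)   // offset top/bottom sides
            return true;
    }
    return false;
}
```

A triangle that merely pokes outside the regular frustum but stays inside the guard-band is left alone and handled by scissoring/rasterization instead.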
  8. You don't divide by the accumulated alpha value. Just use vanilla alpha blending. There is no correct formula for the blend weights. Some just apply a falloff in order to hide the transitions.
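In other words, plain "over" blending, where the weight would typically come from some distance falloff. A one-line sketch:

```cpp
// Standard "over" alpha blend; no division by accumulated alpha.
// 'alpha' would typically be driven by a distance falloff to hide transitions.
float BlendOver(float dst, float src, float alpha)
{
    return src * alpha + dst * (1.0f - alpha);
}
```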
  9. You can't use linear interpolation in screen space for attributes which aren't linear in screen space (e.g. UVs, positions, normals, etc.).   BTW you also need a guard-band (clipping against offset frustum sides). Without a guard-band you will have precision issues when interpolating attributes.
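The standard fix is perspective-correct interpolation: linearly interpolate attr/w and 1/w in screen space, then divide. A minimal sketch along a single edge:

```cpp
#include <cmath>

// Perspective-correct interpolation between two vertices at parameter t.
// Interpolating 'attr' directly would be wrong for UVs, normals, positions.
float InterpPerspective(float attr0, float w0,
                        float attr1, float w1, float t)
{
    float aOverW   = (attr0 / w0) * (1.0f - t) + (attr1 / w1) * t;
    float oneOverW = (1.0f / w0)  * (1.0f - t) + (1.0f / w1)  * t;
    return aOverW / oneOverW;
}
```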
  10. Hi,   It's impractical to use SH for high frequency IBL, as you need a crazy amount of terms. Additionally, Brian's split sum approximation splits the radiance integral into two parts - one which can be stored in a cube map and a second which can be stored in a small 2D texture (this is why there is no L(i) term in the second equation). Anyway, I'm not sure what you are trying to calculate here, as your equations look quite strange - e.g. the f(l,v) term is missing.
  11.   You could use RGB16_UNORM by manually scaling shader outputs and inputs, but an RGB16_Float render target is more convenient and precise (you want a lot of precision for the low values and not too much for the very high ones).     Tonemapping is about reducing the color range for the final display, so yes, a common tonemapper (one designed for LDR displays) will bring the image into the [0;1] range.     Yes. Output values will be clamped to the [0;1] range.     Yes. You want bloom before tonemapping (so it works with linear HDR values) and FXAA after tonemapping (so edge blending works "correctly").
  12. Just create a render target for the fog of war, where one pixel represents one tile and its value represents whether that tile is visible or not. Update this render target when the camera moves, by rendering a circle. This way new tiles around the current camera position will be marked as visible. Finally, render the fog of war by reading the mentioned render target, filtering it (maybe using some higher order filter) and applying some kind of cool effect in pixels where the fog of war is marked as visible.
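A CPU-side sketch of the same bookkeeping (names hypothetical; in the actual technique the grid is a render target and the circle is drawn by the GPU):

```cpp
#include <vector>

// One cell per tile; Reveal() marks a circle of tiles around the camera.
struct FogOfWar
{
    int w, h;
    std::vector<unsigned char> visible; // 1 = revealed, 0 = hidden

    FogOfWar(int w_, int h_) : w(w_), h(h_), visible(w_ * h_, 0) {}

    void Reveal(float cx, float cy, float radius)
    {
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x)
            {
                float dx = x - cx, dy = y - cy;
                if (dx * dx + dy * dy <= radius * radius)
                    visible[y * w + x] = 1;
            }
    }
};
```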
  13. Most often HDR sky textures don't contain sun, as it's a separate analytic light source and you don't want to apply it twice to the scene. Therefore you usually don't have to deal with such extreme values and ~32-64 samples are enough for diffuse pre-integration using importance sampling. It's also pretty cheap, as 6x8x8 destination pixels are enough for low frequency data like diffuse. This of course doesn't yield perfect results, but errors are hard to notice in a real scene with textures.   If you are interested in diffuse pre-integration using SH, then check out this great post by Sébastien Lagarde (it also contains an example application with source code): https://seblagarde.wordpress.com/2012/06/10/amd-cubemapgen-for-physically-based-rendering/
  14. DX12

    Arrays in HLSL constant buffers are stored without any packing and every element is stored as a float4. This way there is no extra overhead for array indexing in the shader. If you want to pack arrays more tightly then simply pass them as float4[] and cast to float2[] in the shader.
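A CPU-side illustration of the layout difference described above (struct and function names are hypothetical): by default each scalar array element occupies a whole float4 slot, while manual packing stores four scalars per slot and the shader would index it as packed[i >> 2][i & 3].

```cpp
#include <cstring>
#include <vector>

struct Float4 { float x, y, z, w; };

// Default HLSL cbuffer layout: one full float4 slot per scalar element
// (three floats of padding per element).
std::vector<Float4> LayoutUnpacked(const std::vector<float>& src)
{
    std::vector<Float4> out(src.size(), Float4{0, 0, 0, 0});
    for (size_t i = 0; i < src.size(); ++i)
        out[i].x = src[i];
    return out;
}

// Manually packed layout: four scalars per float4 slot.
std::vector<Float4> LayoutPacked(const std::vector<float>& src)
{
    std::vector<Float4> out((src.size() + 3) / 4, Float4{0, 0, 0, 0});
    std::memcpy(out.data(), src.data(), src.size() * sizeof(float));
    return out;
}
```

For 8 scalars the unpacked layout uses 8 slots (128 bytes), the packed one only 2 slots (32 bytes).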
  15. There are two usual solutions for leaking. The first is to disable occluded probes at runtime, which may be a bit costly when using 3D textures. The second is to replace occluded probes with correct values by blending their unoccluded neighbors during an offline step.