Ingenu

Members
  • Content count

    1319
  • Joined

  • Last visited

Community Reputation

1629 Excellent

About Ingenu

  • Rank
    Contributor
  1. An inverted Z buffer is far more practical than a logarithmic depth buffer (see the sketch below).
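
    A minimal sketch of what the inverted Z setup looks like, assuming a D3D11 engine with DirectXMath (the function and variable names are placeholders, not from the post): swap near and far in the projection, flip the depth test to greater-or-equal, and clear depth to 0 so the floating-point precision ends up where perspective division needs it.

        // Reversed-Z sketch (D3D11 + DirectXMath); 'device', 'context' and 'dsv' are assumed to exist.
        // Pairs best with a 32-bit float depth buffer (DXGI_FORMAT_D32_FLOAT).
        #include <d3d11.h>
        #include <DirectXMath.h>

        DirectX::XMMATRIX MakeReversedZProjection(float fovY, float aspect, float nearZ, float farZ)
        {
            // Swapping near and far maps the near plane to depth 1.0 and the far plane to 0.0,
            // which puts the float precision where it is needed after perspective division.
            return DirectX::XMMatrixPerspectiveFovLH(fovY, aspect, farZ, nearZ);
        }

        void SetupReversedZ(ID3D11Device* device, ID3D11DeviceContext* context, ID3D11DepthStencilView* dsv)
        {
            // The depth test has to be flipped to match: greater-or-equal instead of less.
            D3D11_DEPTH_STENCIL_DESC dsd = {};
            dsd.DepthEnable    = TRUE;
            dsd.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
            dsd.DepthFunc      = D3D11_COMPARISON_GREATER_EQUAL;
            ID3D11DepthStencilState* dss = nullptr;   // (state caching/release omitted for brevity)
            device->CreateDepthStencilState(&dsd, &dss);
            context->OMSetDepthStencilState(dss, 0);

            // Clear depth to 0.0 (the far plane) instead of the usual 1.0.
            context->ClearDepthStencilView(dsv, D3D11_CLEAR_DEPTH, 0.0f, 0);
        }
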
  2. Note that, from the slides we have seen, AMD is set to introduce a Primitive shader stage that replaces both the Vertex & Geometry stages on their upcoming VEGA GPU. Not sure yet what it will do. Geometry shaders are unusable in practice; consider them defunct, and they are the only stage under threat.
  3. I meant 1 buffer with all Positions first, then all Normals, and so on... So still SoA, but in a single buffer, with each attribute array appended after the previous one, if you will (see the sketch below).
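
    A minimal sketch of that layout, assuming D3D11 (the names are illustrative): one vertex buffer holding all positions followed by all normals, bound to two input slots with different byte offsets.

        // One buffer, SoA layout: [pos0..posN-1][normal0..normalN-1]; D3D11, illustrative names.
        #include <d3d11.h>
        #include <DirectXMath.h>
        #include <vector>
        using namespace DirectX;

        ID3D11Buffer* CreateSoAVertexBuffer(ID3D11Device* device,
                                            const std::vector<XMFLOAT3>& positions,
                                            const std::vector<XMFLOAT3>& normals)
        {
            std::vector<XMFLOAT3> packed;
            packed.insert(packed.end(), positions.begin(), positions.end()); // attribute array 0
            packed.insert(packed.end(), normals.begin(), normals.end());     // appended attribute array 1

            D3D11_BUFFER_DESC desc = {};
            desc.ByteWidth = UINT(packed.size() * sizeof(XMFLOAT3));
            desc.Usage     = D3D11_USAGE_IMMUTABLE;
            desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
            D3D11_SUBRESOURCE_DATA init = { packed.data(), 0, 0 };

            ID3D11Buffer* buffer = nullptr;
            device->CreateBuffer(&desc, &init, &buffer);
            return buffer;
        }

        void BindSoAVertexBuffer(ID3D11DeviceContext* context, ID3D11Buffer* buffer, UINT vertexCount)
        {
            // Same buffer bound to two input slots, with per-attribute byte offsets.
            // The input layout maps POSITION to slot 0 and NORMAL to slot 1.
            ID3D11Buffer* buffers[2] = { buffer, buffer };
            UINT strides[2] = { sizeof(XMFLOAT3), sizeof(XMFLOAT3) };
            UINT offsets[2] = { 0, vertexCount * UINT(sizeof(XMFLOAT3)) }; // normals start after all positions
            context->IASetVertexBuffers(0, 2, buffers, strides, offsets);
        }
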
  4. Are you using 1 buffer for all attributes, or 1 per attribute? If you're not using 1 for all attributes, you should probably try that.
  5. You should use a ByteAddressBuffer as suggested by MJP.
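
    For reference, a minimal sketch of the D3D11 resource setup such a buffer needs (variable names are placeholders): the ALLOW_RAW_VIEWS misc flag on the buffer and a raw, R32_TYPELESS shader resource view. On the HLSL side, the vertex program then declares a ByteAddressBuffer and reads it with Load()/Load2()/Load3() at a byte offset derived from SV_VertexID.

        // Raw (ByteAddressBuffer) SRV creation sketch, D3D11; 'data' and 'byteSize' are assumed.
        #include <d3d11.h>

        bool CreateRawBufferSRV(ID3D11Device* device, const void* data, UINT byteSize,
                                ID3D11Buffer** outBuffer, ID3D11ShaderResourceView** outSRV)
        {
            D3D11_BUFFER_DESC desc = {};
            desc.ByteWidth = byteSize;                              // keep this a multiple of 4
            desc.Usage     = D3D11_USAGE_IMMUTABLE;
            desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
            desc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_ALLOW_RAW_VIEWS;
            D3D11_SUBRESOURCE_DATA init = { data, 0, 0 };
            if (FAILED(device->CreateBuffer(&desc, &init, outBuffer)))
                return false;

            D3D11_SHADER_RESOURCE_VIEW_DESC srv = {};
            srv.Format                = DXGI_FORMAT_R32_TYPELESS;   // raw views are typeless 32-bit
            srv.ViewDimension         = D3D11_SRV_DIMENSION_BUFFEREX;
            srv.BufferEx.FirstElement = 0;
            srv.BufferEx.NumElements  = byteSize / 4;               // one element = 4 bytes for raw views
            srv.BufferEx.Flags        = D3D11_BUFFEREX_SRV_FLAG_RAW;
            return SUCCEEDED(device->CreateShaderResourceView(*outBuffer, &srv, outSRV));
        }
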
  6. MJP's point 2 is the most important: you need an index buffer in order to benefit from the post-transform vertex cache. Besides that, if you only target recent hardware, you could go SoA (i.e. not interleave your vertex data) and fetch attributes manually; that's what happens on any GCN GPU anyway. As also mentioned by MJP, nVidia hardware works differently (not sure about their latest generation), but since all consoles are GCN we tend to optimise for it...
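
    And the index-buffer half of it, as a minimal D3D11 sketch (illustrative names): indexed drawing is what lets the post-transform cache reuse vertices shared between triangles.

        // Index buffer + indexed draw sketch, D3D11; lifetime management omitted for brevity.
        #include <d3d11.h>
        #include <cstdint>
        #include <vector>

        void DrawIndexedMesh(ID3D11Device* device, ID3D11DeviceContext* context,
                             const std::vector<uint32_t>& indices)
        {
            D3D11_BUFFER_DESC desc = {};
            desc.ByteWidth = UINT(indices.size() * sizeof(uint32_t));
            desc.Usage     = D3D11_USAGE_IMMUTABLE;
            desc.BindFlags = D3D11_BIND_INDEX_BUFFER;
            D3D11_SUBRESOURCE_DATA init = { indices.data(), 0, 0 };

            ID3D11Buffer* ib = nullptr;
            device->CreateBuffer(&desc, &init, &ib);

            // Vertices referenced through the same index can be reused from the
            // post-transform cache instead of being shaded again.
            context->IASetIndexBuffer(ib, DXGI_FORMAT_R32_UINT, 0);
            context->DrawIndexed(UINT(indices.size()), 0, 0);
        }
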
  7. Some procedural terrain generation turned into chunked LoD, with the most detailed version including tessellation; that's about it. Even though it helps, it's not perfect by any means: writing the rules for generation/item placement is still trial and error, a change in the terrain requires a complete regeneration, and since terrain is usually critical to gameplay you may have changes even late in development...
  8. D3D11 is still the standard 3D API and is updated by MS; D3D12 is the low-level 3D API, also updated by MS. We now have two MS APIs for the same task, and depending on the time/effort available you pick one or the other, so I don't think D3D11 is going to vanish any time soon...
  9. You could have a look at the Doom 3 source code; it's open source and rather neat. You should avoid having too many GPU programs (commonly referred to as shaders, though that term isn't really appropriate anymore): they cost memory, switching between them is still not free (but getting better every generation), and they cost time to compile too. If you decide to do the most logical thing, that is physically based rendering (unless you want another graphic style, of course), you should end up with few GPU programs.   [And deferred shading/lighting/whatever ;)]
  10. Exactly. Light bounds vs object bounds.
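
    A minimal sketch of the kind of test meant here (the types and names are illustrative, not from any particular engine): intersect a point light's bounding sphere against an object's AABB to decide whether the light can affect the object at all.

        // Sphere (light bounds) vs AABB (object bounds) overlap test; illustrative types.
        #include <algorithm>

        struct Vec3   { float x, y, z; };
        struct Sphere { Vec3 center; float radius; };   // point light bounds
        struct AABB   { Vec3 min, max; };               // object bounds

        bool LightAffectsObject(const Sphere& light, const AABB& object)
        {
            // Clamp the light centre to the box, then compare the squared distance
            // against the squared radius (avoids a square root).
            float cx = std::max(object.min.x, std::min(light.center.x, object.max.x));
            float cy = std::max(object.min.y, std::min(light.center.y, object.max.y));
            float cz = std::max(object.min.z, std::min(light.center.z, object.max.z));
            float dx = light.center.x - cx;
            float dy = light.center.y - cy;
            float dz = light.center.z - cz;
            return dx * dx + dy * dy + dz * dz <= light.radius * light.radius;
        }
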
  11. A spatial acceleration structure should help you find the closest lights; brute force is also a viable option, depending on the number of lights in your database (see the sketch below). If you write a deferred renderer you won't need to find the closest lights at all.   A directional light is a point light at infinity, i.e. like the sun, the moon or star light, so you will always consider it visible (unless in a cave or such).   Without any light, the lighting part of your shader won't trigger; no light means pure black unless you add some ambient light to avoid the issue. Sure, your per-light function won't be called, but I don't really see a problem with that...   There are many shadow techniques; shadow buffers (usually called shadow maps, even though 'maps' typically refer to precomputed data rather than runtime-computed data) are a popular one, and there are plenty of variations. I'd try straightforward shadow mapping (i.e. rendering a depth buffer) + Parallel Split Shadow Maps for the sun/moon/stars.
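
    A minimal sketch of the brute-force option (illustrative types and names; a real engine would usually cull against light bounds rather than raw distance): sort the lights by distance to the shaded object and keep the closest N.

        // Brute-force "N closest lights" selection; illustrative types and names.
        #include <algorithm>
        #include <cstddef>
        #include <vector>

        struct Light { float x, y, z; /* ...colour, radius, type... */ };

        std::vector<const Light*> ClosestLights(const std::vector<Light>& lights,
                                                float px, float py, float pz, std::size_t maxCount)
        {
            std::vector<const Light*> result;
            result.reserve(lights.size());
            for (const Light& l : lights)
                result.push_back(&l);

            auto distSq = [&](const Light* l) {
                float dx = l->x - px, dy = l->y - py, dz = l->z - pz;
                return dx * dx + dy * dy + dz * dz;
            };
            // Partial sort: only the first maxCount entries need to be ordered.
            std::partial_sort(result.begin(),
                              result.begin() + std::min(maxCount, result.size()),
                              result.end(),
                              [&](const Light* a, const Light* b) { return distSq(a) < distSq(b); });
            result.resize(std::min(maxCount, result.size()));
            return result;
        }
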
  12. OpenGL

    You could write the code in a flexible enough manner so that it works for both: just have your light bucket; in the forward case, send it to whatever needs to be rendered so it can pick the lights it needs; in the deferred case, just render the lights and compute lighting. In both cases you'll need to store light information in an array, and probably branch on light type (actually you'd rather have 3 loops for 3 different light types); if you have an array of lights (or 3 arrays, one for each type) you can just as well have an array of textures (your shadow buffers, 3 arrays again), as sketched below...   So I don't really see any difficulty here. Just try to cull as many lights as possible to generate the least amount of shadow buffers (you'll only have a maximum of n of them anyway), to save on processing.   That's pretty much how I made my engine work with any rendering technique I may want to try or fancy: it has a pluggable rendering process and a list of ordered Steps/Groups that change depending on the rendering technique I chose. (Which would be similar to your buckets.)
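
    A minimal sketch of such a light bucket (illustrative, not the poster's engine): one array per light type plus the matching shadow-buffer handles, filled once per frame after culling and then consumed by either the forward or the deferred path.

        // Per-frame light bucket sketch: 3 light types, 3 arrays, plus matching shadow buffers.
        #include <vector>

        struct DirectionalLight { float dir[3]; float colour[3]; };
        struct PointLight       { float pos[3]; float radius; float colour[3]; };
        struct SpotLight        { float pos[3]; float dir[3]; float angle; float colour[3]; };

        using ShadowBufferHandle = unsigned;   // placeholder for a depth texture handle

        struct LightBucket
        {
            // One array per light type, as suggested above...
            std::vector<DirectionalLight> directionals;
            std::vector<PointLight>       points;
            std::vector<SpotLight>        spots;

            // ...and one array of shadow buffers per type, indexed the same way.
            std::vector<ShadowBufferHandle> directionalShadows;
            std::vector<ShadowBufferHandle> pointShadows;
            std::vector<ShadowBufferHandle> spotShadows;

            void clear()
            {
                directionals.clear(); points.clear(); spots.clear();
                directionalShadows.clear(); pointShadows.clear(); spotShadows.clear();
            }
        };

        // Forward path: hand the bucket to each drawable so it picks the lights it needs.
        // Deferred path: loop over the same arrays and accumulate lighting per type.
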
  13. OpenGL

    Go for deferred shading directly and add a light bucket; that will be much easier. Otherwise go for the light list. If you plan to go deferred anyway, I don't really see the point in wasting time on the forward proof-of-concept step, so it will be fine.
  14. Data-oriented design is thinking about what data you need and what algorithms you'll run it through, because that will be the deciding factor in how to lay it out (a small illustration follows below).   Here's an example: http://fr.slideshare.net/DICEStudio/culling-the-battlefield-data-oriented-design-in-practice   or rather: http://fr.slideshare.net/DICEStudio/introduction-to-data-oriented-design?next_slideshow=1 http://fr.slideshare.net/EmanWebDev/pitfalls-of-object-oriented-programminggcap09?next_slideshow=1
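
    A toy illustration in the spirit of the culling slides linked above (not DICE's code; all names are made up): lay the data out for the algorithm that consumes it, here a visibility pass that only touches bounding spheres, instead of walking an array of fat per-object classes.

        // Data-oriented layout sketch: the visibility pass only needs bounding spheres and a
        // result flag, so only that data is stored, contiguously.
        #include <cstddef>
        #include <cstdint>
        #include <vector>

        struct CullingData
        {
            std::vector<float>   centerX, centerY, centerZ;   // structure of arrays: hot data only
            std::vector<float>   radius;
            std::vector<uint8_t> visible;                     // one flag per object, same index
        };

        // Stand-in visibility test; a real engine would test against the frustum planes.
        static bool SphereVisible(float x, float y, float z, float r, float maxDistance)
        {
            return x * x + y * y + z * z <= (maxDistance + r) * (maxDistance + r);
        }

        void CullAll(CullingData& data, float maxDistance)
        {
            // Tight loop over contiguous arrays: cache-friendly and trivial to vectorise.
            const std::size_t count = data.radius.size();
            for (std::size_t i = 0; i < count; ++i)
                data.visible[i] = SphereVisible(data.centerX[i], data.centerY[i],
                                                data.centerZ[i], data.radius[i], maxDistance) ? 1 : 0;
        }
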
  15. Yes, that's closer.   Basically, as you write your GPU program code, you decide what data you need and where you put it. If you write, say, "cbuffer Object : register(b0) { float4x4 WorldMatrix; };" for your fragment subprogram, you have just explicitly decided to bind that constant buffer to constant buffer slot 0. You must therefore write the corresponding glue code engine-side, which would be something like "gfx.PSSetConstantBuffers( 0, 1, pCB );" in your setParameters(...) procedure, and you must also write the code that fills that CB with data (mapping it, casting the pointer and writing the data, such as "pCBMatrix = instance->GetWorldMatrix();"). A sketch of that glue code follows below.
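
    A minimal sketch of that glue code on the D3D11 side (the struct and function names are placeholders chosen to match the quoted snippet, not a real engine API): the slot index passed to PSSetConstantBuffers has to match the register declared in the HLSL.

        // Engine-side glue for "cbuffer Object : register(b0) { float4x4 WorldMatrix; };"
        // D3D11 sketch; 'device', 'context' and the caller's data are assumed.
        #include <d3d11.h>
        #include <DirectXMath.h>

        struct ObjectCB { DirectX::XMFLOAT4X4 WorldMatrix; };   // mirrors the HLSL cbuffer layout

        ID3D11Buffer* CreateObjectCB(ID3D11Device* device)
        {
            D3D11_BUFFER_DESC desc = {};
            desc.ByteWidth      = sizeof(ObjectCB);               // must be a multiple of 16 bytes
            desc.Usage          = D3D11_USAGE_DYNAMIC;
            desc.BindFlags      = D3D11_BIND_CONSTANT_BUFFER;
            desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
            ID3D11Buffer* cb = nullptr;
            device->CreateBuffer(&desc, nullptr, &cb);
            return cb;
        }

        void SetObjectParameters(ID3D11DeviceContext* context, ID3D11Buffer* cb,
                                 const DirectX::XMFLOAT4X4& worldMatrix)
        {
            // Map, cast, write: fill the CB with this instance's world matrix.
            // (Remember HLSL packs matrices column-major by default: transpose or use row_major.)
            D3D11_MAPPED_SUBRESOURCE mapped = {};
            if (SUCCEEDED(context->Map(cb, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
            {
                static_cast<ObjectCB*>(mapped.pData)->WorldMatrix = worldMatrix;
                context->Unmap(cb, 0);
            }
            // Slot 0 here must match register(b0) in the HLSL.
            context->PSSetConstantBuffers(0, 1, &cb);
        }
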