osmanb

Members
  • Content count: 564
  • Joined
  • Last visited

Community Reputation
  2081 Excellent

About osmanb
  • Rank: Advanced Member
  1. This question jogged my memory about an article from Game Developer from (holy crap) 14 years ago. Turns out it's still online: http://www.gamasutra.com/view/feature/131401/gdc_2002_realistic_level_design_.php?print=1   Check out the 'Scale' section near the end.  :wink:
  2. Despite what other people are saying here, it is possible to save and load floats via text correctly - but you have to be careful about how you format them:   https://randomascii.wordpress.com/2012/03/08/float-precisionfrom-zero-to-100-digits-2/ https://randomascii.wordpress.com/2013/02/07/float-precision-revisited-nine-digit-float-portability/
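A minimal sketch of a lossless round trip in C++ - the key point from those posts is printing enough significant digits (9 for a 32-bit float, 17 for a double):

```cpp
#include <cassert>
#include <cstdio>
#include <cstring>

int main()
{
    const float original = 0.1f;

    // 9 significant decimal digits are enough to round-trip any 32-bit float
    // through text; "%g" with fewer digits is where most loaders go wrong.
    char buffer[64];
    std::snprintf(buffer, sizeof(buffer), "%.9g", original);

    float restored = 0.0f;
    std::sscanf(buffer, "%f", &restored);

    // Bit-for-bit identical after the text round trip.
    assert(std::memcmp(&original, &restored, sizeof(float)) == 0);
    return 0;
}
```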
  3. I'm guessing that you are asking what Hodgman suspects... my intuition is that it's going to make no difference, at all. I guess that if the CBs are small enough, then the second layout could be faster in some scenario, but I can't really imagine a set of architecture decisions that would ever make the first layout faster. Of course, GPUs are often not susceptible to intuition, so your best bet is to test both and measure. (Then remember that whatever results you get are probably inverted from what they'll be on a different vendor, or in the next generation of HW from your vendor).
  4. Pointing to The Witness as an example of people writing everything in house is laughable. They've spent about 8 years (including a 5 year delay!) writing Myst. I was baffled for years as they continued to post updates on their "progress", re-writing every possible piece of technology from scratch, completely losing sight of the forest for the trees. Seriously - I think that they're smart guys, but they completely fell into the NIH trap, which is something many smart people do. If you're enjoying the challenge for its own sake, that's fine - but it's not the most efficient route to finishing your product. Make intelligent and well-reasoned build vs. buy (including $0 purchase for free tools and libraries) decisions. There are plenty of great tools and libraries out there, and lots of very smart people have recognized that, and use them.
  5. Up-voted for the excellent diagram. As far as the other answers go - I think you're missing the point of his question... My initial reaction was that you would want to group objects together (to fix case 3). I think you're going to want some kind of iterative approach where you group things while there is no connection with the ground plane, and simulate the dropping as much as possible. Once something is in contact with the ground, detach it from any groups, and let everything else continue to fall in whatever groups remain. Repeat. I imagine this isn't going to be super-clean to implement, but if you treat it as a set of connected components, and re-construct (or adjust) those components as important events occur, it may not be too bad. Initially, the ground plane is the only immovable piece. As soon as any piece is touching that, it becomes part of the ground as well (flagged as such, or added to that immovable component). Everything else gets clumped according to connectivity - don't even try to distinguish the really hard cases (#3) from slightly simpler cases - just merge connected pieces while they're falling to avoid any race conditions on movement. As soon as a piece is resting on anything in the ground component, detach it from its previous component, and continue. (You may need to re-compute the components at that point, if a single block that now touches the ground was the only thing keeping two otherwise non-touching blocks connected).   Make sense?
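A rough sketch of that idea, using axis-aligned boxes as stand-in blocks (the Block representation and helper names here are made up for illustration, not taken from the thread):

```cpp
#include <unordered_set>
#include <vector>

struct Block
{
    int   id;
    float x, y, w, h;   // axis-aligned box; y is the bottom edge
};

// Two boxes are "connected" if they overlap or touch.
static bool Touches(const Block& a, const Block& b)
{
    return a.x <= b.x + b.w && b.x <= a.x + a.w &&
           a.y <= b.y + b.h && b.y <= a.y + a.h;
}

static void Step(std::vector<Block>& blocks, float fallStep)
{
    // 1) Grow the immovable "ground" component: anything resting on the ground
    //    plane (y <= 0), or touching a grounded block, stops moving.
    std::unordered_set<int> grounded;
    bool changed = true;
    while (changed)
    {
        changed = false;
        for (const Block& b : blocks)
        {
            if (grounded.count(b.id)) continue;
            bool onGround = (b.y <= 0.0f);
            for (const Block& other : blocks)
                if (grounded.count(other.id) && Touches(b, other)) onGround = true;
            if (onGround) { grounded.insert(b.id); changed = true; }
        }
    }

    // 2) Everything else falls. Connected fallers are merged into one clump so
    //    they move together (a flood fill here; union-find would also work).
    std::unordered_set<int> visited;
    for (Block& seed : blocks)
    {
        if (grounded.count(seed.id) || visited.count(seed.id)) continue;
        std::vector<Block*> clump{ &seed };
        visited.insert(seed.id);
        for (size_t i = 0; i < clump.size(); ++i)
            for (Block& other : blocks)
                if (!grounded.count(other.id) && !visited.count(other.id) &&
                    Touches(*clump[i], other))
                {
                    clump.push_back(&other);
                    visited.insert(other.id);
                }
        for (Block* b : clump)
            b->y -= fallStep; // move the whole clump as one unit
    }
}
```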
  6. RenderDoc (https://github.com/baldurk/renderdoc) is another great option - it was developed by Crytek, then released for free. Now it's maintained as an open-source project, but includes most of the functionality you mentioned wanting.
  7. Yeah, this is an incredibly common operation in many GPU-accelerated solutions to ... just about everything. Search for "prefix sum", which is the problem, and also leads to the standard solutions (like Alvaro's).
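For reference, the serial CPU version is just a running sum; GPU implementations (e.g. a Blelloch scan) compute the same result in O(log n) parallel steps:

```cpp
#include <cstdio>
#include <vector>

// Exclusive prefix sum: out[i] = sum of in[0..i-1].
std::vector<int> ExclusivePrefixSum(const std::vector<int>& in)
{
    std::vector<int> out(in.size(), 0);
    int running = 0;
    for (size_t i = 0; i < in.size(); ++i)
    {
        out[i] = running;
        running += in[i];
    }
    return out;
}

int main()
{
    // Counts per bucket -> starting offset of each bucket in a packed array.
    auto offsets = ExclusivePrefixSum({ 3, 1, 4, 1, 5 });
    for (int o : offsets) std::printf("%d ", o); // prints: 0 3 4 8 9
    return 0;
}
```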
  8. Do you mean that you try rotating the light's camera around its (local) forward vector? That's a good idea, actually - assuming that the result remains stable over multiple frames. I do what (I think) most people do, and pick an arbitrary/constant up vector for the shadow camera, but that does lead to waste, like you're saying. It seems like ultimately, you want your up vector to be based on the pitch of the view frustum? Not sure - there are quite a few different cases that might make a direct solution tricky, vs. your simple approach. Do you just test N quantized rotations?
  9. ... And after you get the bounding sphere method working, you'll realize that your light frustums are now VERY conservative, and wasting a lot of space. So, as a better solution, you can basically quantize the scale (size) of your light's ortho frustums... Compute the AABB like you were doing originally, then round the width and height of the frustum up to the next multiple of some value. (The actual value will depend on the scale of your world, and how much shimmer you can tolerate). This basically means that you only get the shimmer occasionally during camera rotation (when the AABB scale crosses the quantization threshold), but in exchange for that, you get tighter fitting bounds (which means higher effective shadow map resolution).   ... and you still need to do the thing where you snap translations to texel size increments, like Alundra said.
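A sketch of both stabilization tricks, assuming a simple light-space bounds struct; OrthoBounds, the QUANTUM constant, and shadowMapResolution are placeholders for illustration:

```cpp
#include <cmath>

struct OrthoBounds { float width, height, centerX, centerY; };

OrthoBounds StabilizeShadowBounds(OrthoBounds b, int shadowMapResolution)
{
    // 1) Quantize the frustum size: round width and height up to the next
    //    multiple of QUANTUM, so the scale only changes when the scene AABB
    //    crosses a threshold (occasional shimmer instead of constant shimmer).
    const float QUANTUM = 5.0f; // world units; tune to your scene's scale
    b.width  = std::ceil(b.width  / QUANTUM) * QUANTUM;
    b.height = std::ceil(b.height / QUANTUM) * QUANTUM;

    // 2) Snap the frustum center to texel-sized increments so camera
    //    translation doesn't slide the shadow map by sub-texel amounts.
    const float texelX = b.width  / float(shadowMapResolution);
    const float texelY = b.height / float(shadowMapResolution);
    b.centerX = std::floor(b.centerX / texelX) * texelX;
    b.centerY = std::floor(b.centerY / texelY) * texelY;

    return b;
}
```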
  10. Our engine uses a greedy algorithm that's basically what you described. I implemented it a long time ago for the exact same reasons that you're explaining, and it's worked fine. I'm sure that we occasionally end up with models needing one or two draw calls more than the optimal solution, but - as you've guessed - finding that is significantly more difficult, and definitely not worth the effort.
  11. With so little information, we're not really going to be able to help. What we *can* do is help you help yourself. Have you tried using a graphical debugging tool to analyze what's going on? If not, I'd suggest RenderDoc (https://github.com/baldurk/renderdoc) or VSGD (Part of Visual Studio 2012+). Other options include Intel's GPA or the much older PIX for Windows (which tends to not work well if you have the Windows Platform Update installed). That will let you (at a minimum) look at your resources (textures and frame buffers) during the frame, which should help narrow down what's not working.
  12. You can get this information directly in the pixel shader. In HLSL these days, if you declare an input to your pixel shader with the SV_Position semantic, it will be filled with the screen coordinates of the pixel you're rendering. That should be exactly what you need to get the scanline effect.
  13. Honestly, as smart as Aras may be - his suggestion for this problem is pretty bad. Assume that the actual range of depth values for your draw calls spans a few orders of magnitude (maybe from 10 - 10,000). That means that your floating point depth is using 8 bits to encode about 4 bits worth of information (you're covering values from about 2^3 to 2^16, or 13 unique exponents). That's pretty wasteful, and doesn't give you much actual precision in your remaining bits.   If you're going to be storing all (or nearly all) of your floating point bits in your sort key, then this works fine. But if you're trying to quantize down to ~10 bits or whatever, you're wasting quite a bit of potential precision. Normalizing to a single exponent [1,2) and using the highest bits of the mantissa will provide much better precision. (Which is pretty much just the same as normalizing to [0,1), multiplying by 2^N, and casting to int.)
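A sketch of that alternative, assuming the depth range of the draw calls is known up front; the function name, nearZ/farZ parameters, and the 10-bit key width are placeholders:

```cpp
#include <cstdint>

// Build an N-bit depth sort key by normalizing against known depth bounds,
// rather than reinterpreting raw float bits.
uint32_t DepthSortKey(float depth, float nearZ, float farZ)
{
    const int SORT_BITS = 10;

    float t = (depth - nearZ) / (farZ - nearZ); // normalize into [0,1]
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;

    // Scale into the N-bit integer range and truncate: every bit of the key
    // now carries positional information instead of a redundant exponent.
    return static_cast<uint32_t>(t * float((1u << SORT_BITS) - 1));
}
```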
  14. I don't doubt that there are applications that have these requirements, but I do want to ask: why? What exactly are you doing that needs that much (true) random data? (As opposed to cryptographically secure pseudo-random, or pseudo-random with other guarantees on distribution, etc...)
  15.   I'm guessing 5 (haven't tested this yet). The explicit tag on the constructor prevents conversion of the 0 to an X, so we have to convert to the type of the second argument (the zero). But that's an integer literal, not a floating point or double one - so we convert x to double, then to integer, which rounds to 5.
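For context, a hypothetical reconstruction of the kind of expression being described - the class X, its explicit constructor, and the 5.4 conversion value are invented here to illustrate the conversion rules, not taken from the original snippet:

```cpp
#include <cstdio>

// Hypothetical reconstruction for illustration only.
struct X
{
    explicit X(int) {}                      // explicit: 0 cannot implicitly become an X
    operator double() const { return 5.4; } // but an X can implicitly become a double
};

int main()
{
    X x(1);
    bool cond = true;

    // 0 can't convert to X (the constructor is explicit), so x is converted to
    // the type of the other operand: int. That goes X -> double -> int,
    // truncating to 5.
    auto v = cond ? x : 0;

    std::printf("%d\n", v); // prints 5
    return 0;
}
```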