Community Reputation

172 Neutral

About Polarist

  1. Ah, I see, that sort of calculation should be relatively cheap compared to what I was imagining. Thanks for explaining. This is good to know. A lot of what I know about game programming is, unfortunately, about 10 years out of date. It doesn't help that I was recently reading a bunch of articles from Intel to catch up, and they may be blowing the bandwidth issue out of proportion (I don't know, just a guess).
  2. That's not necessarily true; you can implement an "oct-tree" with a b-tree under the hood, where the recursion step cycles over each axis. The number of recursive steps would then be similar in complexity to a BSP-tree. But I think my question was poorly worded. Simply put, I'm curious whether it's worth it to "choose" a splitting plane versus naively cutting at some midpoint in the dynamic real-time situation you were describing. The obvious difference is that the naive cut would be O(1), while choosing a plane would (presumably) be at least O(N). That cost would of course pay off in faster searches. If the tree were generated beforehand and kept static, it would clearly be better to load the cost up front, but for a real-time, dynamically generated tree, I'm wondering whether such an approach is generally still worth it. It's a bit of a subjective question, and it calls more for intuition and experience than a purely academic answer, I suppose. I'm also wondering if there's a subtle point here that you're describing. If you're just talking about memory allocations, then of course it's almost never an issue, but isn't memory bandwidth a large bottleneck for speed these days? I don't work in the games industry, so I'm not aware of the current state, but isn't it also a challenge to maintain cache coherency in collision detection systems? Or is cache coherency mostly a solved issue at this point?
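To make the trade-off concrete, here is a minimal single-axis sketch (function names are hypothetical) contrasting the O(1) midpoint cut with an O(N) median-chosen plane:

```cpp
#include <algorithm>
#include <vector>

// Naive O(1) split: cut at the spatial midpoint of the node's bounds,
// ignoring where the objects actually sit.
float midpoint_split(float lo, float hi) {
    return 0.5f * (lo + hi);
}

// "Chosen" O(N) split: place the plane at the median object position so
// both children receive an equal share of the objects. nth_element is
// linear on average, so this is the per-node cost you pay up front.
float median_split(std::vector<float>& positions) {
    auto mid = positions.begin() + positions.size() / 2;
    std::nth_element(positions.begin(), mid, positions.end());
    return *mid;
}
```

The median version keeps the tree balanced even when objects cluster in one corner, which is exactly the case where the midpoint cut degenerates.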
  3. So of course, it doesn't *really* matter what you include as long as it compiles. The typical rule of thumb, however, as mentioned above, is to include as little as possible in each place (both in the .h and in the .cpp, meaning that most includes per translation unit should end up in the .cpp file). This is primarily for two reasons: managing compile time and managing dependencies. Compile time is the obvious one: every time you modify a file, everything that includes it needs to be compiled again, and clearly you'd rather spend more time coding and testing than waiting for things to compile. Managing dependencies is the other: your includes document how many dependencies each file has. As a program grows complex, you generally want pieces of it to be as independent as possible from the other pieces, so you can gauge how many dependencies something has by looking at how many includes sit at the top. (This heuristic only works if you follow the rule above of having as few includes as possible.) But of course, this rule shouldn't always be followed 100%, especially if you're working in a small team or individually. There's a competing idea of optimizing for coding time ("optimizing for life"), which means you should do these things only to the point at which you're actually gaining time in the long run. If you find that keeping some often-used things in a big "globals.h" saves you time in the end, and you know you can manage your project's complexity well, then consider putting that stuff in the global include file. For instance, if you wrote a run-time debugger, a profiler, or even a global game state that you need to query almost everywhere, just put it in globals.h to save yourself typing. If you know where your dependencies are, you can always fix up your includes later if you wish.
Especially towards the beginning of a project, when you're still prototyping a lot of different systems, I think it makes sense to use larger global includes. Other people may have other wisdom to share in this regard, though, as when and where to use global includes is pretty subjective.
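As a small illustration of keeping a header's include count down: when a header only stores a pointer or reference to a type, a forward declaration suffices, and the real include can move to the .cpp file. (Class names here are hypothetical.)

```cpp
// Widget.h-style sketch: Renderer appears only as a pointer, so a forward
// declaration is enough; Renderer.h would be included only in Widget.cpp.
class Renderer;  // forward declaration instead of #include "Renderer.h"

class Widget {
public:
    explicit Widget(Renderer* r) : renderer(r) {}
    bool hasRenderer() const { return renderer != nullptr; }
private:
    Renderer* renderer;  // a pointer's size is known without the full definition
};

// Renderer.h (elsewhere) would hold the full definition:
class Renderer { /* ... */ };
```

Every file that includes Widget.h now avoids a recompile when Renderer.h changes, which is the compile-time win described above.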
  4. I'm about to implement some broad phase checks in my engine, and I'm curious about the approach you've listed here. In practice, how much better is it to "sensibly" construct a BSP-tree than to "naively" divide the scene by some notion of a midpoint (as in a typical quad- or oct-tree)? So far, I've been leaning towards a purely naive oct-tree construction. Has your experience with the "sensible" BSP-tree shown it to be worth it over a simple oct-tree?
  5. Take a look at Screen-Space Ambient Occlusion (SSAO); it's a relatively cheap way to achieve AO. I'm not sure what effect you're referring to in particular, as there are a lot of things going on in those screenshots: specular maps, reflections, and lens flares, to name a few.
  6. If you want a normal vector like the one he describes, then you need to know the polygon the player is "standing" on. This requires a much finer calculation than a bounding box check, since you need the exact polygon to get that normal vector. How expensive it is depends on the complexity of the world, but compared to the super-cheap alternative of subtracting two vectors, it's extremely expensive. It's not that you shouldn't do collision casts per frame; you do these all the time for things like shooting projectiles. It's just that you only have so much processing power to spare if you want your game to run smoothly, so you'd rather save those calculations for other things. One of the guys mentions that the reason there are only so many monsters in the game is that more would be too much for the hardware. Also, keep in mind that the hardware here is the PS1 (I believe), so we're talking about much weaker hardware than today's.
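The "super-cheap alternative" for a spherical world looks roughly like this minimal sketch (the Vec3 type and function names are made up for illustration; this assumes a world with a known center):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// On a spherical world, "up" is just the direction from the world's center
// to the player: one subtraction and a normalize, no mesh query at all.
Vec3 sphere_up(Vec3 playerPos, Vec3 worldCenter) {
    return normalize(playerPos - worldCenter);
}
```

Compare that handful of multiplies to walking the collision mesh to find the exact polygon under the player; that is the gap being described.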
  7. You're welcome. I edited the original post to add more detail (I may have veered a bit off track, though). Feel free to ask here or PM me if you have any further questions.
  8. You're talking about different concepts. Practically every 3D game will find a use for forward, up, and right vectors, but each entity also needs direction, position, and velocity vectors. In the context of the video, forward, up, and right are constantly changing (as opposed to most games, which can hardcode "up" as an arrow pointing along the positive y axis, so that only forward and right change). The three directions are basic inputs used, among other things, to orient the player and the camera (to set up a "view matrix" and "projection matrix"). Their optimization was that in a spherical world, the up vector can be calculated simply by taking the position of the player and subtracting the center of the world. This makes the right vector cheap to calculate, compared to the more general approach of finding the normal vector of the polygon you're standing on (the collision approach he describes as super expensive). I watched the entire video (thanks, it was interesting), but they did not directly compare forward/up/right to direction/position/velocity. They are not competing concepts: forward, up, and right are useful constructs for describing orientation in general, whereas direction, position, and velocity are useful for describing movement and physics. Sounds like just semantics. I think you might be over-thinking the concept if you understand vectors, cross products, and dot products (and other basic ideas in linear algebra). These are super, SUPER basic ideas, so if they're confusing you, you're probably thinking about them the wrong way. Again, just to make it clear: EVERY 3D game will use vectors that represent forward, up, right, direction, position, velocity, acceleration, etc. These are just terms for what a particular vector is trying to describe. Unless I missed something, there aren't "two approaches" here.
Don't even think about vectors as "a direction and a magnitude"; that idea is limiting, and it makes them out to be special in a sense. Really, the best way to think about it is that a vector is just a number. (Sure, it's actually 3 numbers, but just think of it as a bloated "number".) So a vector *could* be used to describe a point, or a direction, or a direction and a magnitude, but that's attaching special meaning to it. It just happens that in a 3D world, it's convenient to use 3 numbers at a time to describe things. That's really all there is to it. (Tangential side note: in higher mathematics, vectors are useful because you often want to describe things in higher-dimensional or even n-dimensional space, without pinning the problem down to 1-D, 2-D, 3-D, or 12,345-D. To grasp those concepts, you'll have to "unlearn" the stuff about "directions" and "magnitude" if you've internalized it too much.)
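To show how ordinary these basis vectors are, here is a tiny sketch (the Vec3 type is made up, and a left-handed convention where right = up × forward is assumed) in which the third basis vector falls out of one cross product:

```cpp
struct Vec3 { float x, y, z; };

// With "up" and "forward" in hand, "right" is one cross product away.
// (Left-handed convention assumed: right = up x forward.)
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}
```

With the cheap spherical-world "up" and a stored "forward", this one call completes the orientation basis each frame.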
  9. I'm very interested in this approach, could you share any resources that you find particularly useful for implementing this approach?  It seems like a very elegant way to approach this issue, and would be much cleaner syntax than what I'm used to doing (i.e. shenanigans in constructors or a clunky factory approach).
  10. @Milcho The solution you posted locks the cursor to the window, while the solution I posted above locks it to the "client area." The difference is that the window rect lets the player move the cursor onto the titlebar and edges of the window, which means they can drag the mouse "out of the game" and interact with the window elements (dragging or resizing the window), which in my implementation would be unwieldy. As for crashes: I read somewhere that older versions of Windows (perhaps pre-Vista) would indeed keep the mouse locked to the clip rect if the game crashed, and the suggested alternative was to simulate the ClipCursor behavior with manual SetCursorPos calls. I haven't tested any of this personally, though. Raw input (WM_INPUT) lets you separate the input from the mouse from the pointer representation in the OS (WM_MOUSEMOVE). Thus you can do things like locking the mouse cursor in place (e.g. ClipCursor on a 1x1 rect) while still receiving move events. I found that using raw input was no more complex than window messages; in both cases, you simply get passed a dx and dy. The only logically different piece was checking for "absolute position" devices such as touchpads, and even there you simply need a subtraction before you have a dx and dy. So while the complexity is substantially the same, it is important to note the differences in the data. WM_MOUSEMOVE is affected by mouse settings in the OS: if the user changes his mouse acceleration or sensitivity, the dx and dy are scaled accordingly. Raw input won't reflect those changes. So to answer your question, there are advantages to it, and it's not substantially harder to code. I settled on a hybrid solution to support the RPG controls you described, using WM_MOUSEMOVE when the pointer is visible and WM_INPUT when rotating the camera. This appears to be the most robust solution (unless there are alternatives I'm not aware of).
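The "absolute position" subtraction mentioned above might look like this minimal sketch (the struct and function names are hypothetical, not actual Win32 API; the idea is just that one remembered position turns absolute samples into deltas):

```cpp
#include <utility>

// Tracks the last absolute position seen so a relative dx/dy can be
// recovered by subtraction; the first sample yields no movement.
struct AbsoluteTracker {
    long lastX = 0, lastY = 0;
    bool hasLast = false;
};

std::pair<long, long> absolute_to_delta(AbsoluteTracker& t, long x, long y) {
    long dx = t.hasLast ? x - t.lastX : 0;
    long dy = t.hasLast ? y - t.lastY : 0;
    t.lastX = x;
    t.lastY = y;
    t.hasLast = true;
    return {dx, dy};
}
```

After this conversion, absolute devices feed the same dx/dy path as relative mice, which is why the added complexity is so small.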
  11.   Thanks for the link!  Even if it wasn't exactly what I asked, it was a good read and pretty helpful.   My system is quite similar to the one described, but it's nice to hear what works well for people after the fact.  I didn't however do the XML configuration file approach, but that's a good idea for user-friendly configuration.  I might have to come back and do that.
  12. It may not be as efficient, but I tend to use the STL everywhere (until I need to optimize). It's much easier to maintain and read:

const int NUM_THINGS = 10;
std::vector<int> myThings(NUM_THINGS);
for (const auto& thing : myThings)
{
    do_something_with(thing);
}

It also makes it quite difficult to accidentally overrun your buffer.
  13. I vote for this. Clear and to the point. But honestly, if you're set on that combination of functionality (file I/O AND database queries), then you'll want a fairly non-standard name to signal that it carries a non-standard role. I would have suggested something like DBBasicConnection because of its similarity to the standard idea of a Connection, UNTIL you mentioned that the insert() and update() functions take filenames; that kind of throws "standard" out the window.
  14. Unity

      That's pretty impressive for a young guy like yourself. I wish you good luck and hope you keep us posted!
  15. If you're still considering the multithreaded approach, take a look at the Smoke demo from Intel, which builds an n-core scalable game engine using a task-based approach (on top of TBB). A task-based approach makes it quite painless to construct multithreaded processes. To get up and running, it took me just a couple of hours to go from a single-threaded design to one achieving 100% CPU utilization across 4 cores during a few of my processing-heavy operations. There was certainly much more work to be done afterwards to fully take advantage of multithreading, but it was quite easy to speed up the performance-sensitive pieces for my needs. The idea behind a task-based, scalable solution is basically to "future-proof" the engine for later developments in hardware and to decouple it from the number of physical cores available on the machine; i.e., the engine should maximize hardware usage regardless of whether you are allotted 1 or 20 cores. There are fairly modern game engines that take the older approach of assigning subsystems their own threads (e.g. the renderer gets one thread, the physics system gets one thread, etc.), but as far as I'm aware, that method is antiquated and no longer advisable. The "new" way is basically to cut all your game's subsystems up into smaller, independent tasks, and then hand those tasks to cores as they finish their current ones. So, for instance, your physics calculations could be split into 8 tasks, your render queue construction into 4 tasks, your particle systems into 6 tasks, whatever. You do have to keep in mind that certain systems need to run after others, but you only need to ensure that at a high level. Given the high degree of concurrency, you should also be using lock-free data structures and considering thread-local storage.
But after all those tasks are completed, you reconstruct their outcomes into a completed game state that you can pass on for rendering. Beyond that model of concurrent programming, there's also the question of how asynchronous you want your "game" to be from your renderer. From my brief dive into the topic, I understand there are two general architectures to consider: double-buffered and triple-buffered. In a double-buffered setup, you have two copies of your game state: the "last" frame and the "next" frame. The "last" frame is read-only at this point; the renderer reads from it for drawing, and the other systems read from it to calculate the "next" frame. The subsystems should have clear ownership of write access to different parts of the "next" frame. One benefit of this second buffer is that it saves you from the headache of making sure every little thing happens in the correct order, as it removes a lot of potential for collisions. In this approach, as soon as all the subsystems and the rendering are complete, you swap your "next" frame with your "last" frame and repeat. The triple-buffered setup is a similar idea, except that the game and the renderer do not need to run in lock-step. The three buffers can be delineated as the one "being rendered", the one "last fully updated", and the one "currently being updated". When the renderer finishes rendering, it immediately moves on to the "last fully updated" buffer (unless it's already there). When the subsystems finish calculating, they reassign the "currently being updated" buffer as the "last fully updated" one and begin storing the next game state in the previous "last fully updated" buffer. With this approach, the renderer only slows down if it's already on the last fully updated buffer, and the subsystems never have to wait for the renderer to finish.
Also, note that these buffers should not be block-copied over each other; use pointers or indices to denote which buffer currently has which role. And if memory is a large concern, rather than keeping full buffers, you can manage a queue of changes instead. Anyway, I hope I shed some light on the high-level details of multithreaded engine programming. There appears to be a lot of development in the area, and it's one I find quite interesting.
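The index bookkeeping for the triple-buffered scheme can be sketched roughly like this (single-threaded, with hypothetical names, just to show the role swaps; a real version would guard the indices with atomics or a lock):

```cpp
#include <utility>

// Three buffer slots identified by index; only the indices rotate, the
// buffers themselves are never block-copied.
struct TripleBuffer {
    int rendering = 0;    // buffer the renderer is drawing from
    int lastUpdated = 1;  // most recent fully updated game state
    int updating = 2;     // buffer the subsystems are writing into
    bool fresh = false;   // does lastUpdated hold a newer state than rendering?

    // Subsystems finished a frame: publish it and reuse the stale buffer.
    void publishUpdate() {
        std::swap(lastUpdated, updating);
        fresh = true;
    }

    // Renderer finished a frame: move to the freshest state, if any.
    void acquireForRender() {
        if (fresh) {
            std::swap(rendering, lastUpdated);
            fresh = false;
        }
    }
};
```

The `fresh` flag is what keeps the renderer from swapping back onto a stale buffer when it outpaces the update side, while the update side can always publish without waiting.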