Popular Content

Showing content with the highest reputation since 09/17/17 in all areas

  1. 14 points
    I have been a moderator here for about a decade. GDNet is a great community, and being part of the moderation team has been a wonderful experience. But today is, more or less, my last day as a part of that team. As it does when one gets older, life has increasingly intruded on the time I'd normally spend here. Fortunately all those intrusions have been (and continue to be) good things, but just the same I don't feel like I have the time to really do the job justice, and so I am stepping down. One of the remaining moderators will take over the forum in my place, although I don't know who that will be yet. Although it's very likely I'll be much less active for the next few months, I am probably not going away forever, and can be reached via private message on the site if needed. Thanks for everything, it's been great!
  2. 9 points
    Both or neither, as needed. Cache coherency is great, but it's trivial to design cache-coherent algorithms in a vacuum ("oh, just put all the components in a big array, update them in a loop, DONE"). It's much harder to design cache-coherent algorithms that adapt to actual practical use ("oh, it turns out that this component needs to read the position, which means now we might be blowing cache coherency to read that from a different component, or blowing it to copy it into this component earlier," et cetera). What you want to do is design how you store your data in memory in a fashion that is efficient for the way you will access and transform that data. "Access," importantly, includes not just how the memory will actually be fetched by the CPU, but also how you will actually get at, use, connect, et cetera, that data in your APIs. If you bend over backwards to make some components stored in a big cache-coherent array, but you never actually need to update all those components at once, have you really gained anything? Worse, if by doing so you've made it vastly more complex to use those components at the API level, have you actually improved anything? The reason that there are no generalized, broad, great answers to this problem is that the devil is in the details. So consider the purpose of each component or other piece of data or functionality you're adding to your system, how you want to interact with it at the API level, and how you need to interact with it at the implementation level, and weigh the pros and cons of every available approach with that in mind. And make a decision for that problem that might be different from the decision you'll make for the next. In general, I'd aim for cache coherency when I can, as long as it doesn't sacrifice usability in any fundamental way.
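To make the "big contiguous array" idea concrete, here is a minimal C++ sketch of the layout the post describes. All names here are hypothetical (this is not a full ECS, and not from any particular engine): positions and velocities live in parallel packed arrays so the per-frame update is one linear pass over memory.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical names; a minimal sketch of densely packed component storage.
struct Vec2 { float x, y; };

struct MovementSystem {
    std::vector<Vec2> positions;   // parallel arrays: index i belongs to
    std::vector<Vec2> velocities;  // the same entity in both

    std::size_t add(Vec2 pos, Vec2 vel) {
        positions.push_back(pos);
        velocities.push_back(vel);
        return positions.size() - 1;
    }

    // The hot loop: one linear, cache-friendly pass with no pointer chasing.
    void update(float dt) {
        for (std::size_t i = 0; i < positions.size(); ++i) {
            positions[i].x += velocities[i].x * dt;
            positions[i].y += velocities[i].y * dt;
        }
    }
};
```

The trade-off the post warns about shows up as soon as another component (say, rendering) needs to read `positions` too: either it jumps into this array from elsewhere, or the data gets copied, and the tidy coherency story gets complicated.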
  3. 7 points
    This type of question comes around soon after most major violent crimes. There have been actual studies on the topic. Many of them. They've been published in all kinds of journals and such. The results are clear and consistent. Most studies looking for a positive correlation never find it, although players often stay in the same emotional frame for a short time. Playing a horror game can put you in an anxious state of mind; playing a music game can help you feel musical. Some people find it odd, but the emotional frame for violent games is generally not a violent or cruel mindset; instead it is a competitive mindset. The games put you in the same mindset as a chess tournament, or a boxing competition, or a football game, or a debate competition. The emotions are competitive and most people focus on performing their best. Playing a game that includes violence tends to help you feel upbeat and positive if you're winning, or feel the "agony of defeat" upon a loss, usually triggering introspection about corrections and improvements rather than aggression. Instead of a positive correlation with real-world violence, broad studies find an inverse correlation across society as a whole. The more access people have to violent games, it seems, the less inclined they are to commit violent acts. I've talked about this with people who actually study it. One theory is that people who are frustrated or feeling mildly aggressive have a safe place to explore the emotion, which helps them cope. Another theory is that those who would naturally be more prone to aggression (young men) will group together and use their aggression outside the real world, planning raids and other actions in the online world rather than the physical world. So while games clearly have an impact on people and mood, that impact usually ranges from neutral to very positive. A few rounds of my favorite game (violent or not) are a great way to cheer me up.
  4. 7 points
    What is this all about? This forum was created by request from a recent community discussion. Game development challenges will be posted here, and this forum can be used as a gathering place for developers participating in the challenges. More to come, including rules and procedures, as we work with the community to figure out how best to manage these GameDev Challenges.
  5. 7 points
    Introduction This article attempts to answer one of the most asked questions on GameDev.net: "I am a beginner. What game should I make?" If you are a beginner at game development, you should definitely read this article. Your First Step to Game Development Starts Here You've learned a language. Skills are at the ready. Now it's time to finally create that game! After years of playing games and discussing graphics, design, and mechanics, the time has come to put the game you've been dreaming about on the big screen. That Final Fantasy 7 remake is the first game that needs to be made. Making it an online multiplayer game is also a priority. Why? It would make the game better, of course. Plus, everyone wants that feature anyway. Now the googling begins. What engine was it made with? What graphics does it need? Did it use DirectX or OpenGL or even Unity? Every new question produces two more questions. It gets to the point where the tasks and the goal can be overwhelming. But with ambition the goal can be met! Right? Unfortunately, as many beginners come to find out, the complexity of a game can temper ambition and in some cases completely put out the flame. However, this article will help you prevent that and also build up your game programming skills. But first let's address some issues that a vast majority of beginners run into. Many beginners ask what language they should be using. The answer is this: any language. Yes, that's right: C, C++, D, Java, Python, Lua, Scheme, or F#. And that's the short list. The language is just a tool. Using one language or the other does not matter. Don't worry about what the professionals are using. As a beginner, the only priority and goal is having the tools to create and complete the game. The second priority is to improve your code and skill. We'll get into that a bit later. Remember there is no "which language is better?" or "which language is best?".
The only question anyone with experience will ask is: "How much experience do you have with the language?" At this stage in the game development journey, familiarity and skill with whatever language you are using is more important than the language itself. This is a universal truth. With your language of choice in hand, now is the time to choose how you will make the game. The choice is normally among a game development library, a game engine, or a game maker. Which should you choose? If you are prototyping or are not a programmer, then a game maker would probably be the best choice. Game makers (ex: Game Maker, RPG Maker, ENIGMA) are able to create games (which I list later on) similar to the games of the 8-bit and 16-bit era. Game makers do some of the heavy lifting for you (ex: loading/handling image formats, input handling, integrating physics) and allow you to focus on the game itself. Most, if not all, game makers have a language specific to them to handle more advanced techniques and situations. The language more often than not is similar to C/C++/JavaScript. Game engines such as Unreal, CryEngine, and Unity handle all manner of games (ex: 2D, 3D, offline, online) and therefore are far more complex and in some cases complete overkill for the type of games a beginner should be making. Game engines should be used once you as a game developer have several games under your belt and fully understand the mechanics of making a game. For most beginner game developers, especially those who are programmers, choosing a game development library is a good choice. It puts those programming skills to the test and allows discovery of more things about the language. Like the language, the library doesn't matter. Whether it's SDL, SFML, PyGame (for Python only), or Allegro, the library is just another tool to create and complete the game. Game dev libraries have more or less the same features, so whichever you pick will be fine to get the job done.
Finally, after gathering our tools (a language and a game dev library or game maker software), we are ready to answer the most important question of all: "What game should I make?" For this question, the answer has to be approachable, doable, and for the most part understood. This is why your first game should be 2D. Everyone understands 2D games. 2D games can be fancy, but at their core they are basic, with very few "moving parts". Believe it or not, the previous question has a definitive answer, and that answer is Pong. Now you may wonder, "Why?" Well, Pong is one of the simplest games known. The graphics are simple and the mechanics are simple. There's no guesswork about how Pong should work. Therefore it's the best candidate for the first game to be made. Each new game that is presented in this article is meant to show something new and/or build upon what the last game taught you. The skills that are learned from each game are cumulative and not specific to just one game. So each game is a step up in terms of difficulty, but nothing a little brainpower and some brute force can't solve. Now I'll list some well-known games that will definitely help your game development skills and allow you to have actual complete games under your belt. I'll quickly point out some things that will be learned from each game.
These games are:

- Pong: simple input, physics, collision detection, sound, scoring
- Worm: placement of random powerups, handling of screen boundaries, worm data structure
- Breakout: the lessons of Pong, plus powerups and maps (brick arrangements)
- Missile Command: targeting; simple enemy AI, movement, and sound
- Space Invaders: simple movement for player and enemy; very similar to Breakout, except that the enemy constantly moves downward; simple sound
- Asteroids: asteroids (enemies) and player can move in all directions; asteroids appear and move randomly; simple sound
- Tetris: block design, clearing the lines, scoring, simple animation
- Pac-Man: simple animation, input, collision detection, maps (level design), AI
- Ikari Warriors: top-down view, enemy AI, powerups, scoring, collision detection, maps (level design), input, sound, boss AI
- Super Mario Bros: the lessons of Ikari Warriors (except with a side view instead of a top-down view), plus acceleration, jumping, and platforms

The list shows games in order of programming difficulty, from least to greatest. There are other games that people may suggest, but these 10 games will definitely round out what you need to know in 2D game development. If you can make and complete these games, then games like Sonic, Metroid, or even Zelda become that much easier. Those games are just variations or extensions of what you have already learned. Before I end this article, I would like to say something about completing the games. As I said above, your primary goal is making and completing the game. However, your secondary, and arguably just as important, goal is to refine your game. It's safe to say that 99% of programmers do not code Pong perfectly. Most likely your first or even second go-around with Pong or Worm will not be a software architecture masterpiece. And it's not supposed to be. However, to improve your code and therefore your skill, you'll have to submit code for a code review.
As anyone will attest, this is A Good Thing™. Allowing others to proofread your code will give you insights into better structure, better practices, and dangerous code that may work now but may crash in the near or far future. Being introduced to good advice early in your game dev journey will save you from sudden and unnecessary meetings between your head and the keyboard. As you complete one game and move on to the next, do not kick that completed game into the corner. Go back (sooner rather than later) and refactor and improve the code. Doing so is proof that you understand the advice that others are giving you and shows that your skills have indeed improved. So in short, the process of making a game should be: create > complete > code review > refactor. Again, once you submit your code for review, go on to the next game on the list, but remember to go back and improve that code after you get some feedback. If you've made it to the end of this article, you now have a path to start your game development journey. This article is meant to be a guide and not a be-all, end-all to game development. Hopefully the advice given here helps others start to become better game developers. NOTE: You can post code reviews in the For Beginners forum. Also, it is easier on the people checking your code if you post all the code in your post.

Article Update Log
26 Mar 2013: Grammar edits, code review clarification
19 Mar 2013: Initial release
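To make the "start with Pong" advice concrete, here is a hypothetical, rendering-free sketch of Pong's core rules in C++. Every name and number here is invented for illustration; input and drawing are left to whichever library you picked (SDL, SFML, etc.), and only the left paddle is handled, since the right side is a mirror image.

```cpp
#include <cassert>
#include <cmath>

// A headless sketch of Pong's core mechanics: ball movement, wall bounces,
// paddle collision, and scoring. Call step() once per frame/tick.
struct Pong {
    float ballX = 40, ballY = 15;      // ball position
    float velX = 1, velY = 1;          // ball velocity per tick
    float leftPaddleY = 12;            // paddle center
    float paddleHalf = 3;              // half the paddle's height
    int   fieldW = 80, fieldH = 30;    // playfield size
    int   leftScore = 0, rightScore = 0;

    void step() {
        ballX += velX;
        ballY += velY;
        // Bounce off the top/bottom walls.
        if (ballY <= 0 || ballY >= fieldH) velY = -velY;
        // Left paddle sits at x == 1: reflect if the paddle covers the ball.
        if (ballX <= 1 && std::abs(ballY - leftPaddleY) <= paddleHalf) {
            velX = -velX;
        } else if (ballX < 0) {
            // Ball got past the paddle: right player scores, ball resets.
            rightScore++;
            ballX = fieldW / 2.0f;
            ballY = fieldH / 2.0f;
        }
        // Mirrored right-side paddle/scoring logic omitted for brevity.
    }
};
```

Even a toy like this exercises the article's checklist for Pong: input (moving `leftPaddleY`), physics, collision detection, and scoring.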
  6. 6 points
    Allocating memory on the stack is faster than allocating memory on the heap... because it's a much simpler operation, and the downside of that is that... well, it's a stack. When you exit the current scope (return from a function), all the stuff you created gets popped off the stack. To manage more long term allocations you need a different allocation strategy which is pretty much required to be more expensive, because it's going to be more complicated. Normally when you run out of stack space, it's called a "stack overflow" and it crashes your program... so increasing stack size will only do something if you're already crashing due to stack overflow... and if you're not already crashing, then it will do nothing at all. FWIW though, on many of the older console games that I've worked on, when implementing our own memory allocators, we used a stack-based mark-and-release algorithm. It uses a stack of bytes, but not "the stack", and is still as fast as allocating on "the stack"... but that's a super advanced topic given the context... The heap is not that slow. Malloc is not that slow. You can do thousands of mallocs per frame and not care. Sure, allocating a thousand integers on the stack is free -- it literally results in no instructions at all, because it's not doing anything -- so allocating integers from the heap is infinitely more expensive! Infinity is a lot!!! But it still doesn't matter. Profile your code. Time how long things take. Freak out when things take several milliseconds. Don't do cargo cult dogma without collecting data first (and afterwards for comparison!).
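The "stack of bytes" mark-and-release allocator the post mentions can be sketched in a few lines. This is a hypothetical illustration, not any particular console SDK's allocator: allocation is a pointer bump, and releasing back to a mark frees everything above it in O(1), with no per-allocation bookkeeping.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// A minimal mark-and-release (linear) allocator sketch. It uses its own
// buffer of bytes, not "the stack", but allocation is just as cheap.
class StackAllocator {
public:
    explicit StackAllocator(std::size_t bytes) : buffer_(bytes), top_(0) {}

    // Bump-allocate `size` bytes; `align` must be a power of two.
    // Alignment is relative to the buffer start (fine for a sketch, since
    // vector storage from operator new is max_align_t-aligned).
    void* alloc(std::size_t size, std::size_t align = alignof(std::max_align_t)) {
        std::size_t aligned = (top_ + align - 1) & ~(align - 1);
        if (aligned + size > buffer_.size()) return nullptr;  // out of space
        top_ = aligned + size;
        return buffer_.data() + aligned;
    }

    std::size_t mark() const { return top_; }   // remember the current top
    void release(std::size_t m) { top_ = m; }   // free everything above it

private:
    std::vector<std::uint8_t> buffer_;
    std::size_t top_;
};
```

Typical use is scoped: take a `mark()` at the top of a frame (or level load), make as many `alloc()` calls as you like, then `release()` the mark to throw it all away at once. The catch, as with the real stack, is that you cannot free individual allocations in the middle.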
  7. 6 points
    Get over it. Anybody who tries to make you feel like less of a programmer for using tools like Unity is not worth listening to, and there's a very good chance you'd discover they're preaching at you from a tower of hypocrisy anyway. Unity is a good tool, it lets you make games, and that's what matters. Real game developers make games, using the tools that best fit the task and their hands. Most "gamers" do not understand the first thing about making games, or what might be the cause of any particular problem they find with a game. They're very eager to jump on the big, obvious things like the engine the game was made with because they lack sufficient domain knowledge to really understand what's going on. Don't worry too much about what they think about the tools you use. The first thing you should do is disabuse yourself of this ridiculous notion, as I've noted above. There is nothing wrong with wanting to move beyond Unity and explore different ways to build games at all - I am not trying to discourage you from that, in the end. What I am trying to discourage you from is doing it for the wrong reasons. If you leave Unity because you think it's a "crutch" for "fake programmers," you may very well become one of those people above who isn't worth listening to, who spends all his or her time railing against how terrible Unity is and how much better a programmer they are for not using it, but somehow never manages to actually ship anything or land a job or whatever their career goal is. Who preaches the virtues of "building it all yourself," but didn't actually build the compiler they use, the graphics API they leverage, the OS they work in, the computer they run it on, the chips that comprise said computer, et cetera. There are good reasons to build things yourself, but they do not involve "because using things built by other people makes you less of a programmer." 
Make your decisions for the right reasons, and don't let others shame you into walking away from a tool you find useful and productive. Pick a simple game archetype you're familiar with, something similar to a game you've made in Unity, and build it. Except instead of Unity, try using a framework (or set of frameworks) that provides less of a ready-made building environment for you. Consider using MonoGame, for example. This will let you continue to use C# and to build something you're reasonably familiar with (so you don't have to deal with quite so many new architecture decisions), but will also allow you to get experience with building some small subset of the bits of functionality Unity has already provided for you. This will either give you a deeper appreciation of the utility of the tool, or expose you to a different methodology of building games that you prefer (or both). Either way it makes you better equipped to decide, for any future project, if the right tool to use is Unity or something you build more directly yourself.
  8. 5 points
    Assuming that you are actually using D3D12 like your tag implies, you can re-order the channels on SRV loads/samples using the Shader4ComponentMapping field of the SRV desc. See D3D12_SHADER_COMPONENT_MAPPING.
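As a sketch of what filling that field looks like: the constants below mirror the bit layout that `D3D12_ENCODE_SHADER_4_COMPONENT_MAPPING` uses in d3d12.h (three bits per destination component, plus an always-set guard bit), re-stated here only so the example is self-contained and checkable. In real code you would include d3d12.h and use the macro and `D3D12_SHADER_COMPONENT_MAPPING` enum directly.

```cpp
#include <cassert>
#include <cstdint>

// Mirrors the d3d12.h encoding of Shader4ComponentMapping.
constexpr std::uint32_t kMappingMask  = 0x7;
constexpr std::uint32_t kMappingShift = 3;
// The "always set" bit guards against a zero-initialized (invalid) mapping.
constexpr std::uint32_t kAlwaysSetBit = 1u << (kMappingShift * 4);

// Each of src0..src3 picks what the shader sees in .r/.g/.b/.a:
// 0-3 = resource component 0-3, 4 = force 0, 5 = force 1.
constexpr std::uint32_t EncodeMapping(std::uint32_t src0, std::uint32_t src1,
                                      std::uint32_t src2, std::uint32_t src3) {
    return (src0 & kMappingMask)
         | ((src1 & kMappingMask) << kMappingShift)
         | ((src2 & kMappingMask) << (kMappingShift * 2))
         | ((src3 & kMappingMask) << (kMappingShift * 3))
         | kAlwaysSetBit;
}

// Identity swizzle: what D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING expands to.
constexpr std::uint32_t kIdentity = EncodeMapping(0, 1, 2, 3);
// Swap R and B, e.g. to read a BGRA texture as RGBA in the shader.
constexpr std::uint32_t kSwapRB = EncodeMapping(2, 1, 0, 3);
```

In the SRV description you would then set `srvDesc.Shader4ComponentMapping = kSwapRB;` (or the equivalent `D3D12_ENCODE_SHADER_4_COMPONENT_MAPPING(2, 1, 0, 3)`) before calling `CreateShaderResourceView`.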
  9. 5 points
    Pretty much any complex class (where copying would be an issue) of mine will inherit from NonCopyable by default. If at some point later I actually run into a situation where I need to copy or move it, which honestly is very rare, I'll remove the NonCopyable base and actually think about the problem. Also, just say no to string processing in C++ ;D
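For anyone unfamiliar with the pattern, a NonCopyable base along these lines is all it takes; this is a generic sketch (the `RenderDevice` example class is invented), using C++11 deleted functions where the classic pre-C++11 trick was a private, unimplemented copy constructor and assignment operator.

```cpp
#include <type_traits>

// Inheriting from this deletes the derived class's copy operations.
class NonCopyable {
protected:
    NonCopyable() = default;
    ~NonCopyable() = default;
    NonCopyable(const NonCopyable&) = delete;
    NonCopyable& operator=(const NonCopyable&) = delete;
};

// Hypothetical example: a class owning resources that shouldn't be duplicated.
class RenderDevice : NonCopyable {
    // ... handles, buffers, other identity-laden state ...
};

static_assert(!std::is_copy_constructible<RenderDevice>::value,
              "copying is disabled by the base class");
static_assert(!std::is_copy_assignable<RenderDevice>::value,
              "and so is copy assignment");
```

One subtlety: deleting the copy operations also suppresses the implicit move operations, so a class derived this way is non-movable too unless you explicitly default or define its moves.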
  10. 5 points
    So the last couple of days I've been working on something that I've wanted to do for a long time now. I've been building it as part of a terrain editor I've been working on. It's still mostly incomplete, but so far you can create nodes, drag links to link/unlink them, then output them to a greyscale image. In the works, I've got a few node types to implement, and a lot of glue code to implement saving/loading node graphs, hooking them up to generate terrain and terrain layer weights, etc. But it's coming along pretty nicely. In the process, I have made a few fixes and additions to the noise library to help with matters. One of the main additions is the Fractal node. The library, of course, has had fractals since the beginning, but the way they were implemented made it tough to allow them in the node graph editor. So instead, I implemented a special function type that can take as input a chain of functions for the layer noise, as well as other functions to provide the fractal parameters. Internally, the function will iterate over the number of octaves and calculate the noise value. At each octave, the layer function chain is re-seeded with a new seed. This travels the function graph and sets a new seed for any values of Seed type in the chain. This small change has opened up some easier ways of making fractals. Additionally, I have added a Seeder module, which implements this internal re-seeding operation. I have also implemented a Randomize module. The Randomize module takes a seed or seed chain, and uses it to randomize an output value from within a range. It's kinda weird to talk about, so instead I'll demonstrate using a real-world solution to a common problem. Here is a fractal image of ridged noise: This fractal is generated by layering successive layers, where the input is a Value noise basis passed through an Abs operation before being summed with previous layers.
It creates the Ridged Multifractal variant, but you can see in that image that there are grid-based artifacts. These kinds of artifacts are common; each layer of noise is generated on a grid, and so the artifacts tend to amplify as the fractal is built. You can mitigate this somewhat using different values for lacunarity (the value that scales the frequency of each successive layer), but that can only slightly reduce the visible appearance of artifacts, not eliminate them altogether. A long time ago, I advocated applying a randomized axial rotation to each layer of noise, rotating the noise function around a specifiable axis in order to un-align the individual grid bases and prevent the grid biases from amplifying one another. Previously, these rotations were built directly into the fractals, but that is no longer necessary. The new Randomize and Fractal nodes now make this easy to incorporate in a more flexible way (or eliminate, if you really want artifacts): In this screenshot, you can see that I have set up a fractal node, and for the layer source I specify a gradient basis fed through an Abs function. That function in turn feeds a RotateDomain node, which feeds the layer input of the fractal. Attached to the angle input on the fractal is a Randomize node that randomizes a value in the range 0 to 3. The result is this: You can see that the grid artifacts are gone. The fractal iterates the layers, and at each layer it re-seeds the Layer input chain, so any node marked as a seed is reset to a new value each time. The Gradient basis node (which has a seed) is re-seeded, and so is the Randomize node that specifies the rotation angle. Each noise layer therefore generates a different pattern than the other layers, and is rotated by a different amount around the Z axis. This misaligns the grid biases, preventing them from amplifying each other, and gives a nice artifact-free fractal pattern.
I still have quite a bit to do in implementing the rest of the noise functions in ANL. But there you go, that's what I'm working on right now.
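The reseed-and-rotate idea generalizes beyond ANL's node graph. Here is a simplified, hypothetical C++ sketch of it (the hash constants and the trivial un-interpolated value-noise basis are stand-ins, not ANL's actual functions): each octave gets both a fresh seed and a seed-derived rotation of the sampling domain, so the per-layer grids don't line up and compound their artifacts.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// A small integer hash used both as a noise basis and as a seed generator.
static std::uint32_t Hash(std::uint32_t x, std::uint32_t y, std::uint32_t seed) {
    std::uint32_t h = seed;
    h ^= x * 0x9E3779B9u;
    h ^= y * 0x85EBCA6Bu;
    h ^= h >> 16; h *= 0x7FEB352Du; h ^= h >> 15;
    return h;
}

// Toy value noise in [0,1] at integer lattice cells (no interpolation, and
// non-negative coordinates assumed, to keep the sketch short).
static double ValueNoise(double x, double y, std::uint32_t seed) {
    return Hash(static_cast<std::uint32_t>(std::floor(x)),
                static_cast<std::uint32_t>(std::floor(y)), seed) / 4294967295.0;
}

// Ridged fractal: sum abs-shaped layers. Before sampling each octave, derive
// a fresh per-octave seed (the "re-seed") and rotate the domain by a
// seed-derived angle, mis-aligning the layer grids.
double RidgedFractal(double x, double y, std::uint32_t seed, int octaves,
                     double lacunarity = 2.0, double gain = 0.5) {
    double sum = 0.0, amp = 1.0, freq = 1.0;
    for (int i = 0; i < octaves; ++i) {
        std::uint32_t layerSeed = Hash(static_cast<std::uint32_t>(i), 0u, seed);
        double angle = (Hash(static_cast<std::uint32_t>(i), 1u, seed)
                        / 4294967295.0) * 3.0;                     // 0..3 rad
        double rx = x * std::cos(angle) - y * std::sin(angle);     // rotate
        double ry = x * std::sin(angle) + y * std::cos(angle);     // domain
        double n = ValueNoise(rx * freq, ry * freq, layerSeed);
        sum += amp * std::fabs(n * 2.0 - 1.0);                     // ridge shape
        amp *= gain;
        freq *= lacunarity;
    }
    return sum;
}
```

The key point mirrors the post: the rotation is driven by the same seeding machinery as the noise itself, so the whole thing stays deterministic per seed while each octave's grid sits at a different orientation.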
  11. 5 points
    Even if you run a physical game studio where everyone comes into your office, people can steal your assets and publish them all over the internet... Even if you have proof that they've done this, then what? On top of the damage done by the leak, you've got to find a heap more money so you can hire a legal team and launch a lawsuit to try to prove in court that this person has damaged your business and should pay you compensation... That's a hassle. If you hire internationally, it's the exact same problem -- you still have to sue a leaker to hold them accountable -- but the legal issues are a bit more complicated. You can sue them in your country but may not be able to enforce the result... so you can instead sue them in their country, which may be quite a hassle. In my experience with physical studios, the answer is: don't hire people who are going to break NDA, and fire anyone who does. People don't want to ruin their entire career by becoming known as untrustworthy... No, you need to find a professional who's made a career of working for many companies and never fucking them over... which is 99.99% of professionals.
  12. 5 points
    I would echo what galo1n mentioned about being careful with regard to which memory statistic you're looking at. Like any OS that uses virtual memory, Windows has quite a few statistics that you can query via Task Manager or Win32 APIs, and they all have different meanings (and sometimes the difference is quite subtle). D3D resources will always allocate committed virtual memory via VirtualAlloc. This will increase the "commit size" in Task Manager, which is telling you the total amount of committed virtual memory in the process. By extension it will also increase the system-wide commit size that you can view under the "Performance" tab of Task Manager (it's the value labeled "Committed"). If you want to query these values programmatically, you can use GetProcessMemoryInfo for process-specific stats and GetPerformanceInfo for system-wide stats. Committed memory has to be backed by either system memory or the page file, which means that your commit total is typically not equal to the physical RAM consumption of your process. To see that, you want to look at the private working set, which is visible in Task Manager under the Details and Performance tabs. The process and system totals are also returned by the functions I linked above. In general your working set will be a function of how much memory your program is actually accessing at any given time. So if you allocate a bunch of memory that you never use, it can get paged out to disk and it won't be reflected in your working set. However, Windows tends to only page out data when it really needs to, so if you access some memory once and then never again, it can stay in your working set until somebody needs that physical memory. If your D3D resources are causing a large increase in your working set, then it's possible that you're over-committing the GPU's dedicated memory pool.
Either the Windows video memory manager (VidMM) or the video card driver (depending on the version of Windows that you're running) will automatically move GPU data to and from system memory if it can't keep the entire system's worth of resources resident in dedicated GPU memory. If this happens a lot it can really tank your performance, so you generally want to avoid it as much as possible. You can check for this by capturing your program with ETW and using GPUView, or by using the new PIX for Windows. In general, though, you probably want to limit your committed memory as much as possible, even if your working set is low. Like I mentioned earlier, it needs to be backed by either system memory or the page file, and if those are both exhausted your system will start to get very unhappy, and either you or the video card's user-mode driver will crash and burn. This unfortunately means that the system's physical memory amount, page file size, and the memory usage of other programs will dictate how much memory you can use without crashing, which in turn means that your higher performance settings may crash even for users who have a nice GPU with lots of dedicated memory. On the last game I shipped, there were a non-trivial number of crashes from users with high-end video cards who cranked their settings but had their page file turned off!
  13. 5 points
    The compiler has deduced that since the size of the g_ScreenMinPoints and g_ScreenMaxPoints arrays is 1, the only valid index must be 0. If the only valid index is 0, it can infer that "meshType" must be 0. If it can infer meshType is 0, then there's no point in reading g_MaterialIDTexture or g_MeshTypePerMaterialIDBuffer. The compiler can InterlockedMin/Max on g_Screen[Min|Max]Points[0] and not bother reading either of your input textures.
  14. 5 points
    Definition: "Flow" is that mental state you enter when you are focused and highly productive. It is a pleasurable state to achieve and leads to productivity gains (aka "getting into the zone"). When it comes to any sort of creative work (game development, writing, artwork, design, etc.), it is really important to get into the flow state and maintain it for as long as possible. I would dare to suggest that this is one of the most important things for you to manage in yourself and others, and success is hardly possible without consistent progress. You want to get into this flow state when you begin creative work.

    Establishing Flow: On-ramps

    The reality is that establishing flow is a fickle beast, and it's not something that can be toggled on and off like a light switch. Sometimes you may spend an entire day trying to establish it and have no luck. These days are generally wasted, unproductive days. However, there are various controllable factors which make it easier to enter the flow state. Some factors have no effect on some people, but some factors are universal. Here is what I have found to work:

    - Coffee: It is brown, hot, delicious, and a caffeinated stimulant. It gets my brain juices flowing.
    - Music: I find that music helps to eliminate external distractions and can be invigorating.
    - On-ramps: I purposefully design my task list so that I have an easy entry point for the next day. Leave yourself something easy and accessible to start the day with. You want a quick and easy victory so that you can build momentum. Once you have momentum, you can increase task complexity/difficulty and slide right into the flow state. If you don't do this, you create a barrier to entry for yourself the next day, and it's mentally easier to procrastinate or avoid work because it's hard. Example: "This bug is super simple to fix / this feature is super fast to implement, I'll leave it for tomorrow's on-ramp."
    - Exercise: By exercise, I don't necessarily mean going to the gym or sweating up a flight of stairs. I like to briskly walk to work, which increases blood flow and wakes me up.
    - Intention to work: I find it's helpful to have an intention to go to work to get something done. Clench your fists and say, "I will get this done today, no matter what," and make it happen. Set a resolve for yourself. If you are in an environment filled with other people, you will share their intentions. If they intend to screw around all day and do nothing, so will you. If they intend to focus and get work done, so will you.
    - Enjoyment: It really helps a lot to enter the flow state if you enjoy what you are doing.
    - Habit: If you have established a habit of consistency, you will find it's easier to repeat a pattern. This can be good and bad, because habits can be good and bad. Focus on creating good habits and breaking bad habits.
    - Days off: We are not machines, we're humans. We need to take days off from work in order to maintain fresh minds eager to work. If you don't, you risk burnout, and your productivity will diminish to zero whether you want it to or not. It's more productive to not work every day. That doesn't necessarily mean you have to take off every weekend -- take off a weekday. You know it's time to take a day off or go on vacation when you mentally feel like you are in a repetitive grind, doing the same thing day in and day out.
    - Sleep: From experience, it is not possible for me to enter the flow state and maintain it when I have not had sufficient sleep. I am adamant about this. If you need an extra hour of sleep, take it! Would you rather spend the whole day fighting against brain fog due to lack of sleep (resulting in a wasted day), or would you rather spend an extra hour or two sleeping so that you can be maximally productive for the rest of the day?

    Distractions: The flow crash. I think of flow like traffic and driving cars.
You have to gradually increase your speed before you reach the optimum cruising speed of maximum productivity. Distractions are like getting into a head-on collision or hitting the ejection seat button. Here are the distractions to worry about and why they are distractions:

    - People interrupting you: They come up to you and start a conversation while you were in the flow state. Now that state has ended, and you probably lost about 15 minutes of productive time in addition to the time it takes to have the conversation. You want to design your work situation to prevent people from interrupting you. Lock the door. Have reserved distraction-free time. Work alone. Schedule meetings instead.
    - Side conversations: Someone else is talking about something to someone. They're having a conversation about something. It doesn't even have to be interesting. Whether you want to or not, you are probably listening to bits and pieces of this conversation. Every time you switch your mental focus from the task at hand to the conversation, you are interrupting yourself and getting distracted. Ideally, the way to counteract this is to work in a quiet space without distracting conversations or noises. A second-best solution is noise-cancelling headphones with music that has no vocals. This is one of the top reasons why I think "open office" floor plans are terrible for productivity.
    - Social media and email: Holy crap, this can be distracting and a major time sink. This warrants a category of its own because it can really destroy your day. How? Let's say you get an email from someone. What happens? Do you get a pop-up notification and a noise? This suddenly attracts your attention to the email event, even if you ignore it. Flow = hitting the brakes. Social media is terrible as well because it can turn into an addictive cycle. "I wonder what's happening on Facebook? Do I need to catch up on Twitter? Reddit? Instagram? Email? Online forums?"
The curiosity can haunt you when you're trying to establish the flow state, and you can easily give in to it and accidentally waste anywhere from 15 minutes to 5 hours on social media and email. This is a robbery of your time. For what? What tangible value do you actually get out of it?

Cell phones - Yet another source of distractions. They ring and make noise when people try to call you. You feel obligated to answer calls or risk being rude. You get text messages from people in your life. Ideally, I would throw my phone into the ocean and never get another one. Practically, you should put your phone on silent and let your loved ones know that you are unavailable during certain hours.

Home life - If you work from home, there are more distractions than you can count. The more people, animals, and noise there are, the more distracting home becomes. Is your spouse trying to spend time with you? No work gets done. Do you have kids who need attention? No work gets done. Kids also have no concept of interruption, so they can't sense when you are busy. If you have animals, what happens when the dog barks at a noise? Or the cat meows for attention or walks across your keyboard? What about chores? "Honey, can you take out the trash? Can you do the dishes? Vacuum the living room?" Home is generally a terrible place to get productive work done. If you must work from home, you should have a quiet study to work from, where you can lock the door to keep people out. Alternatively, work away from home.

Food and bathroom breaks: It's a biological necessity for survival to eat and drink, and generally something you should do. Keep in mind, though, that excessive drinking of coffee (or other liquids) can lead to frequent bathroom breaks, which interrupt your flow. If you smoke cigarettes, smoke breaks can also be flow breakers. I advise against drinking alcohol if you're attempting to remain productive. If you get hungry, you should eat.
Continuing to work while hungry turns into a flow interrupter because the pangs of hunger become repetitive interruption signals.

Technology - You have to be very careful with technology. Some technology is beneficial and enhances productivity, but other technology is a source of distractions with limited benefit. It's sometimes hard to tell the difference. Generally, instant messengers, Skype, Discord, email, and any application which interrupts you with a notification of any sort is bad for flow maintenance.

Entertainment - In 2017, you have a ton of entertainment available at your fingertips, at any time you want. You can watch Netflix. You can play video games. You can browse videos on YouTube. Watch movies on demand. Stream TV shows. Use social media. This overabundance of available entertainment makes life fun, but it drains away your ability to be productive, and it makes creative work much more challenging because there are so many distracting time sinks available to rob us of our productive time. Have fun, but be disciplined and use set hours for entertainment (start times and stop times).

Conclusion: Overall, if you work in a quiet, isolated environment, you can get a lot more work done (some people work late into the night because it's quiet, isolated, and distraction-free). Take the time to be introspective about your work day and assess how it went. What was good and helpful? What was bad and unproductive? Some days you won't enter the flow state. Don't beat yourself up over it; it happens to everyone. Instead, focus on how you can make tomorrow a better day. What can you do today to make tomorrow better? I'm interested to hear what you guys think. Did I miss anything huge? What works for you? What hinders you?
  15. 5 points
    Alright, we're going to get this started officially. You can submit your challenges. These are the current rules for the forum and the submission process. These can (and will) change - so if you think something needs to change then speak up. I'm getting a lot of longer-term ideas to better support GameDev Challenges beyond the forum, and I really appreciate the discussion thus far. I'm also looking into achievements/awards. A few ideas there but want to think it through since I'd also like to use them for other parts of the site. But for now, this forum will work to get you guys going on this cool idea. In the interim if you need anything specific from the site to make these better or something isn't working well then please let me know either through this thread or DM.
  16. 5 points
BattleTech is going to have a Random Event System during the single-player campaign. Basically, as time ticks by, there's a random chance for these events to pop up during the time between combat missions. You're given a situation and a handful of options to choose from. Some options may not be available based on previous events, contracts, or other gameplay. Based on the chosen option, a result set is chosen at random (weighted) and the results of that are applied. I recently delivered the event editor for our game so the designers could crank out events at a faster pace. Originally, it was hand-edited JSON. Then it was a simple text format that Kiva wrote a Python parser for to convert that text into JSON. Now it's a WinForms app with a GUI, validation, and other utilities built in. The designers have been using it for a while now and so far they're really happy with it. Here's a screenshot of the main editor and one of its subforms.

Event System Data Model

Events - Have a Title, Description, Image, Scope, a list of Requirements, and a list of Options. The Scope tells us which main object we're going to be dealing with (typically Company or MechWarrior). The requirements say what must be met in order for this event to be pulled. And the options are the choices the player makes when they see this event.

Option - An option is a choice a player makes. It has some text, a list of Requirements, and a list of potential Result Sets. Choosing this option randomly selects one of the result sets.

Result Set - Is what gets applied as an outcome to the event. It has a description, a Weight, and a list of Results. The Weight influences the randomness of the outcome. If we have two result sets and one has a weight of 75 and the other a weight of 25, then the first Result Set has a 75% chance to be chosen.

Result - This contains all the data that happens as a result of the event.
Added Tags, Removed Tags, Stat Modifications, Forced Events, Actions, flags for a Temporary Result, and its Duration. This is how the game world gets modified. You can add an Honorable tag to the Company, remove a Cowardly tag from a MechWarrior, give a Star League-era Gauss Rifle, or force an event into the queue for later.

Requirements - Requirements have a Scope, a list of Required Tags, a list of Excluded Tags, and a list of Comparisons. The Scope tells us what we're looking for. Any Required Tags must be on the object. Any Excluded Tags must NOT be on the object. And any Comparisons must be met. For example: Scope: MechWarrior, Required Tags: Marik_Origin, Excluded Tags: Cowardly, Comparison: Injuries > 0 - would look for a wounded Marik pilot who is not a coward.

TagSet - A list of strings attached to various items in our game (maps, encounters, contracts, company, mechwarriors, mechs, etc.). It's a simple concept, but very powerful. Many things are set up with an initial set of tags, but we also add tags to your Company and your various mechwarriors. These are then queried by events and other systems in our game to make the world feel more alive.

Data Driven Content

I'm a big proponent of using data to drive applications (game or not) because of all the benefits it provides. The major benefit here is being able to add content to your game without access to the source code. When done correctly, it opens the doors wide for a modding community to take your game into new directions, and it can really add some longevity to a title. The event editor will be included in the final version of our game and I'm excited to see what events they'll come up with.

- Edited to talk about what the event system actually is so people don't get confused about "event programming".
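The weighted Result Set selection described above maps to a small piece of code. Here's a rough sketch (the names like `ResultSet` and `pickResultSet` are illustrative, not the actual BattleTech implementation):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Illustrative sketch of weighted Result Set selection. Each option holds
// result sets with integer weights; choosing the option picks one set with
// probability weight / totalWeight.
struct ResultSet {
    std::string description;
    int weight;
};

// Returns the index of the chosen result set. `roll` must be in
// [0, totalWeight); in the game it would come from the RNG.
std::size_t pickResultSet(const std::vector<ResultSet>& sets, int roll) {
    int running = 0;
    for (std::size_t i = 0; i < sets.size(); ++i) {
        running += sets[i].weight;
        if (roll < running) return i;
    }
    return sets.size() - 1; // roll out of range: clamp to the last set
}
```

With weights of 75 and 25, any roll from 0 to 74 lands in the first set, matching the 75% odds in the example above.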
  17. 5 points
    No, do not pass them around indiscriminately. First off, few systems should need to know about them. Game objects themselves have no need to know about it. Game objects rarely render themselves. Game objects usually have a handle to their models, textures, or animations, and they can request the various graphics systems do something different, but usually it is extremely inefficient for the game objects themselves to be involved with the rendering or the resource details. But that doesn't directly answer your question... Among the best models is to only pass the data to systems on a "need to know" basis. It takes some discipline both in design and implementation, but it can be done. Again, since the game objects themselves don't do the drawing, they should generally have no need to know the details of how they are rendered. They might need to switch to different models or textures, such as a "damaged" or "inactive" version, but that should generally be handled as a message to the other subsystems and not through direct manipulation. Unfortunately in many games there is less discipline, people make sloppy decisions, and time is more critical than implementation quality. When you must make this sad choice, the typical model is a "well known instance". The typical implementation is a global structure that contains pointers to the active instances of some key libraries, such as logging, audio, rendering, and a few others. The instances themselves are only modified at well-defined times, such as only being modified during game initialization or only being modified when the entire game is outside the simulation loop. Things that are not the same should not be treated the same. Things that are the same should be treated the same. A texture is not a model. A sprite-sheet is not a regular texture. Animations and sounds are not the same as the others on the list. 
As for loading the things being needed, it is moderately common to have a prefetching system, depending on the game. Elements can have data that says "I need this at startup", and other data that says "I need this eventually". The first needs to get loaded up front, but the rest can wait until the main game is running. The exact details of such a system depend on the game and its needs. For example, in a major game (not a hobby project that doesn't have the manpower) all the expected audio can be pulled from the animation events that use them, and the build tools extract the list of audio that can be triggered. Since audio is something that needs to be instantly responsive it is generally best to load it from disk in advance. A smarter system can pull them up after the main load, continuing a background load as the level becomes playable. Generally no. The individual game object does not hold any of those. A subsystem that controls rendering controls the textures and models, both as sub-subsystems, and they do it in a way that best fits how they will be used, with different size buffers and resource caches, and different rules for resource proxies as needed, generally storing the resources directly on the video card, all focusing on how to render quickly. A subsystem that controls animations is quite separate, with completely different access patterns, different buffering system, different resource caches, different proxy services, designed around quickly processing the animations for the rapid series of matrix multiplies and transfer to the cards that must take place. Sound effects are similarly handled differently, kept in different areas of memory for fast audio playback. An individual game object may hold a handle or proxy given to them by the subsystem, but the game object themselves typically don't own those resources at all. Mostly covered above. The rendering system can be built to handle rendering. It can sort objects based on how they must be rendered. 
Commonly this means based on material orders, shaders, transparency/translucency, and other factors. The rendering system can also use the information to render multiple instances of the data with a single call. Such systems can build and maintain a collection of rendering order keys using bitmasks which are trivially sorted to greatly improve rendering speed. If each game object itself owned this then drawing would require much more processing, visiting every game object to discover all those properties, and re-sorting instead of keeping cached sort keys.

Audio is problematic when it must be mixed or processed. If you're mixing together a bunch of positional audio information, it is horribly inefficient to query each object every frame to find out if the audio has changed, to mix only the audio through the single simulation step, and so on. Audio processing is handled radically differently from graphics, and differently from the simulation. Say you've got some 44.1KHz audio, some 192KHz audio, and you want to play them together. If you're mixing them per graphics frame, you're mixing together perhaps 7056 samples and 30720 samples, and things get difficult. Similar issues happen if you try to mix them by simulation step, since simulation often happens at irregular times, often many rapid simulation steps to catch up, followed by a delay waiting for rendering or other processing. It's even worse when something causes graphics frames to stall. Instead an audio processing system can handle all the work in a separate processing thread, often in its own CPU core quietly humming away. The system can operate as best fits the audio hardware, keeping the audio buffers fed completely independently of what game objects and rendering systems are doing. The audio system can listen for events that impact audio and handle them when it makes best sense for that subsystem, such as when the next round of audio buffer updates take place.
However, for a small hobby game, you're unlikely to have the manpower to do most of that stuff. You'll get better performance with such systems, but they take time and effort to develop. If you're using an established game engine or good middleware tools they'll handle it for you, and that's highly recommended if your goal is to complete an actual game.
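To make the sort-key idea above concrete, here's a minimal sketch of packing render state into a single 64-bit integer so a plain integer sort yields draw order. The field widths and ordering are arbitrary choices for illustration, not a prescribed layout:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative sketch: pack render state into one 64-bit key so sorting
// the keys sorts the draw calls -- opaque before translucent, then by
// shader, then by material, minimizing state changes; low bits hold
// depth for front-to-back ordering within the same state.
std::uint64_t makeSortKey(bool translucent, std::uint32_t shaderId,
                          std::uint32_t materialId, std::uint32_t depth) {
    std::uint64_t key = 0;
    key |= static_cast<std::uint64_t>(translucent ? 1 : 0) << 63;
    key |= static_cast<std::uint64_t>(shaderId & 0x7FFF) << 48;
    key |= static_cast<std::uint64_t>(materialId & 0xFFFF) << 32;
    key |= depth; // 32 bits of depth
    return key;
}
```

These keys can be cached per renderable and only rebuilt when the underlying state changes, which is exactly what avoids re-visiting every game object each frame.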
  18. 5 points
    Tons of commercial games are already being made in C#. If you're more focused on lower-level languages that can be direct replacements at the system programming level, then I'd say that since the demand for that is dropping, C and C++ are adequate. Maybe Rust/Go/Swift will be a worthy competitor in future, but I don't think there is enough critical mass.
  19. 5 points
Lag Compensation is indeed the right term here, at least as it's commonly known based on the old Source Networking document: https://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking#Lag_compensation The purpose of lag compensation in this sense is to make the game feel more accurate for the shooter or viewer, by attempting to simulate the results as they would have appeared to that player. This is important in any situation where game entities move very fast relative to the latency between clients and the server, and where that difference between each side's perceived position for a given entity has a significant effect on gameplay. So:

Typical first person shooter - players and weapons move very quickly, often across the field of vision - lag compensation is useful.

Battleship simulator - it's still first person and shooting, but there's no chance of a battleship getting out of the way in 200 milliseconds - lag compensation is unnecessary.

Online RPG - usually 3rd person, where aiming and timing is not critical and the area of effect can be fuzzy - no lag compensation necessary at all.

Lag compensation is different from "input prediction", which is a bit of an awkwardly-named term for allowing a client to control its own entity's movement without waiting for full acknowledgement from the server. As this helps the player feel like the input is more responsive, this is much more widely employed, probably on almost every game type except strategy games.
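For the FPS case, the server-side half of lag compensation is usually a short per-entity position history that can be rewound to the time the shooter actually saw. A toy 1D sketch, loosely following the Source model (all names invented for illustration; it assumes at least one snapshot has been recorded):

```cpp
#include <cassert>
#include <deque>

// The server records each entity's position every tick; when a shot
// arrives, it rewinds to the position the shooter perceived at fire time.
struct Snapshot {
    double time;     // server time the snapshot was taken
    double position; // 1D position for simplicity
};

struct History {
    std::deque<Snapshot> snapshots; // ordered oldest -> newest

    void record(double time, double position) {
        snapshots.push_back({time, position});
        // keep roughly one second of history
        while (!snapshots.empty() && snapshots.front().time < time - 1.0)
            snapshots.pop_front();
    }

    // Position at `time`, linearly interpolated between snapshots.
    double rewind(double time) const {
        const Snapshot* prev = &snapshots.front();
        for (const Snapshot& s : snapshots) {
            if (s.time >= time) {
                double span = s.time - prev->time;
                if (span <= 0.0) return s.position;
                double t = (time - prev->time) / span;
                return prev->position + t * (s.position - prev->position);
            }
            prev = &s;
        }
        return snapshots.back().position; // time is newer than history
    }
};
```

The server computes the fire time as (received time - shooter's latency - interpolation delay), rewinds every hittable entity to that time, performs the hit test, then restores current positions.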
  20. 4 points
    My general take on this is that such cases are softly violating single-responsibility and are code smells. 20+ years of coding and I've legit needed mutable maybe 3 times, I felt "dirty" about each of them, and ultimately all the cases I can remember were later removed during refactoring that left the code smaller and faster. I look at it like this: You have an object whose job is to receive and accumulate mutations. You have an object whose job is to provide the post-processed aggregate of those mutations. You have not yet illustrated that these must be the _same_ object. I don't know your use case so maybe they do need to be, but I'd be willing to bet that the code can be easily restructured to avoid it. Such caching mechanisms are necessary when you have an unpredictable pattern of mutation and read-back. That honestly just shouldn't be something you need all that often in well-structured and efficient software. Execute all your mutations in one pass. Transform the data into its efficient-for-use format. Then execute the passes that require that computed state. Look at it as a pipeline of data transformations. Graphics, physics, AI, all of it can be written that way for pretty much any game from Tetris to Destiny. When a caching layer really is needed (e.g. in front of IO) then build the cache as a second layer (a separate object). In place of a dirty flag, revision numbers/counters work a lot better here too, in particular because you then no longer need to mutate a dirty flag back to a clean state; your cache can just compare its last revision number with the source's current revision to know if it should update and never needs to modify the source in any way.
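A tiny sketch of the revision-counter idea described above (names invented): the source bumps a counter on every mutation, and the cache compares counters, so a reader never has to write a "clean" flag back into the source:

```cpp
#include <cassert>
#include <cstdint>

// The mutation-receiving object: every write bumps its revision.
struct Source {
    std::uint64_t revision = 1; // starts above the cache's sentinel 0
    int value = 0;

    void set(int v) { value = v; ++revision; }
};

// A separate caching layer: rebuilds only when the revision has moved.
struct Cache {
    std::uint64_t seenRevision = 0;
    int cached = 0;
    int rebuilds = 0; // for illustration: counts expensive recomputes

    int get(const Source& s) {
        if (seenRevision != s.revision) {
            cached = s.value * 2; // stand-in for an expensive transform
            seenRevision = s.revision;
            ++rebuilds;
        }
        return cached;
    }
};
```

Note that `get` takes the source by const reference and never mutates it, which is exactly what the dirty-flag-plus-`mutable` version can't claim.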
  21. 4 points
While folks are correct that this is the poster child use case of mutable, keep in mind that the contract for mutable has changed in C++11 if this code is ever to be multi-threaded. As of C++11, the contract for mutable now also includes a statement of thread safety. A use case such as this in a multi-threaded engine will likely fail pretty miserably, and you need to protect the cacheResult_ value. I'm only pointing this out in case you intend to multi-thread any of this code; if not, it doesn't impact you.
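If the code does end up multi-threaded, the usual fix is to make the mutex itself mutable alongside the cached value, so const member functions can lock it. A minimal sketch (illustrative names, assumes C++17 for std::optional):

```cpp
#include <cassert>
#include <mutex>
#include <optional>

// Since C++11, "const" implies "safe to call concurrently", so any
// mutable state touched from a const member function must be synchronized.
class Expensive {
public:
    explicit Expensive(int input) : input_(input) {}

    int result() const {
        std::lock_guard<std::mutex> lock(mutex_); // protects the cache
        if (!cache_)
            cache_ = input_ * input_; // stand-in for expensive work
        return *cache_;
    }

private:
    int input_;
    mutable std::mutex mutex_;         // mutable: locked inside const calls
    mutable std::optional<int> cache_; // the lazily computed result
};
```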
  22. 4 points
I wouldn't give you very good marks for this survey... The word "influence" here is extremely broad. Take this for example: Do you think that art has an influence on the viewer? Of course it does! If art doesn't influence us, then it's crappy art. Games are activities about the player making guided choices - the whole purpose is to create a space in which different behaviours can occur, and to influence the behaviours that will be chosen. So, in your survey, I'm forced to say "Yes, video games influence behaviour", but it seems like what you're really trying to ask is "Do violent video games make people violent?", to which the answer is "no", and you can cite the academic research to back that up objectively instead of basing it on an opinion poll. So you've asked me one question, to which the answer is unarguably "yes", but you're going to interpret the results as if I was answering a completely different question, to which the answer is "no".

Again, this is terribly worded. What behaviour? That they can perfectly time the button presses to perfectly duck and jump through the Battletoads Level 3 Bike Sequence - Turbo Tunnels? If so, no, that's not bad parenting at all. In this situation the game has encouraged the child to develop superhuman memory and timing skills! If what you really meant to ask is "If a child has become violent from playing violent video games, are the parents to blame?", then again that's bad survey design because you're begging the question -- assuming the truth of an argument so that the conclusions that support that argument can follow. It's illogical. Any results that you get out of this survey are worthless.
  23. 4 points
Spatial hash maps are similar to linear hash maps. Every position or range is converted to a bucket. You can query for the items in the bucket. Finding an individual bucket is constant time, and so is placement and searching for a specific item. However, range searches beyond a specific location have geometric growth as hundreds or thousands of nodes need to be checked.

BSP trees partition the space based on how many things are within a region. They are time consuming to create and modify. Often when objects are changed (moved, added, deleted) it requires large modifications to the partition tree. Depending on how the partitions are oriented it can require significant math to navigate them. Search for a specific object is O(log n), but broad regional queries can be difficult or impossible with the structure.

R trees produce minimal bounding rectangles around objects. The boxes are irregularly shaped and are based on the current population of objects. Modifying the collection (moving, adding, or deleting) usually requires less work than a BSP tree, but can still require a significant amount of math. Creating an efficient tree requires enormous computational resources. Like the others, searching for a specific object is log n, but range searches require much more effort due to the irregular nature of the boxes.

KD trees break the world into k dimensions, often a 2-dimensional or 3-dimensional irregular grid. Space required is O(n). Moving, adding, or deleting objects is always O(log n). Searching areas of the map is also O(log n). They have irregularly shaped areas, and dense areas filled with nodes can give very deep trees.

Quadtrees and octrees are a specialization of KD trees using regularly shaped areas. Depths can be computed easily on computers as they are all binary scale.
Loose quadtrees and loose octrees are a sub-variant where the boundaries of each layer are allowed to flow over halfway into their neighbor, or in different terms, are allowed to overflow by one layer deep in the tree. Like KD trees the operations are all O(log n), but with much faster time because the subdivisions are powers of two rather than arbitrary.

As for why (or why not) to use them... Spatial hashes are a great fit if everything always fits into buckets, and you are only ever interested in a single bucket. If you are interested in range searches (like everything within 10 units from a point) they quickly break down.

BSP trees and R trees are computationally expensive to build and maintain, so they are only a good fit if the world never/seldom changes. For range based searches a BSP is a nightmare when partitions are arbitrary, and merely painful when partitions are axis aligned. For R trees a range based search can get to buckets with easy motion, but navigating the buckets requires processing irregular rectangles.

KD trees are inexpensive to build and maintain. The big drawback is that dense areas on the map increase the processing required; even though it is still log n, the overhead is large. Moving objects incurs the highest cost overhead when objects cross borders and must jump around the tree rather than being moved through a simple tree node rotation.

Loose quadtrees and loose octrees are also inexpensive to build and maintain. There is even less computational cost than KD trees because edge points are always known as powers of two. The big drawback is that dense areas on the map tend to not be broken down, so a spatial query in a dense area may yield many objects, but in practice this is extremely rare and can be completely avoided through design choices.
Most games with spatial trees use either a loose quadtree or loose octree unless the game has other specific needs that make a different structure easier to use.
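As a point of comparison, the spatial hash is by far the simplest of these to implement. A minimal 2D sketch (cell size and key packing are arbitrary choices for illustration; a production version would also handle objects spanning multiple cells):

```cpp
#include <cassert>
#include <cmath>
#include <unordered_map>
#include <vector>

// Minimal spatial hash: positions map to integer cell coordinates, and
// each cell holds the ids of the objects inside it. Single-cell lookups
// are O(1); large range queries must visit many cells, which is the
// weakness described above.
struct SpatialHash {
    double cellSize;
    std::unordered_map<long long, std::vector<int>> cells;

    long long key(double x, double y) const {
        long long cx = static_cast<long long>(std::floor(x / cellSize));
        long long cy = static_cast<long long>(std::floor(y / cellSize));
        // pack the two cell coordinates into one 64-bit key; unique as
        // long as each coordinate fits in 32 bits
        return static_cast<long long>(
            (static_cast<unsigned long long>(cx) << 32) ^
            (static_cast<unsigned long long>(cy) & 0xFFFFFFFFull));
    }

    void insert(int id, double x, double y) {
        cells[key(x, y)].push_back(id);
    }

    // Everything in the same bucket as (x, y) -- constant-time lookup.
    // (operator[] creates an empty bucket on a miss; fine for a sketch.)
    const std::vector<int>& query(double x, double y) {
        return cells[key(x, y)];
    }
};
```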
  24. 4 points
    Drivers do this to enable very quick paging of GPU memory. If the OS tells your application that it needs all the GPU memory for another process (e.g. the user just alt+tabbed to chrome), the driver can instantly discard all your textures in the GPU, safe in the knowledge that it still has a copy of them in system RAM that it can use to recreate them when you alt+tab back into the game. You're kind of at the mercy of your GPU drivers here. You could try using D3D11_BIND_RENDER_TARGET, but even if it does trick the driver into not holding onto a system memory cache/copy, it will have potential performance impacts of its own...
  25. 4 points
    Have you looked at the success of Kickstarter campaigns lately for unknown devs? The odds are not great, unless you're going for a few thousand dollars and even then they're kinda slim. If you have something playable you can record from you might have a bit more of a chance. You say you have a team of programmers but they're not listed on the site. Unless you mean the modders?... Anyway, best of luck to you!
  26. 4 points
I think the reason you are unable to fine-tune your shaders to such an extent is that the data types themselves are implementation-defined, so for a float in your shader, some GPUs could load it into a vector register, others into a scalar. Some of them don't even have half types, if I am not mistaken. To my knowledge, only AMD exposes the real assembly for shaders. What you see for compiled HLSL files is bytecode for an intermediate assembly, which gets compiled further down by the driver to native GPU instructions, together with all the render pipeline state, when you are using the shader.
  27. 4 points
    JavaScript is routinely compiled these days, with various levels of optimization triggered by access frequency and invocation history. This has been a big part of the performance improvements of Google's V8 and Apple's Nitro, especially on mobile. Your argument is poor, and takes a single socio-political position as justification to demean and dismiss the entirety of someone's technological contribution. Don't do that; that's zealotry, too.
  28. 4 points
    Slow shaders (on the order of two seconds per frame) will cause Windows TDR to assume that the GPU has locked up, and it will reboot the graphics driver. Under D3D, this manifests as a "device removed" error code from the Present function. I don't know how GL reports it, but it shouldn't crash. What GL function are you calling when the crash occurs?
  29. 4 points
OOP is all about composition/components. Inheritance ("is a") is a minor tool that should be used rarely. It's a core rule of OOP that you should prefer the use of composition (components) over inheritance (class hierarchies). This is the same mistake most of the "ECS" bloggers make. They haven't learned OOP properly, so they assume that OOP=inheritance, then at some point they realize that inheritance is a bad solution for many problems (OOP actually teaches this!) and then they assert that therefore, OOP is bad and they stop studying it further (which means they never get to the bit where OOP tells you to use components instead of inheritance).

Look up "Composition over inheritance" (or "composite reuse principle"). OOP teaches that if your goal is code re-use, then composition is the right tool for the job. If your goal is to write one algorithm that can operate on different types, then inheritance (or duck typing / templates, or callbacks/events/lambdas...) is the right tool. If you are using inheritance to achieve code re-use, you are not doing OOP. OOP explicitly says not to do that, so if you are doing it, you're forging ahead under your own ad-hoc paradigm. As a performance-minded game programmer though, you know that data layouts and memory access patterns are key for performance, so writing an algorithm that operates on abstract types via inheritance flies in the face of that, which means that its use becomes even rarer in good game code!

Or neither, because no one says that games have to be made up of "entities" and "components". In most of the games I've worked on, a few have used the "for each entity, do a virtual update" pattern, and the rest have used a massive array of different types of objects, with completely different update patterns for different things. You don't need a common interface for every system. You also don't need every system to be mutable / modified in-place. Complex input->process->output graphs are good.
There's nothing wrong with writing the main update loop in a procedural style that knows about the flow of data between different types of systems. I'd argue that doing so is ideal! If there's parent/child update order dependencies within a batch of components, you can just store them in topological order in the component array (so that parents come before children). A dumb linear iteration of the array then provides the correct order. If there's dependencies between different types of components, then just write your main loop to update your component-systems in the correct order.
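A minimal illustration of the composition-over-inheritance point (all names invented): the Health logic is re-used by containment rather than by deriving from a shared base class, and the dumb-linear-iteration update falls out naturally:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Re-usable behaviour lives in a small value type.
struct Health {
    int current;
    int maximum;

    void damage(int amount) {
        current = current > amount ? current - amount : 0;
    }
};

struct Player {
    std::string name;
    Health health; // composition: Player *has a* Health
};

struct Turret {
    Health health; // the same code re-used, no shared base class
};

// One plain pass over a contiguous, homogeneous array -- no virtual
// dispatch, no common interface required.
void damageAll(std::vector<Player>& players, int amount) {
    for (Player& p : players) p.health.damage(amount);
}
```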
  30. 4 points
That's because "ECS" is an overengineering trap. These issues are only as complicated as you choose to make them, and you choose to make them more complex by ascribing more magic and dogma to what an "ECS" is than you should. Even your described "bus data" paradigm strikes me as overengineered by virtue of focusing on a generic, one-size-fits-all solution to everything and then trying to cast specific problem scenarios into that mold. But remove "component" from the names of all these interfaces. You are left with the same "convoluted dependency" problem. The issue here isn't the component approach, per se. It's just the same old API interface design problems re-made with different names. The ECS magic bullet has been billed so hard (incorrectly) as a solution to these problems, but it isn't at all. It's a solution to an entirely different problem that's gone through some kind of collective-consciousness mutation. Just like scene graphs did, back in the day. If your agent movement is currently depending on pulling data from either the input system or the AI system, depending on the state of the agent (player controlled versus AI controlled), perhaps the solution is to have something push the movement commands to the agent movement object through the unified single interface of that object. Thus reducing the double dependency by inverting the relationship.
  31. 4 points
No, that is not necessarily the spiral of death. Most games require far less time doing the update than it takes for the time to pass. Your numbers show this quite well, if you think about it. In your example the fixed update is 0.5ms, where it runs as many fixed updates as needed to catch up. Also the rendering takes 4 ms. Because rendering takes at least 4ms, you will always need at least 8 simulation steps for every graphical frame. But in practice you've probably got much longer than that, especially if you're using vsync as a frame rate limiter, which most games do.

On a 120Hz screen you've got about 8.3 milliseconds per frame, so you'll probably need to run 16 or 17 fixed updates, and they must run within 4 milliseconds. On a 75Hz screen you've got about 13.3 milliseconds per frame, so you'll probably need to run 26 or 27 fixed updates, and they must run within 9 milliseconds. On a 60Hz screen you've got about 16.6 milliseconds per frame, so you'll probably need to run 32 or 33 fixed updates, and they must run within 12 milliseconds.

In these scenarios, it is only a problem if the updates take longer than the allotted time. The worst case above is the 120 Hz screen, where an update processing step needs to run faster than 0.23 milliseconds; if it takes longer then you'll drop a frame and be running at 60Hz. At 75Hz the update processing step must finish in 0.33 milliseconds before you drop a frame. At 60 Hz the update processing step must finish within 0.36 milliseconds. Your frame rate will slow down, but as long as your simulation can run updates fast enough it should be fine. If it drops to 30 frames per second then a 0.5ms processing step has more time, up to 0.44 milliseconds. If it drops to 15 frames per second then the 0.5 ms processing step has up to 0.49 milliseconds per pass to run. As long as your simulator can run a fixed update in less than that time the simulation is fine.
You ONLY enter the "spiral of death" if a fixed update consistently takes longer to compute than the per-step budgets above. Since the simulation step is typically fast relative to the time it represents, it usually isn't a problem. If computing a fixed time step does take longer than the times above, and you can't make it faster, then it may be necessary to lengthen the simulation time step. Usually the only issue with that is that the game feels less responsive and slower. Many of the older RTS games had a simulation rate of 4 updates per second, even though their graphics and animations were running at a much higher rate. Even that may not be much of a problem. It all depends on the game.
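The catch-up loop being discussed is the standard fixed-timestep accumulator. Here is a minimal sketch (class and member names are mine; the 0.5 ms constant matches the example's step size):

```csharp
using System;

// Minimal fixed-timestep accumulator: simulated time advances in fixed
// 0.5 ms steps, and rendering happens once per frame regardless of how
// many simulation steps were needed to catch up.
public class FixedStepLoop
{
    public const double FixedStepMs = 0.5;
    private double m_accumulatorMs;
    public int StepsRun { get; private set; }

    // Called once per rendered frame with the elapsed wall-clock time.
    public void Frame(double elapsedMs)
    {
        m_accumulatorMs += elapsedMs;
        while (m_accumulatorMs >= FixedStepMs)
        {
            m_accumulatorMs -= FixedStepMs;
            StepsRun++;     // one fixed simulation update
        }
        // render here, once, using the latest simulation state
    }
}
```

Feeding this loop an 8.3 ms frame (a 120Hz refresh) runs 16 fixed updates and carries the ~0.3 ms remainder into the next frame, which is exactly the catch-up behavior the post describes.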
  32. 4 points
    Unfortunately it's lacking a lot of stuff which most programmers want. It's fascinating that one of the statements in the Jai Primer is "Automatic memory management is a non-starter for game programmers who need direct control over their memory layouts" when it's increasingly clear that game programmers are demanding that level of control less and less often. Unreal and Unity are garbage collected! I'm sure Jonathan will be very productive with it, and it has many good aspects, but I don't see it ever seeing serious use by more than a double-digit number of developers.
  33. 4 points
    In the past, I never bothered with marching different meshes for different terrain materials. I just marched the terrain as a single mesh, then used vertex colors (generated after marching the surface, using various techniques) to blend between terrain textures in the shader. Something like this (very quick example): With a tri-planar shader that displays different textures for the top surface than what it displays for the side surfaces, you can just paint the v-colors (either procedurally, or by hand if that is your wish, in a post-process step) for different materials, and the shader will handle blending between the types and applying the tri-planar projection. A single color layer provides for 5 base terrain materials, if you count black(0,0,0,0) as one material, and red(1,0,0,0), green(0,1,0,0), blue(0,0,1,0) and alpha(0,0,0,1) as the others. Provide another RGBA v-color layer and you can bump that to 9. Doing it this way, you don't have to be content with sharp edges between terrain types, since the shader is happy to smoothly blend between materials as needed, and you don't deal with the hassle of marching multiple terrain meshes.
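The single-layer, five-material weighting can be illustrated off-GPU. This is a hypothetical CPU-side sketch of the per-channel math a pixel shader would do; the base material is the black (0,0,0,0) vertex color, and the function and array layout are my own illustration, not code from the post.

```csharp
using System;

// Sketch of blending 5 material samples with one RGBA vertex-color layer:
// samples[0] = base material (black v-color), samples[1..4] = the materials
// selected by the R, G, B and A channels respectively.
public static class MaterialBlend
{
    public static float BlendChannel(float[] samples, float r, float g, float b, float a)
    {
        // Whatever weight the four channels don't claim goes to the base material.
        float baseWeight = Math.Max(0f, 1f - (r + g + b + a));
        return samples[0] * baseWeight
             + samples[1] * r
             + samples[2] * g
             + samples[3] * b
             + samples[4] * a;
    }
}
```

Because the weights are continuous, painting intermediate v-colors gives the smooth transitions between terrain types mentioned above; a real implementation would do this per texture sample inside the tri-planar shader.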
  34. 4 points
    Hello there! I'm still alive and working on the game, so I'll jump right into what I worked on in the last month or so. Even though I've been pretty silent, a lot has "changed". The topics will be polishing (because it never stops), some input handling tricks, and another pretty complex one: game balance. 

Polishing 

During a series of play-test sessions with friends, family and old colleagues I gathered some really valuable feedback on how to enhance the user experience. Thankfully the game itself was well received, but the mentioned "issues" really bugged me, so I sat down for a week or two to further enhance the presentation. 

Cost indicators 

This was a tiny addition but helped a lot. Now the color of the chest and shop item cost texts reflects whether you can open/buy them. 

Animated texts 

I went into an in-game UI tuning frenzy, so besides the existing yellow highlights I added a "pop" animation on value change to the gold and attribute texts. 

Health bar 

The health bar got some love too! I implemented a fade-in/out effect for the heart sprite, slowly turning it into a "black" one when you are low on health. I also added a maximum health indicator and the same value-change "pop" animation I used for the gold and attribute texts. 

Battle events 

Battle events and various skills (hit, miss, dodge, fear or cripple events, etc.) drew many complaints due to insufficient visibility, sometimes leaving the player puzzled as to why a battle didn't play out as expected. Besides using the existing sprite effects I added text notifications, similar to the ones used with pickups. No complaints ever since. 

Critical strike 

This one was an "extra". I wanted to beef up the effects of critical strikes to make them look more ferocious and more noticeable. 

Level transition 

Play testers shared my enthusiasm for a better level transition effect, so I slapped on a black screen fade-in/out during dungeon generation and it worked wonders. 
Input handling 

I've known for a long time that the simple input handling logic the game had would not be good enough for the shipped version. I've already worked a lot on better input handling for grid-based games and written up my findings, so I'm not going to reiterate them here. I mostly reused the special high-level input handling parts from my previous game, Operation KREEP. It was a real-time action game, so some parts were obviously less relevant, but I also added tiny new extras. I observed players hitting the walls a lot. Since the player character moves relatively fast from one cell to another, this happened frequently when trying to change directions, so I added a timer which blocks the "HitWall" movement state towards each walled direction for a few milliseconds after a new grid cell is reached. Again, the results were really positive. 

Balancing 

My great "wisdom" about this topic: balancing a game, especially an RPG, is hard. Not simply hard, it is ULTRA hard. Since I had never worked on an RPG before, in the preparation phase I guesstimated that it would take around 2 to 3 days of full-time work, because after all it is a simple game. Oh maaaaaaan, how naive I was. It took close to two weeks. Having more experience now in how to approach it and how to do it effectively, I could probably do it in less than a week on a similar project, but that is still far off from 2-3 days. Before anyone plays the judge and says I'm a lunatic and that spending this much time probably wasn't worth it, I have to say that during the last 6 months nothing influenced the fairness and "feel" of the game as much as these last 2 weeks, so do not neglect the importance of it! Now onto how I tamed this beast! 

Tools and approach 

Mainly Excel/OpenOffice/Google Sheets, so good old-fashioned charting, baby. And how? I implemented almost all the formulas (damage model, pickup probabilities, loot system, etc.) 
in isolated sheets, filled them with the game data, and tweaked the data (or sometimes the formulas) to reach a desirable outcome. This may sound boring or cumbersome, but in reality charts are really useful and these tools help tremendously. Working with a lot of data is made easy and you get results immediately when you change something. They also have a massive library of built-in functions, so mimicking something like the damage reduction logic of a game is actually not that hard. That is the main chart of the game, controlling the probabilities of specific pickups, chests and monsters occurring on levels. It plays a key role in determining the difficulty and the feel of the game, so nailing it was pretty important (no pressure). If balancing this way is so efficient, why did it take so much time? Well, even a simple game like I Am Overburdened is built from an absurd number of components, so modeling it took at least a dozen gigantic charts. Another difficult aspect is validating your changes. The most reliable way is play-testing, so during the last two weeks I completed the game around 30 to 40 times, and that takes a long while. There are faster but less accurate ways of course. I will talk about that topic in another post... 

Tricks and tips 

#1: Focus on balancing isolated parts/chunks of your game first. This wide "chest chart" works out how the chests "behave" (opening costs, probabilities, possible items). Balancing sections of your game is easier than trying to figure out and make the whole thing work altogether in one pass. Parts with close-to-final values can even help solidify other aspects! E.g.: knowing the frequency and overall cost of chests helped in figuring out how much gold the player should find in I Am Overburdened. 

#2: Visualization and approaching problems from different perspectives are key! The battle model (attack/defense/damage/health formulas) wasn't working perfectly up until last week. 
I decided to chart the relation between the attack, defense and health values and how changing them affects the number of hits required to kill an enemy. These fancy "damage model" graphs show this relation. Seeing the number of hits required in various situations immediately sparked some ideas about how to fix what was bugging me. 

#3: Fixing many formulas/numbers upfront can make your life easier. A lot of charts, I know, but the highlighted blue parts are the "interesting" ones. I settled on using them as semi-final values and formulas long before starting to balance the game. If you have some fixed counts, costs, bonuses or probabilities, you can work out the numbers for your other systems more easily. In I Am Overburdened I decided on the pickup powers, like the + health given by potions or the + attribute bonuses, before the balancing "phase". Working out their frequencies on levels was pretty easy thanks to having this data. It also helps when starting out, since it gives you a lot of ground to build on. 

Now onto the unmissable personal part. Spidi, you've been v/b-logging about this game for a loooooong while now, will this game ever be finished?! Yes, yes and yes. I know it has fallen into stretched and winding development, but it is really close to the finish line now and it is going to be AWESOME! I'm more proud of it than anything I've ever created before. Soon, really soon... Thanks for reading! Stay tuned.
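To show the kind of relationship those hits-to-kill graphs capture, here is a tiny sketch. The actual formulas of I Am Overburdened aren't given in the post, so this uses a deliberately simple illustrative model (damage per hit = max(1, attack - defense)) purely to demonstrate what gets charted.

```csharp
using System;

// Illustrative damage model (NOT the game's real formulas): charting
// HitsToKill over ranges of attack/defense/health is exactly the kind of
// spreadsheet the post describes.
public static class DamageModel
{
    public static int HitsToKill(int attack, int defense, int health)
    {
        int damagePerHit = Math.Max(1, attack - defense);
        return (health + damagePerHit - 1) / damagePerHit;  // ceiling division
    }
}
```

Tabulating this function over a grid of stats immediately exposes plateaus and cliffs (e.g. where one extra point of defense doubles an enemy's effective durability), which is what makes the graphs worth drawing.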
  35. 4 points
    The letter/digit keys have no named constants because their values are the ASCII values, so you can just use a character literal like 'A'.
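As a quick check of that claim (this assumes a key-code scheme whose letter/digit values mirror ASCII, as with Windows virtual-key codes or XNA's Keys for A-Z and 0-9), casting the character literal yields the key code directly:

```csharp
// Letter/digit key codes equal the ASCII values of the corresponding
// character literals, so no named constant is needed.
public static class KeyCodes
{
    public static int KeyCode(char c) => (int)c;
}
```

So a key test can be written as, e.g., IsKeyDown(KeyCode('A')) against whatever input API you are using, rather than hunting for a KEY_A constant.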
  36. 4 points
    What exactly is an Octree? If you're completely unfamiliar with them, I recommend reading the wikipedia article (read time: ~5 minutes). This is a sufficient description of what it is but is barely enough to give any ideas on what it's used for and how to actually implement one. In this article, I will do my best to take you through the steps necessary to create an octree data structure through conceptual explanations, pictures, and code, and show you the considerations to be made at each step along the way. I don't expect this article to be the authoritative way to do octrees, but it should give you a really good start and act as a good reference. Assumptions Before we dive in, I'm going to be making a few assumptions about you as a reader: You are very comfortable with programming in a C-syntax-style language (I will be using C# with XNA). You have programmed some sort of tree-like data structure in the past, such as a binary search tree and are familiar with recursion and its strengths and pitfalls. You know how to do collision detection with bounding rectangles, bounding spheres, and bounding frustums. You have a good grasp of common data structures (arrays, lists, etc) and understand Big-O notation (you can also learn about Big-O in this GDnet article). You have a development environment project which contains spatial objects which need collision tests. Setting the stage Let's suppose that we are building a very large game world which can contain thousands of physical objects of various types, shapes and sizes, some of which must collide with each other. Each frame we need to find out which objects are intersecting with each other and have some way to handle that intersection. How do we do it without killing performance? Brute force collision detection The simplest method is to just compare each object against every other object in the world. Typically, you can do this with two for loops. 
The code would look something like this: 

    foreach (gameObject myObject in ObjList)
    {
        foreach (gameObject otherObject in ObjList)
        {
            if (myObject == otherObject)
                continue;   //avoid self collision check
            if (myObject.CollidesWith(otherObject))
            {
                //code to handle the collision
            }
        }
    }

Conceptually, this is what we're doing in our picture: Each red line is an expensive CPU test for intersection. Naturally, you should feel horrified by this code because it is going to run in O(N^2) time. If you have 10,000 objects, then you're going to be doing 100,000,000 collision checks (a hundred million). I don't care how fast your CPU is or how well you've tuned your math code, this code would reduce your computer to a sluggish crawl. If you're running your game at 60 frames per second, you're looking at 60 * 100 million calculations per second! It's nuts. It's insane. It's crazy. Let's not do this if we can avoid it, at least not with a large set of objects. This would only be acceptable if we're only checking, say, 10 items against each other (100 checks is palatable). If you know in advance that your game is only going to have a very small number of objects (e.g., Asteroids), you can probably get away with using this brute force method for collision detection and ignore octrees altogether. If/when you start noticing performance problems due to too many collision checks per frame, consider some simple targeted optimizations: 

1. How much computation does your current collision routine take? Do you have a square root hidden away in there (i.e., a distance check)? Are you doing a granular collision check (pixel vs pixel, triangle vs triangle, etc)? One common technique is to perform a rough, coarse check for collision before testing for a granular collision check. You can give your objects an enclosing bounding rectangle or bounding sphere and test for intersection with these before testing against a granular check which may involve a lot more math and computation time. 
Use a "distance squared" check for comparing distances between objects to avoid using the square root method. Square root calculation typically uses Newton's method of approximation and can be computationally expensive. 

2. Can you get away with calculating fewer collision checks? If your game runs at 60 frames per second, could you skip a few frames? If you know certain objects behave deterministically, can you "solve" for when they will collide ahead of time (e.g., pool ball vs. side of pool table)? Can you reduce the number of objects which need to be checked for collisions? A technique for this would be to separate objects into several lists. One list could be your "stationary" objects list. They never have to test for collision against each other. The other list could be your "moving" objects, which need to be tested against all other moving objects and against all stationary objects. This could reduce the number of necessary collision tests to reach an acceptable performance level. 

3. Can you get away with removing some object collision tests when performance becomes an issue? For example, a smoke particle could interact with a surface object and follow its contours to create a nice aesthetic effect, but it wouldn't break game play if you hit a predefined limit for collision checks and decided to ignore smoke particles for collision. Ignoring essential game object movement would certainly break game play though (e.g., player bullets stop intersecting with monsters). So, perhaps maintaining a priority list of collision checks to compute would help. First you handle the high priority collision tests, and if you're not at your threshold, you can handle lower priority collision tests. When the threshold is reached, you dump the rest of the items in the priority list or defer them for testing at a later time. 

4. Can you use a faster but still simplistic method for collision detection to get away from an O(N^2) runtime? 
If you eliminate the objects you've already checked for collisions against, you can reduce the number of tests to N(N-1)/2, which is nearly twice as fast and still easy to implement (though technically it's still O(N^2)). In terms of software engineering, you may end up spending more time than it's worth fine-tuning a bad algorithm & data structure choice to squeeze out a few more ounces of performance. The cost vs. benefit ratio becomes increasingly unfavorable and it becomes time to choose a better data structure to handle collision detection. Spatial partitioning algorithms are the proverbial nuke for solving the runtime problem of collision detection. At a small upfront cost to performance, they'll reduce your collision detection tests to logarithmic runtime. The upfront costs of development time and CPU overhead are easily outweighed by the scalability benefits and performance gains. 

Conceptual background on spatial partitioning 

Let's take a step back and look at spatial partitioning and trees in general before diving into octrees. If we don't understand the conceptual idea, we have no hope of implementing it by sweating over code. Looking at the brute force implementation above, we're essentially taking every object in the game and comparing their positions against all other objects in the game to see if any are touching. All of these objects are contained spatially within our game world. Well, if we create an enclosing box around our game world and figure out which objects are contained within this enclosing box, then we've got a region of space with a list of contained objects within it. In this case, it would contain every object in the game. We can notice that if we have an object in one corner of the world and another object way over on the other side, we don't really need to, or want to, calculate a collision check between them every frame. It'd be a waste of precious CPU time. So, let's try something interesting! 
If we divide our world exactly in half, we can create three separate lists of objects. The first list of objects, List A, contains all objects on the left half of the world. The second list, List B, contains objects on the right half of the world. Some objects may touch the dividing line such that they're on each side of the line, so we'll create a third list, List C, for these objects. We can notice that with each subdivision, we're spatially reducing the world in half and collecting a list of objects in that resulting half. We can elegantly create a binary search tree to contain these lists. Conceptually, this tree should look something like so: In terms of pseudo code, the tree data structure would look something like this: 

    public class BinaryTree
    {
        //This is a list of all of the objects contained within this node of the tree
        private List m_objectList;

        //These are pointers to the left and right child nodes in the tree
        private BinaryTree m_left, m_right;

        //This is a pointer to the parent object (for upward tree traversal).
        private BinaryTree m_parent;
    }

We know that all objects in List A will never intersect with any objects in List B, so we can almost eliminate half of the number of collision checks. We've still got the objects in List C which could touch objects in either list A or B, so we'll have to check all objects in List C against all objects in Lists A, B & C. If we continue to sub-divide the world into smaller and smaller parts, we can further reduce the number of necessary collision checks by half each time. This is the general idea behind spatial partitioning. There are many ways to subdivide a world into a tree-like data structure (BSP trees, quad trees, k-d trees, octrees, etc). Now, by default, we're just assuming that the best division is a cut in half, right down the middle, since we're assuming that all of our objects will be somewhat uniformly distributed throughout the world. 
It's not a bad assumption to make, but some spatial division algorithms may decide to make a cut such that each side has an equal number of objects (a weighted cut) so that the resulting tree is more balanced. However, what happens if all of these objects move around? In order to maintain a nearly even division, you'd have to either shift the splitting plane or completely rebuild the tree each frame. It'd be a bit of a mess with a lot of complexity. So, for my implementation of a spatial partitioning tree I decided to cut right down the middle every time. As a result, some trees may end up being a bit more sparse than others, but that's okay -- it doesn't cost much. 

To subdivide or not to subdivide? That is the question. 

Let's assume that we have a somewhat sparse region with only a few objects. We could continue subdividing our space until we've found the smallest possible enclosing area for that object. But is that really necessary? Let's remember that the whole reason we're creating a tree is to reduce the number of collision checks we need to perform each frame -- not to create a perfectly enclosing region of space for every object. Here are the rules I use for deciding whether to subdivide or not: 

If we create a subdivision which only contains one object, we can stop subdividing even though we could keep dividing further. This rule will become an important part of the criteria for what defines a "leaf node" in our octree. 

The other important criterion is to set a minimum size for a region. If you have an extremely small object which is nanometers in size (or, god forbid, you have a bug and forgot to initialize an object size!), you're going to keep subdividing to the point where you potentially overflow your call stack. For my own implementation, I defined the smallest containing region to be a 1x1x1 cube. Any objects in this teeny cube will just have to be run with the O(N^2) brute force collision test (I don't anticipate many objects anyways!). 
If a containing region doesn't contain any objects, we shouldn't try to include it in the tree. 

We can take our subdivision by half one step further and divide the 2D world space into quadrants. The logic is essentially the same, but now we're testing for collision with four squares instead of two rectangles. We can continue subdividing each square until our rules for termination are met. The representation of the world space and corresponding data structure for a quad tree would look something like this: If the quad tree subdivision and data structure make sense, then an octree should be pretty straightforward as well. We're just adding a third dimension, using bounding cubes instead of bounding squares, and have eight possible child nodes instead of four. Some of you might wonder what should happen if you have a game world with non-cubic dimensions, say 200x300x400. You can still use an octree with cubic dimensions -- some child nodes will just end up empty if the game world doesn't have anything there. Obviously, you'll want to set the dimensions of your octree to at least the largest dimension of your game world. 

Octree Construction 

So, as you've read, an octree is a special type of subdividing tree commonly used for objects in 3D space (or anything with three dimensions). Our enclosing region is going to be a three-dimensional rectangle (commonly a cube). We will then apply our subdivision logic above, and cut our enclosing region into eight smaller rectangles. If a game object completely fits within one of these subdivided regions, we'll push it down the tree into that node's containing region. We'll then recursively continue subdividing each resulting region until one of our breaking conditions is met. At the end, we should expect to have a nice tree-like data structure. My implementation of the octree can contain objects which have a bounding sphere and/or a bounding rectangle. You'll see a lot of code used to determine which is being used. 
In terms of our Octree class data structure, I decided to do the following for each tree: 

- Each node has a bounding region which defines the enclosing region 
- Each node has a reference to the parent node 
- Contains an array of eight child nodes (use arrays for code simplicity and cache performance) 
- Contains a list of objects contained within the current enclosing region 
- I use a byte-sized bitmask for figuring out which child nodes are actively being used (the optimization benefit at the cost of additional complexity is somewhat debatable) 
- I use a few static variables to indicate the state of the tree 

Here is the code for my Octree class outline: 

    public class OctTree
    {
        BoundingBox m_region;

        List<Physical> m_objects;

        /// <summary>
        /// These are items which we're waiting to insert into the data structure.
        /// We want to accrue as many objects in here as possible before we inject them
        /// into the tree. This is slightly more cache friendly.
        /// </summary>
        static Queue<Physical> m_pendingInsertion = new Queue<Physical>();

        /// <summary>
        /// These are all of the possible child octants for this node in the tree.
        /// </summary>
        OctTree[] m_childNode = new OctTree[8];

        /// <summary>
        /// This is a bitmask indicating which child nodes are actively being used.
        /// It adds slightly more complexity, but is faster for performance since there
        /// is only one comparison instead of 8.
        /// </summary>
        byte m_activeNodes = 0;

        /// <summary>
        /// The minimum size for an enclosing region is a 1x1x1 cube.
        /// </summary>
        const int MIN_SIZE = 1;

        /// <summary>
        /// This is how many frames we'll wait before deleting an empty tree branch.
        /// Note that this is not a constant. The maximum lifespan doubles every time
        /// a node is reused, until it hits a hard coded constant of 64.
        /// </summary>
        int m_maxLifespan = 8;
        int m_curLife = -1;     //this is a countdown time showing how much time we have left to live

        /// <summary>
        /// A reference to the parent node is nice to have when we're trying to do a tree update.
        /// </summary>
        OctTree _parent;

        static bool m_treeReady = false;    //the tree has a few objects which need to be inserted before it is complete
        static bool m_treeBuilt = false;    //there is no pre-existing tree yet.
    }

Initializing the enclosing region 

The first step in building an octree is to define the enclosing region for the entire tree. This will be the bounding box for the root node of the tree which initially contains all objects in the game world. Before we go about initializing this bounding volume, we have a few design decisions we need to make: 

1. What should happen if an object moves outside of the bounding volume of the root node? Do we want to resize the entire octree so that all objects are enclosed? If we do, we'll have to completely rebuild the octree from scratch. If we don't, we'll need to have some way to either handle out of bounds objects, or ensure that objects never go out of bounds. 

2. How do we want to create the enclosing region for our octree? Do we want to use a preset dimension, such as a 200x400x200 (X,Y,Z) rectangle? Or do we want to use a cubic dimension which is a power of 2? What should be the smallest allowable enclosing region which cannot be subdivided? 

Personally, I decided that I would use a cubic enclosing region with dimensions which are a power of 2, and sufficiently large to completely enclose my world. The smallest allowable cube is a 1x1x1 unit region. With this, I know that I can always cleanly subdivide my world and get integer numbers (even though the Vector3 uses floats). I also decided that my enclosing region would enclose the entire game world, so if an object leaves this region, it should be quietly destroyed. At the smallest octant, I will have to run a brute force collision check against all other objects, but I don't realistically expect more than 3 objects to occupy that small of an area at a time, so the performance costs of O(N^2) are completely acceptable. 
    So, I normally just initialize my octree with a constructor which takes a region size and a list of items to insert into the tree. I feel it's barely worth showing this part of the code since it's so elementary, but I'll include it for completeness. Here are my constructors: 

    /*Note: we want to avoid allocating memory for as long as possible since there can be lots of nodes.*/

    /// <summary>
    /// Creates an oct tree which encloses the given region and contains the provided objects.
    /// </summary>
    /// <param name="region">The bounding region for the oct tree.</param>
    /// <param name="objList">The list of objects contained within the bounding region</param>
    private OctTree(BoundingBox region, List<Physical> objList)
    {
        m_region = region;
        m_objects = objList;
        m_curLife = -1;
    }

    public OctTree()
    {
        m_objects = new List<Physical>();
        m_region = new BoundingBox(Vector3.Zero, Vector3.Zero);
        m_curLife = -1;
    }

    /// <summary>
    /// Creates an octTree with a suggestion for the bounding region containing the items.
    /// </summary>
    /// <param name="region">The suggested dimensions for the bounding region.
    /// Note: if items are outside this region, the region will be automatically resized.</param>
    public OctTree(BoundingBox region)
    {
        m_region = region;
        m_objects = new List<Physical>();
        m_curLife = -1;
    }

Building an initial octree 

I'm a big fan of lazy initialization. I try to avoid allocating memory or doing work until I absolutely have to. In the case of my octree, I avoid building the data structure as long as possible. We'll accept a user's request to insert an object into the data structure, but we don't actually have to build the tree until someone runs a query against it. What does this do for us? Well, let's assume that the process of constructing and traversing our tree is somewhat computationally expensive. If a user wants to give us 1,000 objects to insert into the tree, does it make sense to recompute every subsequent enclosing area a thousand times? Or, can we save some time and do a bulk blast? I created a "pending" queue of items and a few flags to indicate the build state of the tree. 
All of the inserted items get put into the pending queue and when a query is made, those pending requests get flushed and injected into the tree. This is especially handy during a game loading sequence since you'll most likely be inserting thousands of objects at once. After the game world has been loaded, the number of objects injected into the tree is orders of magnitude fewer. My lazy initialization routine is contained within my UpdateTree() method. It checks to see if the tree has been built, and builds the data structure if it doesn't exist and has pending objects. /// /// Processes all pending insertions by inserting them into the tree. /// /// Consider deprecating this? private void UpdateTree() //complete & tested { if (!m_treeBuilt) { while (m_pendingInsertion.Count != 0) m_objects.Add(m_pendingInsertion.Dequeue()); BuildTree(); } else { while (m_pendingInsertion.Count != 0) Insert(m_pendingInsertion.Dequeue()); } m_treeReady = true; } As for building the tree itself, this can be done recursively. So for each recursive iteration, I start off with a list of objects contained within the bounding region. I check my termination rules, and if we pass, we create eight subdivided bounding areas which are perfectly contained within our enclosed region. Then, I go through every object in my given list and test to see if any of them will fit perfectly within any of my octants. If they do fit, I insert them into a corresponding list for that octant. At the very end, I check the counts on my corresponding octant lists and create new octrees and attach them to our current node, and mark my bitmask to indicate that those child octants are actively being used. All of the left over objects have been pushed down to us from our parent, but can't be pushed down to any children, so logically, this must be the smallest octant which can contain the object. /// /// Naively builds an oct tree from scratch. 
/// </summary>
private void BuildTree()    //complete & tested
{
    //terminate the recursion if we're a leaf node
    if (m_objects.Count <= 1)
        return;

    Vector3 dimensions = m_region.Max - m_region.Min;

    if (dimensions == Vector3.Zero)
    {
        FindEnclosingCube();
        dimensions = m_region.Max - m_region.Min;
    }

    //Check to see if the dimensions of the box are greater than the minimum dimensions
    if (dimensions.X <= MIN_SIZE && dimensions.Y <= MIN_SIZE && dimensions.Z <= MIN_SIZE)
    {
        return;
    }

    Vector3 half = dimensions / 2.0f;
    Vector3 center = m_region.Min + half;

    //Create subdivided regions for each octant
    BoundingBox[] octant = new BoundingBox[8];
    octant[0] = new BoundingBox(m_region.Min, center);
    octant[1] = new BoundingBox(new Vector3(center.X, m_region.Min.Y, m_region.Min.Z), new Vector3(m_region.Max.X, center.Y, center.Z));
    octant[2] = new BoundingBox(new Vector3(center.X, m_region.Min.Y, center.Z), new Vector3(m_region.Max.X, center.Y, m_region.Max.Z));
    octant[3] = new BoundingBox(new Vector3(m_region.Min.X, m_region.Min.Y, center.Z), new Vector3(center.X, center.Y, m_region.Max.Z));
    octant[4] = new BoundingBox(new Vector3(m_region.Min.X, center.Y, m_region.Min.Z), new Vector3(center.X, m_region.Max.Y, center.Z));
    octant[5] = new BoundingBox(new Vector3(center.X, center.Y, m_region.Min.Z), new Vector3(m_region.Max.X, m_region.Max.Y, center.Z));
    octant[6] = new BoundingBox(center, m_region.Max);
    octant[7] = new BoundingBox(new Vector3(m_region.Min.X, center.Y, center.Z), new Vector3(center.X, m_region.Max.Y, m_region.Max.Z));

    //This will contain all of our objects which fit within each respective octant.
    List<Physical>[] octList = new List<Physical>[8];
    for (int i = 0; i < 8; i++)
        octList[i] = new List<Physical>();

    //this list contains all of the objects which got moved down the tree and can be delisted from this node.
    List<Physical> delist = new List<Physical>();

    foreach (Physical obj in m_objects)
    {
        if (obj.BoundingBox.Min != obj.BoundingBox.Max)
        {
            for (int a = 0; a < 8; a++)
            {
                if (octant[a].Contains(obj.BoundingBox) == ContainmentType.Contains)
                {
                    octList[a].Add(obj);
                    delist.Add(obj);
                    break;
                }
            }
        }
        else if (obj.BoundingSphere.Radius != 0)
        {
            for (int a = 0; a < 8; a++)
            {
                if (octant[a].Contains(obj.BoundingSphere) == ContainmentType.Contains)
                {
                    octList[a].Add(obj);
                    delist.Add(obj);
                    break;
                }
            }
        }
    }

    //delist every moved object from this node.
    foreach (Physical obj in delist)
        m_objects.Remove(obj);

    //Create child nodes where there are items contained in the bounding region
    for (int a = 0; a < 8; a++)
    {
        if (octList[a].Count != 0)
        {
            m_childNode[a] = CreateNode(octant[a], octList[a]);
            m_activeNodes |= (byte)(1 << a);
            m_childNode[a].BuildTree();
        }
    }

    m_treeBuilt = true;
    m_treeReady = true;
}

private OctTree CreateNode(BoundingBox region, List<Physical> objList)  //complete & tested
{
    if (objList.Count == 0)
        return null;

    OctTree ret = new OctTree(region, objList);
    ret._parent = this;
    return ret;
}

private OctTree CreateNode(BoundingBox region, Physical Item)
{
    List<Physical> objList = new List<Physical>(1); //sacrifice potential CPU time for a smaller memory footprint
    objList.Add(Item);
    OctTree ret = new OctTree(region, objList);
    ret._parent = this;
    return ret;
}

Updating a tree

Let's imagine that our tree has a lot of moving objects in it. If any object moves, there is a good chance that it has moved outside of its enclosing octant. How do we handle changes in object position while maintaining the integrity of our tree structure?

Technique 1: Keep it super simple, trash & rebuild everything.

Some implementations of an octree will completely rebuild the entire tree every frame and discard the old one. This is super simple and it works, and if this is all you need, then prefer the simple technique.
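If the trash & rebuild route is all you need, it can be very small. Here is a minimal sketch, assuming the pending-queue API described above; the `Enqueue` method and the `m_worldBounds`/`m_allObjects` members are illustrative names, not taken from the attached sample:

```csharp
// Technique 1 sketched: throw the whole tree away each frame and rebuild it.
// Assumes the pending-queue/lazy-build API described above; member names
// here are illustrative.
public void RebuildEveryFrame(GameTime gameTime)
{
    //discard the old tree entirely and start from scratch
    m_octTree = new OctTree(m_worldBounds);

    //re-enqueue every live object...
    foreach (Physical obj in m_allObjects)
        m_octTree.Enqueue(obj);

    //...and let the lazy initialization flush the queue and build in one pass
    m_octTree.Update(gameTime);
}
```

Because every insert goes through the pending queue, the rebuild happens in a single BuildTree() pass rather than thousands of incremental inserts.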
The general consensus is that the upfront CPU cost of rebuilding the tree every frame is much cheaper than running a brute force collision check, and programmer time is too valuable to be spent on an unnecessary optimization. For those of us who like challenges and to over-engineer things, the "trash & rebuild" technique comes with a few small problems:

You're constantly allocating and deallocating memory each time you rebuild your tree. Allocating new memory comes with a small cost. If possible, you want to minimize the amount of memory being allocated and reallocated over time by reusing memory you've already got.

Most of the tree is unchanging, so it's a waste of CPU time to rebuild the same branches over and over again.

Technique 2: Keep the existing tree, update the changed branches

I noticed that most branches of a tree don't need to be updated; they just contain stationary objects. Wouldn't it be nice if, instead of rebuilding the entire tree every frame, we just updated the parts of the tree which needed an update? This technique keeps the existing tree and updates only the branches which contained an object that moved. It's a bit more complex to implement, but it's a lot more fun too, so let's really get into that!

During my first attempt at this, I mistakenly thought that an object in a child node could only move up or down one level of the tree per update. This is wrong. If an object in a child node reaches the edge of that node, and that edge also happens to be an edge for the enclosing parent node, then that object needs to be inserted above its parent, and possibly even further up. So, the bottom line is that we don't know how far up an object needs to be pushed up the tree. Likewise, an object can move such that it can be neatly enclosed in a child node, or that child's child node. We don't know how far down the tree we can go.
Fortunately, since we include a reference to each node's parent, we can easily solve this problem recursively with minimal computation!

The general idea behind the update algorithm is to first let all objects in the tree update themselves. Some may move or change in size. We want to get a list of every object which moved, so the object update method should return a boolean value indicating whether its bounding area changed. Once we've got a list of all of our moved objects, we want to start at our current node and try to traverse up the tree until we find a node which completely encloses the moved object (most of the time, the current node still encloses the object). If the object isn't completely enclosed by the current node, we keep moving it up to its next parent node. In the worst case, our root node is guaranteed to contain the object.

After we've moved our object as far up the tree as possible, we'll try to move it as far down the tree as we can. Most of the time, if we moved the object up, we won't be able to move it back down. But if the object moved so that a child node of the current node could contain it, we have the chance to push it back down the tree. It's important to be able to move objects down the tree as well, or else all moving objects would eventually migrate to the top and we'd start getting performance problems during collision detection routines.

Branch Removal

In some cases, an object will move out of a node, and that node will no longer have any objects contained within it, nor any children which contain objects. If this happens, we have an empty branch and we need to mark it as such and prune this dead branch off the tree.

There is an interesting question hiding here: when do you want to prune the dead branches off a tree? Allocating new memory costs time, so if we're just going to reuse this same region in a few cycles, why not keep it around for a bit?
How long can we keep it around before it becomes more expensive to maintain the dead branch? I decided to give each of my nodes a countdown timer which activates when the branch is dead. If an object moves into this node's octant while the death timer is active, I double the lifespan and reset the death timer. This ensures that octants which are frequently used stay hot and stick around, while nodes which are infrequently used are removed before they start to cost more than they're worth.

A practical example of this usefulness is a machine gun shooting a stream of bullets. Those bullets follow in close succession, so it'd be a shame to immediately delete a node as soon as the first bullet leaves it, only to recreate it a fraction of a second later as the second bullet re-enters it. And if there are a lot of bullets, we can probably keep these octants around for a little while. If a child branch is empty and hasn't been used in a while, it's safe to prune it out of our tree.

Anyways, let's look at the code which does all of this magic. First up, we have the Update() method. This is a method which is recursively called on all child trees. It moves all objects around, does some housekeeping work for the data structure, and then moves each moved object into its correct node (parent or child).

public void Update(GameTime gameTime)
{
    if (m_treeBuilt == true)
    {
        //Start a count down death timer for any leaf nodes which don't have objects or children.
        //when the timer reaches zero, we delete the leaf. If the node is reused before death, we double its lifespan.
        //this gives us a "frequency" usage score and lets us avoid allocating and deallocating memory unnecessarily
        if (m_objects.Count == 0)
        {
            if (HasChildren == false)
            {
                if (m_curLife == -1)
                    m_curLife = m_maxLifespan;
                else if (m_curLife > 0)
                {
                    m_curLife--;
                }
            }
        }
        else
        {
            if (m_curLife != -1)
            {
                if (m_maxLifespan <= 64)
                    m_maxLifespan *= 2;
                m_curLife = -1;
            }
        }

        List<Physical> movedObjects = new List<Physical>(m_objects.Count);

        //go through and update every object in the current tree node
        foreach (Physical gameObj in m_objects)
        {
            //we should figure out if an object actually moved so that we know whether we need to update this node in the tree.
            if (gameObj.Update(gameTime))
            {
                movedObjects.Add(gameObj);
            }
        }

        //prune any dead objects from the tree.
        int listSize = m_objects.Count;
        for (int a = 0; a < listSize; a++)
        {
            if (!m_objects[a].Alive)
            {
                if (movedObjects.Contains(m_objects[a]))
                    movedObjects.Remove(m_objects[a]);
                m_objects.RemoveAt(a--);
                listSize--;
            }
        }

        //recursively update any child nodes.
        for (int flags = m_activeNodes, index = 0; flags > 0; flags >>= 1, index++)
            if ((flags & 1) == 1)
                m_childNode[index].Update(gameTime);

        //If an object moved, we can insert it into the parent and that will insert it into the correct tree node.
        //note that we have to do this last so that we don't accidentally update the same object more than once per frame.
        foreach (Physical movedObj in movedObjects)
        {
            OctTree current = this;

            //figure out how far up the tree we need to go to reinsert our moved object
            //we are either using a bounding rect or a bounding sphere
            //try to move the object into an enclosing parent node until we've got full containment
            if (movedObj.BoundingBox.Max != movedObj.BoundingBox.Min)
            {
                while (current.m_region.Contains(movedObj.BoundingBox) != ContainmentType.Contains)
                    if (current._parent != null) current = current._parent;
                    else break; //prevent infinite loops when we go out of bounds of the root node region
            }
            else
            {
                while (current.m_region.Contains(movedObj.BoundingSphere) != ContainmentType.Contains) //we must be using a bounding sphere, so check for its containment.
                    if (current._parent != null) current = current._parent;
                    else break;
            }

            //now, remove the object from the current node and insert it into the current containing node.
            m_objects.Remove(movedObj);
            current.Insert(movedObj);   //this will try to insert the object as deep into the tree as we can go.
        }

        //prune out any dead branches in the tree
        for (int flags = m_activeNodes, index = 0; flags > 0; flags >>= 1, index++)
            if ((flags & 1) == 1 && m_childNode[index].m_curLife == 0)
            {
                m_childNode[index] = null;
                m_activeNodes ^= (byte)(1 << index);    //remove the node from the active nodes flag list
            }

        //now that all objects have moved and they've been placed into their correct nodes in the octree, we can look for collisions.
        if (IsRoot == true)
        {
            //This will recursively gather up all collisions and create a list of them.
            //this is simply a matter of comparing all objects in the current root node with all objects in all child nodes.
            //note: we can assume that every collision will only be between objects which have moved.
            //note 2: An explosion can be centered on a point but grow in size over time. In this case, you'll have to override the update method for the explosion.
            List<IntersectionRecord> irList = GetIntersection(new List<Physical>());

            foreach (IntersectionRecord ir in irList)
            {
                if (ir.PhysicalObject != null)
                    ir.PhysicalObject.HandleIntersection(ir);
                if (ir.OtherPhysicalObject != null)
                    ir.OtherPhysicalObject.HandleIntersection(ir);
            }
        }
    }
}

Note that we call an Insert() method for moved objects. The insertion of objects into the tree is very similar to the method used to build the initial tree. Insert() will try to push objects as far down the tree as possible. Notice that I also try to avoid creating new bounding areas if I can use an existing one from a child node.

/// <summary>
/// A tree has already been created, so we're going to try to insert an item into the tree without rebuilding the whole thing
/// </summary>
/// <typeparam name="T">A physical object</typeparam>
/// <param name="Item">The physical object to insert into the tree</param>
private void Insert<T>(T Item) where T : Physical
{
    /*make sure we're not inserting an object any deeper into the tree than we have to.
      -if the current node is an empty leaf node, just insert and leave it.*/
    if (m_objects.Count <= 1 && m_activeNodes == 0)
    {
        m_objects.Add(Item);
        return;
    }

    Vector3 dimensions = m_region.Max - m_region.Min;

    //Check to see if the dimensions of the box are greater than the minimum dimensions
    if (dimensions.X <= MIN_SIZE && dimensions.Y <= MIN_SIZE && dimensions.Z <= MIN_SIZE)
    {
        m_objects.Add(Item);
        return;
    }

    Vector3 half = dimensions / 2.0f;
    Vector3 center = m_region.Min + half;

    //Find or create subdivided regions for each octant in the current region
    BoundingBox[] childOctant = new BoundingBox[8];
    childOctant[0] = (m_childNode[0] != null) ? m_childNode[0].m_region : new BoundingBox(m_region.Min, center);
    childOctant[1] = (m_childNode[1] != null) ? m_childNode[1].m_region : new BoundingBox(new Vector3(center.X, m_region.Min.Y, m_region.Min.Z), new Vector3(m_region.Max.X, center.Y, center.Z));
    childOctant[2] = (m_childNode[2] != null) ?
        m_childNode[2].m_region : new BoundingBox(new Vector3(center.X, m_region.Min.Y, center.Z), new Vector3(m_region.Max.X, center.Y, m_region.Max.Z));
    childOctant[3] = (m_childNode[3] != null) ? m_childNode[3].m_region : new BoundingBox(new Vector3(m_region.Min.X, m_region.Min.Y, center.Z), new Vector3(center.X, center.Y, m_region.Max.Z));
    childOctant[4] = (m_childNode[4] != null) ? m_childNode[4].m_region : new BoundingBox(new Vector3(m_region.Min.X, center.Y, m_region.Min.Z), new Vector3(center.X, m_region.Max.Y, center.Z));
    childOctant[5] = (m_childNode[5] != null) ? m_childNode[5].m_region : new BoundingBox(new Vector3(center.X, center.Y, m_region.Min.Z), new Vector3(m_region.Max.X, m_region.Max.Y, center.Z));
    childOctant[6] = (m_childNode[6] != null) ? m_childNode[6].m_region : new BoundingBox(center, m_region.Max);
    childOctant[7] = (m_childNode[7] != null) ? m_childNode[7].m_region : new BoundingBox(new Vector3(m_region.Min.X, center.Y, center.Z), new Vector3(center.X, m_region.Max.Y, m_region.Max.Z));

    //First, is the item completely contained within the root bounding box?
    //note2: I shouldn't actually have to compensate for this. If an object is out of our predefined bounds, then we have a problem/error.
    //  Wrong. Our initial bounding box for the terrain is constricting its height to the highest peak. Flying units will be above that.
    //  Fix: I resized the enclosing box to 256x256x256. This should be sufficient.
    if (Item.BoundingBox.Max != Item.BoundingBox.Min && m_region.Contains(Item.BoundingBox) == ContainmentType.Contains)
    {
        bool found = false;
        //we will try to place the object into a child node. If we can't fit it in a child node, then we insert it into the current node object list.
        for (int a = 0; a < 8; a++)
        {
            //is the object fully contained within a quadrant?
            if (childOctant[a].Contains(Item.BoundingBox) == ContainmentType.Contains)
            {
                if (m_childNode[a] != null)
                    m_childNode[a].Insert(Item);    //Add the item into that tree and let the child tree figure out what to do with it
                else
                {
                    m_childNode[a] = CreateNode(childOctant[a], Item);  //create a new tree node with the item
                    m_activeNodes |= (byte)(1 << a);
                }
                found = true;
            }
        }

        if (!found)
            m_objects.Add(Item);
    }
    else if (Item.BoundingSphere.Radius != 0 && m_region.Contains(Item.BoundingSphere) == ContainmentType.Contains)
    {
        bool found = false;
        //we will try to place the object into a child node. If we can't fit it in a child node, then we insert it into the current node object list.
        for (int a = 0; a < 8; a++)
        {
            //is the object contained within a child quadrant?
            if (childOctant[a].Contains(Item.BoundingSphere) == ContainmentType.Contains)
            {
                if (m_childNode[a] != null)
                    m_childNode[a].Insert(Item);    //Add the item into that tree and let the child tree figure out what to do with it
                else
                {
                    m_childNode[a] = CreateNode(childOctant[a], Item);  //create a new tree node with the item
                    m_activeNodes |= (byte)(1 << a);
                }
                found = true;
            }
        }

        if (!found)
            m_objects.Add(Item);
    }
    else
    {
        //either the item lies outside of the enclosed bounding box or it is intersecting it. Either way, we need to rebuild
        //the entire tree by enlarging the containing bounding box
        //BoundingBox enclosingArea = FindBox();
        BuildTree();
    }
}

Collision Detection

Finally, our octree has been built and everything is as it should be. How do we perform collision detection against it? First, let's list out the different ways we want to look for collisions:

Frustum intersections. We may have a frustum which intersects with a region of the world. We only want the objects which intersect with the given frustum. This is particularly useful for culling regions outside of the camera view space, and for figuring out what objects are within a mouse selection area.

Ray intersections.
We may want to shoot a directional ray from any given point and get either the nearest intersecting object or a list of all objects which intersect that ray (like a rail gun). This is very useful for mouse picking: if the user clicks on the screen, we want to cast a ray into the world and figure out what they clicked on.

Bounding box intersections. We want to know which objects in the world are intersecting a given bounding box. This is most useful for "box" shaped game objects (houses, cars, etc.).

Bounding sphere intersections. We want to know which objects are intersecting a given bounding sphere. Most objects will probably use a bounding sphere for coarse collision detection, since the mathematics is computationally the least expensive and fairly easy.

The main idea behind recursive collision detection processing for an octree is that you start at the root/current node and test all objects in that node against the intersector. Then you do a bounding box intersection test against all active child nodes with the intersector. If a child node fails this intersection test, you can completely ignore the rest of that child's tree. If a child node passes the intersection test, you recursively traverse down the tree and repeat. Each node should pass a list of intersection records up to its caller, which appends those intersections to its own list. When the recursion finishes, the original caller will get a list of every intersection for the given intersector. The beauty of this is that it takes very little code to implement and performance is very fast.

In a lot of these collisions, we're probably going to be getting a lot of results. We're also going to want some way of responding to each collision, depending on which objects are colliding. For example, a player hero should pick up a floating bonus item (quad damage!), but a rocket shouldn't explode if it hits said bonus item.
I created a new class to contain information about each intersection. This class contains references to the intersecting objects, the point of intersection, the normal at the point of intersection, etc. These intersection records become quite useful when you pass them to an object and tell it to handle the intersection. For completeness and clarity, here is my intersection record class:

public class IntersectionRecord
{
    Vector3 m_position;
    /// <summary>
    /// This is the exact point in 3D space which has an intersection.
    /// </summary>
    public Vector3 Position { get { return m_position; } }

    Vector3 m_normal;
    /// <summary>
    /// This is the normal of the surface at the point of intersection
    /// </summary>
    public Vector3 Normal { get { return m_normal; } }

    Ray m_ray;
    /// <summary>
    /// This is the ray which caused the intersection
    /// </summary>
    public Ray Ray { get { return m_ray; } }

    Physical m_intersectedObject1;
    /// <summary>
    /// This is the object which is being intersected
    /// </summary>
    public Physical PhysicalObject
    {
        get { return m_intersectedObject1; }
        set { m_intersectedObject1 = value; }
    }

    Physical m_intersectedObject2;
    /// <summary>
    /// This is the other object being intersected (may be null, as in the case of a ray-object intersection)
    /// </summary>
    public Physical OtherPhysicalObject
    {
        get { return m_intersectedObject2; }
        set { m_intersectedObject2 = value; }
    }

    /// <summary>
    /// this is a reference to the current node within the octree for where the collision occurred. In some cases, the collision handler
    /// will want to be able to spawn new objects and insert them into the tree. This node is a good starting place for inserting these objects
    /// since it is a very near approximation to where we want to be in the tree.
    /// </summary>
    OctTree m_treeNode;

    /// <summary>
    /// check the object identities between the two intersection records. If they match in either order, we have a duplicate.
    /// </summary>
    /// <param name="otherRecord">the other record to compare against</param>
    /// <returns>true if the records are an intersection for the same pair of objects, false otherwise.</returns>
    public override bool Equals(object otherRecord)
    {
        if (otherRecord == null)
            return false;

        IntersectionRecord o = (IntersectionRecord)otherRecord;

        if (o.m_intersectedObject1.ID == m_intersectedObject1.ID && o.m_intersectedObject2.ID == m_intersectedObject2.ID)
            return true;
        if (o.m_intersectedObject1.ID == m_intersectedObject2.ID && o.m_intersectedObject2.ID == m_intersectedObject1.ID)
            return true;
        return false;
    }

    double m_distance;
    /// <summary>
    /// This is the distance from the ray to the intersection point.
    /// You'll usually want to use the nearest collision point if you get multiple intersections.
    /// </summary>
    public double Distance { get { return m_distance; } }

    private bool m_hasHit = false;
    public bool HasHit { get { return m_hasHit; } }

    public IntersectionRecord()
    {
        m_position = Vector3.Zero;
        m_normal = Vector3.Zero;
        m_ray = new Ray();
        m_distance = float.MaxValue;
        m_intersectedObject1 = null;
    }

    public IntersectionRecord(Vector3 hitPos, Vector3 hitNormal, Ray ray, double distance)
    {
        m_position = hitPos;
        m_normal = hitNormal;
        m_ray = ray;
        m_distance = distance;
        m_hasHit = true;
    }

    /// <summary>
    /// Creates a new intersection record indicating whether there was a hit or not and the object which was hit.
    /// </summary>
    /// <param name="hitObject">Optional: The object which was hit. Defaults to null.</param>
    public IntersectionRecord(Physical hitObject = null)
    {
        m_hasHit = hitObject != null;
        m_intersectedObject1 = hitObject;
        m_position = Vector3.Zero;
        m_normal = Vector3.Zero;
        m_ray = new Ray();
        m_distance = 0.0f;
    }
}

Intersection with a Bounding Frustum

/// <summary>
/// Gives you a list of all intersection records which intersect or are contained within the given frustum area
/// </summary>
/// <param name="frustum">The containing frustum to check for intersection/containment with</param>
/// <returns>A list of intersection records with collisions</returns>
private List<IntersectionRecord> GetIntersection(BoundingFrustum frustum, Physical.PhysicalType type = Physical.PhysicalType.ALL)
{
    if (m_objects.Count == 0 && HasChildren == false)   //terminator for any recursion
        return null;

    List<IntersectionRecord> ret = new List<IntersectionRecord>();

    //test each object in the list for intersection
    foreach (Physical obj in m_objects)
    {
        //skip any objects which don't meet our type criteria
        if ((int)((int)type & (int)obj.Type) == 0)
            continue;

        //test for intersection
        IntersectionRecord ir = obj.Intersects(frustum);
        if (ir != null)
            ret.Add(ir);
    }

    //test each child node for intersection with the frustum
    for (int a = 0; a < 8; a++)
    {
        if (m_childNode[a] != null &&
            (frustum.Contains(m_childNode[a].m_region) == ContainmentType.Intersects ||
             frustum.Contains(m_childNode[a].m_region) == ContainmentType.Contains))
        {
            List<IntersectionRecord> hitList = m_childNode[a].GetIntersection(frustum);
            if (hitList != null)
            {
                foreach (IntersectionRecord ir in hitList)
                    ret.Add(ir);
            }
        }
    }

    return ret;
}

The bounding frustum intersection list can be used to render only the objects which are visible to the current camera view. I use a scene database to figure out how to render all objects in the game world.
Here is a snippet of code from my rendering function which uses the bounding frustum of the active camera:

/// <summary>
/// This renders every active object in the scene database
/// </summary>
/// <returns>the number of triangles rendered</returns>
public int Render()
{
    int triangles = 0;

    //Renders all visible objects by iterating through the oct tree recursively and testing for intersection
    //with the current camera view frustum
    foreach (IntersectionRecord ir in m_octTree.AllIntersections(m_cameras[m_activeCamera].Frustum))
    {
        ir.PhysicalObject.SetDirectionalLight(m_globalLight[0].Direction, m_globalLight[0].Color);
        ir.PhysicalObject.View = m_cameras[m_activeCamera].View;
        ir.PhysicalObject.Projection = m_cameras[m_activeCamera].Projection;
        ir.PhysicalObject.UpdateLOD(m_cameras[m_activeCamera]);
        triangles += ir.PhysicalObject.Render(m_cameras[m_activeCamera]);
    }

    return triangles;
}

Intersection with a Ray

/// <summary>
/// Gives you a list of intersection records for all objects which intersect with the given ray
/// </summary>
/// <param name="intersectRay">The ray to intersect objects against</param>
/// <returns>A list of all intersections</returns>
private List<IntersectionRecord> GetIntersection(Ray intersectRay, Physical.PhysicalType type = Physical.PhysicalType.ALL)
{
    if (m_objects.Count == 0 && HasChildren == false)   //terminator for any recursion
        return null;

    List<IntersectionRecord> ret = new List<IntersectionRecord>();

    //the ray is intersecting this region, so we have to check for intersection with all of our contained objects and child regions.

    //test each object in the list for intersection
    foreach (Physical obj in m_objects)
    {
        //skip any objects which don't meet our type criteria
        if ((int)((int)type & (int)obj.Type) == 0)
            continue;

        if (obj.BoundingBox.Intersects(intersectRay) != null)
        {
            IntersectionRecord ir = obj.Intersects(intersectRay);
            if (ir.HasHit)
                ret.Add(ir);
        }
    }

    //test each child octant for intersection
    for (int a = 0; a < 8; a++)
    {
        if (m_childNode[a] != null && m_childNode[a].m_region.Intersects(intersectRay) != null)
        {
            List<IntersectionRecord> hits = m_childNode[a].GetIntersection(intersectRay, type);
            if (hits != null)
            {
                foreach (IntersectionRecord ir in hits)
                    ret.Add(ir);
            }
        }
    }

    return ret;
}

Intersection with a list of objects

This is a particularly useful recursive method for determining whether a list of objects in the current node intersects with any objects in any child nodes (see the Update() method for usage). It's the method which will be used most frequently, so it's good to get this right and efficient. What we want to do is start at the root node of the tree. We compare all objects in the current node against all other objects in the current node for collision. We gather up any of those collisions as intersection records and insert them into a list. We then pass our list of tested objects down to our child nodes. The child nodes will then test their objects against themselves, then against the objects we passed down to them. The child nodes will capture any collisions in a list and return that list to their parent. The parent then takes the collision list received from its child nodes, appends it to its own list of collisions, and finally returns it to its caller.

If you count out the number of collision tests in the illustration above, you can see that we conducted 29 hit tests and received 4 hits. This is much better than [11*11 = 121] hit tests.
private List<IntersectionRecord> GetIntersection(List<Physical> parentObjs, Physical.PhysicalType type = Physical.PhysicalType.ALL)
{
    List<IntersectionRecord> intersections = new List<IntersectionRecord>();

    //assume all parent objects have already been processed for collisions against each other.
    //check all parent objects against all objects in our local node
    foreach (Physical pObj in parentObjs)
    {
        foreach (Physical lObj in m_objects)
        {
            //We let the two objects check for collision against each other. They can figure out how to do the coarse and granular checks.
            //all we're concerned about is whether or not a collision actually happened.
            IntersectionRecord ir = pObj.Intersects(lObj);
            if (ir != null)
            {
                intersections.Add(ir);
            }
        }
    }

    //now, check all our local objects against all other local objects in the node
    if (m_objects.Count > 1)
    {
        #region self-congratulation
        /*
         * This is a rather brilliant section of code. Normally, you'd just have two foreach loops, like so:
         * foreach(Physical lObj1 in m_objects)
         * {
         *     foreach(Physical lObj2 in m_objects)
         *     {
         *         //intersection check code
         *     }
         * }
         *
         * The problem is that this runs in O(N*N) time and that we're checking for collisions with objects which have already been checked.
         * Imagine you have a set of four items: {1,2,3,4}
         * You'd first check: {1} vs {1,2,3,4}
         * Next, you'd check {2} vs {1,2,3,4}
         * but we already checked {1} vs {2}, so it's a waste to check {2} vs. {1}. What if we could skip this check by removing {1}?
         * We'd have a total of 4+3+2+1 collision checks, which equates to O(N(N+1)/2) time. If N is 10, we are already doing half as many collision checks as necessary.
         * Now, we can't just remove an item at the end of the 2nd for loop since that would break the iterator in the first foreach loop, so we
         * work on a copy of the list and remove items from its back instead.
         */
        #endregion

        List<Physical> tmp = new List<Physical>(m_objects.Count);
        tmp.AddRange(m_objects);

        while (tmp.Count > 0)
        {
            foreach (Physical lObj2 in tmp)
            {
                if (tmp[tmp.Count - 1] == lObj2 || (tmp[tmp.Count - 1].IsStatic && lObj2.IsStatic))
                    continue;
                IntersectionRecord ir = tmp[tmp.Count - 1].Intersects(lObj2);
                if (ir != null)
                    intersections.Add(ir);
            }

            //remove this object from the temp list so that we can run in O(N(N+1)/2) time instead of O(N*N)
            tmp.RemoveAt(tmp.Count - 1);
        }
    }

    //now, merge our local objects list with the parent objects list, then pass it down to all children.
    foreach (Physical lObj in m_objects)
        if (lObj.IsStatic == false)
            parentObjs.Add(lObj);
    //parentObjs.AddRange(m_objects);

    //each child node will give us a list of intersection records, which we then merge with our own intersection records.
    for (int flags = m_activeNodes, index = 0; flags > 0; flags >>= 1, index++)
        if ((flags & 1) == 1)
            intersections.AddRange(m_childNode[index].GetIntersection(parentObjs, type));

    return intersections;
}

Screenshot Demos

This is a view of the game world from a distance showing the outlines for each bounding volume for the octree.

This view shows a bunch of successive projectiles moving through the game world with the frequently-used nodes being preserved instead of deleted.

Complete Code Sample

I've attached a complete code sample of the octree class, the intersection record class, and my generic physical object class. I don't guarantee that they're all bug-free, since it's all a work in progress and hasn't been rigorously tested yet.
  37. 3 points
    I’m Tamás Karsai (Spidi), a solo game developer forming "Magic Item Tech". I used to grind for magic items day and night; now I'm building games using technology fueled by them. I would like to present my third completed solo game project from start to finish. I Am Overburdened is a silly roguelike with a fun twist on the tried and true classical formula. The player takes the role of an artifact hunter, who has a surprisingly large carrying capacity, embarking on a quest to search through dungeon after dungeon for mystical artifacts and answers, in a world where magic has long been forgotten…

Run-focused campaign, playable in short bursts.
Fill a huge inventory having 20 slots.
Find more than 100 crazy artifacts, all of them unique.
Learn the ins and outs of an RPG system which feels approachable and fresh, but familiar and deep at the same time.
Crawl in procedural dungeons generated from hand-authored layouts.
Collect details about monsters and artifacts in your journal.
Unfold a funny story, packed with vicious evils, puns and jokes.
Immortalize your best playthroughs in the “Hall of Fame”.

What is this 20 slots, 100+ unique artifacts RPG nonsense? Simple: your “hero” does not get more powerful magically by beating some orcs to death, and he is not a wizard either. In the world of I Am Overburdened the art of magic wielding was lost, but legendary artifacts and relics retained their power. If you equip these you become stronger, sometimes immeasurably, and you may even learn reactive skills and otherworldly abilities, but no sane person wears two pants… The game will be sold primarily through Steam and itch.io for Windows PC initially. It will cost 4.99$ (may vary based on store & region).
You can already wishlist the game on Steam to get an e-mail on release day. Or you can follow my developer profile on itch.io to get a notification. My website, the Steam store-page and the Steam Community Hub already have a lot more information about the features of the game and the release itself. I'm also doing a little marketing "sprint" thingy up until the release day. I'm calling it the "Wishlist Release Calendar": I'm going to release an artifact from the game every day with its "fluff" text, posting the new version of the following image here:
  38. 3 points
    Yes. This was discussed (but for some reason the blog post was removed, likely in a server migration) on Filmic Worlds' website. Fortunately the Web Archive remembers. There is also a Twitter discussion: https://twitter.com/olanom/status/444116562430029825
  39. 3 points
    You are right that anti-aliasing needs the tone mapped result, but the AA is probably also done before the post-processing, which works best in HDR space. So usually there is a tonemap in the anti-aliasing shader before the resolve, and an inverse tonemap right after it, so that the post-processing gets the anti-aliased image in HDR space. Then of course you need to do the tonemapping again at the end.
  40. 3 points
    With a perspective projection depth values have always been non-linear; nothing has changed in D3D12, you probably just haven't noticed before. If your units are metres, then a near clip plane set to 0.1 millimetres is excessively small. Even your new 1cm near clip is smaller than you probably need. Try to keep the ratio between the near and far clip planes as small as possible. The trick you should start using is the reverse depth buffer trick (Google "Reverse Depth Buffer"). This should significantly reduce any Z-fighting you're getting, even with larger near/far ratios. Here's one such link: https://developer.nvidia.com/content/depth-precision-visualized
  41. 3 points
    Attention, passive aggressive post ahead: First, I think that Mike's post was a legitimate objection from the point of view of someone who has succeeded, and he is right that the great Kickstarter era seems to be over, as there are now other platforms/possibilities people can choose from. Patreon for example, or if you already have something playable there is Steam, to pre-finance alpha-state games and get feedback. And if you have a team of coders, just let the world know that and (if they agree) who they are. This makes your project seem more serious. Anyway, currently it sounds like all the other "let's build a studio" posts here: you want people but don't want to/aren't able to pay them, which is implied by what reads like "I have had some people bring my game to a point where I now want to kick them all off and replace them with more experienced/professional people on the same conditions". Do not expect talented professionals to join your club, since most of them will already have their paid jobs. Do not expect people to work for free for an uncertain maybe/maybe-not payout. And at the least, don't expect a mobile app to get that much funding or yield enough profit to keep those people in your "maybe company". For every paid app on the mobile market there are dozens of free/freemium apps, and "something like <insert whatever here>" will not have a long lifetime in my opinion. Also, people on the mobile market do not want to pay full price for a game, especially one they have never heard of before. I would suggest looking at App Store/Play Store trends for a market analysis. Don't get me wrong, I respect people that are willing to fund a company to bring their desired games to life. (... But not this way)
  42. 3 points
    It is doable, we do it all in our game, but we do it back to front (no early out) and we also interleave the particles with sorted fragments from traditional geometry or unsupported particle types. The bandwidth saving plus well-written shader optimization makes it a good gain (plus order-independent transparency draw). The challenge is DX11 PC without bindless: you have to deal with texture atlases and drivers having a hard time optimising such a complex shader (judging from the DXBC, compared to consoles where we have a dedicated shader compiler). On console and DX12/Vulkan you can also just provide an array of texture descriptors, so it is easier. For practical reasons, and for culling storage, you may want to limit the number of particles to a few thousand; it was fine for us, but games based on heavier effects would have mourned.
  43. 3 points
    @SoldierOfLight Unfortunately, the above example is not valid, we expect descriptors to be "set in stone" by ExecuteCommandLists (submission) time. This is in accordance with the documentation below: https://msdn.microsoft.com/en-us/library/windows/desktop/mt709473(v=vs.85).aspx#descriptors_volatile "With this flag set, the descriptors in a descriptor heap pointed to by a root descriptor table can be changed by the application any time except while the command list / bundles that bind the descriptor table have been submitted and have not finished executing. For instance, recording a command list and subsequently changing descriptors in a descriptor heap it refers to before submitting the command list for execution is valid. This is the only supported behavior of Root Signature version 1.0."
  44. 3 points
    So after coming across this thread about JAI, I got to thinking: why haven't more people embraced D as the programming language to use, especially over/instead of C++? But more generally, will any language ever replace C and C++? Or is the amount of inertia and legacy code too insurmountable for any other language (of that sort) to be fully embraced (i.e. not be a niche language)?
  45. 3 points
    As others have alluded to, there's a layer of abstraction that you're not accounting for here. fxc outputs DXBC, which contains instructions that use a virtual ISA. So it's basically emitting code for a virtual GPU that's expected to behave according to rules defined in the spec for that virtual ISA. The GPU and its driver are then responsible for taking that virtual ISA, JIT-compiling it to native instructions that the GPU can understand (which they sometimes do by going through multiple intermediate steps!), and then executing those instructions in a way that the final shader results are consistent with what the virtual machine would have produced. This gives GPUs tons of leeway in how they can design their hardware and their instructions, which is kind of the whole point of having a virtual ISA. The most famous example is probably recent Nvidia and AMD GPUs, which have their threads work in terms of scalar instructions instead of the 4-wide instructions that are used in DXBC. Ultimately what this means for you is that you often can only reason about things in terms of the DXBC virtual ISA. The IHVs will often provide tools that can show you the final hardware-specific instructions for reference, which can occasionally help guide you in terms of writing more optimal code for a particular generation of hardware. But in the long run hardware can change in all kinds of ways, and you can never make assumptions about how hardware will compute the results of your shader program. That said, the first thing you should do is compile your shader and look at the resulting DXBC. In the case of loading from an R8_UINT texture and XOR'ing it, it's probably just going to load the integer data into a single component of one of its 16-byte registers and then perform an xor instruction on that. 
Depending on what else is going on in your program you might or might not have other data packed into the same register, and the compiler may or may not merge multiple scalar operations into a 2-, 3-, or 4-way vector operation. But again, this can have little bearing on the actual instructions executed by the hardware. In general, I think that your worries about "packing parallel XOR's" are a little misplaced in terms of modern GPUs. GPUs will typically use SIMT-style execution, where a single "thread" running a shader program runs on a single lane of a SIMD unit. So as long as you have lots of threads executing (pixels being shaded, in your case), the XOR will pretty much always be run in parallel across wide SIMD units as a matter of course.
  46. 3 points
    If I'm understanding you correctly, what you're suggesting is probably the worst thing you could be trying to do. Shaders don't have a stack or any local memory to play with. They have a few (~256) registers to play with and that's it. Compute Shaders can use LDS / Group Shared Memory, but I don't think that's what you're using here. If you're writing a Pixel Shader that uses large arrays local to each thread then you're in for a world of hurt I'm afraid. The compilers will probably find a way to run what you've written by spilling back to main memory a lot, but performance will be dire. Do you have an example of the types of situations where you'd want a thread on the GPU to have access to 300,000 'cells' of data and for that data to not live in a Resource such as a Buffer or a Texture?
  47. 3 points
    Hey guys, long time no post! Funny sequence of events that led to me checking in today. My take: nothing will ever "replace" C++, because nothing will ever have the same degree of critical mass/hegemony that C had and C++ inherited. The language ecosystem will be more diverse and varied, with individual languages and their code being more generally portable, and practitioners more able to switch toolsets because the norm today is to have high-quality, cost-free, often open sourced implementations. My other (and hotter!) take: JavaScript already "replaced" C++ as the language in which the largest subset of user-facing applications, and even server-side infrastructure, is written in.
  48. 3 points
    Asking people if they want better anything, they will always say yes. If you know how to make a game with better writing and linear elements then I really recommend making it. Just don't be surprised if what you thought was a better story is disliked by players; sometimes people have a different idea of what is better. I for one would like to see more story-based games, so you already have my interest. Whatever you can safely afford, and a bit more. Different experiments have different costs; emotional stories can often leave the player feeling depressed, up to the point where they just stop playing. So if you plan on doing these, I recommend testing the game often on test players to see how they are impacted. This War of Mine is a good example of a game where a heavy emotional story limits gameplay; most people I know didn't play the game more than a few tries. It's a good game, but they pushed the emotions of the player past the breaking point.
  49. 3 points
    I don't think you should have two different transform components, no. You shouldn't implement the physics using the ECS-transform-component either. Physics is a vastly complex system that requires (or at least should have) a separate implementation. So optimally, you'd just use the transform-component to communicate between the different systems, while keeping a separate view of the physics-world:

struct PhysicsComponent
{
    RigidBody* pBody; // pointer to a body acquired from the physics-world upon initialization => could also be queried instead
};

struct PhysicsSystem
{
    void Update(double dt) // just to showcase...
    {
        world.Tick(dt);
        for(auto entity : EntitiesWithComponents<Physics, Transform>())
        {
            auto& physics = entity.GetComponent<Physics>();
            entity.GetComponent<Transform>().SetTransform(physics.pBody->QueryTransform());
        }
    }
private:
    PhysicsWorld world;
};

(This is just some pseudo-code to get the idea across.) So essentially Transform becomes a point of communication between different systems. What the physics-system wrote in, you can later read out, e.g. in the RenderSystem; your gameplay-systems can also just set the transform. Entities just register new rigidbodies with the physics-world, but the world doesn't know that an entity even exists, which keeps your physics separated & more flexible. For example, it's pretty easy in this system to swap your physics for, say, Bullet at some point, while what you originally describe creates tight coupling between those independent modules. As a general consensus, you should only implement the bare minimum of required functionality inside the ECS, if possible. Do not use ECS as a basis for physics, rendering, HUD, ... but instead implement those systems separately, and use the ECS as the high-level intercommunication layer. At that, it really excels IMHO, but anything more will just result in many of the reasons why people despise ECS. Hope that gives you a rough idea.
  50. 3 points
    Press Releases Are Important, So Why Aren’t You Writing Any? Welcome back to our marketing lessons focused on the indie developer, aptly titled “Indie Marketing For N00bs”. This lesson will focus on the importance of getting the news out to journalists and the media. This can be done a number of ways, but our primary focus is on proper etiquette for writing a press release. If done right, a press release can be seen by thousands of people, so there are certain things that anyone writing the release needs to focus on and present. The world has their eyes on you for that brief second; make it count. A well written press release can go a long way. What makes a good press release, though? We could talk for hours about the intricacies of dedicated writing. But we’ll bring this down to some general tips to make your writing better without boring you too much with the details. No Fluff! Look, the details are important. You need to make sure you convey everything you want to say to the masses, and I understand that. But this isn’t technical writing. This is your great stand about your game. People don’t care about the coding that goes into a game. They don’t want every detail about how it was made. Leave those to dev diaries and blogs, where you can go into detail about how you made your main character’s arm move super realistically with a special line of code. “Tl;dr”, which is shorthand for “Too long, didn’t read”, is a well-known term in writing. Get your point across first. Saving important details until later in the press release can damage your chances of getting eyeballs on the post. “Personality” Doesn’t Mean “Opinions” Personality is key and will maximize the eyes that see your writing. Boring press releases get overlooked, because writers want to write about things that interest them and grab their attention. Be humorous and witty. Don’t be afraid to make a relevant pun in writing. 
If you can make the journalist laugh, you’re likely to get a good write-up about the news. Extra fluff can come in a number of ways. Press releases, for instance, should be devoid of opinions. You can be happy you’re getting your game out there, but veering into opinion and blog-like writing is an automatic turn off for a lot of journalists picking up the write-up. People want news to be, you know, news. Inject some personality into the writing, though. This isn’t an expository high school essay. This is your masterpiece. Be proud of what you’ve got here. But be careful not to turn it into an opinion piece. You may love it, but someone else may not. Create hype by being honest and straightforward. If I want your opinion, I’ll read your dev blog or watch your dev diaries (which are also a great way to create hype, but need to remain separate from the news). Empower Yourself With Quotes Now, let me go against everything I’ve said prior, but only if done in a specific way. Quotes are the one place a press release should have enthusiasm or opinion. By quoting yourself or someone on your team, you open up the ability to say whatever you want. This is your time to shine as a human who made the game. Be excited and enthusiastic. I’ve seen too many quotes that read like a robot wrote them. I once had to explain to one of these robots the best way to give a quote: “Pretend you’re telling your best friend in the entire world about your product for the very first time. Show the excitement from that moment!” I do have a personal rule that works well for quotes, though. Too many quotes will drown a press release. Most reporters who take your release and have to massage it are going to pull the main information and re-write it, then maybe snag one or two of the quotes for the article, if any at all. Limit the quotes in a single release to no more than three, with no more than two quotes from a single person. 
Source Your Sources Everyone wants to compare their game to a bigger, well-known game. Everyone wants to mention other companies, studios, or events that are relevant and/or topical to the news. This is where the ground gets a little shaky. This release isn’t about others. This isn't an elevator pitch, this is the real thing. This is about you, your team, your game, and everything involving those things. I highly recommend keeping others out of the mix. But if you have to, there are good ways of going about it. Make sure to include the proper copyright and trademark information for any brand you decide to mention. You can’t mention another company without the proper legalese. This should be included near the bottom of the release, just to cover your own behind. Additionally, if you mention any platforms your game will be on, it’s important to give the proper copyright symbol with them and make sure they’re named properly. Look up the style guide for anything you mention, because each brand has its own shorthand. It’s “Sony PlayStation 4”, not “Playstation” (the “S” is capitalized). It’s “Xbox One”, not “Xbone”. Properly attributing your mentions makes you look more professional, and makes people more likely to pay attention. Don’t be afraid of links in the press release. Embrace them and link to all of your sources properly. Did you attend an event that is in your news? Link the main page of the event. Are you name-dropping a specific console or game series? Give them props. Do you have assets for your own game, like a press kit? Link it and make it bold. Adventure, Excitement… A Journalist Craves These Things I talked about journalists a bit in a previous entry to this series, but I want to elaborate on their thoughts about press releases. When you network, you make allies. But it’s a lot easier if you give them news they can do something with. Seeking them out makes their job much easier. 
They are actively looking for things to write about, and most publications keep themselves on a constant stream of press lists for this exact purpose. Even if you don’t know them, use the press list you made in the earlier lesson to get ahold of them and make yourself known. Journalists, for the most part, are pretty personable and are just looking for a new scoop. Just remember: journalists and the media love press releases. Even if a release isn’t as successful as you had hoped, it can be added to your own “Press Kit”, which any game should have for later use. But press kits are a lesson for another day. Also, don’t forget: hit all of the relevant newswires and aggregators if possible. This will be key to getting the press release to those you don’t already have access to, as journalists (and even everyday people) look at sites like Gamasutra and GamesPress. Even websites and forums like GameDev.net are notable places to put your news, sharing among other developers. Additionally, don't forget to share the press release on your social media.