slayemin

Everything posted by slayemin

  1. I wrote this article on Medium and thought I'd cross post it here: https://medium.com/@Slayemin/your-indie-game-dev-team-will-fail-108d4b663e7e This is based on my own past personal experiences (and failures) and observations of indie teams. I think a lot of the points are probably common knowledge among experienced game devs, but it's good to share it regardless. If we can steer a few more teams onto the right track and get good games, it's worth the effort. Note: I've been procrastinating on monthly updates. Long and short is that I've been busy, distracted, recovered from falling off a horse, etc.
  2. Predefined game stories or write as you go.

    I have created a working minimum viable product for my game and I have roughly outlined the story for the main portion of content. I am now realizing that I can't continue working on content or level design, because I have a story-driven game -- I have to write my script. It makes sense if you compare game development to movies: you never start shooting a movie without a completed script. In story-driven games, the script helps define what the scenes need to look like, who the actors are going to be, and ultimately dictates what assets need to be created to properly tell the story. If you are working on a budget or tight timeline, writing the script first will help you gauge the scope of the project and the amount of work required. There will be constraints that the creative narrative needs to work around (unless you've got millions of dollars and years of production time), so it's better to identify the major costs early and either adapt the script to work around them, or plan and prepare ahead of production. Anyways, the costs of script writing and adapting are much lower than production costs, so you should write your script as soon as you can, learn what constraints you're working within, and adapt the script accordingly.
  3. Tile Picker

    I am not going to try to understand your code, but I would suggest that you try to place a mouse cursor into the game world where you think the mouse cursor should be located. If the mouse cursor in the game world matches your actual mouse cursor, everywhere on the screen, you know the mouse position is good. Then you can worry about picking the correct tile. The fastest way to troubleshoot that would be to continuously highlight the tile underneath the mouse. If the mouse is over a tile and a different tile is highlighted, you know you have a problem with your math.
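    As a hedged illustration of that debugging approach, here is a minimal screen-to-tile conversion for a plain square-tile grid with a scrolling camera. The tile size, camera variables, and function names are all assumptions, since the original code isn't shown; drawing a highlight at the result of `tile_to_screen` makes any mismatch with the real cursor immediately visible.

```python
# Hypothetical sketch: convert a mouse position to a tile index for an
# axis-aligned square-tile map with a scrolling camera. TILE_SIZE and the
# camera offsets are assumed names, not from the original post.

TILE_SIZE = 32  # pixels per tile (assumed)

def screen_to_tile(mouse_x, mouse_y, camera_x, camera_y):
    """Return (col, row) of the tile under the mouse cursor."""
    world_x = mouse_x + camera_x   # undo the camera scroll
    world_y = mouse_y + camera_y
    return world_x // TILE_SIZE, world_y // TILE_SIZE

def tile_to_screen(col, row, camera_x, camera_y):
    """Top-left screen corner of a tile -- use this to draw the highlight."""
    return col * TILE_SIZE - camera_x, row * TILE_SIZE - camera_y
```

    If the highlighted tile drifts away from the cursor as you scroll, the bug is almost certainly in the camera-offset step rather than the division.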
  4. Just focus on creating a game and making great content. Solve problems as they come. Everything else is needless worry.
  5. How do you balance gaming and game dev?

    The comparison to professional sports isn't very accurate to the discipline of game development. I think making games is a subset of software development. Playing games is really about 1% of making games, though understanding game mechanics and design is most important for the game designer. The rest of production is about creating art assets, placing them into a world, creating and implementing game rules, writing AI code, creating systems and coding them, finding bugs and fixing them, polishing, staying on schedule and budget, etc. I would compare it to people who like to watch movies and TV shows and spend 4-6 hours a day watching TV. Okay, you're a fan. Does that make you qualified to create cinema now? No, not really. It takes work. You have someone shooting film. Someone acting out the scene. Someone doing audio. Someone doing set design. Someone being the director. Someone who has to edit the rough cuts. Someone writing the script, etc. Being an avid TV watcher is not a qualification for being a good editor, though good editors also probably watch a lot of film (and they see it all very differently from laymen). Making the product behind the scenes is hard work and very different from consuming the final end product. For this reason, I hate to see for-profit schools advertising their game development programs as if it's the exact same thing as playing video games. It's false advertising. The schools are more than happy to charge a premium to young adults under the illusion that they're going to be making games by playing games, and when the students become disillusioned, see the reality, and drop out, they gain nothing and lose the tuition money they spent.
  6. How do you balance gaming and game dev?

    I work on making games about 8-10 hours a day. Then I go home and spend about 4 hours playing games. If I have to choose between prioritizing making games vs playing games, I will choose to spend my time making them. I'm a professional. Making games makes me money. Playing games does not. That's really all there is to it. If I start spending 8-10 hours a day playing games instead of making them, then I need to quit my job and find something else to do. Therefore, playing games all day = lose your job.
  7. July/August Update

    The biggest struggle for me is still money. It's getting harder. Game sales have pretty much stopped completely, but game development continues forward. I'm starting to think I'm a bit crazy. The rational side says, "Why are you still working on building a product which literally gets zero sales? It's time to move onto something that actually makes money." But the emotional side says, "But I believe!!!" (bursts out in song) and then it tries to rationalize it by saying, "but... but... I just need more compelling content! Then sales will pick up naturally!" Reality check time: I have proven through analytics that adding content patches through Steam updates does not, in fact, increase sales or even viewer traffic to my store page. I could release product updates every week and that would not affect my sales numbers. The only possible way a product update would affect my sales numbers is if the product becomes good enough that people who own the product tell their friends about it. It's not there yet, so that's why I keep working on it and barely scraping by. Eventually though, I'm going to have to shift gears from product development towards product marketing and advertising. Barely scraping by seems to be the name of the game for 95% of everyone in the VR industry right now. There have been some interesting developments lately. Owlchemy Labs, the creators of the smash hits "Job Simulator" and "Rick and Morty VR", have recently been bought out by Google. Google now completely owns one of the best VR content creators in the industry. The founders probably got hella rich and don't have to worry about anything but creating cool VR content now. Lucky them. One of my friends works for a local VR startup as their only programmer, and things are getting so tight that he had to get a second retail job in order to get by. The startup is too broke to pay him and their sales have dwindled as well (everyone should expect the long tail and budget for it!).
The other huge development lately has been that AltspaceVR has shut down. They were a 35 person VR company which created a social hangout within VR, similar to Second Life with VoIP. They were funded entirely with venture capital money. I can't imagine the stress and heartbreak that brings to the team. But... 35 full time employees. Damn... and you have to make payroll every two weeks for 35 people? And you have a product and business model which doesn't involve bringing in money from users? Your days were numbered... I'm fascinated by why various companies fail and succeed. Obviously, creating and having a product is not the entire picture. It's all about making money to sustain your business operations. My business operating expenses are extremely low. I pay $400 a month for my rented office space, $8.91 per day for burritos, $2.50 for a one way bus ticket, and $2.48 for a cup of black coffee. I owe people money, so I have to pay them off before I ever pay myself. Realistically, my chances of making a lot of money in the near future are near zero without funding and support. But hey, my operating costs are so low that I can almost do this indefinitely. My company will survive. It'll be small, but it will survive and continue forward, scratching out a teeny bit of money. Speaking of money, the most profitable area right now is doing VR contract work. I've been working on a couple different side projects for various local companies, creating VR experiences around their products and services. I'm about to start working on an interactive VR film proof of concept, which plays sort of like a "choose your own adventure" 360 film in VR. It's going to be an interesting twist on interactive cinema. The broader goal for me is to learn as much as I can and broaden the scope of my VR designer skills. I've become a part of the production cycle for creating VR media and I'm bridging the gap between gaming and cinema within VR. 
Here is a sample of a VR app I made for Dell in May: I was thinking critically about this on my bus ride to work this morning and I realized something important: Is watching a cinematic in 360 stereo really VR? Why/why not? What's missing? The viewer. Who are you when you're viewing these 360 videos in VR? Okay, what kind of defining rule can we create which differentiates VR from fake VR? My tentative rule is that the viewer has to be a character within the experience for it to count as VR. The important thing here is to create a sense of "agency" and identity with the viewer. So, the cardinal sin for a VR designer is to take away agency from the player (such as controlling their head or playing a cinematic). The follow-up question: "Does it really matter?" Yes, it kind of does matter, because everyone is doing it wrong and calling their creation "VR" when it's not really VR. It's really challenging to start defining what this new medium is and is not, though. I think the guiding principle I use is that "Virtual reality should be indistinguishable from reality and the human experience." People can turn their heads and look down at their body, move their hands, feel solid objects, etc. The closer your VR gets to reality, the more you can confidently call it VR. My challenge will be to convince companies to see it my way and spend the extra money to move from a stereo experience to a VR experience. I don't know if that's a battle worth fighting. The challenge with 360 video is that the video itself doesn't lend itself to user agency. The camera is placed on a tripod and people act out a scene all around the camera. So, the person experiencing the 360 video can't move around in the scene as if they were a part of it. The solution might be to ditch the 360 camera completely and go with motion capture and animated characters within a 3D environment, but that will mean much higher production costs and longer timelines. At the end of the day, what does a client care about?
Accomplishing their objective, whatever that may be. Where does the line exist between exerting my subject matter expertise and satisfying the customer's objectives? Anyways, I am slowly realizing that I'm no longer just an indie VR game company, I'm becoming a VR media company. I wasted the entire last week watching "The International" Dota2 tournament. The game itself is somewhat interesting, but more interesting is the growing rise of E-Sports. I think it's going to disrupt the definition of sports. Every year, the Dota2 championship match grows in popularity and the prize pool grows by millions. I think last year the total prize pool was $16 million. This year, it was $24 million. All of the money comes from the Dota2 gaming community. The final championship match had 4,700,000 viewers around the world watching it unfold. I watched it on Twitch.tv, and the channel had about 380,000 live viewers. The sports stadium down the street supports about 68,000 people. So, just on Twitch, we had about five full stadiums worth of people watching the event online. Think about all of this for a moment: 4.7 million people watching ten people play a video game against each other for $24 million. If we project the trend out, we can predict that next year the prize pool will be even larger and the viewership will grow proportionately. On a broader trend, I think E-sports will eventually eclipse conventional sports. Football is currently the most watched sport in America, but maybe in 30-40 years, E-sports championships will be the most watched sporting events? Remember that revolutions don't happen by people giving up their favorite sports/ideas, but by a younger generation gradually replacing an older generation. The younger generation is enamored with E-sports. Football? What's that? Obviously, the takeaway is that competitive E-Sports are a great way to build a community and player base around your product.
There was one moment in the Dota2 championship match that really, really blew my mind. A pair of OpenAI researchers had created a bot which learned to play Dota2. Traditionally, bots are just hard-coded expert systems with their behaviors and rule sets defined by the programmer. Traditional bots create the illusion of intelligence, but they start to break down when you introduce information they weren't scripted to handle. The Dota2 bot was a little different. The researchers didn't say it explicitly, but the AI was an artificial neural network (ANN) with deep reinforcement learning. The AI brain was a generalized intelligence, so the researchers didn't tell it anything about how to play Dota2. They had the AI play against itself thousands and thousands of times over the course of two weeks. This was its training regimen. Gradually (and as expected), the AI learned how to play Dota2. But, it got scary good at it. It had mastered all of the nuances and gameplay techniques the pros use; it had learned how to time animations, block creeps, etc. It played perfect Dota, with perfect response times. It was so good that it beat every professional Dota2 player. The world's best human players, all defeated by an AI bot which taught itself how to play Dota2. Absolutely amazing! For the last three weeks, I have been refactoring my AI and game systems and gradually moving towards an artificial neural network type of AI. I'm still creating hard-coded expert systems, but I'm gradually changing my back end systems to make everything into an interaction or used ability. These will eventually become the output nodes for my ANN graph. The dream is to tweak a few brain parameters and then just have the various ANN AIs play with each other for two weeks, become experts, watch how my brain tweaks changed their behavior patterns, and change and adapt their brains until they roughly exhibit the behaviors I want them to have.
AI programming won't be about creating expert systems, but about creating brains and tweaking reinforcement learning rewards to get distinct behaviors. The extra cool part is that the AI can continue to learn even after it has been deployed to the world. The vision is that the initial training cycle just gets the AI to be competent enough to behave intelligently and convincingly. After deployment to the world, the training continues. However, now instead of the AI training on a single computer against a copy of itself, it is training on hundreds of computers with human players in VR. Every day or so, the AI will upload whatever it learned to a central online database and download what other versions of the AI learned from playing with other players. In a way, it turns into an evolutionary algorithm which gradually gets more and more intelligent over time. The hard part will be managing version control and testing for fitness. The other wrinkle in this plan is that the AI could get too smart. Not in a "take over the world" sort of way, but in the sense that it's too good at playing the game and players don't enjoy playing anymore because they lose 100% of the time. I suppose a part of the AI development could use the player's play frequency as an input feed, with the AI rewarded if the player continues playing the game. In that sense, a big focus of the AI is to make sure the players are entertained, and this win/loss threshold can be adaptive per player. Maybe the AI ends up developing a profile on each player and knows what it takes to maximize that player's enjoyment? Maybe some AIs will play cooperatively with particular players, and as adversaries with others? I'm getting slightly into science fiction here. I'll never forget the experience I had of having a seemingly intelligent crow on my arm in VR. It was absolutely magical. Now, if I gave it the ability to think intelligently and speak its own mind, the magic would become real.
What if the pet crow AI was the sum of all AIs from all interactions with players and the world, and you could get it to say what it's thinking, and it is rewarded (via reinforcement learning) when it says something which keeps the player safe? What if the AI learns that there is danger in the dark cave and most players who venture down into the cave end up dying, so the AI learns to say something really scary which keeps the player from going down into the cave? The AI has then learned exactly what to say in order to frighten us, through thousands of sessions of trial and error. Anyways, I think we're on the verge of an AI revolution and I want to be a part of bringing that AI into VR. It gets interesting when you consider that a sandbox type of game would become very different on every playthrough when you have emergent AI systems interacting as a part of that world. I've been seriously contemplating the idea of writing a science fiction novel based on an AI system which gains sentience and begins the AI singularity event. I'm thinking the book writing project would be a side project. I'd spend one day a week writing it. My sister is interested in being a co-author, so we need to spend some time hashing out details and measuring the feasibility of our ambitions. I've never written a novel before, so there is a lot of risk due to inexperience. But, who cares? Better to try and fail than to never have tried at all out of a fear of failure. That's how you get good at anything: try, fail, improve, try again, repeat. Eventually, you'll break out of the failure loop and enter into the success loop.
  8. AI will lead to the death of capitalism?

    It's not AI which will defeat capitalism. Capitalism is a self-defeating economic system. Why?

    Axiom Set:
    1. The interest of a company is to increase profits. Always. Companies which do not do that will go extinct.
    2. Profit is the difference between costs and income. The goal is to have higher income than costs.
    3. The great goal of a capitalist is to increase income while decreasing costs.
    4. A company generally employs people, which is an operating/overhead cost.
    5. A company generally creates products or services which it sells to a population/market. The company depends on the market having the capability to purchase those goods/services.
    6. Most people get money from working for companies.
    7. People who have no money cannot buy products or services.

    Logical Conclusions:
    A) Over time, a company will seek to increase the efficiency of its business processes by streamlining and eliminating redundant work (#1). A part of this is automation, brought on by computers. The long-term effect is that a company reduces the number of employees required to operate, thus reducing overhead costs (#3, #4).
    B) Over time, fewer and fewer people will work for companies (via A).
    C) Because fewer people are working, fewer people have money. Fewer people with money means less flow of goods and services from companies (#6, #7). On the macroeconomic scale, companies which automate to decrease labor costs also decrease addressable market sizes. This furthers the need to reduce operating costs, because incomes decrease (#2, #3).
    D) Capital becomes concentrated in the hands of the company owners/shareholders, and the flow of money, goods, and services comes to a gradual halt (via C).
    E) Final Result: Companies have all of the world's money and no longer have a customer base to sell products/services to, and thus have no ability to continue creating profits from sales. They go extinct (#1).

    The economic system of capitalism has undermined itself.
  9. Designing Intelligent Artificial Intelligence

    Yeah, I'm a novice at AI and have not spent a lot of time studying it formally. That's probably why I reinvent AI concepts familiar to AI developers. Currently, my developer attitude is, "What does it take to ship right now?" mixed with "How do I avoid painting myself into a corner?" I'm currently modifying my expert system to use abilities, but structuring my abilities system to be something that can be treated as nodes in a graph network if I ever want to transition to an ANN. The underlying reasoning for this is that eventually my list of characters is going to be pretty large and complicated, and as I add in more characters, the scope and complexity increase. I'll need to have a strategy for reducing the developer workload and being able to adapt behaviors to game design changes without completely refactoring my expert systems AI code. What I wrote above is a rough outline for a direction I can eventually go in. I'm thinking that this may be a bit of a waste of time right now, but I've convinced myself that there is something truly magical about having the illusion of an intelligent creature interacting with you in virtual reality. A part of that magic comes from being surprised by the actions and behaviors of a creature. The less scripted and more novel the behavior seems, the more amazing it is. If eventually we have lots of AI systems doing complex behavior to "live" in the virtual world, and the player's actions are a big contributing factor in the behavior of the world's characters, then the replay value and player engagement increase by several orders of magnitude. Players can have really different gameplay experiences when they do a "good" playthrough vs. an "evil" playthrough, and everything in between. I think the variety of consequences within the game makes the moral choices really interesting and becomes a way for players to explore their own nature/hearts within a consequence-free world, and then take those learned lessons back to real life.
A flexible/adaptive AI system would be a necessary component to exploring the long term consequences of moral decisions within the framework of a game. Hopefully, the end result would be that virtuous actions are always better.
  10. Below is my preliminary draft design for the AI system within Spellbound. I'm slowly migrating away from scripted expert systems towards a more dynamic and fluid AI system based on machine learning and neural networks. I may be crazy to attempt this, but I find this topic fascinating. I ended up having a mild existential crisis as a result of this. Let me know what you think or if I'm missing something.

    Artificial Intelligence:

    Objectives:
    Spellbound is going to be a large open world with many different types of characters, each with different motives and behaviors. We want this open world to feel alive, as if the characters within the world are inhabitants. If we went with pre-scripted behavioral patterns, the characters would be unable to learn and adapt to changes in their environment. It would also be very labor intensive to write specific AI routines for each character. Ideally, we just give every character a self-adapting brain and let them loose to figure out the rest for themselves.

    Core Premise: (very dense, take a minute to soak this in)
    Intelligence is not a fixed intrinsic property of creatures. Intelligence is an emergent property which results directly from the neural topology of a biological brain. True sentience can be created if the neural topology of an intelligent being is replicated with data structures and the correct intelligence model. If intelligence is an emergent property, and emergent properties are simple rule sets working together, then creating intelligence is a matter of discovering the simple rule sets.

    Design:
    Each character has its own individual Artificial Neural Network (ANN). This is a weighted graph which uses reinforcement learning. Throughout the character's lifespan, the graph will become more heavily weighted towards rewarding actions and away from displeasurable ones. Any time an action causes a displeasure to go away or brings a pleasure, that neural pathway will be reinforced.
    If a neural pathway has not been used in a long time, we reduce its weight. Over time, the creature will learn.

    A SIMPLE ANN is just a single cluster of connected neurons. Each neuron is a "node" which is connected to nearby neurons. Each neuron receives inputs and generates outputs. The neural outputs always fire and activate a connected neuron. When a neuron receives enough inputs, it itself fires and activates downstream neurons. So, a SIMPLE ANN receives inputs and generates outputs which are a reaction to the inputs. At the end of a neural cycle, we have to give response feedback to the ANN. If the neural response was positive, we strengthen the neural pathway by increasing the neural connection weights. If the response was negative, we decrease the weights of the pathway. With enough trial runs, we will find the neural pathway for the given inputs which creates the most positive outcome. The SIMPLE ANN can be considered a single cluster. It can be abstracted into a single node for the purposes of creating a higher layer of connected node networks. When we have multiple source inputs feeding into our neural network cluster and each node is running its most optimal neural pathway depending on the input, we get complex unscripted behavior. A brain is just a very large collection of layered neural nodes connected to each other. We'll call this our "Artificial Brain" (AB).

    Motivation, motivators (rule sets):
    All creatures have a "desired state" they want to achieve and maintain. Think about food. When you have eaten and are full, your state is at an optimally desired state. When time passes, you become increasingly hungry. Being just a teensy bit hungry may not be enough to compel you to change your current behavior, but as time goes on and your hunger increases, your motivation to eat increases until it supersedes the motives for all other actions. We can create a few very simple rules to create complex, emergent behavior.
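    The feedback loop just described (strengthen a pathway on a positive response, weaken it on a negative one, decay unused pathways) can be sketched minimally. This is a plain weight table rather than a real ANN, and every name in it is invented for illustration:

```python
# Minimal sketch of pathway reinforcement: not a real neural network,
# just the weight-update rule from the design notes above.

class PathwayTable:
    def __init__(self, learn_rate=0.1, decay=0.01):
        self.weights = {}            # (state, action) -> learned weight
        self.learn_rate = learn_rate
        self.decay = decay

    def choose(self, state, actions):
        # Pick the action with the highest learned weight for this state.
        return max(actions, key=lambda a: self.weights.get((state, a), 0.0))

    def feedback(self, state, action, reward):
        # Positive reward strengthens the pathway, negative weakens it.
        key = (state, action)
        self.weights[key] = self.weights.get(key, 0.0) + self.learn_rate * reward

    def tick(self, used):
        # Pathways not in `used` slowly lose weight ("forgetting").
        for key in self.weights:
            if key not in used:
                self.weights[key] = max(0.0, self.weights[key] - self.decay)
```

    With enough feedback cycles, the highest-weighted pathway for a given input wins, which is the "most positive outcome" behavior described above.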
    Rule 1: Every creature has a desired state they are trying to achieve and maintain. Some desired states may be unachievable (e.g., infinite wealth).
    Rule 2: States are changed by performing actions. An action may change one or more states at once (a one-to-many relationship).
    Rule 3: "Motive" is created by a delta between current state (CS) and desired state (DS). The greater the delta between CS and DS, the more powerful the motive is. (Is this a linear graph or an exponential graph?)
    Rule 4: "Relief" is the sum of all deltas between CS and DS provided by an action.
    Rule 5: A creature can have multiple competing motives. The creature will choose the action which provides the greatest amount of relief.
    Rule 6: Some actions are a means to an end and can be chained together (action chains). If you're hungry and the food is 50 feet away from you, you can't just start eating. You first must move to the food to get within interaction radius, then eat it.

    Q: How do we create an action chain?
    Q: How do we know that the action chain will result in relief?
    A: We generally know what desired result we want, so we work backwards. What action causes the desired result (DR)? Action G does (learned from experience). How do we perform Action G? We have to perform Action D, which causes Action G. How do we cause Action D? We perform Action A, which causes Action D. Therefore, G<-D<-A; so we should do A->D->G->DR. Back propagation may be the contemporary approach to changing graph weights, but it's backwards.

    Q: How does long term planning work?
    Q: What is a conceptual idea? How can it be represented?
    A: A conceptual idea is a set of nodes which is abstracted to become a single node?

    Motivators (why we do the things we do):
    - Hunger
    - Body Temperature
    - Wealth
    - Knowledge
    - Power
    - Social Validation
    - Sex
    - Love/Compassion
    - Anger/Hatred
    - Pain Relief
    - Fear
    - Virtues, Vices & Ethics

    Notice that all of these motivators are actually psychological motivators.
    That means they happen in the head of the agent rather than being physical motivators. You can be physically hungry, but psychologically, you can ignore the pains of hunger. The psychological thresholds would be different per agent. Therefore, all of these motivators belong in the "brain" of the character rather than being attributes of an agent's physical body. Hunger and body temperature would be physical attributes, but they would also have "psychological tolerances".

    Psychological Tolerances:
    {motivator} => 0 [------------|-----------o----|----] 100
                     A            B           C    D    E
    - A: the lowest possible bound for the motivator.
    - B: the lower threshold point for the motivator. If the current state falls below this value, the desired state begins to affect actions.
    - C: the current state of the motivator.
    - D: the upper threshold point for the motivator. If the current state exceeds this value, the desired state begins to affect actions.
    - E: the highest bound for the motivator.

    The A & E bound values are fixed and universal. The B and D threshold values vary by creature. Where you place them can make huge differences in behavior.

    Psychological Profiles:
    We can assign a class of creatures a list of psychological tolerances and assign their current state to some preset values. The behavioral decisions and subsequent actions will be driven by the psychological profile, based upon the actions which create the greatest sum of psychological relief. The psychological profile will be the inputs into an artificial neural network, and the outputs will be the range of actions which can be performed by the agent. Ideally, the psychological profile state will drive the ANN, which drives actions, which changes the state of the psychological profile, which creates a feedback loop of reinforcement learning. Final Result: We do not program scripted behaviors; we assign psychological profiles and lists of actions.
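    Rules 1-5 and the tolerance bands above can be sketched roughly as follows. The threshold numbers, motivator names, and actions are invented for illustration; this is not the game's actual code:

```python
# Sketch of motive/relief action selection: motive is how far a state sits
# outside its [lower, upper] comfort band; the creature picks the action
# whose state changes produce the greatest total relief.

def motive(current, lower, upper):
    """Distance of the current state from the comfort band (0 if inside)."""
    if current < lower:
        return lower - current
    if current > upper:
        return current - upper
    return 0.0

def total_motive(states, profile):
    # profile: {motivator: (lower_threshold, upper_threshold)}  (B and D above)
    return sum(motive(states[m], lo, hi) for m, (lo, hi) in profile.items())

def best_action(states, profile, actions):
    """Rule 5: choose the action which provides the greatest amount of relief."""
    def relief(effects):
        after = dict(states)
        for m, delta in effects.items():
            after[m] = after.get(m, 0.0) + delta
        return total_motive(states, profile) - total_motive(after, profile)
    return max(actions, key=lambda name: relief(actions[name]))

# Invented example: very hungry, comfortably warm.
profile = {"hunger": (20, 80), "warmth": (30, 90)}
states  = {"hunger": 5, "warmth": 50}
actions = {"eat": {"hunger": +40}, "nap": {"warmth": +5}}
```

    With these invented numbers, eating relieves 15 points of hunger motive while napping relieves nothing, so the creature eats. Tweaking the B/D thresholds per creature is exactly what changes which action wins.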
    Characters will have psychological states which drive their behavioral patterns. Simply by tweaking the psychological desires of a creature, we can create emergent behavior resembling intelligence. A zombie would always be hungry; feasting on flesh would provide temporary relief. A goblin would have a strong compulsion for wealth, so they'd be very motivated to perform actions which ultimately result in gold. Rather than spending lots of time writing expert systems styled AI, we create a machine learning type of AI.

    Challenges: I have never created a working artificial neural network type of AI.

    Experimental research and development: The following notes are crazy talk which may or may not be feasible. They may need more investigation to measure their merit as viable approaches to AI.

    Learning by Observation: Our intelligent character doesn't necessarily have to perform an action themselves to learn about its consequences (reward vs. regret). If they watch another character perform an action and receive a reward, the intelligent character creates a connection between an action and its consequence.

    Exploration Learning: A very important component to getting a simple ANN to work most efficiently is to get the neurons to find and establish new connections with other neurons. If we have a neural connection topology which always results in a negative response, we'll want to generate a new connection at random to a nearby neuron.

    Exploration Scheduling: When all other paths are terrible, the new path becomes better and we "try it out" because there's nothing better. If the new pathway happens to result in a positive outcome, suddenly it gets much stronger. This is how our simple ANN discovers new unscripted behaviors. The danger is that we end up with a sub-optimal behavior pattern which generates some results, but not the best results, and we'd use the same neural pathway over and over again because it is a well-travelled path.
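    One way to sketch an escape from that well-travelled-path trap (a hedged illustration, not the actual design): score each pathway by its learned weight plus a bonus that grows while the path goes untravelled, so stale alternatives eventually get tried. The `novelty_rate` parameter and all names are invented:

```python
# Illustrative novelty-bonus path selection: a pathway's score is its
# learned weight plus a bonus proportional to how long it has gone unused.

def choose_path(weights, last_used, now, novelty_rate=0.05):
    """weights: {path: learned_weight}; last_used: {path: timestep last taken}."""
    def score(path):
        idle = now - last_used.get(path, 0)
        return weights.get(path, 0.0) + novelty_rate * idle
    return max(weights, key=score)
```

    Early on, the well-weighted path wins; after enough idle time, a weaker alternative's novelty bonus overtakes it and gets a trial run, which is exactly the "try it out" behavior described above.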
Exploration Rewards: In order to encourage exploring different untravelled paths, we gradually increase the "novelty" reward value for taking that pathway. If traveling the pathway results in a large reward, the pathway is highly rewarded and may become the most travelled path.

Dynamic Deep Learning: On occasion, we'll also want to create new neurons at random and connect them to at least one other nearby downstream neuron. If a neuron is not connected to any other neurons, it becomes an "island" and must die. When we follow a neural pathway, we are looking at two costs: the connection weight and the path weight. We always choose the shortest path with the least weight. Rarely used pathways will have their weight decrease over a long period of time. If a path weight reaches zero, we break the connection and our brain "forgets" the neural connection.

Evolutionary & Inherited Learning: It takes a lot of effort for a neural pathway to become developed, so we will want to speed up that development. If a child is born to two parents, those parents will rapidly develop the neural pathways of the child by sharing their own pathways. This is one way to "teach". Thus, children will think very much like their parents do. Characters will also share their knowledge with other characters. In order for knowledge to spread, it must be interesting enough to be worth spreading, so a character will generally share the most interesting knowledge they have.

Network Training & Evolutionary Inheritance: An untrained ANN results in an uninteresting character, so we have to have at least a trained base preset for a brain. This is consistent with biological brains: our brains have been pre-configured through evolutionary processes and come pre-wired with certain regions universally responsible for processing certain input types.
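The exploration-reward idea above can be sketched as a simple selection rule: each pathway's score is its learned value plus a novelty bonus that grows while the path goes untravelled and resets when it is taken. This is an illustrative Python sketch; the Path class, the 0.1 novelty rate, and the additive scoring are my own assumptions, not a working ANN.

```python
class Path:
    def __init__(self, name, value):
        self.name = name
        self.value = value   # learned reward estimate for this neural pathway
        self.novelty = 0.0   # grows each step the path goes untravelled

def choose_path(paths, novelty_rate=0.1):
    """Pick the path with the highest value + novelty score."""
    best = max(paths, key=lambda p: p.value + p.novelty)
    for p in paths:
        if p is best:
            p.novelty = 0.0            # taking a path resets its novelty bonus
        else:
            p.novelty += novelty_rate  # neglected paths slowly become tempting
    return best

# A well-travelled path dominates at first, but the neglected one is
# eventually tried, which is how sub-optimal habits get re-examined.
paths = [Path("forage", 1.0), Path("hunt", 0.5)]
chosen = [choose_path(paths).name for _ in range(10)]
```

Run long enough, even a low-value path is eventually sampled, so the agent can escape a sub-optimal but well-travelled behavior pattern.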
The training method will be rudimentary at first, just to get something passable, and it can be done as a part of the development process. When we release the game to the public, the creatures will still be training. The creatures which had the most "success" will become a part of the next generation. These brain configurations can be stored in a central database somewhere in the cloud. When a player begins a new game, we download the most recent generation of brain configurations. Each newly instanced character may have a chance of a random mutation. When the game completes, if any particular brains were more successful than the current strain, we select them for "breeding" with other successful strains so that the next generation is an amalgamation of the most successful previous generations. We'll probably begin to see some divergence into distinct brain "species" over time.

Predisposition towards Behavior Patterns via bias:

Characters will also have slight predispositions which are assigned at birth: 50% of their predisposition is innate to their creature class, 25% is genetically passed down by parents, and 25% is randomly chosen. A predisposition causes some pleasures and displeasures to be more or less intense. This will skew the weightings of a developing ANN a bit more heavily towards particular actions. This is what will create a variety of interests between characters, and will ultimately lead to a variety of personalities. We can create very different behavior patterns in our agents by tweaking the amount of pleasure and displeasure various outputs generate for our creature. The brain of a goblin could derive much more pleasure from getting gold, so it will develop strong neural pathways which result in getting gold.

AI will be able to interact with interactable objects. An interactable object has a list of ways it can be interacted with. Interactable objects can be used to interact with other interactable objects.
Characters are considered to be interactable objects. The AI has a sense of ownership over various objects. When it loses an object, that is a displeasurable feeling; when it gains an object, that is a pleasurable feeling. Stealing from an AI will cause it to be unhappy, and it will learn about theft and begin trying to avoid it. Giving a gift to an AI makes it very happy. Trading one object for another will transfer ownership of the objects. There is no "intrinsic value" to an object: the value of an object is based on how much the AI wants it compared to how much it wants the other object in question.

Learning through Socialization:

AIs will socialize with each other. This is the primary mechanism for knowledge transfer. They will generally tell each other about recent events or interests, choosing to talk about the most interesting events first. If an AI doesn't find a conversation very interesting, it will stop the conversation and leave (the terminating condition). If a threat is nearby, the AI will be very interested in it and will share it with nearby AIs. If a player has hurt or killed a townsfolk, all of the nearby townsfolk will be very upset and may attack the player on sight. If enough players attack the townsfolk, the townsfolk AI will start to associate all players with negative feelings and may attack a player on sight even if that player did nothing to aggravate them.
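The interest-ranked sharing described above can be sketched as follows. This is an illustrative Python sketch under my own assumptions (facts as a dict of interest scores, a fixed boredom threshold); it is not code from the project.

```python
def share_knowledge(speaker, listener, boredom_threshold=0.3):
    """Speaker shares its most interesting facts first; the conversation
    ends (the terminating condition) once a fact bores the listener."""
    told = []
    for fact, interest in sorted(speaker.items(), key=lambda kv: -kv[1]):
        if interest < boredom_threshold:
            break  # listener loses interest and leaves
        listener[fact] = max(listener.get(fact, 0.0), interest)
        told.append(fact)
    return told

speaker = {"threat nearby": 0.9, "found gold": 0.5, "nice weather": 0.2}
listener = {}
print(share_knowledge(speaker, listener))  # ['threat nearby', 'found gold']
```

High-interest knowledge (a nearby threat, a player attack) spreads quickly through a town this way, while trivia dies out on its own.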
  11. What exactly is an Octree? If you're completely unfamiliar with them, I recommend reading the Wikipedia article (read time: ~5 minutes). It's a sufficient description of what an octree is, but barely enough to give any idea of what it's used for or how to actually implement one. In this article, I will do my best to take you through the steps necessary to create an octree data structure through conceptual explanations, pictures, and code, and show you the considerations to be made at each step along the way. I don't expect this article to be the authoritative way to do octrees, but it should give you a really good start and act as a good reference.

Assumptions

Before we dive in, I'm going to be making a few assumptions about you as a reader:

- You are very comfortable with programming in a C-syntax-style language (I will be using C# with XNA).
- You have programmed some sort of tree-like data structure in the past, such as a binary search tree, and are familiar with recursion and its strengths and pitfalls.
- You know how to do collision detection with bounding rectangles, bounding spheres, and bounding frustums.
- You have a good grasp of common data structures (arrays, lists, etc.) and understand Big-O notation (you can also learn about Big-O in this GDnet article).
- You have a development environment project which contains spatial objects which need collision tests.

Setting the stage

Let's suppose that we are building a very large game world which can contain thousands of physical objects of various types, shapes, and sizes, some of which must collide with each other. Every frame we need to find out which objects are intersecting with each other and have some way to handle that intersection. How do we do it without killing performance?

Brute force collision detection

The simplest method is to just compare each object against every other object in the world. Typically, you can do this with two for loops.
The code would look something like this:

    foreach(gameObject myObject in ObjList)
    {
        foreach(gameObject otherObject in ObjList)
        {
            if(myObject == otherObject) continue; //avoid self collision check
            if(myObject.CollidesWith(otherObject))
            {
                //code to handle the collision
            }
        }
    }

Conceptually, this is what we're doing in our picture: each red line is an expensive CPU test for intersection. Naturally, you should feel horrified by this code because it is going to run in O(N^2) time. If you have 10,000 objects, then you're going to be doing 100,000,000 (one hundred million) collision checks. I don't care how fast your CPU is or how well you've tuned your math code; this code would reduce your computer to a sluggish crawl. If you're running your game at 60 frames per second, you're looking at 60 * 100 million calculations per second! It's nuts. It's insane. It's crazy. Let's not do this if we can avoid it, at least not with a large set of objects. This would only be acceptable if we're only checking, say, 10 items against each other (100 checks is palatable). If you know in advance that your game is only going to have a very small number of objects (i.e., Asteroids), you can probably get away with using this brute force method for collision detection and ignore octrees altogether. If/when you start noticing performance problems due to too many collision checks per frame, consider some simple targeted optimizations:

1. How much computation does your current collision routine take? Do you have a square root hidden away in there (i.e., a distance check)? Are you doing a granular collision check (pixel vs. pixel, triangle vs. triangle, etc.)? One common technique is to perform a rough, coarse check for collision before testing for a granular collision. You can give your objects an enclosing bounding rectangle or bounding sphere and test for intersection with these before testing against a granular check which may involve a lot more math and computation time.
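A coarse bounding-sphere test like the one just described can be sketched in a few lines. This is an illustrative Python sketch rather than the article's C#/XNA code; note that it compares squared distances against the squared sum of radii, so no square root is needed.

```python
def spheres_collide(ax, ay, az, ar, bx, by, bz, br):
    """Coarse sphere-vs-sphere intersection test.
    Comparing squared distance against (ra + rb)^2 avoids the sqrt."""
    dx, dy, dz = ax - bx, ay - by, az - bz
    dist_sq = dx * dx + dy * dy + dz * dz
    radii = ar + br
    return dist_sq <= radii * radii

print(spheres_collide(0, 0, 0, 1, 3, 0, 0, 1))    # False: centers 3 apart, radii sum to 2
print(spheres_collide(0, 0, 0, 1, 1.5, 0, 0, 1))  # True: the spheres overlap
```

Only when this cheap test passes do you pay for the expensive granular (triangle- or pixel-level) check.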
Use a "distance squared" check for comparing distances between objects to avoid using the square root method. Square root calculation typically uses Newton's method of approximation and can be computationally expensive.

2. Can you get away with calculating fewer collision checks? If your game runs at 60 frames per second, could you skip a few frames? If you know certain objects behave deterministically, can you "solve" for when they will collide ahead of time (e.g., pool ball vs. side of pool table)? Can you reduce the number of objects which need to be checked for collisions? A technique for this would be to separate objects into several lists. One list could be your "stationary" objects; they never have to be tested for collision against each other. The other list could be your "moving" objects, which need to be tested against all other moving objects and against all stationary objects. This could reduce the number of necessary collision tests to reach an acceptable performance level.

3. Can you get away with removing some object collision tests when performance becomes an issue? For example, a smoke particle could interact with a surface object and follow its contours to create a nice aesthetic effect, but it wouldn't break game play if, upon hitting a predefined limit for collision checks, you decided to start ignoring smoke particles for collision. Ignoring essential game object movement would certainly break game play though (e.g., player bullets stop intersecting with monsters). So perhaps maintaining a priority list of collision checks to compute would help: first you handle the high-priority collision tests, and if you're not at your threshold, you can handle lower-priority collision tests. When the threshold is reached, you dump the rest of the items in the priority list or defer them for testing at a later time.

4. Can you use a faster but still simplistic method for collision detection to get away from an O(N^2) runtime?
If you eliminate the object pairs you've already checked for collisions, you can reduce the number of checks to N(N-1)/2, which is nearly twice as fast and still easy to implement (though technically it's still O(N^2)). In terms of software engineering, you may end up spending more time than it's worth fine-tuning a bad algorithm and data structure choice to squeeze out a few more ounces of performance. The cost vs. benefit ratio becomes increasingly unfavorable, and it becomes time to choose a better data structure to handle collision detection.

Spatial partitioning algorithms are the proverbial nuke for solving the runtime problem of collision detection. At a small upfront cost to performance, they'll reduce your collision detection tests to logarithmic runtime. The upfront costs of development time and CPU overhead are easily outweighed by the scalability benefits and performance gains.

Conceptual background on spatial partitioning

Let's take a step back and look at spatial partitioning and trees in general before diving into octrees. If we don't understand the conceptual idea, we have no hope of implementing it by sweating over code. Looking at the brute force implementation above, we're essentially taking every object in the game and comparing its position against all other objects in the game to see if any are touching. All of these objects are contained spatially within our game world. Well, if we create an enclosing box around our game world and figure out which objects are contained within this enclosing box, then we've got a region of space with a list of contained objects within it. In this case, it would contain every object in the game.

We can notice that if we have an object in one corner of the world and another object way over on the other side, we don't really need to, or want to, calculate a collision check between them every frame. It'd be a waste of precious CPU time. So, let's try something interesting!
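The pair-elimination loop from point 4 can be sketched like this (illustrative Python; the collides callback stands in for whatever intersection test you use):

```python
def collide_pairs(objects, collides):
    """Test each unordered pair exactly once: N*(N-1)/2 checks, not N^2."""
    hits = []
    n = len(objects)
    for i in range(n):
        for j in range(i + 1, n):  # start past i: skips self-checks and repeats
            if collides(objects[i], objects[j]):
                hits.append((objects[i], objects[j]))
    return hits

# With 5 objects, the callback runs 5 * 4 / 2 = 10 times instead of 25.
calls = []
collide_pairs(list(range(5)), lambda a, b: calls.append((a, b)) or False)
print(len(calls))  # 10
```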
If we divide our world exactly in half, we can create three separate lists of objects. The first list, List A, contains all objects on the left half of the world. The second list, List B, contains objects on the right half of the world. Some objects may touch the dividing line such that they're on both sides of the line, so we'll create a third list, List C, for these objects.

We can notice that with each subdivision, we're spatially reducing the world in half and collecting a list of objects in the resulting half. We can elegantly create a binary search tree to contain these lists. Conceptually, this tree should look something like so:

In terms of pseudo code, the tree data structure would look something like this:

    public class BinaryTree
    {
        //This is a list of all of the objects contained within this node of the tree
        private List<gameObject> m_objectList;

        //These are pointers to the left and right child nodes in the tree
        private BinaryTree m_left, m_right;

        //This is a pointer to the parent object (for upward tree traversal).
        private BinaryTree m_parent;
    }

We know that the objects in List A will never intersect with any objects in List B, so we can almost eliminate half of the collision checks. We've still got the objects in List C which could touch objects in either list, so we'll have to check all objects in List C against all objects in Lists A, B, and C. If we continue to subdivide the world into smaller and smaller parts, we can further reduce the number of necessary collision checks by half each time. This is the general idea behind spatial partitioning. There are many ways to subdivide a world into a tree-like data structure (BSP trees, quadtrees, k-d trees, octrees, etc.).

Now, by default, we're just assuming that the best division is a cut in half, right down the middle, since we're assuming that all of our objects will be somewhat uniformly distributed throughout the world.
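The three lists from the halving step can be sketched as a partition function. This is an illustrative Python sketch using 1D intervals for brevity; real objects would carry full bounding volumes.

```python
def split_objects(objects, split_x):
    """Partition (min_x, max_x) intervals around a dividing line at split_x.
    List C holds the straddlers which touch the line itself."""
    list_a, list_b, list_c = [], [], []
    for min_x, max_x in objects:
        if max_x < split_x:
            list_a.append((min_x, max_x))  # entirely on the left half
        elif min_x > split_x:
            list_b.append((min_x, max_x))  # entirely on the right half
        else:
            list_c.append((min_x, max_x))  # touches the dividing line
    return list_a, list_b, list_c

a, b, c = split_objects([(0, 1), (5, 6), (2, 4)], split_x=3)
print(a, b, c)  # [(0, 1)] [(5, 6)] [(2, 4)]
```

Only list C's objects ever need testing against the other lists; A and B never interact.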
It's not a bad assumption to make, but some spatial division algorithms may decide to make a cut such that each side has an equal number of objects (a weighted cut) so that the resulting tree is more balanced. However, what happens if all of these objects move around? In order to maintain a nearly even division, you'd have to either shift the splitting plane or completely rebuild the tree each frame. It'd be a bit of a mess with a lot of complexity. So, for my implementation of a spatial partitioning tree, I decided to cut right down the middle every time. As a result, some trees may end up being a bit more sparse than others, but that's okay -- it doesn't cost much.

To subdivide or not to subdivide? That is the question.

Let's assume that we have a somewhat sparse region with only a few objects. We could continue subdividing our space until we've found the smallest possible enclosing area for each object. But is that really necessary? Let's remember that the whole reason we're creating a tree is to reduce the number of collision checks we need to perform each frame -- not to create a perfectly enclosing region of space for every object. Here are the rules I use for deciding whether to subdivide or not:

- If we create a subdivision which only contains one object, we can stop subdividing even though we could keep dividing further. This rule will become an important part of the criteria for what defines a "leaf node" in our octree.
- The other important criterion is a minimum size for a region. If you have an extremely small object which is nanometers in size (or, god forbid, you have a bug and forgot to initialize an object's size!), you're going to keep subdividing to the point where you potentially overflow your call stack. For my own implementation, I defined the smallest containing region to be a 1x1x1 cube. Any objects in this teeny cube will just have to be run through the O(N^2) brute force collision test (I don't anticipate many objects there anyways!).
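The two termination rules above can be sketched as a single predicate. This is an illustrative Python sketch; MIN_SIZE mirrors the article's 1x1x1 minimum region, and regions are assumed to be (min, max) corner tuples.

```python
MIN_SIZE = 1  # smallest subdividable region: a 1x1x1 cube

def should_subdivide(objects, region_min, region_max):
    """Stop when a region holds at most one object, or when it has
    already shrunk to the minimum allowable size."""
    if len(objects) <= 1:
        return False  # rule 1: one object (or none) makes a leaf
    dims = [mx - mn for mn, mx in zip(region_min, region_max)]
    if all(d <= MIN_SIZE for d in dims):
        return False  # rule 2: can't subdivide below the minimum cube
    return True
```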
- If a containing region doesn't contain any objects, we shouldn't try to include it in the tree.

We can take our subdivision-by-half one step further and divide the 2D world space into quadrants. The logic is essentially the same, but now we're testing for collision with four squares instead of two rectangles. We can continue subdividing each square until our rules for termination are met. The representation of the world space and the corresponding data structure for a quadtree would look something like this:

If the quadtree subdivision and data structure make sense, then an octree should be pretty straightforward as well. We're just adding a third dimension, using bounding cubes instead of bounding squares, and having eight possible child nodes instead of four. Some of you might wonder what should happen if you have a game world with non-cubic dimensions, say 200x300x400. You can still use an octree with cubic dimensions -- some child nodes will just end up empty if the game world doesn't have anything there. Obviously, you'll want to set the dimensions of your octree to at least the largest dimension of your game world.

Octree Construction

So, as you've read, an octree is a special type of subdividing tree commonly used for objects in 3D space (or anything with three dimensions). Our enclosing region is going to be a three-dimensional rectangle (commonly a cube). We will then apply our subdivision logic above and cut our enclosing region into eight smaller rectangles. If a game object completely fits within one of these subdivided regions, we'll push it down the tree into that node's containing region. We'll then recursively continue subdividing each resulting region until one of our breaking conditions is met. At the end, we should expect to have a nice tree-like data structure.

My implementation of the octree can contain objects which have either a bounding sphere and/or a bounding rectangle. You'll see a lot of code I use to determine which is being used.
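The eight child regions can be computed directly from the parent's min and max corners. Here is a minimal Python sketch (boxes as (min, max) corner tuples; the bit trick for corner selection is my own shorthand, not the article's explicit eight constructors):

```python
def subdivide(box_min, box_max):
    """Split an axis-aligned box into its eight child octants.
    Each child is the box spanned by the center and one corner."""
    cx = [(mn + mx) / 2.0 for mn, mx in zip(box_min, box_max)]
    octants = []
    for corner in range(8):
        # Bit i of 'corner' selects the low or high half along axis i.
        lo = [box_min[i] if not (corner >> i) & 1 else cx[i] for i in range(3)]
        hi = [cx[i] if not (corner >> i) & 1 else box_max[i] for i in range(3)]
        octants.append((tuple(lo), tuple(hi)))
    return octants

children = subdivide((0, 0, 0), (8, 8, 8))
print(len(children))  # 8 equally sized 4x4x4 child cubes
```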
In terms of our Octree class data structure, I decided to do the following for each tree:

- Each node has a bounding region which defines the enclosing region
- Each node has a reference to the parent node
- Each node contains an array of eight child nodes (I use arrays for code simplicity and cache performance)
- Each node contains a list of objects contained within the current enclosing region
- I use a byte-sized bitmask for figuring out which child nodes are actively being used (the optimization benefit at the cost of additional complexity is somewhat debatable)
- I use a few static variables to indicate the state of the tree

Here is the code for my Octree class outline:

    public class OctTree
    {
        BoundingBox m_region;

        List<Physical> m_objects;

        /// <summary>
        /// These are items which we're waiting to insert into the data structure.
        /// We want to accrue as many objects in here as possible before we inject them into the tree. This is slightly more cache friendly.
        /// </summary>
        static Queue<Physical> m_pendingInsertion = new Queue<Physical>();

        /// <summary>
        /// These are all of the possible child octants for this node in the tree.
        /// </summary>
        OctTree[] m_childNode = new OctTree[8];

        /// <summary>
        /// This is a bitmask indicating which child nodes are actively being used.
        /// It adds slightly more complexity, but is faster for performance since there is only one comparison instead of 8.
        /// </summary>
        byte m_activeNodes = 0;

        /// <summary>
        /// The minimum size for an enclosing region is a 1x1x1 cube.
        /// </summary>
        const int MIN_SIZE = 1;

        /// <summary>
        /// This is how many frames we'll wait before deleting an empty tree branch. Note that this is not a constant. The maximum lifespan doubles
        /// every time a node is reused, until it hits a hard coded constant of 64.
        /// </summary>
        int m_maxLifespan = 8;
        int m_curLife = -1;     //this is a countdown timer showing how much time we have left to live

        /// <summary>
        /// A reference to the parent node is nice to have when we're trying to do a tree update.
        /// </summary>
        OctTree _parent;

        static bool m_treeReady = false;    //the tree has a few objects which need to be inserted before it is complete
        static bool m_treeBuilt = false;    //there is no pre-existing tree yet.
    }

Initializing the enclosing region

The first step in building an octree is to define the enclosing region for the entire tree. This will be the bounding box for the root node of the tree, which initially contains all objects in the game world. Before we go about initializing this bounding volume, we have a few design decisions to make:

1. What should happen if an object moves outside of the bounding volume of the root node? Do we want to resize the entire octree so that all objects are enclosed? If we do, we'll have to completely rebuild the octree from scratch. If we don't, we'll need to have some way to either handle out-of-bounds objects or ensure that objects never go out of bounds.

2. How do we want to create the enclosing region for our octree? Do we want to use a preset dimension, such as a 200x400x200 (X,Y,Z) rectangle? Or do we want to use a cubic dimension which is a power of 2? What should be the smallest allowable enclosing region which cannot be subdivided?

Personally, I decided that I would use a cubic enclosing region with dimensions which are a power of 2 and sufficiently large to completely enclose my world. The smallest allowable cube is a 1x1x1 unit region. With this, I know that I can always cleanly subdivide my world and get integer numbers (even though Vector3 uses floats). I also decided that my enclosing region would enclose the entire game world, so if an object leaves this region, it should be quietly destroyed. At the smallest octant, I will have to run a brute force collision check against all other objects, but I don't realistically expect more than 3 objects to occupy that small an area at a time, so the performance costs of O(N^2) are completely acceptable.
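The power-of-two sizing decision can be sketched as: find the smallest power-of-2 cube edge that covers the world's largest dimension, so repeated halving always lands on integer boundaries down to the 1x1x1 minimum. This is an illustrative Python sketch; the article's actual resizing code is in C#.

```python
def enclosing_cube_size(world_extent):
    """Smallest power-of-two edge length that encloses the given extent.
    A power-of-two cube can be halved cleanly all the way down to 1x1x1."""
    size = 1
    while size < world_extent:
        size *= 2
    return size

print(enclosing_cube_size(400))  # 512: big enough for the 200x400x200 example world
```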
So, I normally just initialize my octree with a constructor which takes a region size and a list of items to insert into the tree. I feel it's barely worth showing this part of the code since it's so elementary, but I'll include it for completeness. Here are my constructors:

    /*Note: we want to avoid allocating memory for as long as possible since there can be lots of nodes.*/

    /// <summary>
    /// Creates an oct tree which encloses the given region and contains the provided objects.
    /// </summary>
    /// <param name="region">The bounding region for the oct tree.</param>
    /// <param name="objList">The list of objects contained within the bounding region</param>
    private OctTree(BoundingBox region, List<Physical> objList)
    {
        m_region = region;
        m_objects = objList;
        m_curLife = -1;
    }

    public OctTree()
    {
        m_objects = new List<Physical>();
        m_region = new BoundingBox(Vector3.Zero, Vector3.Zero);
        m_curLife = -1;
    }

    /// <summary>
    /// Creates an octTree with a suggestion for the bounding region containing the items.
    /// </summary>
    /// <param name="region">The suggested dimensions for the bounding region.
    /// Note: if items are outside this region, the region will be automatically resized.</param>
    public OctTree(BoundingBox region)
    {
        m_region = region;
        m_objects = new List<Physical>();
        m_curLife = -1;
    }

Building an initial octree

I'm a big fan of lazy initialization. I try to avoid allocating memory or doing work until I absolutely have to. In the case of my octree, I avoid building the data structure as long as possible. We'll accept a user's request to insert an object into the data structure, but we don't actually have to build the tree until someone runs a query against it. What does this do for us? Well, let's assume that the process of constructing and traversing our tree is somewhat computationally expensive. If a user wants to give us 1,000 objects to insert into the tree, does it make sense to recompute every subsequent enclosing area a thousand times? Or can we save some time and do a bulk blast? I created a "pending" queue of items and a few flags to indicate the build state of the tree.
All of the inserted items get put into the pending queue, and when a query is made, those pending requests get flushed and injected into the tree. This is especially handy during a game loading sequence, since you'll most likely be inserting thousands of objects at once. After the game world has been loaded, the number of objects injected into the tree is orders of magnitude fewer. My lazy initialization routine is contained within my UpdateTree() method. It checks to see if the tree has been built, and builds the data structure if it doesn't exist and has pending objects.

    /// <summary>
    /// Processes all pending insertions by inserting them into the tree.
    /// </summary>
    /// <remarks>Consider deprecating this?</remarks>
    private void UpdateTree()   //complete & tested
    {
        if (!m_treeBuilt)
        {
            while (m_pendingInsertion.Count != 0)
                m_objects.Add(m_pendingInsertion.Dequeue());

            BuildTree();
        }
        else
        {
            while (m_pendingInsertion.Count != 0)
                Insert(m_pendingInsertion.Dequeue());
        }

        m_treeReady = true;
    }

As for building the tree itself, this can be done recursively. For each recursive iteration, I start off with a list of objects contained within the bounding region. I check my termination rules, and if we pass, we create eight subdivided bounding areas which are perfectly contained within our enclosing region. Then I go through every object in my given list and test to see if any of them fit perfectly within any of my octants. If they do fit, I insert them into a corresponding list for that octant. At the very end, I check the counts on my octant lists, create the new octrees and attach them to our current node, and mark my bitmask to indicate that those child octants are actively being used. All of the leftover objects have been pushed down to us from our parent but can't be pushed down to any children, so logically, this must be the smallest octant which can contain each of them.

    /// <summary>
    /// Naively builds an oct tree from scratch.
    /// </summary>
    private void BuildTree()    //complete & tested
    {
        //terminate the recursion if we're a leaf node
        if (m_objects.Count <= 1)
            return;

        Vector3 dimensions = m_region.Max - m_region.Min;

        if (dimensions == Vector3.Zero)
        {
            FindEnclosingCube();
            dimensions = m_region.Max - m_region.Min;
        }

        //Check to see if the dimensions of the box are greater than the minimum dimensions
        if (dimensions.X <= MIN_SIZE && dimensions.Y <= MIN_SIZE && dimensions.Z <= MIN_SIZE)
        {
            return;
        }

        Vector3 half = dimensions / 2.0f;
        Vector3 center = m_region.Min + half;

        //Create subdivided regions for each octant
        BoundingBox[] octant = new BoundingBox[8];
        octant[0] = new BoundingBox(m_region.Min, center);
        octant[1] = new BoundingBox(new Vector3(center.X, m_region.Min.Y, m_region.Min.Z), new Vector3(m_region.Max.X, center.Y, center.Z));
        octant[2] = new BoundingBox(new Vector3(center.X, m_region.Min.Y, center.Z), new Vector3(m_region.Max.X, center.Y, m_region.Max.Z));
        octant[3] = new BoundingBox(new Vector3(m_region.Min.X, m_region.Min.Y, center.Z), new Vector3(center.X, center.Y, m_region.Max.Z));
        octant[4] = new BoundingBox(new Vector3(m_region.Min.X, center.Y, m_region.Min.Z), new Vector3(center.X, m_region.Max.Y, center.Z));
        octant[5] = new BoundingBox(new Vector3(center.X, center.Y, m_region.Min.Z), new Vector3(m_region.Max.X, m_region.Max.Y, center.Z));
        octant[6] = new BoundingBox(center, m_region.Max);
        octant[7] = new BoundingBox(new Vector3(m_region.Min.X, center.Y, center.Z), new Vector3(center.X, m_region.Max.Y, m_region.Max.Z));

        //This will contain all of our objects which fit within each respective octant.
        List<Physical>[] octList = new List<Physical>[8];
        for (int i = 0; i < 8; i++)
            octList[i] = new List<Physical>();

        //this list contains all of the objects which got moved down the tree and can be delisted from this node.
        List<Physical> delist = new List<Physical>();

        foreach (Physical obj in m_objects)
        {
            if (obj.BoundingBox.Min != obj.BoundingBox.Max)
            {
                for (int a = 0; a < 8; a++)
                {
                    if (octant[a].Contains(obj.BoundingBox) == ContainmentType.Contains)
                    {
                        octList[a].Add(obj);
                        delist.Add(obj);
                        break;
                    }
                }
            }
            else if (obj.BoundingSphere.Radius != 0)
            {
                for (int a = 0; a < 8; a++)
                {
                    if (octant[a].Contains(obj.BoundingSphere) == ContainmentType.Contains)
                    {
                        octList[a].Add(obj);
                        delist.Add(obj);
                        break;
                    }
                }
            }
        }

        //delist every moved object from this node.
        foreach (Physical obj in delist)
            m_objects.Remove(obj);

        //Create child nodes where there are items contained in the bounding region
        for (int a = 0; a < 8; a++)
        {
            if (octList[a].Count != 0)
            {
                m_childNode[a] = CreateNode(octant[a], octList[a]);
                m_activeNodes |= (byte)(1 << a);
                m_childNode[a].BuildTree();
            }
        }

        m_treeBuilt = true;
        m_treeReady = true;
    }

    private OctTree CreateNode(BoundingBox region, List<Physical> objList)  //complete & tested
    {
        if (objList.Count == 0)
            return null;

        OctTree ret = new OctTree(region, objList);
        ret._parent = this;
        return ret;
    }

    private OctTree CreateNode(BoundingBox region, Physical Item)
    {
        List<Physical> objList = new List<Physical>(1); //sacrifice potential CPU time for a smaller memory footprint
        objList.Add(Item);
        OctTree ret = new OctTree(region, objList);
        ret._parent = this;
        return ret;
    }

Updating a tree

Let's imagine that our tree has a lot of moving objects in it. If any object moves, there is a good chance that the object has moved outside of its enclosing octant. How do we handle changes in object position while maintaining the integrity of our tree structure?

Technique 1: Keep it super simple, trash & rebuild everything.

Some implementations of an octree will completely rebuild the entire tree every frame and discard the old one. This is super simple and it works, and if this is all you need, then prefer the simple technique.
The general consensus is that the upfront CPU cost of rebuilding the tree every frame is much cheaper than running a brute force collision check, and programmer time is too valuable to be spent on an unnecessary optimization. For those of us who like challenges and to over-engineer things, the "trash & rebuild" technique comes with a few small problems:

- You're constantly allocating and deallocating memory each time you rebuild your tree. Allocating new memory comes with a small cost. If possible, you want to minimize the amount of memory being allocated and reallocated over time by reusing memory you've already got.
- Most of the tree is unchanging, so it's a waste of CPU time to rebuild the same branches over and over again.

Technique 2: Keep the existing tree, update the changed branches

I noticed that most branches of a tree don't need to be updated; they just contain stationary objects. Wouldn't it be nice if, instead of rebuilding the entire tree every frame, we just updated the parts of the tree which needed an update? This technique keeps the existing tree and updates only the branches which had an object move. It's a bit more complex to implement, but it's a lot more fun too, so let's really get into it!

During my first attempt at this, I mistakenly thought that an object in a child node could only move up or down one level of the tree per update. This is wrong. If an object in a child node reaches the edge of that node, and that edge also happens to be an edge of the enclosing parent node, then that object needs to be inserted above its parent, and possibly even further up. So, the bottom line is that we don't know how far up an object needs to be pushed up the tree. Just as well, an object can move such that it can be neatly enclosed in a child node, or that child's child node. We don't know how far down the tree we can go.
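This up-then-down relocation can be sketched with parent pointers. The following is an illustrative Python sketch of the idea, not the article's C# update code; Node and relocate are hypothetical names, and the containment test is a plain min/max comparison.

```python
class Node:
    def __init__(self, box_min, box_max, parent=None):
        self.min, self.max = box_min, box_max
        self.parent = parent
        self.children = []

    def encloses(self, obj_min, obj_max):
        """True if the object's bounds fit entirely inside this node."""
        return all(self.min[i] <= obj_min[i] and obj_max[i] <= self.max[i]
                   for i in range(3))

def relocate(node, obj_min, obj_max):
    """Walk up via parent pointers until an ancestor encloses the object,
    then push back down into the deepest child that still encloses it."""
    current = node
    while current.parent is not None and not current.encloses(obj_min, obj_max):
        current = current.parent  # the root is guaranteed to contain it
    descended = True
    while descended:
        descended = False
        for child in current.children:
            if child.encloses(obj_min, obj_max):
                current, descended = child, True
                break
    return current
```

Most frames the first loop exits immediately because the current node still encloses the object, so the common case stays cheap.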
Fortunately, since we include a reference to each node's parent, we can easily solve this problem recursively with minimal computation! The general idea behind the update algorithm is to first let all objects in the tree update themselves. Some may move or change in size. We want to get a list of every object which moved, so the object update method should return to us a boolean value indicating if its bounding area changed. Once we've got a list of all of our moved objects, we want to start at our current node and try to traverse up the tree until we find a node which completely encloses the moved object (most of the time, the current node still encloses the object). If the object isn't completely enclosed by the current node, we keep moving it up to its next parent node. In the worst case, our root node will be guaranteed to contain the object. After we've moved our object as far up the tree as possible, we'll try to move it as far down the tree as we can. Most of the time, if we moved the object up, we won't be able to move it back down. But, if the object moved so that a child node of the current node could contain it, we have the chance to push it back down the tree. It's important to be able to move objects down the tree as well, or else all moving objects would eventually migrate to the top and we'd start getting some performance problems during collision detection routines. Branch Removal In some cases, an object will move out of a node and that node will no longer have any objects contained within it, nor have any children which contain objects. If this happens, we have an empty branch and we need to mark it as such and prune this dead branch off the tree. There is an interesting question hiding here: When do you want to prune the dead branches off a tree? Allocating new memory costs time, so if we're just going to reuse this same region in a few cycles, why not keep it around for a bit? 
How long can we keep it around before it becomes more expensive to maintain the dead branch? I decided to give each of my nodes a count down timer which activates when the branch is dead. If an object moves into this node's octant while the death timer is active, I double the lifespan and reset the death timer. This ensures that octants which are frequently used stay hot and stick around, and nodes which are infrequently used are removed before they start to cost more than they're worth. A practical example of this usefulness is a machine gun shooting a stream of bullets. Those bullets follow in close succession, so it'd be a shame to immediately delete a node as soon as the first bullet leaves it, only to recreate it a fraction of a second later as the second bullet re-enters it. And if there are a lot of bullets, we can probably keep these octants around for a little while. If a child branch is empty and hasn't been used in a while, it's safe to prune it out of our tree. Anyways, let's look at the code which does all of this magic. First up, we have the Update() method. This is a method which is recursively called on all child trees. It moves all objects around, does some housekeeping work for the data structure, and then moves each moved object into its correct node (parent or child). public void Update(GameTime gameTime) { if (m_treeBuilt == true) { //Start a count down death timer for any leaf nodes which don't have objects or children. //when the timer reaches zero, we delete the leaf. If the node is reused before death, we double its lifespan.
//this gives us a "frequency" usage score and lets us avoid allocating and deallocating memory unnecessarily if (m_objects.Count == 0) { if (HasChildren == false) { if (m_curLife == -1) m_curLife = m_maxLifespan; else if (m_curLife > 0) { m_curLife--; } } } else { if (m_curLife != -1) { if (m_maxLifespan <= 64) m_maxLifespan *= 2; m_curLife = -1; } } List<Physical> movedObjects = new List<Physical>(m_objects.Count); //go through and update every object in the current tree node foreach (Physical gameObj in m_objects) { //we should figure out if an object actually moved so that we know whether we need to update this node in the tree. if (gameObj.Update(gameTime)) { movedObjects.Add(gameObj); } } //prune any dead objects from the tree. int listSize = m_objects.Count; for (int a = 0; a < listSize; a++) { if (!m_objects[a].Alive) { if (movedObjects.Contains(m_objects[a])) movedObjects.Remove(m_objects[a]); m_objects.RemoveAt(a--); listSize--; } } //recursively update any child nodes. for (int flags = m_activeNodes, index = 0; flags > 0; flags >>= 1, index++) if ((flags & 1) == 1) m_childNode[index].Update(gameTime); //If an object moved, we can insert it into the parent and that will insert it into the correct tree node. //note that we have to do this last so that we don't accidentally update the same object more than once per frame.
foreach (Physical movedObj in movedObjects) { OctTree current = this; //figure out how far up the tree we need to go to reinsert our moved object //we are either using a bounding rect or a bounding sphere //try to move the object into an enclosing parent node until we've got full containment if (movedObj.BoundingBox.Max != movedObj.BoundingBox.Min) { while (current.m_region.Contains(movedObj.BoundingBox) != ContainmentType.Contains) if (current._parent != null) current = current._parent; else break; //prevent infinite loops when we go out of bounds of the root node region } else { while (current.m_region.Contains(movedObj.BoundingSphere) != ContainmentType.Contains)//we must be using a bounding sphere, so check for its containment. if (current._parent != null) current = current._parent; else break; } //now, remove the object from the current node and insert it into the current containing node. m_objects.Remove(movedObj); current.Insert(movedObj); //this will try to insert the object as deep into the tree as we can go. } //prune out any dead branches in the tree for (int flags = m_activeNodes, index = 0; flags > 0; flags >>= 1, index++) if ((flags & 1) == 1 && m_childNode[index].m_curLife == 0) { m_childNode[index] = null; m_activeNodes ^= (byte)(1 << index); //remove the node from the active nodes flag list } //now that all objects have moved and they've been placed into their correct nodes in the octree, we can look for collisions. if (IsRoot == true) { //This will recursively gather up all collisions and create a list of them. //this is simply a matter of comparing all objects in the current root node with all objects in all child nodes. //note: we can assume that every collision will only be between objects which have moved. //note 2: An explosion can be centered on a point but grow in size over time. In this case, you'll have to override the update method for the explosion. 
List<IntersectionRecord> irList = GetIntersection(new List<Physical>()); foreach (IntersectionRecord ir in irList) { if (ir.PhysicalObject != null) ir.PhysicalObject.HandleIntersection(ir); if (ir.OtherPhysicalObject != null) ir.OtherPhysicalObject.HandleIntersection(ir); } } } else { } } Note that we call an Insert() method for moved objects. The insertion of objects into the tree is very similar to the method used to build the initial tree. Insert() will try to push objects as far down the tree as possible. Notice that I also try to avoid creating new bounding areas if I can use an existing one from a child node. /// <summary> /// A tree has already been created, so we're going to try to insert an item into the tree without rebuilding the whole thing /// </summary> ///<typeparam name="T">A physical object</typeparam> ///<param name="Item">The physical object to insert into the tree</param> private void Insert<T>(T Item) where T : Physical { /*make sure we're not inserting an object any deeper into the tree than we have to. -if the current node is an empty leaf node, just insert and leave it.*/ if (m_objects.Count <= 1 && m_activeNodes == 0) { m_objects.Add(Item); return; } Vector3 dimensions = m_region.Max - m_region.Min; //Check to see if the dimensions of the box are greater than the minimum dimensions if (dimensions.X <= MIN_SIZE && dimensions.Y <= MIN_SIZE && dimensions.Z <= MIN_SIZE) { m_objects.Add(Item); return; } Vector3 half = dimensions / 2.0f; Vector3 center = m_region.Min + half; //Find or create subdivided regions for each octant in the current region BoundingBox[] childOctant = new BoundingBox[8]; childOctant[0] = (m_childNode[0] != null) ? m_childNode[0].m_region : new BoundingBox(m_region.Min, center); childOctant[1] = (m_childNode[1] != null) ? m_childNode[1].m_region : new BoundingBox(new Vector3(center.X, m_region.Min.Y, m_region.Min.Z), new Vector3(m_region.Max.X, center.Y, center.Z)); childOctant[2] = (m_childNode[2] != null) ?
m_childNode[2].m_region : new BoundingBox(new Vector3(center.X, m_region.Min.Y, center.Z), new Vector3(m_region.Max.X, center.Y, m_region.Max.Z)); childOctant[3] = (m_childNode[3] != null) ? m_childNode[3].m_region : new BoundingBox(new Vector3(m_region.Min.X, m_region.Min.Y, center.Z), new Vector3(center.X, center.Y, m_region.Max.Z)); childOctant[4] = (m_childNode[4] != null) ? m_childNode[4].m_region : new BoundingBox(new Vector3(m_region.Min.X, center.Y, m_region.Min.Z), new Vector3(center.X, m_region.Max.Y, center.Z)); childOctant[5] = (m_childNode[5] != null) ? m_childNode[5].m_region : new BoundingBox(new Vector3(center.X, center.Y, m_region.Min.Z), new Vector3(m_region.Max.X, m_region.Max.Y, center.Z)); childOctant[6] = (m_childNode[6] != null) ? m_childNode[6].m_region : new BoundingBox(center, m_region.Max); childOctant[7] = (m_childNode[7] != null) ? m_childNode[7].m_region : new BoundingBox(new Vector3(m_region.Min.X, center.Y, center.Z), new Vector3(center.X, m_region.Max.Y, m_region.Max.Z)); //First, is the item completely contained within the root bounding box? //note2: I shouldn't actually have to compensate for this. If an object is out of our predefined bounds, then we have a problem/error. // Wrong. Our initial bounding box for the terrain is constricting its height to the highest peak. Flying units will be above that. // Fix: I resized the enclosing box to 256x256x256. This should be sufficient. if (Item.BoundingBox.Max != Item.BoundingBox.Min && m_region.Contains(Item.BoundingBox) == ContainmentType.Contains) { bool found = false; //we will try to place the object into a child node. If we can't fit it in a child node, then we insert it into the current node object list. for (int a = 0; a < 8; a++) { //is the object fully contained within a child octant?
if (childOctant[a].Contains(Item.BoundingBox) == ContainmentType.Contains) { if (m_childNode[a] != null) m_childNode[a].Insert(Item); //Add the item into that tree and let the child tree figure out what to do with it else { m_childNode[a] = CreateNode(childOctant[a], Item); //create a new tree node with the item m_activeNodes |= (byte)(1 << a); } found = true; } } if (!found) m_objects.Add(Item); } else if (Item.BoundingSphere.Radius != 0 && m_region.Contains(Item.BoundingSphere) == ContainmentType.Contains) { bool found = false; //we will try to place the object into a child node. If we can't fit it in a child node, then we insert it into the current node object list. for (int a = 0; a < 8; a++) { //is the object contained within a child octant? if (childOctant[a].Contains(Item.BoundingSphere) == ContainmentType.Contains) { if (m_childNode[a] != null) m_childNode[a].Insert(Item); //Add the item into that tree and let the child tree figure out what to do with it else { m_childNode[a] = CreateNode(childOctant[a], Item); //create a new tree node with the item m_activeNodes |= (byte)(1 << a); } found = true; } } if (!found) m_objects.Add(Item); } else { //either the item lies outside of the enclosed bounding box or it is intersecting it. Either way, we need to rebuild //the entire tree by enlarging the containing bounding box //BoundingBox enclosingArea = FindBox(); BuildTree(); } } Collision Detection Finally, our octree has been built and everything is as it should be. How do we perform collision detection against it? First, let's list out the different ways we want to look for collisions: Frustum intersections. We may have a frustum which intersects with a region of the world. We only want the objects which intersect with the given frustum. This is particularly useful for culling regions outside of the camera view space, and for figuring out what objects are within a mouse selection area. Ray intersections.
We may want to shoot a directional ray from any given point and want to know either the nearest intersecting object, or get a list of all objects which intersect that ray (like a rail gun). This is very useful for mouse picking. If the user clicks on the screen, we want to draw a ray into the world and figure out what they clicked on. Bounding Box intersections. We want to know which objects in the world are intersecting a given bounding box. This is most useful for "box" shaped game objects (houses, cars, etc). Bounding Sphere Intersections. We want to know which objects are intersecting with a given bounding sphere. Most objects will probably be using a bounding sphere for coarse collision detection since the math is computationally the cheapest and simplest. The main idea behind recursive collision detection processing for an octree is that you start at the root/current node and test for intersection with all objects in that node against the intersector. Then, you do a bounding box intersection test against all active child nodes with the intersector. If a child node fails this intersection test, you can completely ignore the rest of that child's tree. If a child node passes the intersection test, you recursively traverse down the tree and repeat. Each node should pass a list of intersection records up to its caller, which appends those intersections to its own list of intersections. When the recursion finishes, the original caller will get a list of every intersection for the given intersector. The beauty of this is that it takes very little code to implement and performance is very fast. In a lot of these collisions, we're probably going to be getting a lot of results. We're also going to want to have some way of responding to each collision, depending on what objects are colliding. For example, a player hero should pick up a floating bonus item (quad damage!), but a rocket shouldn't explode if it hits said bonus item.
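The recursive traversal described above is easy to prototype outside of C#. Below is a minimal, language-agnostic sketch in Python, using 1-D intervals in place of bounding volumes; the `Node` class and `query` method names are illustrative stand-ins of mine, not part of the article's code:

```python
# Minimal sketch of the recursive intersection query described above.
# A toy 1-D "tree": each node stores objects (intervals) and children
# with bounding intervals. All names here are hypothetical illustrations.

def overlaps(a, b):
    """True if 1-D intervals a=(min, max) and b=(min, max) overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

class Node:
    def __init__(self, bounds, objects=None, children=None):
        self.bounds = bounds          # bounding interval of this node
        self.objects = objects or []  # intervals stored at this node
        self.children = children or []

    def query(self, region):
        # Test every object stored in this node against the query region...
        hits = [o for o in self.objects if overlaps(o, region)]
        # ...then recurse only into children whose bounds pass the coarse test;
        # a child that fails the test prunes its entire subtree.
        for child in self.children:
            if overlaps(child.bounds, region):
                hits.extend(child.query(region))
        return hits
```

The same shape carries over to the 3-D case: swap the interval-overlap test for the frustum/box/sphere containment tests and the recursion is unchanged.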
I created a new class to contain information about each intersection. This class contains references to the intersecting objects, the point of intersection, the normal at the point of intersection, etc. These intersection records become quite useful when you pass them to an object and tell them to handle it. For completeness and clarity, here is my intersection record class: public class IntersectionRecord { Vector3 m_position; /// <summary> /// This is the exact point in 3D space which has an intersection. /// </summary> public Vector3 Position { get { return m_position; } } Vector3 m_normal; /// <summary> /// This is the normal of the surface at the point of intersection /// </summary> public Vector3 Normal { get { return m_normal; } } Ray m_ray; /// <summary> /// This is the ray which caused the intersection /// </summary> public Ray Ray { get { return m_ray; } } Physical m_intersectedObject1; /// <summary> /// This is the object which is being intersected /// </summary> public Physical PhysicalObject { get { return m_intersectedObject1; } set { m_intersectedObject1 = value; } } Physical m_intersectedObject2; /// <summary> /// This is the other object being intersected (may be null, as in the case of a ray-object intersection) /// </summary> public Physical OtherPhysicalObject { get { return m_intersectedObject2; } set { m_intersectedObject2 = value; } } /// <summary> /// this is a reference to the current node within the octree for where the collision occurred. In some cases, the collision handler /// will want to be able to spawn new objects and insert them into the tree. This node is a good starting place for inserting these objects /// since it is a very near approximation to where we want to be in the tree. /// </summary> OctTree m_treeNode; /// <summary> /// check the object identities between the two intersection records. If they match in either order, we have a duplicate. /// </summary> ///<param name="otherRecord">the other record to compare against</param> ///<returns>true if the records are an intersection for the same pair of objects, false otherwise.</returns>
public override bool Equals(object otherRecord) { IntersectionRecord o = otherRecord as IntersectionRecord; if (o == null) return false; //return (m_intersectedObject1 != null && m_intersectedObject2 != null && m_intersectedObject1.ID == m_intersectedObject2.ID); if (o.m_intersectedObject1.ID == m_intersectedObject1.ID && o.m_intersectedObject2.ID == m_intersectedObject2.ID) return true; if (o.m_intersectedObject1.ID == m_intersectedObject2.ID && o.m_intersectedObject2.ID == m_intersectedObject1.ID) return true; return false; } double m_distance; /// <summary> /// This is the distance from the ray to the intersection point. /// You'll usually want to use the nearest collision point if you get multiple intersections. /// </summary> public double Distance { get { return m_distance; } } private bool m_hasHit = false; public bool HasHit { get { return m_hasHit; } } public IntersectionRecord() { m_position = Vector3.Zero; m_normal = Vector3.Zero; m_ray = new Ray(); m_distance = float.MaxValue; m_intersectedObject1 = null; } public IntersectionRecord(Vector3 hitPos, Vector3 hitNormal, Ray ray, double distance) { m_position = hitPos; m_normal = hitNormal; m_ray = ray; m_distance = distance; //m_hitObject = hitGeom; m_hasHit = true; } /// <summary> /// Creates a new intersection record indicating whether there was a hit or not and the object which was hit. /// </summary> ///<param name="hitObject">Optional: The object which was hit. Defaults to null.</param>
public IntersectionRecord(Physical hitObject = null) { m_hasHit = hitObject != null; m_intersectedObject1 = hitObject; m_position = Vector3.Zero; m_normal = Vector3.Zero; m_ray = new Ray(); m_distance = 0.0f; } } Intersection with a Bounding Frustum /// <summary> /// Gives you a list of all intersection records which intersect or are contained within the given frustum area /// </summary> ///<param name="frustum">The containing frustum to check for intersection/containment with</param> ///<returns>A list of intersection records with collisions</returns> private List<IntersectionRecord> GetIntersection(BoundingFrustum frustum, Physical.PhysicalType type = Physical.PhysicalType.ALL) { if (m_objects.Count == 0 && HasChildren == false) //terminator for any recursion return null; List<IntersectionRecord> ret = new List<IntersectionRecord>(); //test each object in the list for intersection foreach (Physical obj in m_objects) { //skip any objects which don't meet our type criteria if ((int)((int)type & (int)obj.Type) == 0) continue; //test for intersection IntersectionRecord ir = obj.Intersects(frustum); if (ir != null) ret.Add(ir); } //test each child node for intersection for (int a = 0; a < 8; a++) { if (m_childNode[a] != null && (frustum.Contains(m_childNode[a].m_region) == ContainmentType.Intersects || frustum.Contains(m_childNode[a].m_region) == ContainmentType.Contains)) { List<IntersectionRecord> hitList = m_childNode[a].GetIntersection(frustum); if (hitList != null) { foreach (IntersectionRecord ir in hitList) ret.Add(ir); } } } return ret; } The bounding frustum intersection list can be used to only render objects which are visible to the current camera view. I use a scene database to figure out how to render all objects in the game world.
Here is a snippet of code from my rendering function which uses the bounding frustum of the active camera: /// <summary> /// This renders every active object in the scene database /// </summary> ///<returns>the number of triangles rendered</returns> public int Render() { int triangles = 0; //Renders all visible objects by iterating through the oct tree recursively and testing for intersection //with the current camera view frustum foreach (IntersectionRecord ir in m_octTree.AllIntersections(m_cameras[m_activeCamera].Frustum)) { ir.PhysicalObject.SetDirectionalLight(m_globalLight[0].Direction, m_globalLight[0].Color); ir.PhysicalObject.View = m_cameras[m_activeCamera].View; ir.PhysicalObject.Projection = m_cameras[m_activeCamera].Projection; ir.PhysicalObject.UpdateLOD(m_cameras[m_activeCamera]); triangles += ir.PhysicalObject.Render(m_cameras[m_activeCamera]); } return triangles; } Intersection with a Ray /// <summary> /// Gives you a list of intersection records for all objects which intersect with the given ray /// </summary> ///<param name="intersectRay">The ray to intersect objects against</param> ///<returns>A list of all intersections</returns> private List<IntersectionRecord> GetIntersection(Ray intersectRay, Physical.PhysicalType type = Physical.PhysicalType.ALL) { if (m_objects.Count == 0 && HasChildren == false) //terminator for any recursion return null; List<IntersectionRecord> ret = new List<IntersectionRecord>(); //the ray is intersecting this region, so we have to check for intersection with all of our contained objects and child regions.
//test each object in the list for intersection foreach (Physical obj in m_objects) { //skip any objects which don't meet our type criteria if ((int)((int)type & (int)obj.Type) == 0) continue; if (obj.BoundingBox.Intersects(intersectRay) != null) { IntersectionRecord ir = obj.Intersects(intersectRay); if (ir.HasHit) ret.Add(ir); } } //test each child octant for intersection for (int a = 0; a < 8; a++) { if (m_childNode[a] != null && m_childNode[a].m_region.Intersects(intersectRay) != null) { List<IntersectionRecord> hits = m_childNode[a].GetIntersection(intersectRay, type); if (hits != null) { foreach (IntersectionRecord ir in hits) ret.Add(ir); } } } return ret; } Intersection with a list of objects This is a particularly useful recursive method for determining if a list of objects in the current node intersect with any objects in any child nodes (See: Update() method for usage). It's the method which will be used most frequently, so it's good to get this right and efficient. What we want to do is start at the root node of the tree. We compare all objects in the current node against all other objects in the current node for collision. We gather up any of those collisions as intersection records, and insert them into a list. We then pass our list of tested objects down to our child nodes. The child nodes will then test their objects against themselves, then against the objects we passed down to them. The child nodes will capture any collisions in a list, and return that list to their parent. The parent then takes the collision list received from its child nodes and appends it to its own list of collisions, finally returning it to its caller. If you count out the number of collision tests in the illustration above, you can see that we conducted 29 hit tests and received 4 hits. This is much better than [11*11 = 121] hit tests.
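The triangular counting trick behind that saving can be isolated from the tree entirely. Here is a small Python sketch of it (the function name `unique_pairs` is my own, purely illustrative):

```python
# The triangular pairwise check: instead of testing all N*N ordered
# combinations, each unordered pair of objects is visited exactly once.

def unique_pairs(items):
    """Return every unordered pair of items exactly once."""
    pairs = []
    tmp = list(items)                 # work on a copy, like the tmp list in the C# version
    while tmp:
        current = tmp.pop()           # take one object off the working list
        for other in tmp:             # compare it against everything remaining
            pairs.append((current, other))
    return pairs
```

For N objects this produces N(N-1)/2 pairs; the 4+3+2+1 count above also includes the self-comparisons that get skipped inside the loop.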
private List<IntersectionRecord> GetIntersection(List<Physical> parentObjs, Physical.PhysicalType type = Physical.PhysicalType.ALL) { List<IntersectionRecord> intersections = new List<IntersectionRecord>(); //assume all parent objects have already been processed for collisions against each other. //check all parent objects against all objects in our local node foreach (Physical pObj in parentObjs) { foreach (Physical lObj in m_objects) { //We let the two objects check for collision against each other. They can figure out how to do the coarse and granular checks. //all we're concerned about is whether or not a collision actually happened. IntersectionRecord ir = pObj.Intersects(lObj); if (ir != null) { intersections.Add(ir); } } } //now, check all our local objects against all other local objects in the node if (m_objects.Count > 1) { #region self-congratulation /* * This is a rather brilliant section of code. Normally, you'd just have two foreach loops, like so: * foreach(Physical lObj1 in m_objects) * { * foreach(Physical lObj2 in m_objects) * { * //intersection check code * } * } * * The problem is that this runs in O(N*N) time and that we're checking for collisions with objects which have already been checked. * Imagine you have a set of four items: {1,2,3,4} * You'd first check: {1} vs {1,2,3,4} * Next, you'd check {2} vs {1,2,3,4} * but we already checked {1} vs {2}, so it's a waste to check {2} vs. {1}. What if we could skip this check by removing {1}? * We'd have a total of 4+3+2+1 collision checks, which equates to O(N(N+1)/2) time. If N is 10, we are already doing half as many collision checks as necessary.
* Now, we can't just remove an item at the end of the 2nd for loop since that would break the iterator in the first foreach loop, so we'd have to use a * regular for(int i = 0; i < size; i++) style of loop instead. */ #endregion List<Physical> tmp = new List<Physical>(m_objects.Count); tmp.AddRange(m_objects); while (tmp.Count > 0) { foreach (Physical lObj2 in tmp) { if (tmp[tmp.Count - 1] == lObj2 || (tmp[tmp.Count - 1].IsStatic && lObj2.IsStatic)) continue; IntersectionRecord ir = tmp[tmp.Count - 1].Intersects(lObj2); if (ir != null) intersections.Add(ir); } //remove this object from the temp list so that we can run in O(N(N+1)/2) time instead of O(N*N) tmp.RemoveAt(tmp.Count - 1); } } //now, merge our local objects list with the parent objects list, then pass it down to all children. foreach (Physical lObj in m_objects) if (lObj.IsStatic == false) parentObjs.Add(lObj); //parentObjs.AddRange(m_objects); //each child node will give us a list of intersection records, which we then merge with our own intersection records. for (int flags = m_activeNodes, index = 0; flags > 0; flags >>= 1, index++) if ((flags & 1) == 1) intersections.AddRange(m_childNode[index].GetIntersection(parentObjs, type)); return intersections; } Screenshot Demos This is a view of the game world from a distance showing the outlines for each bounding volume for the octree. This view shows a bunch of successive projectiles moving through the game world with the frequently-used nodes being preserved instead of deleted. Complete Code Sample I've attached a complete code sample of the octree class, the intersection record class, and my generic physical object class. I don't guarantee that they're all bug-free since it's all a work in progress and hasn't been rigorously tested yet.
  12. Seeking Partner (Maybe a Team)

    I agree. You should be able to create this on your own. The MMORPG part is concerning though. Do you know how many people try to make an MMORPG as their first game? Do you know how hard it is to create a server-client networked game which can handle thousands of concurrent connected users? It's highly challenging. I would start by scaling back your scope a bit. And this is your first and only post on this site, so that's also concerning. Are you new to game development, or new to the site? You should also spend a ridiculous amount of time detailing out your design document. Treat it like a movie script which you write, seal in a packaged envelope, drop in a mail slot and send off to a production studio. The director follows your GDD and creates your product without having to pick up the phone and ask you a single question.
  13. I just wrote this article a few days ago: https://medium.com/@Slayemin/your-indie-game-dev-team-will-fail-108d4b663e7e Check it out. Having your team members work 2 hours a day is going to be the biggest flaw in your plan. Also, having 5-8 people is also a major problem. Your game should be something that 1-2 people can produce in 3 weeks spending 40 hours a week -- keep your scope small enough to satisfy that criteria! Also, be prepared to spend 50% of your budget on advertising and marketing.
  14. Spellbound: May-July Updates

Tomorrow morning, I have to fire someone. It's been a tough three months with a lot of big life changes. My girlfriend and I were unable to pay our apartment rent in downtown Seattle for the months of April and May. So, we were strongly encouraged to move out. We were paying $2461 per month in rent, plus $200 a month for parking, plus utilities, all for a 940 square foot two bedroom apartment with no air conditioning. Then they raised the rent. So, we moved out and found a house to rent in Edmonds, a small sleepy town about 12 miles north of Seattle. We doubled the square footage and only pay $2000 a month in rent. It's amazing. It's so peaceful and quiet. It is far superior to living in an apartment. The Seattle apartment was two blocks away from a fire station, so you would often have fire engines roaring down the street with sirens blaring at 3am. Or, there'd be someone unloading product all night for the business next door, operating a hydraulic lift. Or, maybe there'd be homeless or drunk people having an argument outside my window. I don't miss it one bit. The only thing I miss is my 15 minute commute to work by walking. Moving was a bit of a... problem. The day before, I fell off of a horse, you see. I was at my ranch, testing out a new horse to see how well it rode. It was acting a bit anxious. The saddle didn't quite fit. The horse wasn't responding very well. I figured the horse needed to get used to a rider a bit more and that I'd tire it out a bit by galloping around and break it in. So, we did. We galloped down the forested road a bit, went down the field, galloped some more, and did two loops. Then, I brought the horse back to the hitching post. This stupid dachshund dog came running and barking at me and the horse, completely oblivious to the sheer difference in size between a 7lb dog and a 700lb horse. Needless to say, the horse got even more anxious. So, I turned it around and went over the bridge into the field by the creek.
I was going to gallop it a bit more to tire it out. So, off we go again! I see some sticks and logs in the field ahead, so I start steering the horse to the left, except it's not listening. We're just running at a good 30mph. Then, at the very last second, the horse sees the debris -- and makes an almost 90 degree left turn at 30 miles per hour. Naturally, this isn't a video game, so the principles of momentum apply, and my body wants to go straight. I stomp really hard on my right stirrup, with all my body weight, to stay on the horse -- except the saddle slides to the bottom of the horse and I go with it. Keep in mind, the horse is still galloping as I'm falling. In a split second decision, I decide that I'm doomed to fall and get hurt, but the smartest thing to do is get my boots out of the stirrups so that I'm not dragged behind the horse. If I don't get my boots out, I will get killed, and that's more important to avoid than getting hurt. I did it. I got my boot out, just in time. Then I land HARD on my right back onto hard dirt. Immediate pain. I'm writhing on the ground in sheer agony, screaming in pain. It's arguably the most pain I'd ever felt in my entire life. I let myself writhe in the dirt for ten seconds and then decide it's time to man up. I lie still. What's my damage assessment? My legs work. I have feeling. No broken spinal cord. I have extreme pain in my rib cage and back. I feel swelling already. Breathing is hard. My immediate assessment is that I probably broke a rib and it's probably got multiple fractures. Nobody knows where I'm at, so I have to get up. Moving is excruciatingly painful, but I gotta do it. Little by little, I upright myself, then slowly stand up on my two feet. Then I slowly, ever so slowly, hobble my way back to the farm house. It's a long walk. My girlfriend sees me. I tell her what happened. I go sit in a rocking chair for a minute. Then I decide it's time to go to the hospital. The pain is getting worse.
I struggle to get into the car. Then, we seem to hit every. single. fucking. pothole. along the way, each one inducing nightmarish pain in my back. We get to the emergency room. I'm brought inside immediately and put on a gurney for evaluation. I'm brought into this machine to get a CAT scan and X-Rays at the same time. It hurts so much to even breathe. Despite that, I'm calm. I'm not in mortal danger. I'll get through this, but it's gonna hurt. Well, the good news is, I don't have any spinal problems and no broken ribs. I do have a bruised right lung, internal bleeding, and as I discover later, the main source of pain was a torn back muscle. The torn muscle was the worst. It felt like every time I moved, someone was stabbing me in the back with a screwdriver and twisting viciously. I was cleared to go home and given pain meds. The whole day I lay on the couch, not daring to move. I needed two people to help me sit up, and even that caused extreme pain. The hardest part of my day was getting up to go use the bathroom. It literally took me a good 45 minutes to walk down the hall to use the toilet because the pain was so bad. The second day, the pain got even worse. The third day, the pain was slightly less, but still excruciating. We returned to my apartment in Seattle. We had to move out. How was that going to happen when I couldn't even move? Thankfully, friends and family are the greatest blessing in the world. My brothers, sisters, mom, and friends all came to help us move (and a couple hired hands). The only thing I could do was lie in bed and watch as everyone around me moved furniture. I know I was supposed to be relaxing and getting better, but I just felt so guilty watching everyone else working. Anytime I had dumb ideas about getting up, my back would hastily remind me not to. It took me a full week of lying down to recover enough to the point where I could walk around with minimal pain. I went back to work on Monday.
However, my commute was now a bus ride and some short walking. With virtual reality game dev, you frequently have to get up and test something out in room scale VR. It's more physically active development work than you'd think. I couldn't quite do that yet. I took it easy. Then, late evening came. It was time to walk to the bus stop and go home. Now, for those who don't know Seattle, it has some hills. My bus stop was two blocks away, but it required walking up a slight incline. Normally, I'd just power walk it and have no problems. But this time, walking even at a slow pace was just too much for me. A bruised lung left me so faint that I was about to pass out. I literally had to stop and take a breather. Maybe it was too early for me to go back to work if this was my condition? So, I decided to continue resting for another week, doing light duty. By the end of two weeks, I felt nearly completely recovered. It was an amazingly speedy recovery, considering the pain and seriousness of the injuries. I am very lucky. I could have been hurt much more seriously. Now, I am a lot more cautious around horses. I don't need to repeat that life experience. What's funny is how different riding a horse in real life is from riding one in a video game. You never worry about falling off of a horse in a video game. Riding a horse in a game is like driving a car: it perfectly does exactly what you want, as if the beast doesn't even have a mind of its own. It's interesting to think about the trade-offs game designers make between realism and user experience. Anyways, long story short, I fell off of a horse and was out of commission for a bit.

During the month of May, I started doing some freelance contract work. I built a VR application for Dell, just in time for their annual Dell World event. 
The film guys in my office went to three different parts of the world and shot some 360 video to highlight the philanthropic programs Dell was doing to make the world a better place. We wanted to create a seamless, easy-to-use, interactive and immersive VR experience. People would pick up the GearVR, place it on their head, watch a couple videos, learn about the programs, and continue on with the conference, just a little wiser. We nailed it. We completely blew everyone's socks off. The beauty of the Unreal Engine, coupled with good design and good assets, made for an incredible VR experience. After the conference, they told me about one guy who was acting like a know-it-all, claiming that 360 video was not VR and pre-judging our app as being shit. Then, the skeptic put on our headset and tried out the experience. He's immediately in a stereoscopic world and able to use his gaze to interact with objects in the scene. Sure, the 360 video is projected onto the inside of a sphere, but that doesn't mean that all of the environment has to be projected onto a sphere or be a passive experience. Afterwards, he couldn't stop raving about how amazed and wrong he was.

I also started doing consulting on the side for a small local VR company here in Seattle. They were having some major problems with shipping on the GearVR platform. The problem is that the Samsung Galaxy S6 phone is notorious for overheating. The Oculus Store won't accept any submissions which cause the S6 to overheat within 15 minutes. Their app was overheating the phone within 5 minutes. So, this was clearly an optimization problem. Their dev team wasn't the best at creating high performance systems. What they created was more than acceptable for a PC, but on a phone? Terrible. I spent considerable time testing and optimizing the scene, trying to get the app to run for 15 minutes before overheating. 
It's a really tedious process: you have to document what change you made, create a package, deploy it to the phone, run a couple of sessions, take an average, and figure out whether your change had any effect on the overall heating rate of the phone. This workflow could easily take a few hours to test a few things. I got tired of this monotonous process, so I started just measuring the rate of temperature increase over time. The goal was to keep the phone temperature below the shut-off value for the entire experience. As the temperature of the phone increases, we throttle down the experience quality. As the quality decreases, the temperature delta decreases and we squeeze out more lifetime. It was sort of like a temperature-based LOD system. I was kind of proud of it. I'm not sure if anyone else has had to invent something like that. Anyways, I proved that it worked and got the app to run for at least 15 minutes on my test phone. I submitted it to Oculus for review, eagerly awaiting the test results... days go by... and then... REJECTED! Why?? Supposedly, their S6 was overheating within 5 minutes. How is that possible? We have the exact same phone! I'm still a bit baffled.

My own game's development has been put somewhat on the back burner. It sells, on average, one copy a day on Steam. That volume is gradually decreasing. The bottom line is that it barely makes any money. Contracting work pays much, much more. However, Spellbound is my baby and I will continue working on it in between higher paying projects. I still spend a majority of my time working on it though. I have identified a couple of key problems and areas to work on. The problems are as follows: 1) The content is too short and incomplete. 2) There is a marketing and advertising problem. Nobody knows about my game. #1 is relatively easy to fix: Just keep working on the game and adding in more content. The challenge is to create more content without spending any money or increasing debts. 
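As an aside, the temperature-based quality throttling described above can be sketched out fairly simply. This is only an illustrative reconstruction of the idea (the class name, thresholds, and 60-second prediction window are my own assumptions, not the shipped code), written in Python for brevity:

```python
# Hypothetical sketch of temperature-driven quality throttling: measure the
# heating rate, predict where the temperature is heading, and step the render
# quality down before the phone reaches its shut-off value.

class QualityThrottler:
    def __init__(self, shutoff_c=55.0, headroom_c=5.0, levels=4):
        self.shutoff_c = shutoff_c    # temperature at which the OS kills the app
        self.headroom_c = headroom_c  # start throttling this many degrees early
        self.levels = levels          # quality levels: levels-1 (best) .. 0 (worst)
        self.level = levels - 1
        self._last = None             # (time_s, temp_c) of the previous sample

    def update(self, time_s, temp_c):
        """Feed a temperature sample; returns the quality level to render at."""
        if self._last is not None:
            dt = time_s - self._last[0]
            rate = (temp_c - self._last[1]) / dt if dt > 0 else 0.0
            # Predict the temperature 60 seconds out at the current heating rate.
            predicted = temp_c + rate * 60.0
            if predicted > self.shutoff_c - self.headroom_c and self.level > 0:
                self.level -= 1   # heating too fast: drop quality a notch
            elif (predicted < self.shutoff_c - 2 * self.headroom_c
                  and self.level < self.levels - 1):
                self.level += 1   # cooling off: claw some quality back
        self._last = (time_s, temp_c)
        return self.level
```

Each quality level would map to concrete settings (resolution scale, draw distance, shadow quality); the point is that quality degrades gradually instead of the phone dying suddenly.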
You want more art? voice acting? sound effects? music? You gotta pay for that... with money you don't have. #2 is the really hard one. How do you get your game in front of more people without spending lots of money? I have come up with a few key strategies: A) I need to identify and develop hardware companies in the VR space and create a cooperative partnership with them. I'll make my game compatible with their hardware, and their hardware will have compatible content. It's a win-win for both of us. Hardware sells content, and content sells hardware. If 2000 people buy a unique hardware peripheral, the next thing they'll do is look for quality content to use with the hardware. I want to make sure that Spellbound is at the top of that list everywhere they look. B) I also need to make my game as "discoverable" as possible. This means that it should be easy for people to stumble onto it. Let's face it. Nobody is going to go into their search bar and directly type in the name of my game and buy it. The only way people know about it is if they accidentally find it. Okay, that's a bad thing to rely on, right? What if... we make it easy for people to stumble onto the game in multiple places? What if we have the game available on multiple online distribution channels? Buy it on Steam! Buy it on Oculus Store! Buy it everywhere! Wherever you buy it, I don't care, so long as I get paid! If I make one sale a day on one channel and that becomes my average across all channels, then I just need to be on 50 channels to make 50 sales a day! (yeah, right) But, that does speak to the value of diversifying and broadening your footprint and availability. C) Make game content so good that people will talk to other people about it. This is extra hard for a lone indie with no budget. I'm convinced that there is only *one* way to do this right. I have to tell the most amazing story ever in the history of stories, and I have to keep the world super small and highly polished. 
Small, amazing story & polished. That's a tall order. Everyone else will beat me on scope. Everyone else will beat me on quality art assets and uniqueness. Everyone else will probably beat me on tech as well, though using UE4 helps even that playing field. A great story is my only chance. I can write, but is it any good? Can I write an epic story which speaks to the very heart and soul of the player? How exactly do I do that in VR? What's unique about VR that no other medium has? What do I need to discover that nobody else has discovered yet? ... short & scary answer: I don't know yet. I just have to have faith in my abilities, hard work and dedication.

In line with my first strategy, I have created a partnership with NullspaceVR. They're creating the Hardlight haptics suit for VR. I got to try it out at their office and get some first impressions. It's pretty cool. They have a vest you wear which has a bunch of rumble packs placed all over your body. The developer can control which rumble packs activate and the intensity and type of the vibration. By selectively controlling the rumble packs, you can create various physical sensations on the player's body. In my game, I want players to feel an impact on their body at the precise location a zombie hits them. I think this would heighten the sense of immersion and presence players experience and also work as an additional user interface medium (rather than having graphical damage indicators). The key consideration is that support for this hardware should be treated as an optional accessory to enhance gameplay rather than a requirement to play -- additional hardware requirements only raise the consumer's barrier to entry, and it's already high enough as it is with VR hardware.

I have also started refactoring the artificial intelligence system in my game, for the fourth time now. This may be a mistake, but I'm doing it anyways. It's not broken. It works. But it's too scripted. 
The current systems are just hard-coded expert systems and I don't find them very interesting or convincing; worst of all, if I want to change behavior, I have to rewrite code. So, I've been doing some hard thinking and designing a new approach to AI. Characters now use "abilities". Abilities are a type of polymorphic action which can be assigned to a creature. If I have a zombie and a knight, I can grant both of them the "Melee Attack" ability. They can both activate the ability, but the response to the ability differs by creature type. The ability mostly contains metadata, such as ability cooldowns and timings for effect activations, but abilities can eventually be treated as "nodes" in a graph. So, I grant a creature a long list of abilities (eat, sleep, melee attack, ranged attack, run away, cast spell, etc.) and ideally, it will choose the most suitable ability in the context of its current situation. How do we determine which ability to use in the current situation? We use a weighted graph (similar to an artificial neural network). Okay, but how do we find the graph weights then? I don't want to run thousands of training simulations to get the most appropriate behavior. Instead, what I really, really want to do is just give a creature a bunch of preferences it wants to satisfy; it then chooses the most important preference to satisfy and figures out what action to take to satisfy it. Initially, the weighted graph would probably be all wrong, so we'd have to run through a few training cycles -- but not too many! The secret sauce would be to use the same graph weights for all creatures of the same type, save them to disk so that learning persists across play sessions, and also share the graph weights with some sort of online database which other brains download. That super smart zombie you are fighting? He's smart because he trained against 1000 online players and shared his training knowledge with all zombies. 
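The core of the weighted-graph selection idea above can be sketched in a few lines. This is my own minimal illustration (the ability names and preference keys are hypothetical, and a real implementation would live in the game's C++ alongside the ability metadata), shown in Python for readability:

```python
# Illustrative utility-style ability selection: score each granted ability
# against the creature's current preferences using learned weights, then pick
# the highest-scoring one. The weights dict is the part that would be trained,
# saved to disk, and shared through an online database.

def choose_ability(abilities, preferences, weights):
    """abilities:   list of ability names the creature has been granted
    preferences: dict of need -> urgency in [0, 1] (e.g. hunger, threat)
    weights:     dict of (ability, need) -> learned weight"""
    def score(ability):
        return sum(weights.get((ability, need), 0.0) * urgency
                   for need, urgency in preferences.items())
    return max(abilities, key=score)

# Example: a zombie that weights eating against fighting.
zombie_weights = {
    ("eat", "hunger"): 1.0,
    ("melee_attack", "threat"): 1.0,
    ("run_away", "threat"): 0.6,
}
zombie_abilities = ["eat", "melee_attack", "run_away"]
```

Tweaking the innate preference values (a goblin's love of gold, a zombie's constant hunger) then changes which ability wins without touching any behavior code, which is exactly the appeal of this approach.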
At this point, ideally, all I would have to do to get different behavior patterns out of wildly different creature classes is to tweak their innate preferences and reward systems. Zombies are constantly hungry and crave living flesh. Goblins absolutely love gold above everything else. Dwarves just want to forge stuff out of iron. Demons want to own souls like goblins own gold. Bandits just have a lower moral standard than regular people. Etc., etc. Slight tweaks to preference parameters would ultimately result in different behavior patterns, and those patterns would slowly get replicated universally across all game clients over time. If I can do this, and just sort of create a sandbox game world, will the AI actors live interesting lives? Will every playthrough be significantly different? I don't know. It's a lot of extra scope to digest. The real question is, does the player give a fuck? Or would a scripted AI be "good enough"? Am I engineering stuff that doesn't increase the bottom line? Or am I creating something ground breaking? It's hard to tell. I need money but I also want to make cool stuff at the same time.

Anyways, tomorrow morning I have to fire someone. One of the staff at our ranch has been taking our tools and selling them in town. This is the last straw in a long list of second chances. My younger sister told me something wise: "You get what you tolerate." I can't tolerate theft and the distrust it creates, no matter the sob story. The line has been crossed. I hate firing people, and I take no pleasure in it, but it's a necessity for the future success of a business. You know you're ruining someone's day/month, but people have to be held accountable for their own actions, good or bad. Running this ranch has been a valuable teacher on the nuances of business and management, but I can't help but feel there are many lessons for me yet to learn. P.S. I am probably the closest to being a cowboy coder right now.
  15. Why your indie game dev team will fail

    This post has a lot of context behind it. I've watched a lot of indie projects. There are the internet-based indie projects, with remote teams and young ambitious people, almost all of which fail. This is the main target demographic for my article, which was triggered by someone yesterday pulling me into his discord channel in an attempt to recruit me into the project. The intended game was going to be some variation of Black and White 2, built by a team of five people. They had a ton of red flags which needed to be corrected, so I went into detail. It wasn't the first time I've told people that their team and project have some major problems, and I started to feel that I was repeating the same things over and over again, so it seemed worthwhile to compile a list of do's and don'ts that I could just link people to.

What have I seen? Waaaaay back in the day, I joined an internet team, led by some guy in Colorado -- called "runic games" or something. I was 17, their only programmer, and I barely grasped C++ at the time. We were going to make a game about "vampyres". A few artists made concept art, lots of ideas were created, and then people started ghosting the project. It failed. Then, I tried many times to start my own games (very young and dumb) by myself and I kept on failing, mostly because I was lazy, untalented, and burned myself out. I started to watch other variations of the first experience play out over and over again, many from small teams on gamedev.net; I decided I would not touch these types of teams with a ten foot pole because they all failed. I persisted with software development in general. I started getting a bit of professional experience and I had to deliver completed projects for people. There was no half-assing allowed: either do it or go home in shame. This is where I started figuring out the software development life cycle and getting good at the process. 
More recently, after starting my own indie studio, I've been keeping in touch with fellow indies and watching other startups in my space. A lot of tragic stories. There was one indie team of five young men in my former building who all worked in the same workspace and actually shipped an HTML5 MMORPG. It works. It was an incredible accomplishment. They deployed their game online. They had an in-app purchase business model. However, they failed to attract any customers. They spent well over $200k and borrowed from friends and family. The lead programmer literally died towards the end of production. Why did they fail? Marketing, sales, and too little testing and engagement with customers. The game they made was an MVP, but it just wasn't good enough to attract and retain players.

Then there's this local guy who made this dumb game about a monkey. He also shipped, which is rare. But his game is utter shit and the reviews reflect it. His problem? He couldn't accept constructive criticism and wasn't interested in critical feedback or improvement, so the result was garbage. His standards for quality are abysmal. Polish? What's that? He keeps on trying to make various games, but they are all equally bad and uninspired. He's doomed to keep repeating the same mistakes.

Then there are the other startups which seemingly do everything right, get funded by VCs, make a product, and discover that either they are too early to market, or that there is no market for their product, so they go out of business. This... is something that could have been figured out in planning instead of post-production. These kinds of business problems aren't unique to game developers; they're common across all sectors of business. I've seen 90% of inventors invent a product, only to find that they've made something which nobody wants or needs. They may even spend tens of thousands on marketing and advertising, but that doesn't change the core fact that their invention is useless. Whoops! 
These inventors are facing the exact same problems that indie game devs face. You can even go to various fairs, look at the booths people have, and see their products. Who is successful? Who has customers? Why are they doing well? Who is failing? Why are they failing? Is it the booth agent or the product? The lessons are universally applicable, because ultimately everything is either a product or a service. Then there are also the local indies here in Seattle who I think are doing everything absolutely right, and they will succeed. They won't get wildly rich, but they will succeed. There are a million reasons why some people succeed and others fail, but I think random luck is the least influential variable in all of it.

Will my game succeed or fail? I don't know; the verdict is still out. The two biggest challenges for me right now are marketing and building more content. I've made some big mistakes along the way (only revealed by hindsight), but also made some pretty good decisions as well. There are still more decisions to make, and some may be mistakes, but as long as I am proactively looking for risks and trying to preemptively mitigate disaster, I can't screw up too badly, right? The lack of this process in most indie projects is what causes them to fail predictably... and it's preventable.
  16. I think there should be an article category for production and management (similar to the forum category under "business").
  17. .... yes. How did I miss that?! I was looking for this category this morning and couldn't find it and was surprised it was missing. User error on my part, obviously.
  18. Introduction

I've been doing virtual reality game development for a year now. In terms of relative industry time, I'm old and experienced. I wouldn't call myself an expert though -- there are many people far more knowledgeable than I am, and I defer to their expertise. This article covers many of the unique design challenges a VR developer will face within a roomscale play environment, how many VR developers are currently handling these design problems, and how I've handled them. A lot of general VR design choices can impose big, unforeseen technical challenges and restrictions. I share my lessons learned and experiences so that anyone else considering working in VR can benefit from them. In the interest of being broadly applicable and useful, this article focuses more on the design challenges and concepts of an implementation than on the math and code implementation details.

Reader Assumptions

In the interest of getting into the technical details quickly, I'm going to assume that everyone here knows what virtual reality is and generally how it works. If you don't, there are plenty of articles available online to give you a basic understanding.

Commonly used terms

Room Scale: The physical play area which a person uses to walk around in. Often this will be somebody's living room or bedroom.
World Space: The in-game coordinate system for defining object positions relative to the center of the game world.
Avatar: The character representation of the player within the virtual reality environment.
Player: The human being in physical space controlling an avatar within VR.
Motion Controller: A pair of handheld hardware input devices used to track hand positions and orientations over time.
HMD: Stands for "Head Mounted Display", the VR goggles people wear.

The priorities of a VR developer and designer

1. Do NOT make the player sick.

Motion sickness is a real thing in VR. 
I've experienced it many times, and I can make other people experience it as well. It is nauseating and BAD. Do everything you can to avoid it. I have nothing but contempt for developers who intentionally try to make people motion sick, even if it's part of their 'game design'. I have a working theory on what exactly causes the motion sickness, but I'll start by talking about a few popular VR experiences and what caused the motion sickness in each.

The first VR game I tried was a mech warrior style game called "VOX Machinae", where you are a pilot driving around a mech. In the original MechWarrior, you had jump jets which allowed you to accelerate into the sky for a short period of time. This game copied that idea. I have pretty good spatial awareness and can handle small bursts of acceleration, but in this game, you could be airborne for many seconds and you could also change your velocity in midair. Then you land and resume walking. All of these things caused me to feel motion sick, for various reasons. The initial jump jet thrust into the air was no problem for me. Changing my flight direction in midair was a huge problem though. When you're in flight, you don't have anything to use as a visual reference for your delta velocity. You see the ground below you moving slightly and your brain tries really hard to precisely nail down where it's spatially located, but there isn't any nearby visual reference to use for relative changes. If the designer insists on using jump jets, what they should do is place small detail objects in the air (dust motes, rain drops, particles, etc.), and avoid letting players change velocity in midair. The second part which caused me discomfort was the actual landing, when the mech hits the ground. I don't remember exactly, but I think the mech had a bit of bounce-back when it landed, where the legs would absorb most of the force of the landing. Visually, you'd get a bit of a bob. 
Remember that braking is also a form of acceleration, and acceleration is a big culprit of motion sickness.

One other notable VR experience was a game called "The Sightline Chair". It was supposed to be a comfortable experience, but the ending was stomach churning. The gist of the game is that you sit in a chair and look around. The world changes when you're not looking at it, so you gradually move from a forest to a city, to who knows what. At the very end of the experience, you are sitting in a chair at the top of a skyscraper's scaffolding, on a flimsy board. I don't necessarily have a fear of heights, but looking down was scary because I knew what was about to happen next: I would fall. The scaffolding collapses, and not only does the chair you're in fall down, it SPINS BACKWARDS. Do NOT do this. NO SPINNING THE PLAYER! I didn't even let the falling sequence finish; I just closed my eyes and immediately ended the "game".

Someone had published a rollercoaster VR experience which they built rather quickly in Unreal Engine 4. I knew it would make me motion sick, but I was studying the particular causes of motion sickness and had to try it out for science (VR devs suffer for their players!). The rollercoaster experience took place in a giant bedroom and followed a track with curves, dips, rises, and other motions. Essentially, they take your camera and move it on rails at different speeds. So, I was expecting accelerations and thus sickness. The first time I went through the rollercoaster, I felt somewhat bad afterwards. I then did it again, and felt much worse. The lesson here is that motion sickness is a compounding, accruing effect which can last long after the VR experience is over. If you have even a little bit of content which makes people motion sick, the effect will build up over time and people will gradually start feeling worse and worse. I think it's also worth noting that a VR developer will need a very high end computer. 
I went to a hackathon last summer and had a pretty nauseating experience. First, I drank one beer and was slightly buzzed. First mistake. Then, I went to see some enthusiast's demo on an Oculus DK1, which is an old relic by today's standards. Second mistake. I decided to stand up. Third mistake. He ran the demo on a crappy laptop. Fourth mistake. Then he ran some crappy demo with lots of funky movements and bad frame rates. Fifth mistake. I could only get about half way through before I started feeling like I was about to fall over on my face, so I had to stop. Don't do any of this! It's not worth it.

Anyways, let's talk about the game I'm developing and what I discovered to be trouble spots. The game begins at the top of a wizard tower. You can go between levels in the tower by walking down flights of stairs. I have two types of stairways: stairs which descend straight down, and stairs which curve as they descend. The curved stairs generally cause SPINNING, and spinning generally causes motion sickness. The stairs which take players between levels of the tower are okay, but there's a chance that players can jump down instead of taking one stair at a time (which potentially causes motion sickness). So, to protect players from themselves, I put banister railings and furniture in place to block them from jumping down directly. They could still try if they were determined, but I did my part to promote wizard safety and prevent broken knees. I find it helps to imagine that an occupational health and safety inspector is going to come look at the environment I built and tell me where I need to put safeguards to protect the player from unintentional hazards. Of course, that's not going to actually happen, but it gets you thinking about your world as if people actually had to live in it, and how they'd design it for their own health and safety.

The other surprising cause of motion sickness is the terrain and the movement speed of the player. Your terrain should generally be SMOOTH. 
From a level designer's standpoint, it's tempting to add a bit of roughness to the terrain to make it look less artificial, but every bump in the terrain causes the player's VR headset to go up and down, and this is a small form of acceleration which causes accruing motion sickness over time. Smooth out your terrain with a smoothing brush. If you want hills or valleys, make the slopes gradual and smooth. The faster a player travels over any object which causes the camera to go up and down (stairs, rocks, logs, etc.), the more likely they are to experience motion sickness. To hide the smoothness of your terrain, use foliage and other props to cover it up.

Keep in mind that when you are building a VR game and designing things within it, you will be building up a tolerance to motion sickness. You become a bad test subject for it. Try to find willing guinea pigs who have very little experience with VR. When you find these willing guinea pigs, be very sure to do a pre-screening of their current state! And, very importantly, keep a barf bag nearby! We had two incidents during testing. One player had been feeling sick and nauseous before playing our VR game; the VR exacerbated the sickness and he had to stop. Another player had drunk bad juice or something which wasn't agreeing with her stomach; she didn't say anything about it, but had to stop the VR game, ran out, and puked into a garbage bag. Was it our game which caused the sickness, or were these people feeling sick prior to playing? It was impossible to isolate the true cause because we weren't conducting a highly controlled test. This is why you pre-screen people, so that you can control your studies a bit better. At the end of the day, VR developers owe it to their loyal players to make their VR experience the best, most comfortable experience possible. Uncomfortable experiences, no matter how great the game is, will cause people to stop playing the game. 
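As an aside on the terrain advice above: a smoothing brush is conceptually just a neighborhood average applied to the heightmap. This is a minimal, engine-agnostic sketch of that idea (in Python, for illustration only; a real editor brush would be localized and weighted):

```python
# A minimal terrain-smoothing pass: each height value becomes the average of
# its 3x3 neighborhood, which flattens bumps that would otherwise bob the
# player's camera up and down. Edge cells average over the neighbors they have.

def smooth_heightmap(heights, passes=1):
    """heights: 2D list of floats; returns a smoothed copy."""
    rows, cols = len(heights), len(heights[0])
    for _ in range(passes):
        out = [[0.0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                total, count = 0.0, 0
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols:
                            total += heights[rr][cc]
                            count += 1
                out[r][c] = total / count
        heights = out
    return heights
```

Running more passes flattens the terrain further; you would then re-add visual interest with foliage and props rather than with geometric bumps, as suggested above.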
Nobody will see your final boss monster or know how your epic story ends if they stop playing after 5 minutes due to motion sickness.

2. Design to enhance player immersion, avoid breaking it.

Virtual Reality is not a new form of cinema, and it's not gaming. VR is also not a set of fancy, expensive goggles people wear, though those are a necessary component of creating virtual reality. Virtual Reality, and the goal of a VR developer, is to transfer the wearer and their sense of being (presence) into a new artificial world and do our best to convince them that it is real. Our job is to convince players that what they're seeing, hearing, feeling, touching, and sensing is real. Of course, we all know it isn't 'real', but it can be very convincing, almost to a frightening level of immersion. So, a player lends us many of their senses and their suspension of disbelief, and in exchange, we give them a really fun, immersive experience. The second most important goal of a VR developer is to create and maintain that sense of immersion, that feeling of actually being somewhere totally different, of being someone else entirely. Generally speaking, the player's experience of your game should match their expectations of how they think the environment should look and react. If you create a scene with a tree in it, players are going to walk up to that tree, look all around it, and expect it to look like a tree from every angle. That tree should be appropriately scaled to the player avatar, and should not glitch in any way. Remember that you don't control the camera position and orientation, the player does. So, consider the possibility that the player will look at your objects from any angle and distance. If the object mesh is poorly done, immersion is broken. If the object texture is wrong, immersion is broken. If the object UV coordinates and skinning are wrong and you've got seams, immersion is broken. 
If players can walk through what they expect to be a solid object, immersion is broken. You can also have immersion-breaking sounds. You can also have narrative dialogue which "breaks the fourth wall". There isn't a definitive list of do's and don'ts for immersion in VR. My approach is to use more of an imaginative heuristic, where I try to close my eyes, become the character standing in this world, and imagine how it needs to look, sound, and behave. Players will try to do things they expect won't work, so a big part of "VR Magic" is to react to these unexpected player tests of VR. For example, if you have a pet animal, players will try to play with it and pet it. If you pet it, the animal should react positively. Valve nailed it with their new mechanical dog. You can pick up a stick in the game world and throw it. The mechanical dog will bark and run after it and bring it back. Then you can roll the mechanical dog over and rub its belly, which causes its tail to wag and legs to wiggle in a very cute way. Whatever you build in VR, test it! Test it as often as you can, and get as many other people as you can to try it out. Watch carefully how they play, what they do, where they get stuck, how they test the environment's response to their expectations, etc. No VR experience should ever be built in isolation or a vacuum.

3. Hit Your HMD Frame Rate

For a lot of my development, I've mostly ignored this rule, particularly for the Oculus Rift. Their hardware can smooth between frames, so if you drop below the target frame rate, the game is still playable on the Rift. I felt comfortable at 45 frames per second and immersion didn't break. However, switching to the HTC Vive was a different story. Their hardware has a screen refresh rate set at 90 hertz, so if you are running anywhere below 90fps, you WILL see judder when you move your head around, and this will break immersion and cause motion sickness to build up. 
When you're building your VR game, profile often and fix latency-causing issues in your build. Always be hitting your framerate. Some players are much more sensitive to framerate latency, so if you aren't hitting your frame rate and they're reporting motion sickness, you can't rule out framerate as a cause, which makes problem isolation much more difficult.

4. Full Body Avatars

I may be in a very small camp of VR developers advocating for full body avatars, but I think they are super valuable for producing an immersive and compelling VR experience. This applies particularly to first-person perspective games, so platformer-style games may safely ignore it. The goal in creating a full body avatar is to give the player a body to call their own. This can become a powerful experience and creates lots of interesting opportunities. It is very magical to be able to look down and see your arm, wiggle your fingers in real life, and see the avatar wiggling its fingers exactly the same way. To build a sense of self-identity, we place a mirror at the very beginning of our VR game. You can see your avatar looking back at you, and as you move your head from left to right, up and down, you see the corresponding avatar head movements reflected in the mirror. What's particularly interesting about immersive full body avatars is that the things which happen to the avatar character feel like they're happening to your own self, and you can experience the world through the perspective of a body which is very different from your own. If you're a man, you can be a woman. If you're a woman, you can be a man. You can change skin color to examine different racial and cultural norms. You can become a zombie and experience what it's like to see the world through their eyes, etc. You're not just in someone else's shoes, you're in their body.
Beyond creating character empathy, this also creates a very exciting opportunity to create first-hand experiences which are impossible in any other form of media: Imagine a non-player character coming up to you and giving you a nice warm hug. The only reason you would believe it wasn't real is that you can't feel the slight constriction around your torso which real hugs give -- but emotionally, and from the perspective of personal intimate space, they are the very same. When you possess a full body avatar, you also build a sense of 'personal space', where if something invades that bubble, you feel it yourself -- if it's something dangerous or intimidating, you reel back in fear; if it's something familiar and intimate, you feel warm and happy. Aside from being a form of self-identification, a full body avatar also works as a communication device between the player and the game. If the player is walking, they should not only see the world moving around them relative to their movement, but they should be able to look down and see their feet moving in the direction of travel. If the player is hurt, they can look at their arms or body to see their damage (blood? torn clothing? wounds?). Our best practice is to keep the body mesh separate from the head mesh. The head mesh is not rendered for the owning player, but other players and cameras can see it. The reason you want to hide the head mesh from the owning player is that you don't want the player to see the innards of their own head (such as eyeballs, mouth, hair, etc). If you want though, you can place a hat on the player's head which enters into their peripheral vision.

5. Tone down the speed and intensity

VR games are immersive and players feel like they're physically in your world. This alone magnifies the intensity of any experience players are going to have, so if you're used to designing highly intense, fast-paced game play, you're going to want to dial it down to at least half.
I think that it's actually possible to give people PTSD from VR, so be careful. You also want to be mindful of common fears people have, such as spiders, and keep those to a minimum. Modern games also tend to have very fast walk and run speeds designed to keep players in the action. That feels good and polished on a traditional monitor, but if you do it in VR, you feel like you're running 40 miles per hour. Horror and psychological thrillers are going to be popular go-to genres for VR game studios trying to get a strong reaction out of players. However, I will forever condemn anyone who puts jump scares into their VR game. If you do this, I will hate you and I won't even try your game, no matter how 'good' people say it is. Jump scares are a cheap gimmick used to get a momentary rise out of someone, and they're used by design hacks who've got nothing better up their designer sleeves. Jump scares are about as horrible as Comic Sans or the Papyrus font. They're not fun, they're not scary, and they're not interesting. Don't use them. When it comes to locomotion speeds in a room scale environment, players are going to be walking around at a natural walking pace which is comfortable for their environment and the dimensions of their play area. Generally, this is going to be quite slow compared to what we're used to in traditional video games! You may feel tempted to change the ratio of real-world movement to game-world movement, but through lots of first-hand testing, I can assure you that this is actually a bad idea. We NEED our physical movement and our visual appearance of movement to have a 1:1 ratio. This sets the tone for what the movement speed of the avatar should be if you're planning to artificially move it as well (through input controls): the avatar should move at a speed which is as close as possible to the player's natural, comfortable walking speed. This value can be empirically derived through play testing.
The technique I used to get my value is as follows:

1. Make sure your avatar movement has a 1:1 ratio with player movement. If the player moves forward 1 meter, the avatar should move forward 1 meter as well.
2. Start at one end of the play space, physically walk to the other end, and measure the approximate amount of time it took. This is your own walking speed.
3. Measure the distance your avatar covered over this period of time. This is the distance your avatar needs to cover when it's being moved through input controls.
4. Divide this distance by the time it took you to cover it in your room environment, and you'll have an approximate speed for your artificial movement.

The speed of your avatar's movement and player walking speeds will have a huge impact on your level design, game pacing, monster movement speeds and corresponding animations. You'll generally want to make your levels less expansive and tighter, so players aren't spending 15 seconds walking uneventfully through a corridor or along a forest path. The walking speeds and awareness of surroundings also put a huge limit on how much action a player can handle simultaneously. With traditional games, we can constantly be walking and running around while shooting guns and slinging spells and using the mouse to whip around in an instant, but with room scale VR, we naturally tend to either be exclusively moving or performing an action, but rarely both at the same time. From user testing, people can generally only focus effectively on about one or two monsters simultaneously in VR, and even this delivers a satisfyingly intense experience. Adding additional monsters would only overwhelm players and their capacity to handle them effectively, so keep these considerations in mind when designing monster encounters, particularly if monsters can approach the player from all directions. The last point to consider carefully is avatar death and the impact it has on the player.
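The steps above reduce to a tiny, engine-agnostic calculation. This is an illustrative sketch, not code from the article; the function name and the sample distance/time values are assumptions standing in for whatever you measure during your own calibration pass:

```python
import math

def calibrate_walk_speed(start_pos, end_pos, elapsed_seconds):
    """Derive an artificial locomotion speed from one real walk across
    the play space. Positions are (x, y) in meters; with a 1:1 movement
    ratio the avatar covers the same distance the player does."""
    distance = math.hypot(end_pos[0] - start_pos[0],
                          end_pos[1] - start_pos[1])
    return distance / elapsed_seconds  # meters per second

# Hypothetical sample: crossing a 4 m play space took 3.2 seconds.
walk_speed = calibrate_walk_speed((0.0, 0.0), (4.0, 0.0), 3.2)  # 1.25 m/s
```

The resulting speed then becomes the constant you feed to your input-driven movement.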
In traditional forms of media, we're a bit more disconnected from our characters dying because it's not happening to us personally. When it happens to you yourself, it feels a bit more shocking and we tend to have more of an instinctual self-preservation type response. So, the emotional reaction to our own death is a bit stronger in VR. The rule of thumb I'm using is to lower the intensity of the death experience proportionate to its frequency (via design). If death rarely happens, you can allow it to be somewhat jarring. If death happens frequently, you want to tone down the intensity. Whatever technique you use for this, keep a critical eye towards preserving immersion and presence.

Unique Roomscale VR Considerations

I've found that designing and building for seated VR is much simpler than standing or room scale VR. Room scale VR is much more fun and immersive, but that comes with an added layer of complexity for things you have to try to account for and handle. I'll briefly go through the challenges for room scale VR:

Measure the height of the player

Players come in a range of heights, from 130cm to 200cm. If a player is standing in their living room with a VR headset, their real height should have no bearing on their height in game. So, before a player gets into the game, you should have a calibration step where the player is instructed to stand up straight and tall, and then you measure their height. You can then take the height of the player and compare it against the height of the player avatar to figure out an eye offset and a height ratio. The player will see the VR world through the eyes of their avatar, and if you design the world for the avatar, you can be assured that everyone has the same play experience since you've calibrated the player height to the avatar height.
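A minimal sketch of that calibration step, assuming centimeter units. The 12 cm eyes-below-crown constant is my own illustrative assumption, not a value from the article:

```python
def calibrate_player_height(player_height_cm, avatar_height_cm,
                            avatar_eye_height_cm, eye_below_crown_cm=12.0):
    """Return a height ratio and an eye offset so every player sees the
    world from the avatar's eye height, regardless of real height."""
    height_ratio = avatar_height_cm / player_height_cm
    # Approximate where the player's eyes sit relative to measured height.
    player_eye_height_cm = player_height_cm - eye_below_crown_cm
    # How far to shift the camera so it matches the avatar's eyes.
    eye_offset_cm = avatar_eye_height_cm - player_eye_height_cm
    return height_ratio, eye_offset_cm

# Hypothetical: a 180 cm player driving a 170 cm avatar whose eyes sit at 160 cm.
ratio, offset = calibrate_player_height(180.0, 170.0, 160.0)
```

The ratio can also scale tracked skeletal motion so a tall player and a short player animate the avatar identically.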
You also know that if a player's head goes down 10cm, the proportion of their skeletal movement is going to differ based on how tall they are, and you can use this information to animate the avatar proportionately to the player's skeletal position.

Derive the player torso orientation

This is a surprisingly hard problem. If you have an avatar representing the player (which you should!), then you need to know where to place that avatar and what pose to put it in. The VR avatar should ideally match the player's body. To find the player torso, you have three devices which return a position and orientation value every frame. You know where the player's head is located, where their left and right hands are, and the orientations of each one. To find the center position and rotation of the player torso, keep in mind that a player may be looking over their left or right shoulder and may have their arms in any crazy position. Fortunately, players themselves have some physical constraints you can rely on: Unless the player is an owl, they can't rotate their head beyond their shoulders, so the head yaw direction is always going to be within 90 degrees of their torso orientation. As long as the player keeps the left-hand motion controller in the left hand, and the right in the right hand, you can also make some distinctions about the physical capabilities of each motion controller. If the pitch of the motion controller is beyond +/- 90 degrees, then the arm is bending at the elbow and it is probably upside down. You can get the "forward" vector of both motion controllers, whether they're upside down or not, and use that direction as a factor for determining the actual torso position and orientation. The other important consideration to keep in mind is that a player can be looking down or tilting their head back, or rolling their head left or right, all of which change the position of the head relative to the torso.
I tested my own physical limits at the extremes, captured my head roll and pitch values and the position offsets, and then used an inverse lerp to determine the head offset from the torso position. This assumes, of course, that the player's head is always going to be attached to their body. You can also add in some logic to measure the distance of the motion controllers from the head mounted display. If the controllers are outside of a person's physical arm length, you can assume that they aren't actually holding the motion controllers and can use some logic to help the player see where those controllers are in their play space. When you know the forward direction of the motion controllers and the head rotation, you can get a rough approximation of the actual torso position. I just average together the head yaw and the yaws of the motion controllers to get the torso yaw, but this could probably be improved a lot more. The torso position is going to be some fixed value below the head position, accounting for the head rotation offsets and the player's calibrated height and proportions. Why is the player torso orientation important? Well, if the player walks forward, they walk in the direction their torso is facing. You want to let players look over their shoulder and look around while walking straight forward.

Modeling and Animation in VR:

If you're using a full body avatar to represent the player, your animation requirements are going to be a lot lighter. A lot of the character's bones are going to be driven by the player's body position. You'll almost certainly have to create standing, walking and running animations and blend between the three. There are two VERY important bits to keep in mind here: 1. Do NOT use head bobbing. If the camera is attached to the player avatar in any way, this causes unnecessary motion, which causes sickness. 2. The poses should have the shoulders squared and the torso facing forward at all times.
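The yaw-averaging step described above can be sketched like this. One caveat I've added that the article doesn't spell out: averaging raw degree values breaks near the +/-180 degree wrap, so this illustrative reimplementation averages the yaws as unit vectors instead:

```python
import math

def estimate_torso_yaw(head_yaw_deg, left_yaw_deg, right_yaw_deg):
    """Average the head yaw with both controller forward yaws.
    Angles are summed as unit vectors so that e.g. 170 and -170
    average to 180, not 0."""
    yaws = (head_yaw_deg, left_yaw_deg, right_yaw_deg)
    sx = sum(math.cos(math.radians(y)) for y in yaws)
    sy = sum(math.sin(math.radians(y)) for y in yaws)
    return math.degrees(math.atan2(sy, sx))

# Symmetric case: head turned 30 degrees left of one controller,
# 30 degrees right of the other -> torso faces straight ahead.
torso_yaw = estimate_torso_yaw(30.0, 0.0, -30.0)
```

This is only the yaw estimate; the torso position would then hang a calibrated, rotation-corrected offset below the head, as the text describes.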
Don't slant the body orientation. The poses should ideally also have the arms hanging at the sides. The arms will stay out of sight until the player brings the motion controllers upwards. The reason you don't want the torso to be slanted is that when the player reaches both arms straight out, the avatar arm lengths need to match the player arm lengths -- if the avatar torso is slanted, one shoulder will be further back and one arm will reach further out than the other.

Inverse Kinematics:

Since you know the position of the player's hands and head, you can use inverse kinematics (IK) to drive a lot of the avatar skeleton. Through trial and error, we found that the palm position is slightly offset from the motion controller position, but this varies by hardware device. We use IK to place the palm at the motion controller position, and that drives the elbow bone and shoulder rotations. You'll also have a problem where players will use their hands to move through physically blocking objects in VR. There's nothing in physical reality stopping their hand from moving through a virtual reality object, but you can block the avatar's hand from passing through it by using a line trace between the avatar's palm position and the motion controller's palm position, then setting the avatar palm position to the impact point. So, rather than having a hand phase through a wall or table, you can have these virtual objects block it in a convincing way.

Artificial Intelligence in VR:

One surprise we've found is how much "life" good artificial intelligence and animation bring to characters in the game. This is an important part of creating a convincing environment which conforms to the expectations of the player. When writing and designing the AI for a character, I try to put myself into the position of the character, think about what information and knowledge it has, and try to respond to it in the most intelligent way possible.
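A toy version of that palm-clamping trace. In a real engine you'd call its line-trace function against world geometry; here a flat wall plane at a fixed x coordinate stands in for the trace's impact surface, and all coordinates are hypothetical:

```python
def clamp_hand_to_surface(last_valid_palm, controller_palm, wall_x):
    """Clamp the rendered palm to the near side of a blocking plane.
    If the segment from the last valid palm position to the tracked
    controller position crosses the plane x = wall_x, return the
    intersection point (the 'impact point'); otherwise let the hand
    follow the controller freely."""
    x0, y0, z0 = last_valid_palm
    x1, y1, z1 = controller_palm
    if x1 < wall_x or x0 >= wall_x:
        return controller_palm  # no crossing: hand moves freely
    t = (wall_x - x0) / (x1 - x0)  # parameter where the segment hits the plane
    return (wall_x, y0 + t * (y1 - y0), z0 + t * (z1 - z0))

# Hand tries to push 2 m forward through a wall at x = 1.0.
palm = clamp_hand_to_surface((0.0, 0.0, 1.0), (2.0, 1.0, 1.0), 1.0)
```

The avatar's hand stops at the wall even though the physical controller keeps moving, which is exactly the convincing-blocking effect described above.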
There will be *some* need for cheating, but you can get a lot of immersion and presence mileage out of well-done AI and response to player actions. If you're having trouble figuring out the AI behaviors for a particular character, you can take control over that character, move around the world, and figure out how and when to use its various actions. This can also lead to some interesting multiplayer game modes if you have the extra time.

Sound Design:

You actually don't want to get too fancy with sounds. You want to record and play your sound effects in MONO, but when you place them within your VR environment, you want those sounds to have the correct position and attenuation relative to the player's head. Most engines should already handle this for you. One other consideration you'll want to look at is the environment the sound is playing in. Do you have echoes or reverberations from sound bouncing off the walls in VR? You'll also want to be careful with your sound attenuation settings. Some things, like a burning torch, can play a burning, crackling sound, but the attenuation volume could follow a logarithmic scale instead of a fixed linear scale. A lot of this will require experimentation and testing. One other important consideration is thinking about where exactly a sound is coming from in 3D space. If you have a large character who speaks, does the voice emanate from their mouth position? Their throat position? Their body position? What happens if this sound origin moves very near the player's head but the character does not? I haven't tried this yet, but one possibility is to play the sound from multiple positions at the same time. It's also very important to note the importance of sound within VR. Remember, VR is not just an HMD. It's an artificial world which immerses the player's senses. Everything that can have a sound effect should have a believable sound effect.
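As one example of a non-linear falloff, an inverse-distance curve (which corresponds to a constant drop in decibels per doubling of distance, i.e. logarithmic in dB) could look like this. The min/max distances are illustrative defaults I've chosen, not values from the article or any particular engine:

```python
def attenuate(volume, distance, min_dist=1.0, max_dist=30.0):
    """Inverse-distance falloff for a point source like a torch.
    Full volume inside min_dist, silence beyond max_dist, and a
    1/distance curve in between. Distances are in meters."""
    if distance <= min_dist:
        return volume
    if distance >= max_dist:
        return 0.0
    return volume * min_dist / distance

# At 4 m, a unit-volume crackle plays at a quarter of full volume.
level = attenuate(1.0, 4.0)
```

Most engines expose attenuation curves like this as data you tune rather than code you write, but seeing the math makes it easier to reason about why a linear curve sounds wrong for some sources.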
I would suggest that while people think VR is all about visual effects, sound makes up 50% of the total experience. When it comes to narrative and voice overs, the production process is relatively unchanged from other forms of media. Whatever you do, just test everything out to make sure it makes sense and fits the context of the experience you're creating.

Locomotion:

This is probably one of the most difficult problems room scale VR developers face. The dilemma is that a player is going to be playing in a room with a fixed size, such as a living room. Let's pretend that it's 5 meters by 5 meters. When the player moves through this play space, their avatar should move through it with a corresponding 1:1 ratio. What happens if the game world is larger than the size of the player's living room? Maybe your game world is 1km by 1km, but the player's living room is 5m by 5m. The solution you choose is going to be dependent on the game you're creating and how it's designed, so there isn't a 'silver bullet' which works for every game. Here are my own requirements for a locomotion solution:

It has to be intuitive, easy and natural to use. Player instruction should be minimal.
It can't use hardware the player doesn't have. In other words, use out-of-the-box hardware.
It can't make players sick.
It should not detract from the gameplay.
It should not break the game design.

These are some of the locomotion solutions:

Option 1: Design around it. Some notable VR developers have decided that the playable VR area is going to be exactly the size of the player's play area. "Job Simulator" will have the world dynamically resize to fit the size of your playable area, but the playable area isn't much larger than a cubicle. "Hover Junkers" designs around the locomotion problem by having the player standing on a moving platform, and the moving platform is the size of their playable area. Fantastic Contraption does the same thing.
These are fine workarounds, and if you never have to solve a locomotion problem... it's never going to be a problem!

Option 2: Teleportation. This seems to be the 'industry standard' for movement around a game world which is larger than the living room. There are many variations of this technique, but they all work essentially the same way: The player uses the motion controller to point to where they want to go, presses a button, and they instantly move there. One benefit of this method is that it causes very little motion sickness and you can keep the player's avatar within a valid position in the game world. One huge drawback is that it can cause certain games to have broken mechanics -- if you are surrounded by bad guys, it is much less intense if you can simply teleport out of trouble. You also have a potential problem where players can teleport next to a wall and then walk through it in room space.

Option 3: Rotate the game world. This is a novel technique someone else came up with recently, where you essentially grab the game world and rotate the world around yourself. As you reach the edge of your living room, you would turn 180 degrees and then grab the game world and rotate it 180 degrees as well. This *works*, but I anticipate it has several drawbacks in practice. First, the player has to constantly interrupt their game to manage their position state within VR. This is very immersion breaking. Second, if the player's living room is very small, the number of times they have to grab and rotate the world is going to become very frequent. What portion of the game would players spend walking around and rotating the world? The third critique is that when the player grabs and rotates the world, they are effectively stopping their movement, so they're constantly stopping and going in VR. This won't work well for games where you have to run away from something, but could work well for casual environment exploration games.
Option 4: Use an additional hardware input device, such as the Virtuix Omni. This is a perfect ideal. You don't have to know the direction of the player's torso because locomotion is done by the player's own feet. It's also a very familiar locomotion system which requires no training and no interruption of the hands. However, there are going to be three critical drawbacks. First, *most* people are not going to actually have this hardware available, so you have to design your VR game for the lowest common denominator in terms of hardware support. Second, I believe it's going to be a lot more physically tiring to constantly run your legs (would anyone feel like a hamster in a hamster wheel?). This puts physical endurance limits on how long a player can play your VR game (12-hour gaming sessions are going to be rare). Third, the hardware holds your hips in place, so ducking and jumping are not going to be something players can do easily. Aside from these issues, this would be perfect, and I look forward to a future where this hardware is prolific and polished.

Option 5: "Walk-a-motion". This is the locomotion technique I recently invented, so I'm a bit biased. I walk about a mile to work every day and I noticed that when I walk, I generally swing my arms from side to side. So, my approach is to use arm swinging as a locomotion technique for VR. The speed at which you swing your arms determines your movement speed, and it works on tiered levels, so slow leisurely swings will make you walk at a constant slow pace. If you increase your arm speed, you increase your constant walk speed. If you pump your arms a lot faster, your character runs. This moves the avatar in the direction of the player's torso orientation, so it's important to know where the torso is facing regardless of head orientation. The advantage of this system is that it acts as an input command for the avatar to walk forward, so you automatically get in-game collision detection.
To change avatar walk directions, you just turn your own torso in the direction you want to walk. You can still walk around within your play area as usual, though it can become disorienting to walk backwards while swinging your arms to move forwards. This also requires you to use your arms to move, which creates a significant limitation on being able to "run and gun" in certain games. It also looks kind of stupid to stand in place and swing your arms or move them up and down furiously, but you already look kind of silly wearing an HMD anyways. It's also a lot less tiring than running your feet on a treadmill, but slightly less 'natural'. No additional hardware is required, however, and no teleportation means the player can actually be chased by monsters or run off of cliffs.

To get this to work, I have two tracks on either side of the player character. I grab the motion controller positions and project their points onto these two tracks (using dot products). If the left hand value is decreasing and the right hand value is increasing, or vice versa, and the hands are below the waist, we can read this as a movement input. I also keep track of the hand positions over the last 30 frames and average them together to smooth out the inputs, so that we're moving based on a running average instead of frame-by-frame inputs. Since we're running anywhere between 75-90 frames per second, this is very acceptable and responsive. It's worth noting that natural hand swinging movements don't actually move in a perfect arc centered on the torso. Through testing, I found that my arms move forward about twice as far as they move backwards, so this informs where you place the tracks. I've also experimented with calibrating the arm swinging movements per player, but there is a danger that the player will swing their arms around wildly and totally mess up the calibration.
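A simplified sketch of the projection-and-averaging logic described above. The window size, the opposite-direction test, and the speed formula are illustrative; the real implementation also gates on the hands being below the waist and maps the smoothed value onto tiered walk/run speeds:

```python
from collections import deque

class ArmSwingLocomotion:
    """Project each controller position onto a forward track beside
    the avatar (via a dot product) and read alternating hand motion
    as a walk input, smoothed over a sliding window of frames."""
    def __init__(self, window=30):
        self.left = deque(maxlen=window)
        self.right = deque(maxlen=window)

    def update(self, left_pos, right_pos, track_dir):
        # Dot product projects a hand position onto the track axis.
        l = sum(a * b for a, b in zip(left_pos, track_dir))
        r = sum(a * b for a, b in zip(right_pos, track_dir))
        self.left.append(l)
        self.right.append(r)
        if len(self.left) < 2:
            return 0.0
        dl = self.left[-1] - self.left[0]
        dr = self.right[-1] - self.right[0]
        # Hands swinging in opposite directions -> a movement input,
        # scaled by the averaged swing displacement.
        if dl * dr < 0:
            return (abs(dl) + abs(dr)) / len(self.left)
        return 0.0

# Simulate five frames of opposite hand swings along the +y track axis.
loco = ArmSwingLocomotion(window=4)
speed = 0.0
for i in range(5):
    speed = loco.update((0.0, i * 0.1, 0.0), (0.0, -i * 0.1, 0.0),
                        (0.0, 1.0, 0.0))
```

Hands moving the same direction (e.g. both reaching forward to grab something) produce no movement, which is what stops ordinary interactions from accidentally walking the avatar.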
You will want to keep the movement tracks at the sides of the player, so you'll have to either read the motion controller inputs in local space or transform the tracks into a coordinate space relative to the avatar. A future optimization could be to use traced hand arcs and project the motion controller positions onto them, but after trying to implement it, I realized it was additional complexity without a significant gain.

Option 6: Push button locomotion. This is by far the simplest locomotion solution: you face the direction you want your avatar to travel, and then you push a button on your motion controller. While it's simple to implement, it does have a few limitations. First, you will be using a button to move forward. The motion controllers don't have many buttons, so this is quite a sacrifice. Second, the button has two states, up or down, so you're either moving forward at maximum speed or not moving at all. The WASD keyboard controls have the same limitation, but it is familiar. If you want the player to use a game pad instead of motion controllers, you can also use the thumbsticks to give lateral and backward movement. However, I don't recommend using game pads for room scale VR because the cords are generally not long enough and you lose out on the new capabilities of motion controllers.

Option 7: Move on rails. Some games will have the avatar moving on a preset, fixed pathway. Maybe your avatar is walking down a trail at a constant speed, and you only control where they look and what they do? This can work well for certain games such as rail shooters, but it does mean that freedom of movement and control is taken away from the player.

Option 8: The player is riding on something that moves. In this case, you might be riding something like a horse and guiding its movement by pulling on reins. Or maybe you're on a magic carpet and you steer the carpet's movement by rotating a sphere of some sort.
These are really good alternative solutions, though there is one pretty big limitation: You can't really use them convincingly indoors.

Option 9: A combination of everything and fakery. If you are very careful about how you design your levels and environments, you could totally fake it and make it seem like there is actually locomotion when there really isn't. For example, if the player is walking around within a building, don't let the dimensions of the building be larger than the play space. If the player exits the building, perhaps they have to get on a horse to cross the street to get to the next building, or get in a car and drive to the next town over. Maybe when a player enters a new building, the orientation of the building interior is designed to align with the player's location so that you maximize the walkable space. The trick is to figure out clever ways to keep the player from moving outside the bounds of their play space while giving the appearance that they're moving vast distances in the VR world, and to do that, you want to minimize the actual amount of walking around the player has to do.

Object Collision:

This has been a problem which has challenged a lot of room scale VR developers. The problem is that players know that the virtual reality environment is not real, so if there is a blocking obstacle or geometry of some sort and the player can move by walking around in their living room, there is no reason why the player can't just walk through the object in virtual reality. This means that walls are merely suggestions. Doors are just decorations, whether they're locked or not. Monsters which would block your path can simply be passed through. In a sense, the player is a ghost who can walk through anything in VR because nothing is physically stopping them from doing so in their play space.
This can be dangerous as well if a player is in a tall building and decides to walk through the wall, exit the building, and fall (motion sickness!). Some developers have chosen to create volumes which cause the camera to fade to black when the player passes through a blocking volume. Other VR developers acknowledge the problem and claim that it's against a player's psychological instincts to pass through blocking geometry, so they ignore it and let players stop themselves. Through experimentation, I found that intentionally walking through walls in VR has a secondary danger: You forget which walls are virtual and which ones are not (SteamVR has a chaperone system which creates a grid within VR indicating where your real world walls are located). I spent a week trying to figure out how to solve this problem, and I can finally say that I have solved it. I can confidently say that I am an idiot because it took me so long to find such a simple solution. Let's back up for a moment though and examine where we've all been going wrong. The "wrong" thing to do is read the HMD position every frame and set the avatar to its corresponding position in world space. For example, if your HMD starts at [0,0,0] on frame #0 and then goes to [10,0,0] on frame #1, you don't set the avatar position to the equivalent world space coordinates. That's what I was doing, and it was wrong! What you actually want to do is get the movement delta for the HMD each frame and then apply an accrued movement input request to your avatar. So, in the example above, the delta would be calculated by subtracting the last frame's HMD position [0,0,0] from the current frame's HMD position [10,0,0] to get [10,0,0]. You then want to give your avatar a movement input along the resulting direction vector. This makes your head movement a movement input, no different than a WASD keyboard input. If the avatar is blocked by geometry, it won't pass through it.
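The delta approach can be sketched as follows. `try_move` is a hypothetical stand-in for the engine's collision-aware movement call (in Unreal terms, something like a movement input the character movement system resolves against geometry); the wall-at-x=2 collision rule below is purely illustrative:

```python
def hmd_to_movement_input(prev_hmd_pos, curr_hmd_pos, avatar_pos, try_move):
    """Feed the per-frame HMD delta to the avatar as a movement input
    (like a WASD input) instead of setting its position absolutely.
    `try_move` is any callable taking (position, delta) and returning
    the new, possibly blocked, position."""
    delta = tuple(c - p for p, c in zip(prev_hmd_pos, curr_hmd_pos))
    return try_move(avatar_pos, delta)

# Hypothetical collision rule: a wall at x = 2 blocks further movement.
def try_move(pos, delta):
    x = min(pos[0] + delta[0], 2.0)
    return (x, pos[1] + delta[1], pos[2] + delta[2])

# The HMD physically moved 3 m forward, but the avatar stops at the wall.
new_pos = hmd_to_movement_input((0, 0, 0), (3, 0, 0), (0, 0, 0), try_move)
```

Because the avatar only ever receives movement *requests*, the engine's normal collision response is what stops it at walls, which is exactly why this fix is so simple.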
And you can safely set the HMD camera on the character's shoulder so that it isn't clipping through the geometry either. In effect, you can't block the player from physically moving forward in their living room, but you can block the avatar and the HMD camera from moving forward with the player. It can feel slightly unsettling at first, but players will quickly learn that they can't move through solid objects in VR and they'll stop trying. Problem solved. It took me a week to figure out where I went wrong, and the solution was so elegantly simple that I felt like the world's dumbest developer (my other attempts were ten times more complicated, but were a necessary step on the eventual path to the correct solution).

Conclusion and final thoughts

VR is a very new form of media and there are no established hard rules to follow. There are very few (if any) experts, so take everyone's word with a grain of salt. The best way to find what works and doesn't work is to try things out and see if they work. I find that it saves a lot of time to spend a good ten minutes thinking through a design implementation in my imagination and playing it through in my head before trying to implement it. It's a lot faster and more efficient to anticipate problems in the conceptual phase and fix them before you start implementing. In my experience, you really can't do a lot of "up front" design for VR. There's a high risk that whatever you come up with just won't work right in VR. You'll want to use a fast-iteration software development life cycle and test frequently. You'll also want to get as many people as you can to try out your VR experience so that you can find trouble areas, blind spots, and figure out where your design assumptions are wrong. You'll also want to monitor your own health state carefully.
Developing in VR and iterating rapidly between VR and game dev can cause wooziness, and these woozy feelings can actually slow down the pace of your development as you try to mentally recover from the effects. Take breaks!

See also: talk/slides on VR design.
  19. I gave this talk yesterday evening at a VR meetup in Bellevue, Washington. It starts at about 3:00 and runs for about 50 minutes. I've also included my PowerPoint slide deck in case anyone is interested in downloading it and following along. Virtual_Reality_Design_Secrets.pptx
  20. Sure, feel free to modify away! The goal is to spread knowledge and help the industry become better.
  21. Thanks for the warm encouragement for writing an article! I was looking through my article history and found one I wrote a while back which covers a lot of the same stuff and goes into a bit more depth: It might be worthwhile to keep it updated and add enhancements over time, or to do a deep dive into a particular topic. It'd be nice to start a community knowledge-sharing repo for all things related to VR development -- because there should be no secrets in this industry. Thoughts?
  22. A) You can try to repeat your steps to reproduce the bug.
      B) You can create a log file which writes any buggy behavior to a file.
      C) You can create a system to record your game, and then replay it. If you encountered a bug, then replaying your input actions and events should reproduce the conditions which created the bug.
      D) At a bare minimum, write down what you were doing and the unexpected behavior you experienced.
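Option C (record and replay) can be sketched as a small input recorder. This is a hypothetical sketch, not code from any real engine: `InputEvent` and `InputRecorder` are illustrative names, and it assumes your simulation is deterministic (fixed timestep, seeded RNG), since replay only reproduces a bug if the same inputs always produce the same state.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// One recorded input: which frame it arrived on, which action, what value.
struct InputEvent {
    int frame;            // frame number the input arrived on
    std::string action;   // e.g. "MoveForward", "CastSpell" (illustrative)
    float value;          // axis value, or 1.0 for button presses
};

class InputRecorder {
    std::vector<InputEvent> log;
public:
    // Called from the live input handler: append every event as it happens.
    void record(int frame, const std::string& action, float value) {
        log.push_back({frame, action, value});
    }

    // Replay: hand each recorded event back to the same handler the live
    // game uses, so a deterministic simulation retraces the buggy run.
    template <typename Handler>
    void replay(Handler handle) const {
        for (const auto& e : log)
            handle(e);
    }

    std::size_t size() const { return log.size(); }
};
```

In practice you'd serialize the log to disk when the game exits (or crashes), so a tester's session file can be replayed on a developer's machine under a debugger.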
  23. Spellbound: April Update

    Yeah, I think realistically, I'm not going to succeed if I don't do any marketing or advertising. Marketing and advertising cost money, and there are a million and one ways to do it wrong and waste your money. So, how do you market and advertise your product without spending money? Get other people to do it for you! How do you do that? Create business partnerships! Create a product they want to be involved with! (Way easier said than done, but high production values are the starting point.)

    I think in my case, it's going to be an uphill slog where I have to fight hard for every single sale, unlike those other lucky game devs who have somehow gotten tons and tons of media attention throughout their development cycle and millions in venture capital funding, followed by a successful release... I'm a bit jelly at how easy they make it look, but I imagine there's a lot happening behind the scenes that few people know about.

    When it comes to waiting patiently for success, it may never actually happen. Your patience is determined not by your mental state of mind, but by the patience of your bank account (and maybe wife/girlfriend). In the case of VR, we may need to wait a while for the industry to become mainstream in order for indies to be profitable, but "profits" are less important right now than just surviving, getting market share, and making a positive, early impression in the industry.
  24. Spellbound: April Update

    It's been a tough few months. Spellbound is not selling very well on Steam. It's to be expected, since the game is both in Early Access and has zero marketing behind it, but it's still a disappointing reality. I'm optimistic that it will eventually change. All I have to do is keep working away at the game, add more content, start marketing and promoting it, and create more interest and attention.

    The biggest challenge right now is financial. I'm quite broke, which isn't really news for anyone, but it is very limiting. It means I can't hire anyone to help me. It means I can't spend money on marketing and advertising. It means I have to spend my time working on side projects which bring in extra money. Ultimately, it lengthens my timeline to final delivery. My costs have become extremely lean: my operating costs are now $400/month, plus food and rent. I have to do everything without spending money, because I don't really have money to spend.

    For the month of April, I have spent most of my efforts on trying to figure out how to make money on the side and how to market and advertise my game. I'm working in an office of film guys who offer their clients 360 video services and will fly around the world to film and make 360 video content. Their target hardware platform is the GearVR, so they recently pinged me to help them produce their content. Easy! The first project was a short two-minute 360 film for a film producer. He needed it made in time for NAB so that he could give demos and get work. I delivered. He was happy. Now I just need to invoice him. I'm certain there will be much more work in the future.

    Dell is another client. Their project is a bit more involved: they have three different philanthropy programs they want to promote, so I'm creating their application for GearVR as well.
What's interesting about this project is that it was originally just going to be a series of 360 videos viewed in GearVR, but now that I'm involved, it has become a much more interactive VR experience. The film-making and VR gaming industries are merging, and this product is a testament to that. I foresaw this over two years ago, but didn't really expect to be one of the people bringing our industries closer together. It's exciting, and I think this project will raise the bar for everyone else doing 360 videos.

I've also been doing a little bit of consulting on the side. A fellow VR company is trying to get their GearVR app submitted to the Oculus Store, but their problem is that the phone overheats within 5 minutes. They asked me to come help them troubleshoot this. So, how do you troubleshoot a nebulous problem like this? With super debugging skills (see my last post). You can't exactly set a breakpoint on a particular line of code or point at one thing and blame it; you have to have a really good, thorough debugging process. Anyways, I helped them out and they're now on the right path to resolving their problem. But I'm not just doing VR consulting to make money. There really isn't enough money in it to make it worth my time yet, and it comes with opportunity costs.

I'm also selling my girlfriend's product, "The Perfect Wine Opener", at various street fairs, home shows, and events around the Pacific Northwest. Yeah. I'm a programmer, selling products to complete strangers. And I'm actually very good at it. Like, scary good. Put me in your crappiest show in the crappiest booth, and I will sell the shit out of your wine openers. Two weekends ago, I almost sold out completely and made around $2,000 -- in a weekend! This weekend, I went to an outdoor show, which was completely miserable because it was cold, rainy and windy, and I still sold $1,200 of product.
I've had a guy go out to get me a coffee, and in the three minutes he was gone, I sold another $100. I honestly think that sales is a wonderful, valuable skill to have. Think about how amazing it would be to be both a skilled salesman and a VR content creator at the same time: not only do you understand exactly what your customers want to get out of your experiences, you can also build it. So, it's a bit of a perplexing wonder that Spellbound isn't selling very well, considering how good I am at selling wine openers.

I'm treating all of this as a science problem to solve (we engineers are good at these!). I'm going to use the same process I use to debug software to debug my marketing problem: Why isn't anyone buying my game? What's my hypothesis? How do I test my hypothesis? How do I measure the reaction? What assumptions am I making?

Last week, I assumed that if I created a Reddit AMA, I would get a lot of extra traffic to my storefront and see a measurable bump in sales. So, I spent the whole afternoon answering questions about VR game development. Surely people are interested in virtual reality, game development, philosophy, ethics, war, and all of them together, right?! Those were all stupid assumptions. Apparently, there wasn't much interest. So, why do some AMAs get 1,500 questions while mine got 24 questions and engagement with 7 people? Probably because I'm not a celebrity or really weird/interesting? I have no idea. Regardless, the test results showed absolutely zero change in traffic or sales from my established baseline. My hypothesis was proven wrong. And that's okay! It's 100% acceptable to be wrong! In fact, the faster you can figure out that you're wrong, the faster you can quit doing the wrong things and try different wrong things. Eventually, you'll try something that isn't wrong and you'll do something right! The key is to not get down when something negative happens.
I guess this is what makes me so good at sales. Every rejection, every objection, is perfectly fine. Just be like a rock at the bottom of a stream and let it roll off of you like water. Learn and move on fast. It's all about having a positive, optimistic attitude, no matter what is happening. Broke? Keep your head up and keep charging forward! Getting shot at? Keep your head down, but don't stop smiling -- you're still alive! People quit your team? That's too bad, and it's certainly a setback, but you know who won't ever quit your team? Yourself! You'll get new people eventually -- success attracts them. An indomitable spirit, a positive attitude, and a strong work ethic will steamroll any obstacle between you and success. I'm a betting man, and I would still bet on myself. The lack of funds is just a small, temporary challenge; there are much bigger problems to solve in my future. I've got an industry to define and build.

On that front, I have been going through the process of getting Spellbound onto the Oculus Store. It's every bit as painful a process as you'd imagine. First, you have to make your marketing materials and create your storefront. Then you have to submit a build to Oculus and go through a QA review process. I've been rejected three times. The first time, I didn't have an "entitlement check", which means I wasn't checking whether the game is a legitimate purchase. I missed that in the plethora of documentation. The second time around, I ran into a really, really annoying "black screen" bug which only happened on the Oculus Rift with a packaged build, for levels which contained sub-levels. This took me over a week to isolate and identify. It was obviously a game-breaking bug. The third time I got rejected, it was because my game isn't hitting the required 90 frames per second. This is where I'm currently stuck. My scenes are complex and heavy, and I average around 45-60 frames per second.
Once I can consistently hit that magical 90fps mark in all parts of my game, I'll resubmit. It's a tough benchmark to hit while trying not to lose content quality. But after I get accepted onto the Oculus Store, I'll have my game available on two different distribution channels. Then I can have no sales on both channels! :D

But no, really, it's actually a good thing. It increases my "discoverability", and any additional marketing and advertising I eventually push out will make my game easier to find and purchase. I think the way this works is that the more successful a game is on a storefront, the higher it "ranks" in the listings and featured sections. The more it is featured, the more eyeballs it gets and the more it is purchased. It turns into a self-promoting cycle which snowballs. Naturally, these storefronts want to keep promoting products which sell well. It only makes sense, right? There's no incentive to promote garbage which doesn't sell: if you take 30% from every sale and promote stuff that doesn't sell, you don't make much money. The key is to be able to say, "This will sell well. People who buy it will like it." Am I there yet? I'm not sure.

My biggest problem is that my game doesn't have enough content to make it compelling. But that's also a temporary problem. I am also very interested in making this into a multiplayer game. But multiplayer is going to be challenging if there aren't enough players to play with yet, so I also need to grow my player base to justify it. Currently, I average about 1-2 concurrent sessions globally, so spending a few months building multiplayer capabilities would be a wasted effort. So: expand the player base, add more content, add multiplayer, make the game better, rinse, repeat.

In another recent development, I met with the Nullspace VR team last week and tried out their Hardlight VR haptics suit.
They've recently had a successful Kickstarter and have been getting a lot of positive press. They're obviously a hardware company, and as they say, "hardware is hard!" Hardware is another platform to build content for. Anyways, I tried out their suit. It was pretty cool. I felt that the VR demos they gave didn't really do their tech justice -- I could do way better -- but it's cool tech. So, I'm going to add support for their hardware in a future release of Spellbound. When a zombie claws at you, you'll feel it on your body. When you die and they munch your corpse, you'll feel it. When you get hit by a sizzling wraith spell, you'll feel the impact and burn on your body. It'll be amazing.

They played Spellbound and really liked it as well, so in a few weeks or months, we're both going to announce another title with official support for their hardware. I think this will help both of our teams: I need the marketing exposure, they need content to support their hardware, and consumers need amazing, immersive VR experiences which take VR to the next level. Anyways, VR is going to be a big deal in a few years. I hope I have a part in building its future. Right now, I have to make sure I can survive and be a part of it.