Search the Community

Showing results for tags 'Machine Learning'.

Found 10 results

  1. What are your opinions? Is the industry being fair?
  2. Hi guys, it's been some weeks since I started thinking about creating an AI for a mobile game that I play. It can be played on PC as well, so I can use software tools. The problem is that I'm not really into "AI stuff", so I don't know where to start or how my AI can do what I want it to do. In this first post I just want some tips about one thing my AI should do. I'm not going to describe the whole game, because I prefer not to, and I don't think it's necessary either. One of the things my AI should do is this: recognise the map and the border of the map (basically the area where I can tap), and recognise all the defenses in the map (which you can see, because you see the whole area from above). Just this for now. I don't really know how the AI can recognise all the different defenses in the area just by seeing them, and it needs to be precise; we are talking about millimetres. Maybe the AI could recreate the map in its own software, but I don't know if I'm describing this right, so I'm just going to leave it to you; hopefully someone will clarify things for me. Thanks. Edit: I just thought that I could even recreate the map by hand, with a hypothetical tool containing all the defenses and their stats.
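
For the "recognise the defenses just by seeing them" part, one common starting point is template matching on screenshots with OpenCV. The sketch below is purely illustrative: the file names and the 0.8 threshold are placeholders, and a real bot would need one template per defense type and per zoom level.

    # Illustrative only: match a known defense sprite against a screenshot.
    # "screenshot.png" and "cannon_template.png" are placeholder file names.
    import cv2
    import numpy as np

    screen = cv2.imread("screenshot.png")
    template = cv2.imread("cannon_template.png")
    h, w = template.shape[:2]

    result = cv2.matchTemplate(screen, template, cv2.TM_CCOEFF_NORMED)
    locations = np.where(result >= 0.8)          # 0.8 is an arbitrary confidence threshold
    for y, x in zip(*locations):
        print(f"possible cannon at ({x}, {y}) to ({x + w}, {y + h})")
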
  3. The world of games has changed a lot over the years, and as developers continue to innovate, games could change more in the next 5 years than they did in the last 45. Games are already much more connected, more social, and ever-changing as new content drops all the time. All of this generates enormous amounts of data every day, and to continue to reach and delight players, that data, and the technologies you use to act on it, are critical. Here's a quick look at how EA uses AI and ML to improve the player experience through content recommendations, bad-actor detection, and even AI-powered world building. Watch now: https://amzn.to/2W39USJ
  4. Below is my preliminary draft design for the AI system within Spellbound. I'm slowly migrating away from scripted expert systems towards a more dynamic and fluid AI system based on machine learning and neural networks. I may be crazy to attempt this, but I find this topic fascinating. I ended up having a mild existential crisis as a result of this. Let me know what you think or if I'm missing something.

Artificial Intelligence

Objectives: Spellbound is going to be a large open world with many different types of characters, each with different motives and behaviors. We want this open world to feel alive, as if the characters within the world are inhabitants. If we went with pre-scripted behavioral patterns, the characters would be unable to learn and adapt to changes in their environment. It would also be very labor intensive to write specific AI routines for each character. Ideally, we just give every character a self-adapting brain and let them loose to figure out the rest for themselves.

Core Premise (very dense, take a minute to soak this in): Intelligence is not a fixed intrinsic property of creatures. Intelligence is an emergent property which results directly from the neural topology of a biological brain. True sentience can be created if the neural topology of an intelligent being is replicated with data structures and the correct intelligence model. If intelligence is an emergent property, and emergent properties are simple rule sets working together, then creating intelligence is a matter of discovering the simple rule sets.

Design: Each character has its own individual Artificial Neural Network (ANN). This is a weighted graph which uses reinforcement learning. Throughout the character's lifespan, the graph will become more weighted towards rewarding actions and away from displeasurable ones. Any time an action causes a displeasure to go away or brings a pleasure, that neural pathway will be reinforced. If a neural pathway has not been used in a long time, we reduce its weight. Over time, the creature will learn.

A SIMPLE ANN is just a single cluster of connected neurons. Each neuron is a "node" which is connected to nearby neurons. Each neuron receives inputs and generates outputs. The neural outputs always fire and activate a connected neuron. When a neuron receives enough inputs, it itself fires and activates downstream neurons. So, a SIMPLE ANN receives inputs and generates outputs which are a reaction to the inputs. At the end of a neural cycle, we have to give response feedback to the ANN. If the neural response was positive, we strengthen the neural pathway by increasing the neural connection weights. If the response was negative, we decrease the weights of the pathway. With enough trial runs, we will find the neural pathway for the given inputs which creates the most positive outcome.

The SIMPLE ANN can be considered a single cluster. It can be abstracted into a single node for the purposes of creating a higher layer of connected node networks. When we have multiple source inputs feeding into our neural network cluster and each node is running its most optimal neural pathway depending on the input, we get complex unscripted behavior. A brain is just a very large collection of layered neural nodes connected to each other. We'll call this our "Artificial Brain" (AB).

Motivation, motivators (rule sets): All creatures have a "desired state" they want to achieve and maintain. Think about food: when you have eaten and are full, your state is at an optimally desired state. When time passes, you become increasingly hungry. Being just a teensy bit hungry may not be enough to compel you to change your current behavior, but as time goes on and your hunger increases, your motivation to eat increases until it supersedes the motives for all other actions. We can create a few very simple rules to create complex, emergent behavior.

Rule 1: Every creature has a desired state they are trying to achieve and maintain. Some desired states may be unachievable (e.g., infinite wealth).
Rule 2: States are changed by performing actions. Actions may change one or more states at once (a one-to-many relationship).
Rule 3: "Motive" is created by a delta between the current state (CS) and the desired state (DS). The greater the delta between CS and DS, the more powerful the motive is. (Is this a linear graph or an exponential graph?)
Rule 4: "Relief" is the sum of all deltas between CS and DS provided by an action.
Rule 5: A creature can have multiple competing motives. The creature will choose the action which provides the greatest amount of relief.
Rule 6: Some actions are a means to an end and can be chained together (action chains). If you're hungry and the food is 50 feet away from you, you can't just start eating. You first must move to the food to get within interaction radius, then eat it.

Q: How do we create an action chain?
Q: How do we know that the action chain will result in relief?
A: We generally know what desired result we want, so we work backwards. What action causes the desired result (DR)? Action G does (learned from experience). How do we perform Action G? We have to perform Action D, which causes Action G. How do we cause Action D? We perform Action A, which causes Action D. Therefore, G<-D<-A; so we should do A->D->G->DR. Back propagation may be the contemporary approach to changing graph weights, but it's backwards.

Q: How does long term planning work?
Q: What is a conceptual idea? How can it be represented?
A: A conceptual idea is a set of nodes which is abstracted to become a single node?

Motivators (why we do the things we do): Hunger, Body Temperature, Wealth, Knowledge, Power, Social Validation, Sex, Love/Compassion, Anger/Hatred, Pain Relief, Fear, and Virtues, Vices & Ethics.

Notice that all of these motivators are actually psychological motivators. That means they happen in the head of the agent rather than being physical motivators. You can be physically hungry, but psychologically, you can ignore the pains of hunger. The psychological thresholds would be different per agent. Therefore, all of these motivators belong in the "brain" of the character rather than all being attributes of an agent's physical body. Hunger and body temperature would be physical attributes, but they would also be "psychological tolerances".

Psychological Tolerances:

{motivator} => 0 [------------|-----------o----|----] 100
                 A            B           C    D    E

A - The lowest possible bound for the motivator.
B - The lower threshold point for the motivator. If the current state falls below this value, the desired state begins to affect actions.
C - The current state of the motivator.
D - The upper threshold point for the motivator. If the current state exceeds this value, the desired state begins to affect actions.
E - The highest bound for the motivator.

The A and E bound values are fixed and universal. The B and D threshold values vary by creature. Where you place them can make huge differences in behavior.
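
A minimal, illustrative sketch of Rules 3 to 5 and the tolerance band above. Everything here (names, numbers, actions) is a placeholder invented for the example, not Spellbound code:

    # Toy illustration: motive = delta between current and desired state once the
    # current state leaves its comfort band, relief = sum of deltas an action
    # removes, and the agent picks the action with the greatest total relief.
    MOTIVATORS = {
        # name: (current, desired, lower_threshold_B, upper_threshold_D)
        "hunger":      (80, 20, 30, 60),
        "temperature": (50, 50, 35, 65),
    }

    ACTIONS = {
        # action: {motivator: new_current_state_after_action}
        "eat":         {"hunger": 25},
        "sit_by_fire": {"temperature": 55, "hunger": 82},  # one action, many states
    }

    def motive(current, desired, low, high):
        # Only motivating once the current state leaves the [low, high] band.
        return abs(current - desired) if not (low <= current <= high) else 0.0

    def relief(action):
        total = 0.0
        for name, new_state in ACTIONS[action].items():
            cur, des, low, high = MOTIVATORS[name]
            total += motive(cur, des, low, high) - motive(new_state, des, low, high)
        return total

    best = max(ACTIONS, key=relief)   # Rule 5: greatest total relief wins
    print(best, relief(best))         # -> "eat" for the numbers above
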
Psychological Profiles: We can assign a class of creatures a list of psychological tolerances and set their current state to some preset values. The behavioral decisions and subsequent actions will be driven by the psychological profile, based upon the actions which create the greatest sum of psychological relief. The psychological profile will be the input into an artificial neural network, and the outputs will be the range of actions which can be performed by the agent. Ideally, the psychological profile state will drive the ANN, which drives actions, which changes the state of the psychological profile, which creates a feedback loop of reinforcement learning.

Final Result: We do not program scripted behaviors; we assign psychological profiles and lists of actions. Characters will have psychological states which drive their behavioral patterns. Simply by tweaking the psychological desires of a creature, we can create emergent behavior resembling intelligence. A zombie would always be hungry, and feasting on flesh would provide temporary relief. A goblin would have a strong compulsion for wealth, so they'd be very motivated to perform actions which ultimately result in gold. Rather than spending lots of time writing expert-systems-style AI, we create a machine learning type of AI.

Challenges: I have never created a working artificial neural network type of AI.

Experimental research and development: The following notes are crazy talk which may or may not be feasible. They may need more investigation to measure their merit as viable approaches to AI.

Learning by Observation: Our intelligent character doesn't necessarily have to perform an action themselves to learn about its consequences (reward vs. regret). If they watch another character perform an action and receive a reward, the intelligent character creates a connection between an action and its consequence.

Exploration Learning: A very important component to getting a simple ANN to work most efficiently is to get the neurons to find and establish new connections with other neurons. If we have a neural connection topology which always results in a negative response, we'll want to generate a new connection at random to a nearby neuron.

Exploration Scheduling: When all other paths are terrible, the new path becomes better and we "try it out" because there's nothing better. If the new pathway happens to result in a positive outcome, suddenly it gets much stronger. This is how our simple ANN discovers new unscripted behaviors. The danger is that we will have a sub-optimal behavior pattern which generates some results, but they're not the best results. We'd use the same neural pathway over and over again because it is a well-travelled path.

Exploration Rewards: In order to encourage exploring different untravelled paths, we gradually increase the "novelty" reward value for taking such a pathway. If travelling this pathway results in a large reward, the pathway is highly rewarded and may become the most travelled path.

Dynamic Deep Learning: On occasion, we'll also want to create new neurons at random and connect them to at least one other nearby downstream neuron. If a neuron is not connected to any other neurons, it becomes an "island" and must die. When we follow a neural pathway, we are looking at two costs: the connection weight and the path weight. We always choose the shortest path with the least weight. Rarely used pathways will have their weight decrease over a long period of time. If a path weight reaches zero, we break the connection and our brain "forgets" the neural connection.

Evolutionary & Inherited Learning: It takes a lot of effort for a neural pathway to become developed, so we will want to speed up that development. If a child is born to two parents, those parents will rapidly increase the neural pathways of the child by sharing their own pathways. This is one way to "teach". Thus, children will think very much like their parents do. Other characters will also share their knowledge with other characters. In order for knowledge to spread, it must be interesting enough to be spread, so a character will generally share the most interesting knowledge they have.

Network Training & Evolutionary Inheritance: An untrained ANN results in an uninteresting character, so we have to have at least a trained base preset for a brain. This is consistent with biological brains, because our brains have been pre-configured through evolutionary processes and come pre-wired with certain regions of the brain being universally responsible for processing certain input types. The training method will be rudimentary at first, to get something at least passable, and it can be done as a part of the development process. When we release the game to the public, the creatures will still be training. The creatures which had the most "success" will become a part of the next generation. These brain configurations can be stored on a central database somewhere in the cloud. When a player begins a new game, we download the most recent generation of brain configurations. Each newly instanced character may have a chance of a random mutation. When the game completes, if there were any particular brains which were more successful than the current strain, we select them for "breeding" with other successful strains so that the next generation is an amalgamation of the most successful previous generations. We'll probably begin to see some divergence and brain species over time?

Predisposition towards Behavior Patterns via Bias: Characters will also have slight predispositions which are assigned at birth. 50% of their predisposition is innate to their creature class, 25% is genetically passed down by the parents, and 25% is randomly chosen. A predisposition causes some pleasures and displeasures to be more or less intense. This will skew the weightings of a developing ANN a bit more heavily to favor particular actions. This is what will create a variety of interests between characters, and will ultimately lead to a variety of personalities. We can create very different behavior patterns in our ABs by tweaking the amount of pleasure and displeasure various outputs generate for our creature. The brain of a goblin could derive much more pleasure from getting gold, so it will have strong neural pathways which result in getting gold.

AI will be able to interact with interactable objects. An interactable object has a list of ways it can be interacted with. Interactable objects can be used to interact with other interactable objects. Characters are considered to be interactable objects. The AI has a sense of ownership of various objects. When it loses an object, it is a displeasurable feeling; when it gains an object, it is a pleasurable feeling. Stealing from an AI will cause it to be unhappy, and it will learn about theft and begin trying to avoid it. Giving a gift to an AI makes it very happy. Trading one object for another will transfer ownership of the objects. There is no "intrinsic value" to an object. The value of an object is based on how much the AI wants it compared to how much it wants the other object in question.

Learning through Socialization: AIs will socialize with each other. This is the primary mechanism for knowledge transfer. They will generally tell each other about recent events or interests, choosing to talk about the most interesting events first. If an AI doesn't find a conversation very interesting, they will stop the conversation and leave (terminating condition). If a threat is nearby, the AI will be very interested in it and will share it with nearby AI. If a player has hurt or killed a townsfolk, all of the nearby townsfolk will be very upset and may attack the player on sight. If enough players attack the townsfolk, the townsfolk AI will start to associate all players with negative feelings and may attack a player on sight even if they didn't do anything to aggravate the townsfolk AI.
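
A toy sketch of the pathway bookkeeping described above under Exploration Rewards and Dynamic Deep Learning. The action names and constants are invented for illustration only:

    # Toy weighted-graph bookkeeping: reinforce pathways that led to reward,
    # decay unused ones, forget a connection when its weight hits zero, and add
    # a small novelty bonus so rarely travelled paths eventually get retried.
    import random

    class Pathway:
        def __init__(self):
            self.weight = 1.0
            self.novelty = 0.0   # grows while the path goes untravelled

    pathways = {"flee": Pathway(), "fight": Pathway(), "hide": Pathway()}

    def choose():
        # score = learned weight + accumulated novelty bonus
        return max(pathways, key=lambda name: pathways[name].weight + pathways[name].novelty)

    def update(chosen, reward):
        for name, p in list(pathways.items()):
            if name == chosen:
                p.weight += reward        # reinforce (or weaken, if reward < 0)
                p.novelty = 0.0
            else:
                p.weight *= 0.99          # slow decay of unused pathways
                p.novelty += 0.05         # exploration reward builds up
            if p.weight <= 0.0:
                del pathways[name]        # the brain "forgets" this connection

    for _ in range(200):
        if not pathways:
            break
        action = choose()
        update(action, random.uniform(-0.5, 1.0))   # stand-in for real feedback
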
  5. Packt

    Unity ML Agents

    Learn about Unity ML-Agents in this article by Micheal Lanham, a tech innovator and an avid Unity developer, consultant, manager, and author of multiple Unity games, graphics projects, and books.

Unity has embraced machine learning, and deep reinforcement learning in particular, with the aim of producing a working deep reinforcement learning (DRL) SDK for game and simulation developers. Fortunately, the team at Unity, led by Danny Lange, has succeeded in developing a robust, cutting-edge DRL engine capable of impressive results. Unity uses a proximal policy optimization (PPO) model as the basis for its DRL engine; this model is significantly more complex and may differ in some ways. This article will introduce the Unity ML-Agents tools and SDK for building DRL agents to play games and simulations. While this tool is both powerful and cutting-edge, it is also easy to use and provides a few tools to help us learn concepts as we go. Be sure you have Unity installed before proceeding.

Installing ML-Agents

In this section, we cover a high-level overview of the steps you will need to take in order to successfully install the ML-Agents SDK. This material is still in beta and has already changed significantly from version to version. Now, jump on your computer and follow these steps:

• Be sure you have Git installed on your computer; it works from the command line. Git is a very popular source code management system, and there are plenty of resources on how to install and use Git for your platform. After you have installed Git, just be sure it works by test-cloning a repository (any repository).
• Open a command window or a regular shell. Windows users can open an Anaconda window.
• Change to a working folder where you want to place the new code and enter the following command (Windows users may want to use C:\ML-Agents): git clone https://github.com/Unity-Technologies/ml-agents
• This will clone the ml-agents repository onto your computer and create a new folder with the same name. You may want to take the extra step of also adding the version to the folder name. Unity, and pretty much the whole AI space, is in continuous transition, at least at the moment, which means new changes are always happening. At the time of writing, we will clone to a folder named ml-agents.6, like so: git clone https://github.com/Unity-Technologies/ml-agents ml-agents.6
• Create a new virtual environment for ml-agents and set it to Python 3.6. On Windows: conda create -n ml-agents python=3.6. On Mac, use the documentation for your preferred environment.
• Activate the environment, again using Anaconda: activate ml-agents
• Install TensorFlow. With Anaconda, we can do this using the following: pip install tensorflow==1.7.1
• Install the Python packages. On Anaconda, change from the root folder into the cloned folder (cd ml-agents or cd ml-agents.6, for example), then run pip install -e . (or pip3 install -e .). This will install all the required packages for the Agents SDK and may take several minutes. Be sure to leave this window open, as we will use it shortly.

This should complete the setup of the Unity Python SDK for ML-Agents. In the next section, we will learn how to set up and train one of the many example environments provided by Unity.

Training an agent

We can now jump in and look at examples where deep reinforcement learning (DRL) is put to use. Fortunately, the new agent's toolkit provides several examples to demonstrate the power of the engine. Open up Unity or the Unity Hub and follow these steps:

• Click on the Open project button at the top of the Project dialog.
• Locate and open the UnitySDK project folder as shown in the following screenshot: Opening the Unity SDK Project
• Wait for the project to load and then open the Project window at the bottom of the editor. If you are asked to update the project, say yes or continue. Thus far, all of the agent code has been designed to be backward compatible.
• Locate and open the GridWorld scene as shown in this screenshot: Opening the GridWorld example scene
• Select the GridAcademy object in the Hierarchy window. Then direct your attention to the Inspector window and, beside Brains, click the target icon to open the Brain selection dialog: Inspecting the GridWorld example environment
• Select the GridWorldPlayer brain. This is a player brain, meaning that a player (you) can control the game.
• Press the Play button at the top of the editor and watch the grid environment form. Since the game is currently set to a player brain, you can use the WASD controls to move the cube. The goal is much like the FrozenPond environment we built a DQN for earlier: you have to move the blue cube to the green + symbol and avoid the red X.

Feel free to play the game as much as you like. Note how the game only runs for a certain amount of time and is not turn-based. In the next section, we will learn how to run this example with a DRL agent.

What's in a brain?

One of the brilliant aspects of the ML-Agents platform is the ability to switch from player control to AI/agent control very quickly and seamlessly. In order to do this, Unity uses the concept of a brain. A brain may be either player-controlled (a player brain) or agent-controlled (a learning brain). The brilliant part is that you can build and test a game as a player, then turn the game loose on an RL agent. This has the added benefit of making any game written in Unity controllable by an AI with very little effort.

Training an RL agent with Unity is fairly straightforward to set up and run. Unity uses Python externally to build the learning brain model. Using Python makes far more sense since, as we have already seen, several DL libraries are built on top of it. Follow these steps to train an agent for the GridWorld environment:

• Select the GridAcademy again and switch the Brains from GridWorldPlayer to GridWorldLearning as shown: Switching the brain to use GridWorldLearning
• Click on the Control option at the end. This simple setting is what tells the brain it may be controlled externally. Be sure to double-check that the option is enabled.
• Select the trueAgent object in the Hierarchy window and then, in the Inspector window, change the Brain property under the Grid Agent component to the GridWorldLearning brain: Setting the brain on the agent to GridWorldLearning
• For this sample, we want to switch our Academy and Agent to use the same brain, GridWorldLearning. Make sure you have an Anaconda or Python window open and set to the ML-Agents/ml-agents folder or your versioned ml-agents folder.
• Run the following command in the Anaconda or Python window using the ml-agents virtual environment: mlagents-learn config/trainer_config.yaml --run-id=firstRun --train
• This will start the Unity PPO trainer and run the agent example as configured. At some point, the command window will prompt you to run the Unity editor with the environment loaded.
• Press Play in the Unity editor to run the GridWorld environment.
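
As an aside, once the brain is set to external control you can also drive the environment from a short Python script of your own rather than mlagents-learn. The sketch below is only a rough illustration: the import path and attribute names assume the 0.6-era beta API (mlagents.envs.UnityEnvironment), and the GridWorld action count is assumed, so check both against the docs in your cloned ml-agents folder.

    from mlagents.envs import UnityEnvironment   # import path assumed for the 0.6-era beta
    import numpy as np

    # file_name=None attaches to the running editor; press Play when prompted.
    env = UnityEnvironment(file_name=None)
    brain_name = env.external_brain_names[0]      # e.g. GridWorldLearning with Control ticked
    info = env.reset(train_mode=True)[brain_name]

    for _ in range(100):
        # Random discrete actions; 5 is an assumed action count for GridWorld.
        action = np.random.randint(0, 5, size=len(info.agents))
        info = env.step(action)[brain_name]
        if any(info.local_done):
            info = env.reset(train_mode=True)[brain_name]

    env.close()
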
Shortly after, you should see the agent training, with the results being output in the Python script window: Running the GridWorld environment in training mode

Note how the mlagents-learn script is the Python code that builds the RL model to run the agent. As you can see from the output of the script, there are several parameters, or what we refer to as hyperparameters, that need to be configured. Let the agent train for several thousand iterations and note how quickly it learns. The internal model here, called PPO, has been shown to be a very effective learner at multiple forms of tasks and is very well suited for game development. Depending on your hardware, the agent may learn to perfect this task in less than an hour. Keep the agent training, and look at more ways to inspect the agent's training progress in the next section.

Monitoring training with TensorBoard

Training an agent with RL, or any DL model for that matter, is often not a simple task and requires some attention to detail. Fortunately, TensorFlow ships with a set of graph tools called TensorBoard that we can use to monitor training progress. Follow these steps to run TensorBoard:

• Open an Anaconda or Python window and activate the ml-agents virtual environment. Don't shut down the window running the trainer; we need to keep that going.
• Navigate to the ML-Agents/ml-agents folder and run the following command: tensorboard --logdir=summaries
• This will run TensorBoard with its own built-in web server. You can load the page using the URL that is shown after you run the previous command.
• Enter the URL for TensorBoard as shown in the window, or use localhost:6006 or machinename:6006 in your browser. After an hour or so, you should see something similar to the following: The TensorBoard graph window

In the preceding screenshot, you can see each of the various graphs denoting an aspect of training. Understanding each of these graphs is important to understanding how your agent is training, so we will break down the output from each section.

Environment: This section shows how the agent is performing overall in the environment. A closer look at each of the graphs is shown in the following screenshot with their preferred trend: A closer look at the Environment section plots

• Cumulative Reward: This is the total reward the agent is maximizing. You generally want to see this going up, but there are reasons why it may fall. It is always best to keep rewards in the range of -1 to 1; if you see rewards outside this range on the graph, you will want to correct that as well.
• Episode Length: It is usually a better sign if this value decreases. After all, shorter episodes mean more training. However, keep in mind that the episode length could increase out of need, so this one can go either way.
• Lesson: This represents which lesson the agent is on and is intended for Curriculum Learning.

Losses: This section shows graphs that represent the calculated loss or cost of the policy and value. A screenshot of this section is shown next, again with arrows showing the optimum preferences: Losses and preferred training direction

• Policy Loss: This determines how much the policy is changing over time. The policy is the piece that decides the actions, and in general this graph should show a downward trend, indicating that the policy is getting better at making decisions.
• Value Loss: This is the mean loss of the value function. It essentially models how well the agent is predicting the value of its next state. Initially, this value should increase, and then, after the reward has stabilized, it should decrease.

Policy: PPO uses the concept of a policy rather than a model to determine the quality of actions. The next screenshot shows the policy graphs and their preferred trend: Policy graphs and preferred trends

• Entropy: This represents how much the agent is exploring. You want this value to decrease as the agent learns more about its surroundings and needs to explore less.
• Learning Rate: Currently, this value is set to decrease linearly over time.
• Value Estimate: This is the mean value visited by all states of the agent. This value should increase, representing the growth of the agent's knowledge, and then stabilize.

Let the agent run to completion and keep TensorBoard running. Then go back to the Anaconda/Python window that was training the brain and run this command: mlagents-learn config/trainer_config.yaml --run-id=secondRun --train

You will again be prompted to press Play in the editor; be sure to do so. Let the agent start training and run for a few sessions. As you do so, monitor the TensorBoard window and note how the secondRun is shown on the graphs. Feel free to let this agent run to completion as well, but you can stop it now if you want to.

In previous versions of ML-Agents, you needed to first build a Unity executable as a game-training environment and run that; the external Python brain would still run the same way. This method made it very difficult to debug any code issues or problems with your game. All of these difficulties were corrected with the current method. Now that we have seen how easy it is to set up and train an agent, we will go through the next section to see how that agent can be run without an external Python brain, directly in Unity.

Running an agent

Using Python to train works well, but it is not something a real game would ever use. Ideally, what we want to be able to do is build a TensorFlow graph and use it in Unity. Fortunately, a library called TensorFlowSharp was constructed that allows .NET to consume TensorFlow graphs. This allows us to build offline TFModels and later inject them into our game. Unfortunately, we can only use trained models in this manner and not train them, at least not yet. Let's see how this works using the graph we just trained for the GridWorld environment, using it as an internal brain in Unity. Follow the exercise in the next section to set up and use an internal brain:

• Download the TFSharp plugin from here.
• From the editor menu, select Assets | Import Package | Custom Package...
• Locate the asset package you just downloaded and use the import dialogs to load the plugin into the project.
• From the menu, select Edit | Project Settings. This will open the Settings window (new in 2018.3).
• Under the Player options, locate the Scripting Define Symbols, set the text to ENABLE_TENSORFLOW, and enable Allow Unsafe Code, as shown in this screenshot: Setting the ENABLE_TENSORFLOW flag
• Locate the GridWorldAcademy object in the Hierarchy window and make sure it is using Brains | GridWorldLearning. Turn the Control option off under the Brains section of the Grid Academy script.
• Locate the GridWorldLearning brain in the Assets/Examples/GridWorld/Brains folder and make sure the Model parameter is set in the Inspector window, as shown in this screenshot: Setting the model for the brain to use

The Model should already be set to the GridWorldLearning model. In this example, we are using the TFModel that is shipped with the GridWorld example. Press Play to run the editor and watch the agent control the cube. Right now, we are running the environment with the pre-trained Unity brain. In the next section, we will look at how to use the brain we trained in the previous section.

Loading a trained brain

All of the Unity samples come with pre-trained brains you can use to explore the samples. Of course, we want to be able to load our own TF graphs into Unity and run them. Follow the next steps in order to load a trained graph:

• Locate the ML-Agents/ml-agents/models/firstRun-0 folder. Inside this folder, you should see a file named GridWorldLearning.bytes. Drag this file into the Unity editor, into the Project/Assets/ML-Agents/Examples/GridWorld/TFModels folder, as shown: Dragging the bytes graph into Unity
• This will import the graph into the Unity project as a resource and rename it GridWorldLearning 1, because the default model already has the same name.
• Locate the GridWorldLearning brain in the Brains folder, select it in the Inspector window, and drag the new GridWorldLearning 1 model onto the Model slot under the Brain Parameters: Loading the graph model slot in the brain
• We won't need to change any other parameters at this point, but pay special attention to how the brain is configured. The defaults will work for now.
• Press Play in the Unity editor and watch the agent run through the game successfully.

How long you trained the agent for will largely determine how well it plays the game. If you let it complete the training, the agent should be on par with the already-trained Unity agent.

If you found this article interesting, you can explore Hands-On Deep Learning for Games to understand the core concepts of deep learning and deep reinforcement learning by applying them to develop games. Hands-On Deep Learning for Games gives an in-depth view of the potential of deep learning and neural networks in game development.
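
One small addition to the TensorBoard section above: the same training curves can be read programmatically with TensorBoard's event API. This is a sketch only; the summaries folder name and the scalar tag are assumptions, so list the available tags first and adjust.

    # Reads mlagents-learn's TensorBoard summaries with TensorBoard's event API.
    # The folder name under summaries/ and the tag string are assumptions.
    from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

    acc = EventAccumulator("summaries/firstRun-0")   # assumed layout for --run-id=firstRun
    acc.Reload()
    print(acc.Tags()["scalars"])                     # discover the exact tag names first
    for event in acc.Scalars("Environment/Cumulative Reward")[-5:]:  # assumed tag name
        print(event.step, event.value)
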
  6. Attention colony managers: you may have expected update 42 to deliver the "Answer to the Ultimate Question of Life, the Universe and Everything", or perhaps you have sheltered a passing hitchhiker in one of your outposts? Anyway, what you get instead is of course just another bunch of problems which will make the colonizing business so much more interesting...

TL;DR
• Catastrophes let colony shares drop
• Riots in cities and native camps
• Security Complex
• New item: Soma
• New city center animation
• Anti-aliasing
• Biome & forest diversification
• Fixes & improvements

Catastrophes Let Colony Shares Drop

For example, a persisting epidemic in another company's colony lowers that colony's stock price. Use this opportunity to buy their shares at a discount! Riots, fires, hackings, floods, tornadoes, asteroids, meteors and other incidents have similar effects, and threats like Assimilators or Xrathul will also cause a drop. Your ability to artificially trigger these natural catastrophes through temples and items makes price drops an interesting prelude to the next strategic equity buying spree and the takeover of other colonies. The free market makes it possible.

Riots in the Colony

Riots are now a real threat to your colony! This is what happens when your colonists are unhappy for a long time. Under-supply of resources or negative external influences lead to a poor quality of life. If these circumstances continue for too long, people's misfortune grows into anger and a riot develops. Once the uprising breaks out, it's too late to fight its causes. A riot can be pacified with the use of Soma and prevented by a Security Complex or a frontier tower, but the easiest thing to do is to provide good colony living conditions in the first place.

Security Complex

This building stations police units to maintain order in the colony. Riots of dissatisfied colonists in the surrounding area are prevented. This ensures peace and quiet, although it comes with additional costs. The facility monitoring upgrade scans production facilities in the surrounding area and prevents incidents. The cyber security upgrade monitors the information networks in surrounding buildings and prevents hacker attacks.

Soma

We could finally synthesize Soma, the famous drug from Aldous Huxley. This is what the packing slip says: "To avoid major mood swings that can lead to negative moods, people regularly take Soma, a drug that has a mood-lifting and stimulating effect and is also used as an aphrodisiac. Unlike alcohol, it has no side effects at the usual dosage and is produced synthetically." To sum it up: it raises the general quality of life and helps against riots.

New City Center Animation

A new age of in-game animations awaits us! Starting with the colony center base, which expands in various parts that screw, bend and turn out of the ground as the landing capsule comes down. The worker drones are now stationed on landing pads, from which they lift off to build and repair.

Anti-Aliasing

It might sound a bit weird, but we had to disable this basic graphics feature for ages due to framework issues. What it adds to the game's look is awesomely amazing: lovely, calm and soft edges everywhere, especially when you zoom out from the planet. Have a look.

Biome & Forest Diversification

Ralph sent us very detailed feedback with suggestions to make the environment and nature more realistic (thanks!). Based on this, we decided to further diversify the biomes and forests:
• Forests in fertile areas are often better at binding CO2; this is, for example, the case for rain forest and bog forest.
• Big fish swarms are more common in cold areas.
• Industrial farming and fishing reduce fertility.
• Steppe is bad ground for forest growth, but quite good for farms and especially resistant, so its fertility only decreases slowly.
• Rain forest ground, in contrast, is easily damaged and loses its fertility fast.

Fixes & Improvements

• Added achievements for Edora, Iqunox and Thera, and for reaching 40 points in the campaign
• The workshop queue is now only available for workshops of level 2
• Camera position is restored when loading a savegame
• Prioritization stores at least 10% of a resource's production; this amount is now shown in the list of consumed resources
• Fossil resources below 10% will decrease a fossil power plant's productivity (by up to 20%)
• Fixed bugs in the highscore list
• New unlocked items are highlighted in the inventory
• Switched background music compression from ADPCM to WMA streaming
• Fixed game freeze and window-disappearing problems
  7. We are now filling up the game base for http://probqa.com/ - an interactive game recommendation engine based on a probabilistic question-asking system (ProbQA). At each step it asks the user about a game feature and offers multiple choices, and it also lists the top 10 recommended games. The recommendations change as the user answers more questions, and the engine is tolerant to errors in the user's answers. There is no script or decision tree behind it, but rather a cube of statistics. Please let us know if you want your game added to our base, by replying in this thread or contacting me directly. Your game doesn't need to be a top title to get recommended by the engine; it just needs to be a likely desired game for a user given his or her answers to the questions. This way, users can find your game even if they don't know about its existence in advance, don't know the keywords to search for, or don't have any clear idea of what they want to play.
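
The post above doesn't describe ProbQA's internals, but a toy Bayesian question-asking loop gives a feel for how accumulated answer statistics, rather than a decision tree, can drive recommendations. Everything below (games, questions, probabilities) is invented purely for illustration:

    # Toy question-asking recommender: keep a per-game likelihood of each answer,
    # multiply in the user's (possibly noisy) answers, and re-rank every time.
    GAMES = {
        #            P(answer "yes") per question
        "Game A": {"multiplayer": 0.9, "open_world": 0.2},
        "Game B": {"multiplayer": 0.1, "open_world": 0.9},
        "Game C": {"multiplayer": 0.6, "open_world": 0.6},
    }

    scores = {g: 1.0 for g in GAMES}          # uniform prior over games

    def answer(question, said_yes):
        for game, probs in GAMES.items():
            p_yes = probs[question]
            # Probabilities stay strictly between 0 and 1, so a single "wrong"
            # answer can never permanently eliminate a game (error tolerance).
            scores[game] *= p_yes if said_yes else (1.0 - p_yes)

    answer("multiplayer", True)
    answer("open_world", False)
    top = sorted(scores, key=scores.get, reverse=True)
    print(top)    # -> ["Game A", "Game C", "Game B"] for these numbers
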
  8. Unity has jumped into the world of machine learning solutions with the open beta release of the Unity Machine Learning Agents SDK (ML-Agents SDK). Released on GitHub, the SDK brings machine learning to games, allowing researchers and developers to create intelligent agents that can be trained using reinforcement learning, neuroevolution, or other machine learning methods through a Python API. Features include: integration with the Unity Engine, multiple cameras, flexible multi-agent support, discrete and continuous action spaces, a Python 2 and 3 control interface, visualizing network outputs in the environment, and TensorFlow Sharp agent embedding (experimental). Learn more about these features and more at the Unity blog announcement.
  9. sveta_itseez3D

    Web SDK for user-generated game avatars

    Hi folks! If you have ever tried to create a game character that looks exactly like you, this post is for you. We at itSeez3D have always been dreaming of a game where we can play as ourselves and look realistic, and not uncanny, at the same time. We have a pretty solid background in 3D scanning and computer vision, so last year we decided to work on our dream. Based on our previous 3D scanning experience, we've created a technology that allows you to create a lifelike 3D model of a face from one selfie. See our demo here: The process is fully automatic and takes less than one minute. We use deep learning to predict the 3D geometry of a face and a head, meaning each avatar is completely unique and realistic, as in Mass Effect: Andromeda :). Here are examples of 3D models you can get with our tech: https://skfb.ly/6p7w7. We've just released the beta version of the SDK and are inviting gamedev folks to participate. By signing up you can get access to our web API and then embed user-generated 3D avatars in you