Search the Community

Showing results for tags 'Neural Networks'.



Found 7 results

  1. I'm trying to figure out the best way to implement a neural network with a varying number of inputs. Because of an NDA I can't post my specific issue or include my data, but I've come up with a scenario that is pretty close to my dilemma, though quite oversimplified. I'm just looking for a high-level suggestion of a possible way forward, so hopefully it will be enough.

     Say I'm tasked with writing a neural network to automate evaluating employees in a manufacturing job. There's a set number of inputs for each employee, such as days called out, times late, hours of overtime worked, and so on. In addition to this data there's also information about the items produced, which is the same for every item: time taken to complete the item, amount of extra material used to make it, quality of the item, number of tweaks needed before it can be shipped, and the like. The number of data inputs for each item is always the same. The issue is that each employee produces a different number of items, and I'm not sure how to account for that in the neural network. So the input data I'm looking at is:

     - employee data with an equal number of inputs
     - item data with an equal number of inputs
     - a varying number of items which need to be associated with each employee

     The output I need is a score that takes into account all the employee data and all the item data for the items that employee produced. I looked at averaging the item 'scores' for all the items an employee makes, but that didn't work, since each item has to be part of the employee's data: an item could have certain criteria that need to weigh more heavily in the evaluation (say an item is made with all criteria being excellent, or all being exceptionally poor), so each item needs to be considered individually alongside the employee's other numbers. Next I tried creating a static number of item slots for all employees and setting the inputs without an associated item to zero. That didn't work either: the weights on the higher-numbered item inputs were rated down because they went unused for most employees, and even the items that should have swayed the output wound up barely registering for the employee.

     I'm new to neural networks, but mine does seem to be working; it's just not giving me the relevant output I was hoping for without being able to include the item data. I started looking into recurrent neural networks, but the more I read about them, the less they seem like something that would help in my situation. Is this a poor application for a neural network because of the differing amounts of item data? Is there some other way of structuring a neural network that would be better suited to my data?

     Edit: my current implementation is actually two neural networks: one for the item data, producing an item score, and one for the employee data, using the output from the first as an input. To back-propagate, I'm just passing the MSE from the employee neural network to the output of the item neural network and then back-propagating from there. Edited to make the title clearer.
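     One standard technique for exactly this shape of problem (a suggestion, not something from the post above) is to run every item through the same small encoder network and then combine the per-item outputs with a symmetric pooling operation, such as a max or a mean, before feeding the employee network. Max pooling in particular preserves the "one exceptional item" signal that plain averaging washes out. A minimal numpy sketch, with invented layer sizes and random weights standing in for trained ones:

     import numpy as np

     def sigmoid(x):
         return 1.0 / (1.0 + np.exp(-x))

     rng = np.random.default_rng(0)

     # Invented sizes: 5 employee features, 4 features per item, 8 hidden units.
     W_item = rng.normal(size=(4, 8))     # one encoder shared by every item
     W_emp = rng.normal(size=(5 + 8, 1))  # employee net sees employee data + pooled item code

     def score_employee(employee, items):
         # employee: shape (5,); items: shape (n_items, 4), where n_items varies freely
         item_codes = sigmoid(items @ W_item)  # encode each item with the SAME weights
         pooled = item_codes.max(axis=0)       # symmetric pooling: count/order independent
         return sigmoid(np.concatenate([employee, pooled]) @ W_emp)

     # Employees with different item counts flow through the same fixed-size network:
     print(score_employee(rng.normal(size=5), rng.normal(size=(3, 4))))
     print(score_employee(rng.normal(size=5), rng.normal(size=(7, 4))))

     Because the pooled code has a fixed size, the employee network never sees empty slots, and because every item shares the same weights, no input position gets starved the way the zero-padded version did.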
  2. Hello everyone, I am running a survey to better understand the pain points of music composition in video games, what game studios and developers expect in the future, and how the industry could be improved: https://docs.google.com/forms/d/1NycMla5fhQd1fMbLy3c28alxCbhIYvkD6Fv9lqhxEJ0 It is not a promotion of any kind; I am just interested in getting feedback from game developers, studios, and gamers (and composers as well). The answers are completely confidential, and no personal information will be published or used for any other purpose. Thanks for your time and help, Vincent
  3. Intention

     This article is intended to give a brief look into the logistics of machine learning. Do not expect to become an expert in the field just by reading this. However, I hope the article goes into just enough detail to spark your interest in learning more about AI and how it can be applied to various fields, such as games. Once you finish reading, I recommend looking at the resources posted below. If you have any questions, feel free to message me on Twitter @adityaXharsh.

     How Neural Networks Work

     Neural networks work by receiving inputs, sending outputs, and performing self-corrections based on the difference between the output and the expected output, also known as the cost. Neural networks are composed of neurons, which in turn compose layers: collections of neurons. For example, there is an input layer and an output layer. Between these two layers are layers known as hidden layers, which allow for more complex and nuanced behavior by the neural network. A neural network can be thought of as a multi-tier cake: the first tier represents the input, the tiers in between (or lack thereof) represent the hidden layers, and the last tier represents the output.

     The two mechanisms of learning are forward propagation and backward propagation. Forward propagation uses linear algebra to calculate what the activation of each neuron of the next layer should be, then pushes, or propagates, those values forward. Backward propagation uses calculus to determine which values in the network need to be changed in order to bring the output closer to the expected output.

     Forward Propagation

     Each layer is composed of multiple neurons, and each neuron is connected to every neuron of the following and previous layers, save for the input and output layers, which are not surrounded by layers on both sides. To put it simply, a neural network represents a collection of activations, weights, and biases. They can be defined as:

     Activation: a value representing how strongly a neuron is firing.
     Weight: how strong the connection is between two neurons. It affects how much of the activation is propagated onto the next layer.
     Bias: a minimum threshold for whether or not the current neuron's activation and weight should affect the next neuron's activation.

     Each neuron has an activation and a bias. Every connection to every neuron is represented as a weight. The activations, weights, biases, and connections can be represented using matrices. Activations are calculated using the formula a' = sigma(Wa + b), where a is the vector of activations in the current layer, W the weight matrix, and b the bias vector. After the inner portion of the function has been computed, the resulting matrix gets pumped into a special function known as the sigmoid function, defined as sigma(z) = 1 / (1 + e^(-z)). The sigmoid is handy since its output is locked to a range between zero and one. This process is repeated until the activations of the output neurons have been calculated (a minimal sketch of this pass appears at the end of this post).

     Backward Propagation

     The process of a neural network performing self-correction is referred to as backward propagation, or backprop. This article will not go into detail about backprop, since it can be a confusing topic. To summarize, the algorithm uses a technique from calculus known as gradient descent: given an error surface in a space with a huge number of dimensions, the direction of change that minimizes the error must be found. The goal of gradient descent is to modify the weights and biases such that the error in the network approaches zero.

     Furthermore, you can find the cost, or error, of the network using the formula C = (1/2) * sum((y - a)^2), summed over the output neurons, where y is the expected output and a the actual output. Unlike forward propagation, which runs from input to output, backward propagation goes from output to input. For every activation, find the error in that neuron and how much of a role it played in the error of the output, and adjust accordingly. This technique uses concepts such as the chain rule, partial derivatives, and multivariate calculus, so it's a good idea to brush up on one's calculus skills.

     High Level Algorithm

     1. Initialize the weight and bias matrices for all layers to random decimal numbers between -1 and 1.
     2. Propagate the input through the network.
     3. Compare the output with the expected output.
     4. Backward propagate the correction into the network.
     5. Repeat for N training samples.

     Source Code

     If you're interested in looking into the guts of a neural network, check out AI Chan! It's a simple-to-integrate library for machine learning I wrote in C++. Feel free to learn from it and use it in your own projects. https://bitbucket.org/mrsaturnsan/aichan/

     Resources

     http://neuralnetworksanddeeplearning.com/
     https://www.youtube.com/channel/UCWN3xxRkmTPmbKwht9FuE5A
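     As a companion to the article, here is a hedged sketch of the forward pass and the quadratic cost in plain numpy (not code from AI Chan; layer sizes and inputs are arbitrary):

     import numpy as np

     def sigmoid(z):
         return 1.0 / (1.0 + np.exp(-z))

     rng = np.random.default_rng(1)
     layer_sizes = [3, 5, 2]  # input, hidden, output (arbitrary)

     # Step 1 of the high-level algorithm: random weights and biases in [-1, 1].
     weights = [rng.uniform(-1.0, 1.0, (m, n)) for n, m in zip(layer_sizes, layer_sizes[1:])]
     biases = [rng.uniform(-1.0, 1.0, m) for m in layer_sizes[1:]]

     def forward(a):
         # a' = sigmoid(W a + b), repeated layer by layer until the output layer
         for W, b in zip(weights, biases):
             a = sigmoid(W @ a + b)
         return a

     x = np.array([0.2, 0.7, 0.1])
     y = np.array([1.0, 0.0])             # expected output
     out = forward(x)
     cost = 0.5 * np.sum((y - out) ** 2)  # the quadratic cost C = (1/2) * sum((y - a)^2)
     print(out, cost)

     Backprop would then differentiate this cost with respect to every weight and bias via the chain rule and nudge them downhill, which is the gradient descent step the article summarizes.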
  4. Hey guys, I'm starting to get really interested in AI, and especially machine learning. However, I can't find a good place to start. I have looked on the internet for tutorials, but they are all useless to beginners, because they all assume you already have a lot of knowledge about machine learning and know what you are doing. For me, an absolute beginner, it's hard to understand what they are saying. I have tried to make my own AI by just playing around with some maths and the basics of machine learning that I already know, like neurons, weights, and biases. For the most part my little mini-projects work, but they are not proper AI. Could anyone please recommend some tutorials for beginners? My main programming language is Python; however, I am starting to learn Java and C#. I have already looked at bigNeuralNetwork's tutorial; at the beginning it was great and I understood everything, but halfway through I didn't know what was going on.
  5. Below is my preliminary draft design for the AI system within Spellbound. I'm slowly migrating away from scripted expert systems towards a more dynamic and fluid AI system based on machine learning and neural networks. I may be crazy to attempt this, but I find the topic fascinating. I ended up having a mild existential crisis as a result. Let me know what you think, or if I'm missing something.

     Artificial Intelligence

     Objectives: Spellbound is going to be a large open world with many different types of characters, each with different motives and behaviors. We want this open world to feel alive, as if the characters within it are inhabitants. If we went with pre-scripted behavioral patterns, the characters would be unable to learn and adapt to changes in their environment. It would also be very labor intensive to write specific AI routines for each character. Ideally, we just give every character a self-adapting brain and let them loose to figure out the rest for themselves.

     Core Premise: (very dense, take a minute to soak this in) Intelligence is not a fixed intrinsic property of creatures. Intelligence is an emergent property which results directly from the neural topology of a biological brain. True sentience could be created if the neural topology of an intelligent being were replicated with data structures and the correct intelligence model. If intelligence is an emergent property, and emergent properties are simple rule sets working together, then creating intelligence is a matter of discovering the simple rule sets.

     Design: Each character has its own individual artificial neural network (ANN). This is a weighted graph which uses reinforcement learning. Throughout the character's lifespan, the graph becomes more heavily weighted towards rewarding actions and away from displeasurable ones. Any time an action makes a displeasure go away or brings a pleasure, that neural pathway is reinforced. If a neural pathway has not been used in a long time, we reduce its weight. Over time, the creature learns.

     A simple ANN is just a single cluster of connected neurons. Each neuron is a node connected to nearby neurons; it receives inputs and generates outputs. When a neuron receives enough inputs, it fires and activates its downstream neurons. So a simple ANN receives inputs and generates outputs which are a reaction to those inputs. At the end of a neural cycle, we give response feedback to the ANN. If the neural response was positive, we strengthen the neural pathway by increasing the connection weights; if it was negative, we decrease the weights along the pathway. With enough trial runs, we will find the neural pathway for the given inputs which creates the most positive outcome.

     A simple ANN can be considered a single cluster, and it can be abstracted into a single node for the purposes of creating a higher layer of connected node networks. When we have multiple source inputs feeding into our neural network cluster, and each node is running its most optimal neural pathway for its input, we get complex unscripted behavior. A brain is just a very large collection of layered neural nodes connected to each other. We'll call this our "Artificial Brain" (AB).

     Motivation, motivators (rule sets): All creatures have a "desired state" they want to achieve and maintain. Think about food: when you have eaten and are full, your state is at an optimally desired state. As time passes, you become increasingly hungry. Being just a teensy bit hungry may not be enough to compel you to change your current behavior, but as time goes on and your hunger increases, your motivation to eat increases until it supersedes the motives for all other actions. We can create a few very simple rules to produce complex, emergent behavior:

     Rule 1: Every creature has a desired state it is trying to achieve and maintain. Some desired states may be unachievable (e.g., infinite wealth).
     Rule 2: States are changed by performing actions. An action may change one or more states at once (a one-to-many relationship).
     Rule 3: "Motive" is created by a delta between the current state (CS) and the desired state (DS). The greater the delta between CS and DS, the more powerful the motive. (Is this a linear relationship or an exponential one?)
     Rule 4: "Relief" is the sum of all the CS-to-DS deltas removed by an action.
     Rule 5: A creature can have multiple competing motives. The creature will choose the action which provides the greatest amount of relief.
     Rule 6: Some actions are a means to an end and can be chained together (action chains). If you're hungry and the food is 50 feet away from you, you can't just start eating. You first must move to the food to get within interaction radius, then eat it.

     Q: How do we create an action chain? How do we know that the action chain will result in relief?
     A: We generally know what desired result we want, so we work backwards. What action causes the desired result (DR)? Action G does (learned from experience). How do we perform Action G? We have to perform Action D, which causes Action G. How do we cause Action D? We perform Action A, which causes Action D. Therefore, G<-D<-A, so we should do A->D->G->DR. Back propagation may be the contemporary approach to changing graph weights, but it's backwards.

     Q: How does long-term planning work?
     Q: What is a conceptual idea? How can it be represented?
     A: A conceptual idea is a set of nodes which is abstracted to become a single node?

     Motivators (why we do the things we do):
     Hunger
     Body Temperature
     Wealth
     Knowledge
     Power
     Social Validation
     Sex
     Love/Compassion
     Anger/Hatred
     Pain Relief
     Fear
     Virtues, Vices & Ethics

     Notice that all of these motivators are actually psychological motivators: they happen in the head of the agent rather than being physical. You can be physically hungry, but psychologically you can ignore the pains of hunger. The psychological thresholds would differ per agent. Therefore, all of these motivators belong in the "brain" of the character rather than being attributes of the agent's physical body. Hunger and body temperature would be physical attributes, but they would also have "psychological tolerances".

     Psychological Tolerances:

     {motivator} => 0 [------------|-----------o----|----] 100
                      A            B           C    D    E

     A - the lowest possible bound for the motivator.
     B - the lower threshold for the motivator. If the current state falls below this value, the desired state begins to affect actions.
     C - the current state of the motivator.
     D - the upper threshold for the motivator. If the current state exceeds this value, the desired state begins to affect actions.
     E - the highest bound for the motivator.

     The A and E bound values are fixed and universal. The B and D threshold values vary by creature, and where you place them can make huge differences in behavior. (A toy sketch of rules 3 through 5 follows below.)
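     Rules 1 through 5 map almost directly onto a utility-style action picker. Here is a speculative toy sketch; the state names, actions, and numbers are all invented for illustration and are not from the Spellbound design:

     # Hypothetical current and desired states for one creature (0..100 scales).
     current = {"hunger": 80, "warmth": 55, "wealth": 10}
     desired = {"hunger": 0, "warmth": 60, "wealth": 100}

     # Rule 2: each action changes one or more states (one-to-many).
     actions = {
         "eat": {"hunger": -40},
         "sit_by_fire": {"warmth": +10, "hunger": +5},
         "work": {"wealth": +15, "hunger": +10},
     }

     def motive(state):
         # Rule 3: motive is the delta between current state and desired state.
         return abs(current[state] - desired[state])

     def relief(effects):
         # Rule 4: relief is the sum of all deltas an action removes.
         total = 0
         for state, change in effects.items():
             after = abs(current[state] + change - desired[state])
             total += motive(state) - after
         return total

     # Rule 5: among competing motives, pick the action with the greatest relief.
     best = max(actions, key=lambda name: relief(actions[name]))
     print(best, {name: relief(fx) for name, fx in actions.items()})

     With the numbers above, "eat" wins because the hunger delta dominates; Rule 6's action chains would then sit on top of this, searching backwards from the highest-relief action.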
     Psychological Profiles: We can assign a class of creatures a list of psychological tolerances and set their current state to some preset values. The behavioral decisions and subsequent actions will be driven by the psychological profile, based upon the actions which create the greatest sum of psychological relief. The psychological profile will be the input into an artificial neural network, and the outputs will be the range of actions which can be performed by the agent. Ideally, the psychological profile state drives the ANN, which drives actions, which change the state of the psychological profile, creating a feedback loop of reinforcement learning.

     Final Result: We do not program scripted behaviors; we assign psychological profiles and lists of actions. Characters will have psychological states which drive their behavioral patterns. Simply by tweaking the psychological desires of a creature, we can create emergent behavior resembling intelligence. A zombie would always be hungry, and feasting on flesh would provide temporary relief. A goblin would have a strong compulsion for wealth, so it would be very motivated to perform actions which ultimately result in gold. Rather than spending lots of time writing expert-systems-styled AI, we create a machine learning type of AI.

     Challenges: I have never created a working artificial neural network type of AI.

     Experimental research and development: The following notes are crazy talk which may or may not be feasible. They may need more investigation to measure their merit as viable approaches to AI.

     Learning by Observation: Our intelligent character doesn't necessarily have to perform an action themselves to learn about its consequences (reward vs. regret). If they watch another character perform an action and receive a reward, the intelligent character creates a connection between the action and its consequence.

     Exploration Learning: A very important component of getting a simple ANN to work efficiently is getting the neurons to find and establish new connections with other neurons. If we have a neural connection topology which always results in a negative response, we'll want to generate a new connection at random to a nearby neuron.

     Exploration Scheduling: When all other paths are terrible, a new path becomes comparatively better and we "try it out" because there's nothing better. If the new pathway happens to result in a positive outcome, it suddenly gets much stronger. This is how our simple ANN discovers new unscripted behaviors. The danger is a sub-optimal behavior pattern which generates some results, but not the best results: we'd use the same neural pathway over and over again because it is a well-travelled path.

     Exploration Rewards: In order to encourage exploring untravelled paths, we gradually increase the "novelty" reward value for taking such a pathway. If traveling the pathway results in a large reward, the pathway is highly rewarded and may become the most travelled path. (A toy sketch of these reinforcement and novelty rules appears at the end of this post.)

     Dynamic Deep Learning: On occasion, we'll also want to create new neurons at random and connect them to at least one other nearby downstream neuron. If a neuron is not connected to any other neurons, it becomes an "island" and must die. When we follow a neural pathway, we look at two costs: the connection weight and the path weight. We always choose the shortest path with the least weight. Rarely used pathways will have their weight decrease over a long period of time. If a path weight reaches zero, we break the connection and the brain "forgets" the neural connection.

     Evolutionary & Inherited Learning: It takes a lot of effort for a neural pathway to become developed, so we will want to speed up that development. If a child is born to two parents, those parents will rapidly develop the neural pathways of the child by sharing their own pathways. This is one way to "teach"; thus, children will think very much like their parents do. Characters will also share their knowledge with other characters. In order for knowledge to spread, it must be interesting enough to be worth spreading, so a character will generally share the most interesting knowledge they have.

     Network Training & Evolutionary Inheritance: An untrained ANN results in an uninteresting character, so we have to have at least a trained base preset for a brain. This is consistent with biological brains: our brains have been pre-configured through evolutionary processes and come pre-wired, with certain regions of the brain universally responsible for processing certain input types. The training method will be rudimentary at first, just to get something passable, and it can be done as a part of the development process. When we release the game to the public, the creatures will still be training. The creatures which had the most "success" become part of the next generation. These brain configurations can be stored in a central database somewhere in the cloud. When a player begins a new game, we download the most recent generation of brain configurations. Each newly instanced character may have a chance of a random mutation. When the game completes, if any particular brains were more successful than the current strain, we select them for "breeding" with other successful strains, so that the next generation is an amalgamation of the most successful previous generations. We'll probably begin to see some divergence and brain "species" over time?

     Predisposition towards Behavior Patterns via Bias: Characters will also have slight predispositions assigned at birth: 50% of a predisposition is innate to the creature's class, 25% is genetically passed down by the parents, and 25% is randomly chosen. A predisposition causes some pleasures and displeasures to be more or less intense. This skews the weightings of a developing ANN a bit more heavily in favor of particular actions. This is what creates variety in interests between characters, and will ultimately lead to variety in personalities. We can create very different behavior patterns in our ABs by tweaking the amount of pleasure and displeasure various outputs generate for a creature. The brain of a goblin could derive much more pleasure from getting gold, so it would develop strong neural pathways which result in getting gold.

     AI will be able to interact with interactable objects. An interactable object has a list of ways it can be interacted with, and interactable objects can be used to interact with other interactable objects. Characters are themselves considered interactable objects. The AI has a sense of ownership over various objects: losing an object is a displeasurable feeling, and gaining one is pleasurable. Stealing from an AI will make it unhappy; it will learn about theft and begin trying to avoid it. Giving a gift to an AI makes it very happy. Trading one object for another transfers ownership of the objects. There is no "intrinsic value" to an object: the value of an object is based on how much the AI wants it compared to how much it wants the other object in question.

     Learning through Socialization: AIs will socialize with each other. This is the primary mechanism for knowledge transfer. They will generally tell each other about recent events or interests, choosing to talk about the most interesting events first. If an AI doesn't find a conversation very interesting, it will stop the conversation and leave (a terminating condition). If a threat is nearby, the AI will be very interested in it and will share it with nearby AI. If a player has hurt or killed a townsfolk, all of the nearby townsfolk will be very upset and may attack the player on sight. If enough players attack the townsfolk, the townsfolk AI will start to associate all players with negative feelings and may attack a player on sight even if that player did nothing to aggravate it.
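     The reinforcement, decay, and novelty ideas above could take roughly the following shape; this is one guess at an implementation, not the actual Spellbound system, and the edge names and constants are invented:

     # Connection weights between neural nodes, keyed by (from, to).
     connections = {("see_food", "move_to_food"): 0.5,
                    ("see_food", "flee"): 0.5}

     LEARN_RATE, DECAY, NOVELTY_BONUS = 0.1, 0.99, 0.02
     novelty = {edge: 0.0 for edge in connections}

     def feedback(path, reward):
         # Strengthen a used pathway on pleasure (reward > 0), weaken on displeasure.
         for edge in path:
             connections[edge] = max(0.0, connections[edge] + LEARN_RATE * reward)
             novelty[edge] = 0.0  # a just-travelled edge is no longer novel

     def tick():
         # Called periodically: unused pathways fade while their novelty reward grows.
         for edge in connections:
             connections[edge] *= DECAY      # rarely used pathways lose weight over time
             novelty[edge] += NOVELTY_BONUS  # untravelled paths become more tempting

     def choose(source):
         # Pick the strongest outgoing edge, counting weight plus novelty bonus.
         candidates = [e for e in connections if e[0] == source]
         return max(candidates, key=lambda e: connections[e] + novelty[e])

     A weight that decays to zero could then have its edge deleted, matching the "brain forgets the connection" rule, with a random new edge inserted in its place for exploration.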
  6. Chimcham

    Testing a chatbot

    Hey surfers, we are creating a space flight simulator, more or less, with goofy twists. We would like to have a chatbot in place for the ship communications: a resource that can help define or tell you about the galaxy as you fly through it. I'm just looking for advice on whether you think the chatbot is engaging, as well as any tips you may have for developing it further (which we absolutely intend to do). Also, feel free to just chat with it and let me know what you think. You can find the chatbot here. Thanks!
  7. I did some evaluation of the time I need to spend on AI. It is not that difficult to create AI. It's not even difficult to create impossibly difficult AI. What is really hard is to create interesting AI. So I started researching neuroevolution. I spent a day reading through NeuroEvolution of Augmenting Topologies (NEAT) by Kenneth O. Stanley a couple of times. The paper can be found here: http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf The algorithm is not that difficult to understand, but you need a bit of knowledge about neural networks and how they work to understand the paper. Luckily, I had already taken a course in Machine Learning by Andrew Ng at Stanford University, which I would really recommend to anyone interested in the field.

     Concretely, I wanted to see whether implementing this for the Moose in this game (and potentially all other species) would be a viable way to create some interesting AI. I could then run a learning simulation for some time to create a base for these creatures, ship them with some mediocre intelligence to end-users, but keep the ability to evolve the creatures' genomes alive in the game.

     NEAT in short

     For those who don't know much about learning algorithms, I want to explain a little about how they work and what they are for. Neural networks are a computer model imitating the neurons, axons, and dendrites that the brain consists of. Every neuron is a kind of computer that does a single calculation based on the input it gets. It then sends the calculated value to all the neurons it is connected to, scaled by the connections.

     Such a classical network can be trained over many iterations by measuring how far off from the goal the network performs (called the error or cost value). You then alter the connection values to try to improve the performance on what you want. This could be looking at an image of a letter or number and trying to classify which letter or number it is, or analyzing an image of a road and training a car to drive itself. Common to all these purposes is that you have a large list of training data that tells the system how far off from the actual case it is. In the classic digit-recognition example, the pixels of a picture are the input values on the left, and the prediction of which hand-written number they show comes out on the right.

     For the case where you don't have any training data, you can use something like neuroevolution. The requirement is that you supply the system with a fitness function. The fitness function awards "points" to the fitness of the performing genome every time it has randomly evolved to do something that has made it survive. For a very "simple" example, I'd recommend watching Seth Bling's version on YouTube, where he creates a system that teaches a computer to play Mario. The fitness function is just a value of how far Mario has moved over time, which works fine for that case. The inputs are the fields of blocks on the screen, which he has classified into two types (standable and enemy). It is a rather impressive implementation.

     So, an evolution algorithm works by creating a genome, mutating it to add random neurons and random connections between them, then measuring how well each genome does before it dies (or hits some other kill condition). We then take the best few of the saved genomes and fuse them together. The fusion compares the genes in the genomes and adds each matching gene with the value from either parent. It then takes the genes that don't match up from the best-performing parent (the one with the highest fitness number) and puts them into the new child. What is essential to getting NEAT to work is understanding how a genome is built up; Kenneth Stanley has some quite smart ideas on tracking where the different genes come from. A genome here is two lists describing the neurons in the system and the connections between them. The connections have an extra feature: what he calls an innovation number, which records when in the simulation the connection appeared. It is this value that is used to compare the genes when fusing the genomes (a rough sketch of this crossover appears at the end of this post). As the simulation runs, and you breed the best-performing creatures/genomes, you get better and better performing creatures. How long the simulation takes to run depends on how complex you want the resulting behavior to be.

     Implementing the System

     Running the Neural Network: I started by planning the interface for the system that should be run. To make the evolution system generic, the "brainstem" must be defined for each creature. Analogous to the real brain, this system needs a coupling between the actual brain and the inputs/outputs that each creature can have. The brainstem works as a relay, interpreting the signals from the body and other senses and forwarding them to the brain; these signals are dampened or enhanced depending on their severity, and possibly per specific signal. The brainstem also converts signals from the brain into nerve impulses for the body and muscles. It is this module that must be defined for each creature in order to make a generic learning algorithm that all creatures can be run through.

     A general model for one brain iteration works something like this. Every iteration starts with the creature "sensing" what is around it. For the Moose, I wanted to include things like its hunger and thirst levels, because I wanted to see if it could evolve to be smart enough to find food or water when it was almost dead; I had a long list of senses. The sensory data is gathered and sent to an adapter that feeds these values into the neural network. The network is iterated through, and the output values are sent to another adapter that interprets each of the output signals. The output signals are translated into actions and sent to a resonator module, whose function is to save the last output from the neural network and keep performing the given actions until the next output arrives. I created this module because I wasn't sure my little laptop could keep up with running the network every frame; this way I can turn the rate down to something like ten times per second, and the creature still acts between iterations.

     The Evolution System: When the creatures die or are terminated for some reason, the evolution system comes into play. I started by making what you could call a cradle for the evolution to start in: a terrain where I put up four walls to enclose an area of something like a square kilometer. To start with, I made 26 copies of the same creature, "protected" by the evolution system, meaning that when they die, their bodies aren't deleted, only their brains.
     This way, I made sure that no matter how much bad luck they had, at least 26 entities would be simulating at any given moment. When a creature did nothing for a set amount of time, I terminated its brain, created/fused/mutated a new one, injected that brain into the creature, and reset its vitals. The Neural Network Manager attached to each creature records its performance and saves its fitness number to a file when it dies. The Neural Evolution Manager is then responsible for finding the genomes for the specific creature, breeding the best of them, mutating them, and instantiating the new brain and injecting it into the creature again. The same algorithm is used when a natural birth occurs, except those individuals die for real when they run out of food or water.

     Presentation

     For debugging any system like this, testing and unit tests are necessary. However, with real-time simulations you cannot necessarily test every scenario the system will experience, mainly because it can be hard to imagine every scenario, so visualizing the actual neural network is also vitally necessary. Here's a video of the system performing from the beginning: https://drive.google.com/open?id=0BxnLa_qsqQBoR3Q5S2IzS2dzRlU And another session: https://drive.google.com/file/d/0BxnLa_qsqQBoVGpQd3ZEMTBsNzg/view?usp=sharing

     What you can see is me starting by setting some parameters for the evolution system. These are values such as how long a creature can stand still and do nothing before termination, or how much its fitness has to change to be recorded. There's also a value that controls how many loop iterations the performing neural network can take; this value is needed because two nodes can feed each other, creating an infinite loop that never returns. The next thing that happens is... nothing. Nothing happens for the first 10-20 seconds. Eventually, a creature starts reproducing, running, turning, or eating, which is awarded fitness points, so the next generation of creatures all inherit these features and half of them start doing the same.

     The green orbs in the visualization are input values, shown for the creature currently being viewed on the left. The red orbs are the output values for that creature. The yellow orbs that begin to appear in between are hidden neurons. The lines between them are connections, and each connection's scaling value is represented by how strongly it is colored (black/grey means close to zero). Eventually they develop some behavior that gets them moving forward. At generation 10, they start to be able to detect obstacles in front of them, turn, and conserve energy based on the steepness of the surface they stand on. This is very interesting to me. I really feel like this has great potential once I clean up the input values and normalize them. If any of you are interested in seeing this early build and reading the source code, I'd be happy to share it.

     What's next?

     I need to add a hunter/predator to the training simulation. This is just going to be a placeholder, most likely an animated red box with the "Player" tag, a hunter following and killing a creature when it gets close enough. This should eventually train the creatures to flee, turn, or attack at the right moment so that they don't get killed, and thus survive and increase their fitness. It requires the creatures to know the relative direction to the predator, and perhaps the distance. Another critical optimization for this system is normalizing the input values to be between zero and one. Some of these values are vectors and require vector normalization, and I fear these calculations may be hard on the system and force me to turn down the iterations per second. I will optimize this further and begin some game tests when I have the time.
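     As referenced above, here is a rough sketch of the innovation-number crossover; this is one reading of the paper, with a dict keyed by innovation number standing in for the genome's connection list:

     import random

     def crossover(genome_a, fitness_a, genome_b, fitness_b):
         # Each genome is a dict {innovation_number: connection_gene}.
         if fitness_b > fitness_a:  # make genome_a the fitter parent
             genome_a, genome_b = genome_b, genome_a
         child = {}
         for innov, gene in genome_a.items():
             if innov in genome_b:
                 # Matching innovation numbers: inherit from either parent at random.
                 child[innov] = random.choice([gene, genome_b[innov]])
             else:
                 # Disjoint and excess genes come from the fitter parent only.
                 child[innov] = gene
         return child

     Mutation (adding a random connection, or splitting one with a new neuron, each new connection stamped with a fresh innovation number) would then run on the child after this step.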