
Search the Community

Showing results for tags 'Behavior'.

Found 55 results

  1. Hey guys, a newbie game dev here. I was wondering if the totally awesome nemesis system from Middle-earth: Shadow of Mordor is possible in Unreal Engine 4. I plan on making a game soon that has a MUCH SIMPLER version of it. Also, which would be better for it: Blueprints or C++? I would really appreciate it if you could break it down into multiple concepts for a mere beginner such as myself :).
  2. theaaronstory

    Design Spicing up the AI (game mechanics)

    Above the usual tropes of having good design, character, specialty and synergy, I thought I'd add some [basic] special cases to the mix:

    • Angle of attack: Enemies would be able to come from all 360 degrees, and would not be limited to the ground plane.
    • Player's cone of vision: It would inform the AI where the player is facing, allowing them to have more choices to attack. [This could also lead to changes in player awareness.]
    • Sense of surroundings: Primarily to reduce kiting, but would be useful elsewhere. It would be concentric in design (Close > Mid > Range) and would double as a sensory limit for calling/hearing sound cues. [In order to call for reinforcements, for example.] (A rough sketch of this idea follows after the list.)
    • Cover/fallback system: Keep ranged units out of harm's way, or have enemies hide in tall grass to avoid detection. [Or join up with allies in order to survive.]
    • Multiple attack modes: Might have a dagger and a knife, so why not use them? [Maybe both, at the same time?]
    • Reactive environment: Place traps, loot chests/resources, and take them with them, or to a camp.
    • Player tracking: Either using environmental cues (footsteps in snow), or player "crumbs" on the floor (if higher-level AI). However, these would decay eventually. [Might add aerial tracking/wind direction.] [These could also be used by the player, if the ability is available, or would refer to the lesser visual cues.]
    • Different attack tactics: Encirclement, ambush, flank or corner. [Or grab you and try to drag you somewhere else.]
    • Zone/time awareness: Example: if weak, they would not venture close to an enemy, or would stay more in the shadows. [And come out more during the night; and be more shifty.] [Or would be more active or passive.]
    • Own stamina: YES. Separate from the base aggro-timer.
    • Auto pickup (for the player as well): Maybe grab a weapon or two from the floor? [But mostly for the player, to customize their experience; like having auto gold pickup.]
    • More synergy: Mobs would actively seek out tactics when near each other. [Example: one throws oil on you, the other lights you on fire.]
    • Extra scripted states of behavior: To fill in the time between those "cumbersome" moments of having to fight! [Adding more randomness to enemies by giving them "jobs" to do, like eating, resting, smelling the yellow flowers, etc.]
    • Power sensitivity/scaling: If the player is too low or too high a level, they would either run or be more courageous. [Much like having an XP penalty for doing so.]
    • Reduced predictability: By giving them extra mobility: dashing, or [trying to] avoid certain attacks before they happen (if smart enough).
    • Whiskers path-finding: To give them more realistic movements.
    • Chance of unpredictability: Enemies might go into overdrive, or panic, or grab a random behavior from the pool. [From other monsters, within limits of course.]
    • Intended weaknesses: Afraid of light, or of specific events, etc.
    • Against the elements: Based on their ability to move, and their environment (snow, mud, rain, etc.), their movements would also vary [greatly].
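    A minimal sketch of the concentric sense-of-surroundings idea, assuming three fixed radii per enemy; the names (SenseBand, SenseRanges, senseBand) and the numbers are illustrative assumptions, not from the post above.

        enum class SenseBand { None, Range, Mid, Close };

        struct SenseRanges {
            float close = 3.f;   // immediate-reaction / anti-kiting distance (assumed)
            float mid   = 10.f;  // can hear sound cues, call nearby allies (assumed)
            float range = 25.f;  // outer awareness limit (assumed)
        };

        // Classify how close the player is; the resulting band could drive
        // kiting prevention, sound-cue hearing or reinforcement calls.
        SenseBand senseBand(float distanceToPlayer, const SenseRanges &r) {
            if (distanceToPlayer <= r.close) return SenseBand::Close;
            if (distanceToPlayer <= r.mid)   return SenseBand::Mid;
            if (distanceToPlayer <= r.range) return SenseBand::Range;
            return SenseBand::None;
        }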
  3. theaaronstory

    Design Decisions, Enemies, Bosses (game mechanics)

    The good, the bad and the in between

    Even though the core gameplay revolves around action, I still wanted to have that "your actions affect the world" kind of vibe. Nothing too crazy, but just enough for you to get a kick out of it. But what do I mean by that? Well . . . EOTH has a world that lives on its own (day-night cycles, weather effects, etc.). During your playthrough, you'd often encounter random missions (and main quests) that would be either black, white or gray in nature. This means that whenever you chose an outcome, it would leave a mark on your surroundings: the color palette would change to, let's say, a darker one, your enemies would get tougher and act more aggressively (because you parted with the devil), and so forth. Of course, min-maxing would have its own devastating and chaotic effect as well. This also means that you'd have different standings with the locals. As you do missions for them, some might lower their prices, while others would cross you and refuse to help you (if things got to a certain point). There also wouldn't be a scenario where you'd be able to please everyone (again, to limit meta-gaming).

    Creatures and other beings of sorts

    And this brings us to the other topic I wanted to talk about: the flora and fauna. EOTH strives to be a more D&D-like experience when it comes to stuff like this. I really wish to see all kinds of wild animals, races and creatures (living or not), all the good stuff that I think should be included in this game. There's so much more out there in the world, and I wish to show that there's more to it than having mostly bipedal objects in your game. This also begs for strange behaviors, such as morphing, or whatever magical ability you can think of. There is a whole world out there, wanting to be discovered!

    Boss fights

    Last but not least, I wanted to talk about two things: boss arenas and enticing fights. I'd love to see more open areas, especially when crossing the path of a chapter-ending boss. No more tight quarters and claustrophobic fighting pits, I say. I'd love to implement a more dynamic approach, where you'd have to interact with your surroundings, and/or your enemy would do the same. My reasoning is simple: I don't want boring fights, and chapter-ending battles are supposed to be something special and should have something more to them.
  4. I ran into a problem when testing a program on an AMD GPU. When tested on Nvidia and Intel HD Graphics, everything works fine. On AMD, the problem occurs precisely when trying to bind the texture. Because of this problem, the shader has no shadow maps and only a black screen is visible. Texture IDs and other parameters are loaded successfully. Below are the code snippets.

Here is the complete problem area of the rendering code:

    #define cfgtex(texture, internalformat, format, width, height) glBindTexture(GL_TEXTURE_2D, texture); \
        glTexImage2D(GL_TEXTURE_2D, 0, internalformat, width, height, 0, format, GL_FLOAT, NULL);

    void render() {
        for (GLuint i = 0; i < count; i++) {
            // start id = 10
            glUniform1i(samplersLocations[i], startId + i);
            glActiveTexture(GL_TEXTURE0 + startId + i);
            glBindTexture(GL_TEXTURE_CUBE_MAP, texturesIds[i]);
        }

        renderer.mainPass(displayFB, rbo);

        cfgtex(colorTex, GL_RGBA16F, GL_RGBA, params.scrW, params.scrH);
        cfgtex(dofTex, GL_R16F, GL_RED, params.scrW, params.scrH);
        cfgtex(normalTex, GL_RGB16F, GL_RGB, params.scrW, params.scrH);
        cfgtex(ssrValues, GL_RG16F, GL_RG, params.scrW, params.scrH);
        cfgtex(positionTex, GL_RGB16F, GL_RGB, params.scrW, params.scrH);

        glClear(GL_COLOR_BUFFER_BIT);
        glClearBufferfv(GL_COLOR, 1, ALGINE_RED); // dof buffer

        // view port to window size
        glViewport(0, 0, WIN_W, WIN_H);

        // updating view matrix (because camera position was changed)
        createViewMatrix();

        // sending lamps parameters to fragment shader
        sendLampsData();

        glEnableVertexAttribArray(cs.inPosition);
        glEnableVertexAttribArray(cs.inNormal);
        glEnableVertexAttribArray(cs.inTexCoord);

        // drawing
        //glUniform1f(ALGINE_CS_SWITCH_NORMAL_MAPPING, 1); // with mapping
        glEnableVertexAttribArray(cs.inTangent);
        glEnableVertexAttribArray(cs.inBitangent);

        for (size_t i = 0; i < MODELS_COUNT; i++) drawModel(models[i]);
        for (size_t i = 0; i < LAMPS_COUNT; i++) drawModel(lamps[i]);

        glDisableVertexAttribArray(cs.inPosition);
        glDisableVertexAttribArray(cs.inNormal);
        glDisableVertexAttribArray(cs.inTexCoord);
        glDisableVertexAttribArray(cs.inTangent);
        glDisableVertexAttribArray(cs.inBitangent);

        ...
    }

renderer.mainPass code:

    void mainPass(GLuint displayFBO, GLuint rboBuffer) {
        glBindFramebuffer(GL_FRAMEBUFFER, displayFBO);
        glBindRenderbuffer(GL_RENDERBUFFER, rboBuffer);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, params->scrW, params->scrH);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rboBuffer);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    }

GLSL:

    #version 400 core
    ...
    uniform samplerCube shadowMaps[MAX_LAMPS_COUNT];

There are no errors during shader compilation. As far as I understand, the texture for some reason does not bind. The depth maps themselves are drawn correctly. I access the elements of the array as follows:

    for (int i = 0; i < count; i++) {
        ...
        depth = texture(shadowMaps[i], fragToLight).r;
        ...
    }

It also turned out that a black screen occurs when the samplerCube array is larger than the number of bound textures. For example, with MAX_LAMPS_COUNT = 2 and count = 1:

    uniform samplerCube shadowMaps[2];

    glUniform1i(samplersLocations[0], startId + 0);
    glActiveTexture(GL_TEXTURE0 + startId + 0);
    glBindTexture(GL_TEXTURE_CUBE_MAP, texturesIds[0]);

In this case there will be a black screen. But if MAX_LAMPS_COUNT = 1 (uniform samplerCube shadowMaps[1]), then shadows appear, but a new problem also arises. Do not pay attention to the fact that everything is greenish; this is due to incorrect color correction settings for the video card. Any ideas? I would be grateful for the help.
  5. Hi there! Please help us evaluate three fighting-game streaming channels (Twitch) through the following SurveyMonkey page: https://jp.surveymonkey.com/r/8PR7QRD It takes around 15 to 20 minutes to finish this evaluation. Thank you so much in advance for your contributions to fighting-game research. If possible, please finish the survey by this Sunday. Team FightingICE http://www.ice.ci.ritsumei.ac.jp/~ftgaic/
  6. Have you ever played StarCraft, or games like it? Notice that when you tell a group of air units to attack a target, they bunch up and move towards it, but once they're in range, they spread out around the target. They don't all just bunch up together, occupying the same spot, even though the game allows them to pass through each other. How would you approach this problem? I went with an "occupation grid": it's just a low-resolution 2D boolean array (640x480). Each ship (my game only has ships) has one, and updates it every frame. When attacking, they refer to the grid to figure out where they should move to. It works pretty well: the ships are nice and spread out, and don't all occupy the same space looking like they merged into one ship. The problem is that this approach is pretty inefficient. Just updating every ship's grid sometimes takes 24-31% of the CPU time, and using the Bresenham line-drawing algorithm for every ship is the culprit. I'm thinking of having one shared grid for all the ships on a team, and instead of using a simple boolean 2D array, allowing each square of the grid to keep track of every ship that is using it, via a data structure with a linked list of references to the ships using that square. That way, I wouldn't have to update a grid for each and every ship (a rough sketch of that idea follows below). Maybe the solution would be to use a much simpler grid, minus the Bresenham line drawing: just a bunch of squares, and each ship tries to stay in its own grid square. Maybe allow larger ships to occupy more than one square. Another solution might be evading me completely, one that doesn't involve grids at all. Any thoughts?
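A minimal sketch of the shared per-team grid idea mentioned above, assuming each coarse cell simply stores the ships currently claiming it and ships register only the single cell under their position (no Bresenham tracing); the names (Ship, TeamOccupancyGrid, cellSize) are illustrative assumptions rather than the original implementation.

    #include <cstddef>
    #include <vector>

    struct Ship { float x = 0.f, y = 0.f; /* ... */ };

    class TeamOccupancyGrid {
    public:
        TeamOccupancyGrid(int width, int height, float cellSize)
            : w(width), h(height), cell(cellSize), cells(width * height) {}

        void clear() {
            for (auto &c : cells) c.clear();   // once per frame, for the whole team
        }

        // Register a ship in the single cell under its position;
        // larger ships could claim neighbouring cells as well.
        void add(Ship *s) {
            cells[index(s->x, s->y)].push_back(s);
        }

        // True if no other ship currently claims the cell at (x, y).
        bool isFree(float x, float y, const Ship *self) const {
            for (const Ship *s : cells[index(x, y)])
                if (s != self) return false;
            return true;
        }

    private:
        std::size_t index(float x, float y) const {
            int cx = clampIndex(static_cast<int>(x / cell), w);
            int cy = clampIndex(static_cast<int>(y / cell), h);
            return static_cast<std::size_t>(cy) * w + cx;
        }
        static int clampIndex(int v, int max) {
            return v < 0 ? 0 : (v >= max ? max - 1 : v);
        }

        int w, h;
        float cell;
        std::vector<std::vector<Ship *>> cells;  // per-cell list of occupying ships
    };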
  7. I wanted to get some thoughts on player character collisions in MMOs for my project. As I recall, the original EverQuest had them, but WoW didn't. I also remember Age of Conan had them, but I'm guessing they were done on the server, which was highly annoying because you could collide with things that were unseen. I'm not sure how EQ did them, but I don't remember having this problem there; it was a long time ago, though. My general idea is to do player collisions on the client side. A collision would only affect a given client's player character, as seen from that client. On the server, characters would pass right through each other. This means that because of lag, two players might see different things during a collision, or one may think there was a collision while the other doesn't, but I figure that's less annoying than colliding with something you can't see, and everything should resolve at some point. There is one more case which I can see being a bit problematic, but I think there might be a solution (actually my friend suggested it). Suppose two characters run to the same spot at the same time. At the time they reached the spot it was unoccupied, but once the server updates their positions, they both occupy the same space. In this case, after the update, a force vector is applied to each character that tries to push them away from each other. The vector is applied by each client to its own player (a rough sketch of this push-apart force follows below). So basically, player-to-player collisions aren't necessarily absolute. I was also thinking you could generalize this and allow players to push each other. When two players collide, their bounding capsules would be slightly smaller than the radius where the force vector comes into play. So if you stood next to another player and pushed, he would move. By the vector rules he is pushing back on you (or rather your own client is pushing), but since your movement vector could overcome the collision force vector, only he moves unless he decides to push back. You could add mass to the calculation, so larger characters could push around smaller characters more easily. Of course there are griefing aspects to this, but I was thinking I would handle that with a reputation/faction system. Any thoughts?
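A minimal sketch of the client-side push-apart force described above, in 2D with a fixed strength constant and simple mass weighting; Vec2, Player and the numbers are illustrative assumptions, and each client would apply the result only to its own character.

    #include <cmath>

    struct Vec2 { float x, y; };

    struct Player {
        Vec2  pos;
        float radius = 0.5f;  // bounding-capsule radius, 2D simplification (assumed)
        float mass   = 80.f;
    };

    // Force to apply to 'self' this frame when it overlaps 'other'.
    Vec2 separationForce(const Player &self, const Player &other, float strength = 50.f) {
        float dx = self.pos.x - other.pos.x;
        float dy = self.pos.y - other.pos.y;
        float dist = std::sqrt(dx * dx + dy * dy);
        float minDist = self.radius + other.radius;

        if (dist >= minDist || dist <= 1e-4f)
            return {0.f, 0.f};                                    // not overlapping (or exactly stacked)

        float overlap   = (minDist - dist) / minDist;             // 0..1, larger when deeper
        float massScale = other.mass / (self.mass + other.mass);  // heavier players push harder
        float f = strength * overlap * massScale;
        return {dx / dist * f, dy / dist * f};                    // push away from the other player
    }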
  8. (Note to Mods: could "Input" or "Peripherals" be a good tag to add to the tag list? Just a thought.) Hey, I'm currently working out which keys on a keyboard a user can rebind their actions to. The trick is that I use a Norwegian keyboard, so it's not obvious which keys correspond to the actual C#/XNA Keys enum values. Are there any keys from the XNA Keys enum that, in your opinion, I've neglected to add? I don't need all Keys to be bindable, only the "most commonly used keys on a keyboard". Thanks. https://pastebin.com/n1cz8Y0u
  9. I've just posted a pre-release edition of Curvature, my utility-theory AI design tool. Curvature provides a complete end-to-end solution for designing utility-based AI agents; you can specify the knowledge representation for the world, filter the knowledge into "considerations" which affect how an agent will make decisions and choose behaviors, and then plop a few agents into a dummy world and watch them run around and interact with things. Preview 1 (Core) contains the base functionality of the tool but leaves a lot of areas unpolished. My goal with this preview is to get feedback on how well the tool works as a general concept, and start refining the actual UI into something more attractive and fluid. The preview kit contains a data file with a very rudimentary "scenario" set up for you, so you can see how things work without cutting through a bunch of clutter. Give it a test drive, let me know here or via the Issue Tracker on GitHub what you like and don't like, and have fun!
  10. I'm building an American football simulation (think Football Manager), and am wondering about the best way of implementing AI based on various inputs which are weighted by the personality of the NPC... I have a version of Raymond Cattell's 16PF model built into the game, to be able to use the various personality traits to help guide decisions. I am going to use this extensively, so I need it to be both flexible and able to handle many different scenarios. For instance, a GM has to be able to decide whether he wants to re-sign a veteran player for big dollars or try to replace him through the draft. They need a coherent system for not only making a decision in a vacuum, as a single decision, but also making a decision as part of a "plan" for how to build the team... For instance, it makes no sense for a GM to make decisions that don't align with each other in terms of the big picture. I want the decisions to take in a wide range of variables/personality traits to come up with a choice (a rough sketch of what I mean by trait-weighted scoring follows below). There is no NPC per se... There isn't going to be any animation connected to this, no shooting, following, etc., just decisions which trigger actions. In a situation like a draft, there is one team "on the clock" and 31 other teams behind the scenes trying to decide if they want to trade up, trade down, etc., which can change based on things like who just got picked, the drop-off between the highest-graded player at their position/group and the next highest-graded player in that position/next group, whether a player lasts past a certain point, etc. All of these things need to be going on simultaneously for all the teams; obviously the team on the clock is going to have to decide whether it wants to make a pick or take any of the offers to move down in the draft from other teams that might want to move up, etc. So I am planning on making use of something called Behavior Bricks by Padaone Games (bb.padaonegames.com), which is a behavior tree, but in conversations with others who have worked on AI in major projects like this (EA Sports), they said to combine this with a state machine. My question is: would I be able to do this using just Behavior Bricks, or would I need to build the state machine separately? Is there something else already created for this type of purpose that I could take advantage of?
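A minimal sketch of trait-weighted option scoring, assuming a couple of 16PF-style factors normalised to 0..1; the Option fields, the weighting formula and the trait names are illustrative assumptions and are not part of Behavior Bricks or the 16PF model itself.

    #include <string>
    #include <vector>

    struct Personality {
        float ruleConsciousness = 0.5f;  // 16PF-style factor, normalised 0..1 (assumed)
        float opennessToChange  = 0.5f;
    };

    struct Option {
        std::string name;
        float shortTermValue = 0.f;  // e.g. value of re-signing the veteran now
        float longTermValue  = 0.f;  // e.g. value of drafting a replacement
        float risk           = 0.f;  // 0..1
    };

    // Score options so that cautious, rule-conscious GMs prefer safe short-term moves,
    // while GMs open to change tolerate risk for long-term upside.
    const Option *choose(const std::vector<Option> &options, const Personality &p) {
        const Option *best = nullptr;
        float bestScore = -1e9f;
        for (const Option &o : options) {
            float score = o.shortTermValue * (1.f - p.opennessToChange)
                        + o.longTermValue  * p.opennessToChange
                        - o.risk * p.ruleConsciousness;
            if (score > bestScore) { bestScore = score; best = &o; }
        }
        return best;
    }

A scorer like this could then sit behind a leaf task in a behavior tree or a state-selection rule in a state machine, so the same personality data drives both.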
  11. Hello, I have designed an AI system for games that replicates cognitive psychology models and theories, so that NPCs and other virtual characters can behave in more human-like and interesting ways. I have built a prototype in the Unity game engine, and it can produce quite complex behaviour, including learning and creativity. I now want to develop it further and am looking for people or organisations to help. I am thinking about how I could present my AI system and what would be a good way of demonstrating it; if you have any suggestions, it would be great to hear them. I have a website that explains my AI system in detail: www.electronicminds.co.uk If you have any comments about the AI system, or know anyone who might be interested in helping to develop it, I would really appreciate hearing from you. Thanks for the help.
  12. MvGxAce

    Stop hacking

    Is there a program to ensure that the game I've created does not get hacked by third-party apps such as Lucky Patcher, Game Guardian, Game Killer, etc.? If so, how do I prevent this obstacle from ruining the game? The game is online-based, but I just recently found out there are hackers. Is there a program I could use to stop this, or is it in the coding? Thank you.
  13. Viir

    2017-12-18.DRTS.play-with-bot.png

    This screenshot shows a game with the new bot and map. Screenshot from the DRTS Devlog at
  14. Following the plan from last week, I expanded the web port of the DRTS game. You can play this new version at https://distilledgames.itch.io/distilled-rts In the upper right corner of the screen, you will now find a button to access the main navigation, and from there you can start new games. Besides the tutorial, you can also play on a larger map against a bot. To make this a bit more of a challenge, I spent a few hours building a basic behavior for the bot. It will spread out, conquer more and more area on the map, and eventually overrun you if you don't act. I also started implementing the map generator, which is used to generate random maps; the new map you see comes from this generator. A good part of the map generation functions is already implemented, but it needs some more work, most importantly to enable symmetrical maps. Symmetrical maps will come with a future release, as will a user interface to customize the map generation process. The screenshot shows a game with the new bot and map.
  15. Below is my preliminary draft design for the AI system within Spellbound. I'm slowly migrating away from scripted expert systems towards a more dynamic and fluid AI system based on machine learning and neural networks. I may be crazy to attempt this, but I find this topic fascinating. I ended up having a mild existential crisis as a result of this. Let me know what you think or if I'm missing something.

Artificial Intelligence:

Objectives: Spellbound is going to be a large open world with many different types of characters, each with different motives and behaviors. We want this open world to feel alive, as if the characters within the world are inhabitants. If we went with pre-scripted behavioral patterns, the characters would be unable to learn and adapt to changes in their environment. It would also be very labor intensive to write specific AI routines for each character. Ideally, we just give every character a self-adapting brain and let them loose to figure out the rest for themselves.

Core Premise: (very dense, take a minute to soak this in) Intelligence is not a fixed intrinsic property of creatures. Intelligence is an emergent property which results directly from the neural topology of a biological brain. True sentience can be created if the neural topology of an intelligent being is replicated with data structures and the correct intelligence model. If intelligence is an emergent property, and emergent properties are simple rule sets working together, then creating intelligence is a matter of discovering the simple rule sets.

Design: Each character has its own individual Artificial Neural Network (ANN). This is a weighted graph which uses reinforcement learning. Throughout the character's lifespan, the graph will become more weighted towards rewarding actions and away from displeasurable ones. Any time an action causes a displeasure to go away or brings a pleasure, that neural pathway will be reinforced. If a neural pathway has not been used in a long time, we reduce its weight. Over time, the creature will learn.

A SIMPLE ANN is just a single cluster of connected neurons. Each neuron is a "node" which is connected to nearby neurons. Each neuron receives inputs and generates outputs. The neural outputs always fire and activate a connected neuron. When a neuron receives enough inputs, it itself fires and activates downstream neurons. So, a SIMPLE ANN receives input and generates outputs which are a reaction to the inputs. At the end of a neural cycle, we have to give response feedback to the ANN. If the neural response was positive, we strengthen the neural pathway by increasing the neural connection weights. If the response was negative, we decrease the weights of the pathway. With enough trial runs, we will find the neural pathway for the given inputs which creates the most positive outcome. The SIMPLE ANN can be considered a single cluster. It can be abstracted into a single node for the purposes of creating a higher layer of connected node networks. When we have multiple source inputs feeding into our neural network cluster and each node is running its most optimal neural pathway depending on the input, we get complex unscripted behavior. A brain is just a very large collection of layered neural nodes connected to each other. We'll call this our "Artificial Brain" (AB).

Motivation, motivators (rule sets):

All creatures have a "desired state" they want to achieve and maintain. Think about food. When you have eaten and are full, your state is at an optimally desired state. When time passes, you become increasingly hungry. Being just a teensy bit hungry may not be enough to compel you to change your current behavior, but as time goes on and your hunger increases, your motivation to eat increases until it supersedes the motives for all other actions. We can create a few very simple rules to create complex, emergent behavior (a rough code sketch of Rules 1-5 follows a bit further below).

Rule 1: Every creature has a desired state they are trying to achieve and maintain. Some desired states may be unachievable (i.e., infinite wealth).
Rule 2: States are changed by performing actions. Actions may change one or more states at once (one-to-many relationship).
Rule 3: "Motive" is created by a delta between current state (CS) and desired state (DS). The greater the delta between CS and DS, the more powerful the motive is. (Is this a linear graph or an exponential graph?)
Rule 4: "Relief" is the sum of all deltas between CS and DS provided by an action.
Rule 5: A creature can have multiple competing motives. The creature will choose the action which provides the greatest amount of relief.
Rule 6: Some actions are a means to an end and can be chained together (action chains). If you're hungry and the food is 50 feet away from you, you can't just start eating. You first must move to the food to get within interaction radius, then eat it.

Q: How do we create an action chain?
Q: How do we know that the action chain will result in relief?
A: We generally know what desired result we want, so we work backwards. What action causes the desired result (DR)? Action G does (learned from experience). How do we perform Action G? We have to perform Action D, which causes Action G. How do we cause Action D? We perform Action A, which causes Action D. Therefore, G <- D <- A; so we should do A -> D -> G -> DR. Back propagation may be the contemporary approach to changing graph weights, but it's backwards.

Q: How does long-term planning work?
Q: What is a conceptual idea? How can it be represented?
A: A conceptual idea is a set of nodes which is abstracted to become a single node?

Motivators: (Why we do the things we do)

• Hunger
• Body Temperature
• Wealth
• Knowledge
• Power
• Social Validation
• Sex
• Love/Compassion
• Anger/Hatred
• Pain Relief
• Fear
• Virtues, Vices & Ethics

Notice that all of these motivators are actually psychological motivators. That means they happen in the head of the agent rather than being physical motivators. You can be physically hungry, but psychologically, you can ignore the pains of hunger. The psychological thresholds would be different per agent. Therefore, all of these motivators belong in the "brain" of the character rather than all being attributes of an agent's physical body. Hunger and body temperature would be physical attributes, but they would also be "psychological tolerances".

Psychological Tolerances:

{motivator} => 0 [------------|-----------o----|----] 100
                 A            B           C    D    E

A - This is the lowest possible bound for the motivator.
B - This is the lower threshold point for the motivator. If the current state falls below this value, the desired state begins to affect actions.
C - This is the current state of the motivator.
D - This is the upper threshold point for the motivator. If the current state exceeds this value, the desired state begins to affect actions.
E - This is the highest bound for the motivator.

The A & E bound values are fixed and universal. The B and D threshold values vary by creature. Where you place them can make huge differences in behavior.
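A minimal code sketch of Rules 1-5 above, assuming each state is a single number, "relief" is the reduction in |CS - DS| an action provides, and the creature greedily picks the action with the greatest total relief; the state names, numbers and struct names are illustrative, not from the design itself.

    #include <cmath>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    struct Action {
        std::string name;
        std::map<std::string, float> effects;  // how the action shifts each state, e.g. {"hunger", -50}
    };

    struct Creature {
        std::map<std::string, float> current;  // current state (CS)
        std::map<std::string, float> desired;  // desired state (DS)

        // Rule 4: relief = sum of reductions in |CS - DS| provided by the action.
        float relief(const Action &a) const {
            float total = 0.f;
            for (const auto &e : a.effects) {
                float before = std::abs(current.at(e.first) - desired.at(e.first));
                float after  = std::abs(current.at(e.first) + e.second - desired.at(e.first));
                total += before - after;
            }
            return total;
        }

        // Rule 5: among competing motives, pick the action giving the greatest relief.
        const Action *choose(const std::vector<Action> &actions) const {
            const Action *best = nullptr;
            float bestRelief = 0.f;
            for (const Action &a : actions) {
                float r = relief(a);
                if (r > bestRelief) { bestRelief = r; best = &a; }
            }
            return best;
        }
    };

    int main() {
        Creature zombie{{{"hunger", 80.f}}, {{"hunger", 0.f}}};
        std::vector<Action> actions{{"eat flesh", {{"hunger", -50.f}}},
                                    {"wander",    {{"hunger",  +5.f}}}};
        if (const Action *a = zombie.choose(actions))
            std::cout << "chosen: " << a->name << "\n";  // prints "chosen: eat flesh"
    }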
Psychological Profiles:

We can assign a class of creatures a list of psychological tolerances and set their current state to some preset values. The behavioral decisions and subsequent actions will be driven by the psychological profile, based upon the actions which create the greatest sum of psychological relief. The psychological profile will be the inputs into an artificial neural network, and the outputs will be the range of actions which can be performed by the agent. Ideally, the psychological profile state will drive the ANN, which drives actions, which changes the state of the psychological profile, which creates a feedback loop of reinforcement learning.

Final Result:

We do not program scripted behaviors; we assign psychological profiles and lists of actions. Characters will have psychological states which drive their behavioral patterns. Simply by tweaking the psychological desires of a creature, we can create emergent behavior resembling intelligence. A zombie would always be hungry, and feasting on flesh would provide temporary relief. A goblin would have a strong compulsion for wealth, so they'd be very motivated to perform actions which ultimately result in gold. Rather than spending lots of time writing expert-systems-style AI, we create a machine-learning type of AI.

Challenges:

I have never created a working artificial-neural-network type of AI.

Experimental research and development:

The following notes are crazy talk which may or may not be feasible. They may need more investigation to measure their merit as viable approaches to AI.

Learning by Observation: Our intelligent character doesn't necessarily have to perform an action themselves to learn about its consequences (reward vs regret). If they watch another character perform an action and receive a reward, the intelligent character creates a connection between an action and its consequence.

Exploration Learning: A very important component of getting a simple ANN to work most efficiently is to get the neurons to find and establish new connections with other neurons. If we have a neural connection topology which always results in a negative response, we'll want to generate a new connection at random to a nearby neuron.

Exploration Scheduling: When all other paths are terrible, the new path becomes better and we "try it out" because there's nothing better. If the new pathway happens to result in a positive outcome, suddenly it gets much stronger. This is how our simple ANN discovers new un