Search the Community

Showing results for tags 'Behavior'.

Found 33 results

  1. Have you ever played Starcraft, or games like it? Notice how, when you tell a group of air units to attack a target, they bunch up and move towards it, but once they're in range, they spread out around the target. They don't all pile onto the same spot, even though the game allows them to pass through each other. How would you approach this problem? I went with an "occupation grid": it's just a low-resolution 2D boolean array (640x480). Each ship (my game only has ships) has one and updates it every frame. When attacking, ships refer to the grid to figure out where they should move. It works pretty well: the ships stay nicely spread out and don't all occupy the same space looking like they merged into one ship. The problem is that this approach is pretty inefficient: just updating every ship's grid sometimes takes 24-31% of the CPU time, and running the Bresenham line-drawing algorithm for every ship is the culprit. I'm thinking of having one shared grid for all the ships on a team, and instead of a simple boolean 2D array, letting each square of the grid keep track of every ship that is using it, via a data structure with a linked list of references to those ships. That way, I wouldn't have to update a separate grid for each and every ship. Maybe the solution is a much simpler grid, minus the Bresenham line drawing: just a bunch of squares, and each ship tries to stay in its own square. Maybe larger ships could be allowed to occupy more than one square. Another solution might be evading me completely, one that doesn't involve grids at all. Any thoughts?
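    For illustration, the shared-grid idea described above might look something like this: one grid per team, where each cell tracks its occupants, so there is no per-ship grid and no per-ship line-drawing pass. A minimal C# sketch; the Ship type and cell size are assumptions, not from the post:

        using System.Collections.Generic;

        class Ship { public float X, Y; }

        // One shared grid per team: each cell keeps a list of the ships inside it.
        class OccupationGrid
        {
            readonly List<Ship>[,] cells;
            readonly float cellSize;

            public OccupationGrid(int width, int height, float cellSize)
            {
                this.cellSize = cellSize;
                cells = new List<Ship>[width, height];
                for (int x = 0; x < width; x++)
                    for (int y = 0; y < height; y++)
                        cells[x, y] = new List<Ship>();
            }

            // Called once per ship per frame: O(1) instead of a Bresenham pass.
            public void Insert(Ship s) =>
                cells[(int)(s.X / cellSize), (int)(s.Y / cellSize)].Add(s);

            public bool IsFree(int cx, int cy) => cells[cx, cy].Count == 0;

            public void Clear()
            {
                foreach (var cell in cells) cell.Clear();
            }
        }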
  2. worldzpoc

    ZPOC Blog #3

    GENERATORS / MOUSE TRAPS: Mouse trap visuals are functional and show a dead little rodent that your units can eat raw. Now we'll just need to have them haul the raw food back to the storage area so it can be cooked. Delish! Units are now smart enough to go to the rain barrels for water when they get thirsty. Watch out, Einstein!

    COMBAT ANIMATIONS: Sometimes our units just can't put their battles behind them, and keep on swinging their weapons as they wander, eat, and even in their sleep. At other times they just sort of spaz out and do the jitterbug in place for a second before continuing the attack. We are working in the code and Mecanim to troubleshoot these issues.

    CONSTRUCTION: We've come to a compromise with our wayward units. We're not ready to update the Task Dispatcher just yet, so they'll have to continue to follow our commands. However, we're going to split the Construct Resource behavior into separate parts to allow for more flexibility. Hopefully our units will demonstrate more obedience when this is complete. Either way, this will make the transition to the new Task Dispatcher go more smoothly whenever we do get to it.

    TL;DR:
    - Mouse traps show a dead rodent; hauling still needs to be enabled
    - Units drink from rain barrels
    - Working on combat glitches
    - Working on fixing a bug where units get stuck in the Construct Resource behavior when proxies are built before you have the resources
  3. I wanted to get some thoughts on player character collisions in MMOs for my project. As I recall, the original EverQuest had them, while WoW didn't. I also remember Age of Conan had them, but I'm guessing they were done on the server, which was highly annoying because you could collide with things that were unseen. I'm not sure how EQ did them, but I don't remember having this problem in EQ; then again, it was a long time ago. My general idea was to do player collisions on the client side. A collision would only affect a given client's player character as seen from that client. On the server, characters would pass right through each other. This means that, because of lag, two players might see different things during a collision, or one may think there was a collision while the other doesn't, but I figure that's less annoying than colliding with something you can't see, and everything should resolve at some point. There is one more case which I can see being a bit problematic, but I think there might be a solution (actually my friend suggested it). Suppose two characters run to the same spot at the same time. At the time they reached the spot it was unoccupied, but once the server updates each other's position, they both occupy the same space. In this case, after the update, a force vector is applied to each character that tries to push them away from each other. The vector is applied by each client to its own player. So basically, player-to-player collisions aren't necessarily absolute. I was also thinking you could generalize this and allow players to push each other: when two players collide, their bounding capsule would be slightly smaller than the radius where the force vector comes into play. So if you stood next to another player and pushed, he would move. By the vector rules he is pushing back on you (or rather your own client is pushing), but since your movement vector can overcome the collision force vector, only he moves unless he decides to push back. You could add a mass calculation to this, so larger characters could push around smaller characters more easily. Of course there are griefing aspects to this, but I was thinking I would handle that with a reputation/faction system. Any thoughts?
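    The push-apart idea the friend suggested might be sketched like this: each client, on its own character only, applies a force away from any overlapping character after a server update. A minimal sketch using System.Numerics; the radius and strength parameters are assumptions:

        using System.Numerics;

        // Runs on the local client only; 'selfPos' is the locally controlled character.
        // If a server update leaves two characters overlapping, push the local one
        // out; the other player's client does the same for theirs.
        static Vector3 SeparationForce(Vector3 selfPos, Vector3 otherPos,
                                       float overlapRadius, float pushStrength)
        {
            Vector3 away = selfPos - otherPos;
            float dist = away.Length();
            if (dist >= overlapRadius || dist == 0f)
                return Vector3.Zero;          // not overlapping (or exactly stacked)

            // Push harder the deeper the overlap.
            float overlap = (overlapRadius - dist) / overlapRadius;
            return Vector3.Normalize(away) * pushStrength * overlap;
        }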
  4. Steering behaviors are used to maneuver AI agents in a 3D environment. With these behaviors, agents are able to react better to changes in their environment. While the navigation mesh algorithm is ideal for planning a path from one point to another, it can't really deal with dynamic objects such as other agents. This is where steering behaviors can help.

    What are steering behaviors? Steering behaviors are an amalgam of different behaviors that are used to organically manage the movement of an AI agent. For example, behaviors such as obstacle avoidance, pursuit, and group cohesion are all steering behaviors... Steering behaviors are usually applied in a 2D plane: that is sufficient, and easier to implement and understand. (However, I can think of some use cases that require the behaviors to be in 3D, like games where the agents fly.) One of the most important of all steering behaviors is the seeking behavior. We also added the arriving behavior to make the agent's movement a whole lot more organic. Steering behaviors are described in this paper.

    What is the seeking behavior? The seeking behavior is the idea that an AI agent "seeks" to reach a certain velocity (a vector). To begin, we'll need two things: an initial velocity (a vector) and a desired velocity (also a vector). First, we need to find the velocity needed for our agent to reach a desired point. This is usually the desired position minus the current position of the agent: \(\overrightarrow{d} = (x_{t},y_{t},z_{t}) - (x_{a},y_{a},z_{a})\). Here, a represents our agent, t our target, and d is the desired velocity. Secondly, we must also find the agent's current velocity, which is usually already available in most game engines. Next, we need the vector difference between the desired velocity and the agent's current velocity. This literally gives us the vector that, when added to the agent's current velocity, yields the desired velocity. We will call it the "steering velocity": \(\overrightarrow{s} = \overrightarrow{d} - \overrightarrow{c}\). Here, s is our steering velocity, c is the agent's current velocity and d is the desired velocity. After that, we truncate our steering velocity to a length called the "steering force". Finally, we simply add the steering velocity to the agent's current velocity.

        // truncateVectorLocal truncates a vector to a given length
        Vector3f currentDirection = aiAgentMovementControl.getWalkDirection();
        Vector3f wantedDirection = targetPosition.subtract(aiAgent.getWorldTranslation()).normalizeLocal().setY(0).multLocal(maxSpeed);
        // The steering vector takes us from our current direction towards the wanted one
        Vector3f steeringVector = MathExt.truncateVectorLocal(wantedDirection.subtract(currentDirection), m_steeringForce);
        // Apply the (mass-scaled) steering vector and clamp the result to our maximum speed
        Vector3f newCurrentDirection = MathExt.truncateVectorLocal(currentDirection.addLocal(steeringVector.divideLocal(m_mass)), maxSpeed);

    This computation is done frame by frame: the steering velocity becomes weaker and weaker as the agent's current velocity approaches the desired one, creating a kind of interpolation curve.

    What is the arriving behavior? The arriving behavior is the idea that an AI agent that "arrives" near its destination will gradually slow down until it gets there. We already have the list of waypoints returned by the navigation mesh algorithm that the agent must cross to reach its destination. When it has passed the second-to-last waypoint, we activate the arriving behavior.

    When the behavior is active, we check the distance between the destination and the agent's current position and change its maximum speed accordingly.

        // This is the initial maxSpeed
        float maxSpeed = unitMovementControl.getMoveSpeed();
        // Distance to the last waypoint
        float distance = aiAgent.getWorldTranslation().distance(nextWaypoint.getCenter());
        float rampedSpeed = aiAgentMovementControl.getMoveSpeed() * (distance / slowingDistanceThreshold);
        float clippedSpeed = Math.min(rampedSpeed, aiAgentMovementControl.getMoveSpeed());
        // This is our new maxSpeed
        maxSpeed = clippedSpeed;

    Essentially, we slow the agent down until it reaches its destination.

    The future? As I'm writing this, we've chosen to split up the steering behaviors and implement only the bare necessities individually, as we have no empirical evidence that we'll need all of them. Therefore, we only implemented the seeking and arriving behaviors, postponing the rest to an indeterminate time in the future. So, when (or if) we need them, we'll already have a solid and stable foundation to build upon.

    More links:
    - Understanding Steering Behaviors: Seek
    - Steering Behaviors · libgdx/gdx-ai Wiki
    - Understanding Steering Behaviors: Collision Avoidance
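    For reference, the truncateVectorLocal helper assumed by the snippets above just clamps a vector's length. The original helper isn't shown in the post, but an equivalent in C# could look like this (a sketch, using System.Numerics rather than the post's jMonkeyEngine types):

        using System.Numerics;

        // Clamp a vector to a maximum length, leaving shorter vectors untouched
        // (the role played by the post's truncateVectorLocal helper).
        static Vector3 Truncate(Vector3 v, float maxLength)
        {
            float len = v.Length();
            return (len > maxLength && len > 0f) ? v * (maxLength / len) : v;
        }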
  5. As DMarket platform development continues, we would like to share a few case studies regarding the newest functionality on the platform. With these case studies we would like to illuminate our development process, user requirements gathering and analysis, and much more. The first case study we're going to share is "DMarket Wallet Development": how, when and why we decided to implement functionality which improved virtual item and DMarket Coin collection and transfer. DMarket cares about every user, no matter how big or small the user group is. That's why we recently updated our virtual item purchase rules, bringing a brand new "DMarket Wallet" feature to our users. So let's take a retrospective look and find out what challenges this feature brought to the DMarket team and how those challenges were met.

    DMarket and Blockchain: Virtual Items Trading Rules. Within the first major release of the DMarket platform, we provided you with a wide range of possibilities and options: Steam account connection within the user profile, confirmation of account and device ownership via email for enhanced security, DMarket Coins and DMarket Tokens exchanging, and transactions with intermediaries on blockchain within our very own blockchain system called "Blockchain Explorer". And well, regarding blockchain... While it has totally proved itself as a working solution, we were having some issues with malefactors, as many of you may already know. DMarket specialists conducted an investigation, which resulted in a perfect solution: we found out that a few users had created bots to buy our Founder's Mark, a limited special edition memorabilia commemorating the launch of the platform, at lower prices and then sell them at higher prices. Sure thing, that left no chance for regular users. A month ago we fixed the issue, to our great relief. We received real feedback from the community, a real proof of concept. The whole DMarket ecosystem turned out to be truly resilient, proving all our detractors wrong.

    And while we got our proof, we also studied how users feel about the platform UX, since blockchain requires additional effort when buying or selling an item. With our first release of the Demo platform, we let users sign transactions with a private key from their wallet. In terms of user experience, that practice wasn't too good. Just think about it: you had to enter the private key each time you wanted to buy or sell something. Every transaction required a lot of actions from the user's side, which is unacceptable for a great and user-friendly product like ours. That's why we decided to move away from that approach and create a single unified "wallet" on the DMarket side, storing all the DMarket Coins and virtual items on our side and letting users buy or sell stuff with a few clicks instead of the previous lengthy process. In other words, every user received a public key which serves as a destination address, while private keys were held on the DMarket side in order to avoid signing each transaction. This improved usability, and most of our users were satisfied with the update. But not all of them...

    You Can't Make Everyone Happy... Can You? By removing the transaction signing requirement we made most of our users happy. Of course, within a large number of happy people, we can always find those who are worried about using a wallet for which they hold only the public key. When you don't control the private key yourself, it can be a little unsettling.

    Sure, DMarket is a trusted company, but there are people who can't trust even themselves sometimes. So what were we going to do? Ignore them? Roll back to the previous way of buying virtual items and coins? No! We decided to go another way. Within the briefest timeline, the DMarket team decided to provide a completely new feature in Blockchain Explorer: wallet creation functionality. With it, you can create a wallet in two clicks, getting both private and public keys and thereby ensuring the safety of your items and coins. Basically, we separated wallets on the marketplace from wallets on our blockchain in order to keep the UX great while reassuring a small part of our users with the option to keep everything in a separate wallet. You can go shopping on DMarket with no additional effort of signing every transaction, and at the same time you are free to transfer all your goods to your very own wallet anytime you feel the need. Isn't it cool?

    Outcome. After implementing the separate DMarket wallet creation feature, we killed two birds with one stone and made everyone satisfied, though it wasn't easy given the very limited amount of time we had. So if you need it, you can try it. Moreover, creating a DMarket wallet within Blockchain Explorer lets you manage your wallet even on mobile devices, because along with the private and public keys you also get a 12-word mnemonic phrase to restore your wallet on any mobile device, from smartphone to tablet. Wow, but that's another story: a story about the DMarket Wallet application, which has recently become available for Android users on Google Play. Stay tuned for more case studies, and don't forget to check out our website and gain firsthand experience with in-game items trading!
  6. Monty Kiani

    Staller gameplay idea

    This idea comes from the concept of making a game that specifically fills the time of a person in travel, when the person might not enjoy the adrenaline factor that accompanies many games. I couldn't come up with anything at first, so I thought about the general vibe I wanted and an action people do in general that emulates it. I found that it mostly happens, amongst other places no doubt, when people go through the messages panel on their devices: a plane traveler or businessperson, perfectly calm for a minute, eliminating messages; that moment, extended. It's a kind of process of elimination. I don't know if this idea is common knowledge, but I couldn't find anything, and I'd love to see games based on this.
  7. Hello. Without going into any details, I am looking for articles, blogs, or advice about city-building and RTS games in general. I tried to search for these on my own, but would like your input as well. I want to make a very simple version of a game like Banished or Kingdoms and Castles, where I could place two or so types of buildings, make farms, and cut trees for resources while controlling a single worker. I have some trouble understanding how these games work on the back end: how the various data about the map and objects can be stored, how grids work, how to implement a work system (like a little cube (human) walking to a tree and cutting it), and so on. I am otherwise pretty confident in my programming capabilities for such a game. Sorry if I make any mistakes; English is not my native language. Thank you in advance.
  8. Wow, what a wild game by GalaXa Games Entertainment Interactive. Play now... it's really fun, but IF you have epilepsy then don't play. It does not feature flashing pictures, but there is lots of animated stuff that might get ya. Anyway: 4 levels, 2 endings, insane action, BY INFERNAL. Please play it, right nao! Also, nice MIDI music composed by me is in the game. demons.rar
  9. Armantium

    Dismemberment and gore

    Killing Floor 2 has the best dismemberment/gore system by far. No other game comes even close: how enemies react, segmented body parts depending on where you shoot them, how dead bodies react, etc. Is there an easy-to-set-up system for Unreal Engine 4 that still offers the same visceral physicality, even with scaled-down features?
  10. This is an extract from Practical Game AI Programming from Packt. Click here to download the book for free! When humans play games (like chess, for example) they play differently every time. For a game developer this would be impossible to replicate. So, if writing an almost infinite number of possibilities isn't a viable solution, game developers have needed to think differently. That's where AI comes in. But while AI might seem like a very new phenomenon in the wider public consciousness, it has actually been part of the games industry for decades.

    Enemy AI in the 1970s. Single-player games with AI enemies started to appear as early as the 1970s. Very quickly, many games were redefining the standards of what constitutes game AI. Some of those examples were released for arcade machines, such as Speed Race from Taito (a racing video game), and Qwak (a duck hunting game using a light gun) and Pursuit (an aircraft fighter), both from Atari. Other notable examples are the text-based games released for the first personal computers, such as Hunt the Wumpus and Star Trek, which also had AI enemies. What made those games so enjoyable was precisely that their AI enemies didn't react like any before them: they had random elements mixed with the traditional stored patterns, creating games that felt unpredictable to play. That was only possible thanks to the incorporation of microprocessors, which expanded the capabilities of programmers at the time. Space Invaders brought movement patterns, and Galaxian improved on them and added more variety, making the AI even more complex. Pac-Man later brought movement patterns to the maze genre; the AI design in Pac-Man was arguably as influential as the game itself. After that, Karate Champ introduced the first AI fighting character and Dragon Quest introduced the tactical system for the RPG genre. Over the years, the list of games that have used artificial intelligence to create unique game concepts has expanded. All of that has essentially come from a single question: how can we make a computer capable of beating a human in a game?

    All of the games mentioned used the same method for the AI, called a finite-state machine (FSM). Here, the programmer inputs all the behaviors that are necessary for the computer to challenge the player. The programmer defines exactly how the computer should behave on different occasions in order to move, avoid, attack, or perform any other behavior to challenge the player, and that method is used even in the latest big-budget games.

    From simple to smart and human-like AI. One of the greatest challenges when it comes to building intelligence into games is adapting the AI's movement and behavior in relation to what the player is currently doing, or will do. This can become very complex if the programmer wants to extend the possibilities of the AI's decisions. It's a huge task for the programmer because it's necessary to determine what the player can do and how the AI will react to each action of the player, and that takes a lot of CPU power. To overcome that problem, programmers began to mix possibility maps with probabilities and other techniques that let the AI decide for itself how it should react according to the player's actions. These factors are important to consider when developing an AI that elevates a game's quality. Games continued to evolve and players became even more demanding.
    To deliver games that met player expectations, programmers had to write more states for each character, creating new and more engaging in-game enemies.

    Metal Gear Solid and the evolution of game AI. You can start to see now how technological developments are closely connected to the development of new game genres. A great example is Metal Gear Solid; by implementing stealth elements, it moved beyond the traditional shooting genre. Of course, those elements couldn't be fully explored as Hideo Kojima probably intended because of the hardware limitations at the time. However, jumping forward from the third to the fifth generation of consoles, Konami and Hideo Kojima presented the same title, only with much greater complexity. Once the necessary computing power was there, the stage was set for Metal Gear Solid to redefine modern gaming.

    Visual and audio awareness. One of the most important but often underrated elements in the development of Metal Gear Solid was the use of visual and audio awareness for the enemy AI. It was ultimately this feature that established the genre we know today as the stealth game. Yes, the game uses pathfinding and an FSM, features already established in the industry, but to create something new the developers took advantage of some of the most cutting-edge technological innovations of the time. The influence of these features today extends into a range of genres, from sports to racing. After that huge step for game design, developers still faced other problems. Or, more specifically, these new possibilities brought even more problems: the AI still didn't react like a real person, and many other elements were required to make the game feel more realistic.

    Sports games. This is particularly true when we talk about sports games. After all, interaction with the player is not the only thing we need to care about; most sports involve multiple players, all of whom need to be 'realistic' for a sports game to work well. With this problem in mind, developers started to improve the individual behaviors of each character, not only for the AI that was playing against the player but also for the AI that was playing alongside them. Once again, finite-state machines made up a crucial part of the artificial intelligence, but the decisive element that helped cultivate greater realism in the sports genre was anticipation and awareness. The computer needed to calculate, for example, what the player was doing and where the ball was going, all while making the 'team' work together with some semblance of tactical alignment. By combining the new features used in stealth games with a vast number of characters on the same screen, it was possible to develop a significant level of realism in sports games. This is a good example of how the same technologies allow for development across very different types of games.

    How AI enables a more immersive gaming experience. A final useful example of how game realism depends on great AI is F.E.A.R., developed by Monolith Productions. What made this game so special in terms of artificial intelligence was the dialog between enemy characters. While this wasn't strictly a technological improvement, it was something that helped to showcase all of the development work that was built into the characters' AI. This is crucial because if the AI doesn't say it, it didn't happen. Ultimately, this is about taking a further step towards enhanced realism. In the case of F.E.A.R., the dialog transforms how you see in-game characters.
    When the AI detects the player for the first time, it shouts that it has found the player; when the AI loses sight of the player, it says just that. When a group of AI-controlled characters is trying to ambush the player, they talk about it. The game, then, almost seems to be plotting against the person playing it. This is essential because it brings a whole new dimension to gaming. Ultimately, it opens up possibilities for much richer storytelling and more complex gameplay, which all of us, as gamers, have come to expect today.
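    As a rough illustration of the FSM approach the extract describes (not code from the book; states and thresholds are invented), an enemy's states and transitions might be modeled like this in C#:

        // A minimal finite-state machine for an enemy: the programmer
        // enumerates exactly how the enemy reacts in each state.
        enum EnemyState { Patrol, Chase, Attack, Flee }

        class EnemyFsm
        {
            public EnemyState State { get; private set; } = EnemyState.Patrol;

            public void Update(bool seesPlayer, float distanceToPlayer, float health)
            {
                switch (State)
                {
                    case EnemyState.Patrol:
                        if (seesPlayer) State = EnemyState.Chase;
                        break;
                    case EnemyState.Chase:
                        if (!seesPlayer) State = EnemyState.Patrol;
                        else if (distanceToPlayer < 2f) State = EnemyState.Attack;
                        break;
                    case EnemyState.Attack:
                        if (health < 20f) State = EnemyState.Flee;
                        else if (distanceToPlayer >= 2f) State = EnemyState.Chase;
                        break;
                    case EnemyState.Flee:
                        if (!seesPlayer) State = EnemyState.Patrol;
                        break;
                }
            }
        }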
  11. Solved: I didn't think clearly and realized I can't just compare the cross product with (0,0,0). Fixed by doing this:

        float3 originVector = float3(0.0, 0.0, 0.0) - v1.xyz;
        if (dot(cross(e1, e2).xyz, originVector) > 0.0)
        {
            //...
        }

    I'm trying to write a geometry shader that does backface culling. (Don't ask me why.) What I'm doing is checking the cross product of two edges of the triangle (in NDC space) and checking if it's facing (0,0,0). The problem is that when I compile I get an error; I guess this is because, if the triangle isn't facing us, I don't append any verts to the stream. I always assumed maxvertexcount implied I can emit as few verts as I like, but I suppose not. How do I get around this? Shader below:

        struct GS_IN_OUT
        {
            float4 Pos : SV_POSITION;
            float4 PosW : POSITION;
            float4 NorW : NORMAL;
            float2 UV : TEXCOORD;
        };

        [maxvertexcount(3)]
        void GS_main(triangle GS_IN_OUT input[3], inout TriangleStream< GS_IN_OUT > output)
        {
            // Check for backface
            float4 v1, v2, v3;
            v1 = input[0].Pos;
            v2 = input[1].Pos;
            v3 = input[2].Pos;
            float4 e1, e2;
            e1 = v1 - v2;
            e2 = v1 - v3;
            // Note: dotting against the zero vector always yields 0; this is
            // the bug the "Solved" note at the top fixes with originVector.
            if (dot(cross(e1, e2).xyz, float3(0.0, 0.0, 0.0)) > 0.0)
            {
                // Face is facing us; let the triangle through
                for (uint i = 0; i < 3; i++)
                {
                    GS_IN_OUT element;
                    element = input[i];
                    output.Append(element);
                }
            }
        }
  12. Viir

    2017-12-18.DRTS.play-with-bot.png

    This screenshot shows a game with the new bot and map. Screenshot from the DRTS Devlog at
  13. Following the plan from last week, I expanded the web port of the DRTS game. You can play this new version at https://distilledgames.itch.io/distilled-rts In the upper right corner of the screen you will now find a button to access the main navigation, and from there you can start new games. Besides the tutorial, you can also play on a larger map against a bot. To make this a bit more of a challenge, I spent a few hours building basic behavior for the bot: it will spread out, conquer more and more area on the map, and eventually overrun you if you don't act. I also started implementing the map generator, which is used to generate random maps; the new map you see comes from this generator. A good part of the map generation functions are already implemented, but it needs some more work, most importantly to enable symmetrical maps. Symmetrical maps will come with a future release, as will a user interface to customize the map generation process. The screenshot shows a game with the new bot and map.
  14. (Note to mods: could "Input" or "Peripherals" be a good tag to add to the tag list? Just a thought.) Hey, I'm currently working out which keys on a keyboard a user can rebind their actions to. The trick is that I use a Norwegian keyboard, so it's not obvious which keys correspond to the actual C#/XNA Keys enum values. Are there any keys from the XNA Keys enum that, in your opinion, I've neglected to add? I don't need all Keys to be bindable, only the most commonly used keys on a keyboard. Thanks. https://pastebin.com/n1cz8Y0u
  15. Hello, I'm a trainee software developer and in my free time I try to do various things. I decided I wanted to figure out how "Dual Contouring" works, but I haven't been successful yet. I could find some implementations in code which are very hard to follow, and some papers "explaining" it written in a way that makes it hard for me to even understand what the input and output are. All I know is that it is used to make a surface/mesh out of voxels. Is there an explanation someone can understand without having studied math or computer science? Part of the problem is that I can't even translate most of the terms into German (like "Hermite data"), nor can I find an explanation of them that I understand. I know how Marching Cubes works, but I just can't find anything on Dual Contouring. I also don't quite get the point of octrees. As far as I'm aware, an octree is a set of data organized like a tree where each element points to 8 more elements. I don't get how that could be helpful in an infinite 3D world, and why I shouldn't just use a list of coordinates with zeros and ones for dirt/air or something like that. Thanks, a clueless trainee ^^
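    On the octree question above: the win over a flat list of voxels is that uniform regions collapse into a single node, so empty space costs almost nothing. A toy C# sketch (field names are illustrative, not from any particular voxel engine; assumes power-of-two region sizes):

        // A toy sparse octree node for a voxel volume. A region that is entirely
        // air or entirely dirt is stored as one leaf instead of millions of
        // per-voxel entries; only mixed regions get subdivided into 8 children.
        class OctreeNode
        {
            public bool IsLeaf;           // true if the whole region is uniform
            public byte Material;         // e.g. 0 = air, 1 = dirt (valid when IsLeaf)
            public OctreeNode[] Children; // exactly 8 children when subdivided

            public byte Sample(int x, int y, int z, int size)
            {
                if (IsLeaf) return Material;
                int half = size / 2;
                // Pick the child octant containing (x, y, z).
                int index = (x >= half ? 1 : 0)
                          | (y >= half ? 2 : 0)
                          | (z >= half ? 4 : 0);
                return Children[index].Sample(x % half, y % half, z % half, half);
            }
        }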
  16. I'm building an American football simulation (think Football Manager), and am wondering about the best way of implementing AI based on various inputs which are weighted by the personality of the NPC. I have a version of Raymond Cattell's 16PF model built into the game so I can use various personality traits to help guide decisions. I am going to use this extensively, so I need it to be both flexible and able to handle many different scenarios. For instance, a GM has to be able to decide whether he wants to re-sign a veteran player for big dollars or try to replace him through the draft. GMs need a coherent system not only for making a single decision in a vacuum, but also for making decisions as part of a "plan" for how to build the team; it makes no sense for a GM to make decisions that don't align with each other in terms of the big picture. I want the decisions to take in a wide range of variables/personality traits. There is no NPC per se: there aren't going to be any animations connected to this, no shooting, following, etc., just decisions which trigger actions. In a situation like a draft, there is one team "on the clock" and 31 other teams behind the scenes trying to decide whether they want to trade up, trade down, etc., which can change based on things like who just got picked, the drop-off between the highest-graded player at a position/group and the next-highest-graded player in that position/next group, whether a player lasts past a certain point, and so on. All of these things need to be going on simultaneously for all the teams; obviously the team on the clock has to decide whether it wants to make a pick or take any of the offers to move down in the draft from other teams that might want to move up. So I am planning on making use of something called Behavior Bricks by Padaone Games (bb.padaonegames.com), which is a behavior tree implementation, but in conversations with others who have worked on AI in major projects like this (EA Sports) they said to combine this with a state machine. My question is: would I be able to do this using just Behavior Bricks, or would I need to build the state machine separately? Is there something else already created for this type of purpose that I could take advantage of?
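    One way to read the trait-weighted decision idea above: score each candidate action against the 16PF-style traits and pick the highest scorer. A minimal hedged sketch; the trait names, weights, and options are invented for illustration, not from the 16PF model or Behavior Bricks:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Score candidate GM decisions by weighting inputs with personality traits.
        class GmBrain
        {
            // 0..1 traits, e.g. derived from a 16PF-style personality model.
            public Dictionary<string, float> Traits = new()
            {
                ["RiskTolerance"] = 0.3f,
                ["Loyalty"]       = 0.8f,
            };

            public string Decide()
            {
                var options = new (string Name, Func<GmBrain, float> Score)[]
                {
                    // Loyal GMs favor keeping the veteran; risk-tolerant GMs
                    // favor gambling on the draft.
                    ("ReSignVeteran", b => 0.5f + 0.4f * b.Traits["Loyalty"]
                                                - 0.2f * b.Traits["RiskTolerance"]),
                    ("DraftReplacement", b => 0.4f + 0.5f * b.Traits["RiskTolerance"]),
                };
                return options.OrderByDescending(o => o.Score(this)).First().Name;
            }
        }

    The same scoring shape extends to "plan" coherence by adding a term that rewards options aligned with the team-building strategy already chosen.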
  17. MvGxAce

    Stop hacking

    Is there a program to ensure that the game I've created does not get hacked by third-party apps such as Lucky Patcher, Game Guardian, Game Killer, etc.? If so, how do I prevent this obstacle from ruining the game? The game is online based, but I just recently found out there are hackers. Is there a program I could use to stop this, or is it in the coding? Thank you.
  18. I've just posted a pre-release edition of Curvature, my utility-theory AI design tool. Curvature provides a complete end-to-end solution for designing utility-based AI agents: you can specify the knowledge representation for the world, filter the knowledge into "considerations" which affect how an agent will make decisions and choose behaviors, and then plop a few agents into a dummy world and watch them run around and interact with things. Preview 1 (Core) contains the base functionality of the tool but leaves a lot of areas unpolished. My goal with this preview is to get feedback on how well the tool works as a general concept, and to start refining the actual UI into something more attractive and fluid. The preview kit contains a data file with a very rudimentary "scenario" set up for you, so you can see how things work without cutting through a bunch of clutter. Give it a test drive, let me know here or via the issue tracker on GitHub what you like and don't like, and have fun!
  19. Below is my preliminary draft design for the AI system within Spellbound. I'm slowly migrating away from scripted expert systems towards a more dynamic and fluid AI system based on machine learning and neural networks. I may be crazy to attempt this, but I find this topic fascinating. I ended up having a mild existential crisis as a result of it. Let me know what you think or if I'm missing something.

    Artificial Intelligence

    Objectives: Spellbound is going to be a large open world with many different types of characters, each with different motives and behaviors. We want this open world to feel alive, as if the characters within the world really are inhabitants. If we went with pre-scripted behavioral patterns, the characters would be unable to learn and adapt to changes in their environment. It would also be very labor intensive to write specific AI routines for each character. Ideally, we just give every character a self-adapting brain and let them loose to figure out the rest for themselves.

    Core Premise: (very dense, take a minute to soak this in) Intelligence is not a fixed intrinsic property of creatures. Intelligence is an emergent property which results directly from the neural topology of a biological brain. True sentience could be created if the neural topology of an intelligent being were replicated with data structures and the correct intelligence model. If intelligence is an emergent property, and emergent properties are simple rule sets working together, then creating intelligence is a matter of discovering the simple rule sets.

    Design: Each character has its own individual Artificial Neural Network (ANN). This is a weighted graph which uses reinforcement learning. Throughout the character's lifespan, the graph will become more heavily weighted towards rewarding actions and away from displeasurable ones. Any time an action makes a displeasure go away or brings a pleasure, that neural pathway will be reinforced. If a neural pathway has not been used in a long time, we reduce its weight. Over time, the creature will learn.

    A simple ANN is just a single cluster of connected neurons. Each neuron is a "node" which is connected to nearby neurons. Each neuron receives inputs and generates outputs. The neural outputs always fire and activate a connected neuron; when a neuron receives enough inputs, it fires and activates downstream neurons itself. So, a simple ANN receives inputs and generates outputs which are a reaction to those inputs. At the end of a neural cycle, we have to give response feedback to the ANN. If the neural response was positive, we strengthen the neural pathway by increasing the neural connection weights. If the response was negative, we decrease the weights of the pathway. With enough trial runs, we will find the neural pathway for the given inputs which creates the most positive outcome.

    The simple ANN can be considered a single cluster. It can be abstracted into a single node for the purposes of creating a higher layer of connected node networks. When we have multiple source inputs feeding into our neural network cluster and each node is running its most optimal neural pathway depending on the input, we get complex unscripted behavior. A brain is just a very large collection of layered neural nodes connected to each other. We'll call this our "Artificial Brain" (AB).

    Motivation, motivators (rule sets): All creatures have a "desired state" they want to achieve and maintain. Think about food: when you have eaten and are full, your state is at an optimally desired state.
    When time passes, you become increasingly hungry. Being just a teensy bit hungry may not be enough to compel you to change your current behavior, but as time goes on and your hunger increases, your motivation to eat increases until it supersedes the motives for all other actions. We can create a few very simple rules to create complex, emergent behavior.

    Rule 1: Every creature has a desired state it is trying to achieve and maintain. Some desired states may be unachievable (e.g., infinite wealth).
    Rule 2: States are changed by performing actions. An action may change one or more states at once (a one-to-many relationship).
    Rule 3: "Motive" is created by a delta between current state (CS) and desired state (DS). The greater the delta between CS and DS, the more powerful the motive is. (Is this a linear relationship or an exponential one?)
    Rule 4: "Relief" is the sum of all deltas between CS and DS provided by an action.
    Rule 5: A creature can have multiple competing motives. The creature will choose the action which provides the greatest amount of relief.
    Rule 6: Some actions are a means to an end and can be chained together (action chains). If you're hungry and the food is 50 feet away from you, you can't just start eating. You first must move to the food to get within interaction radius, then eat it.

    Q: How do we create an action chain?
    Q: How do we know that the action chain will result in relief?
    A: We generally know what desired result we want, so we work backwards. What action causes the desired result (DR)? Action G does (learned from experience). How do we perform Action G? We have to perform Action D, which causes Action G. How do we cause Action D? We perform Action A, which causes Action D. Therefore, G<-D<-A; so we should do A->D->G->DR. Back propagation may be the contemporary approach to changing graph weights, but it's backwards.
    Q: How does long-term planning work?
    Q: What is a conceptual idea? How can it be represented?
    A: A conceptual idea is a set of nodes which is abstracted to become a single node?

    Motivators (why we do the things we do): Hunger, Body Temperature, Wealth, Knowledge, Power, Social Validation, Sex, Love/Compassion, Anger/Hatred, Pain Relief, Fear, Virtues, Vices & Ethics.

    Notice that all of these motivators are actually psychological motivators; they happen in the head of the agent rather than being physical motivators. You can be physically hungry, but psychologically you can ignore the pains of hunger. The psychological thresholds would be different per agent. Therefore, all of these motivators belong in the "brain" of the character rather than being attributes of an agent's physical body. Hunger and body temperature would be physical attributes, but they would also be "psychological tolerances".

    Psychological Tolerances:

        {motivator} =>  0 [------------|-----------o----|----] 100
                          A            B           C    D    E

    A - The lowest possible bound for the motivator.
    B - The lower threshold point for the motivator. If the current state falls below this value, the desired state begins to affect actions.
    C - The current state of the motivator.
    D - The upper threshold point for the motivator. If the current state exceeds this value, the desired state begins to affect actions.
    E - The highest bound for the motivator.

    The A and E bound values are fixed and universal. The B and D threshold values vary by creature, and where you place them can make huge differences in behavior.
    Psychological Profiles: We can assign a class of creatures a list of psychological tolerances and set their current states to some preset values. The behavioral decisions and subsequent actions will be driven by the psychological profile, based upon the actions which create the greatest sum of psychological relief. The psychological profile will be the input into an artificial neural network, and the outputs will be the range of actions which can be performed by the agent. Ideally, the psychological profile state will drive the ANN, which drives actions, which change the state of the psychological profile, which creates a feedback loop of reinforcement learning.

    Final Result: We do not program scripted behaviors; we assign psychological profiles and lists of actions. Characters will have psychological states which drive their behavioral patterns. Simply by tweaking the psychological desires of a creature, we can create emergent behavior resembling intelligence. A zombie would always be hungry, and feasting on flesh would provide temporary relief. A goblin would have a strong compulsion for wealth, so it would be very motivated to perform actions which ultimately result in gold. Rather than spending lots of time writing expert-systems-style AI, we create a machine learning type of AI.

    Challenges: I have never created a working artificial neural network type of AI.

    Experimental research and development: The following notes are crazy talk which may or may not be feasible. They need more investigation to measure their merit as viable approaches to AI.

    Learning by Observation: Our intelligent character doesn't necessarily have to perform an action itself to learn about its consequences (reward vs. regret). If it watches another character perform an action and receive a reward, the intelligent character creates a connection between the action and the consequence.

    Exploration Learning: A very important component of getting a simple ANN to work efficiently is getting the neurons to find and establish new connections with other neurons. If we have a neural connection topology which always results in a negative response, we'll want to generate a new connection at random to a nearby neuron.

    Exploration Scheduling: When all other paths are terrible, the new path becomes better and we "try it out" because there's nothing better. If the new pathway happens to result in a positive outcome, suddenly it gets much stronger. This is how our simple ANN discovers new unscripted behaviors. The danger is that we end up with a sub-optimal behavior pattern which generates some results, but not the best results, and we use the same neural pathway over and over again because it is a well-travelled path.

    Exploration Rewards: In order to encourage exploring different untravelled paths, we gradually increase the "novelty" reward value for taking such a pathway. If traveling this pathway results in a large reward, the pathway is highly rewarded and may become the most travelled path.

    Dynamic Deep Learning: On occasion, we'll also want to create new neurons at random and connect them to at least one other nearby downstream neuron. If a neuron is not connected to any other neurons, it becomes an "island" and must die. When we follow a neural pathway, we are looking at two costs: the connection weight and the path weight. We always choose the shortest path with the least weight. Rarely used pathways will have their weight decrease over a long period of time.
    If a path weight reaches zero, we break the connection and our brain "forgets" the neural connection.

    Evolutionary & Inherited Learning: It takes a lot of effort for a neural pathway to become developed, so we will want to speed that development up. If a child is born to two parents, those parents will rapidly increase the neural pathways of the child by sharing their own pathways. This is one way to "teach". Thus, children will think very much like their parents do. Characters will also share their knowledge with other characters. In order for knowledge to spread, it must be interesting enough to be spread, so a character will generally share the most interesting knowledge they have.

    Network Training & Evolutionary Inheritance: An untrained ANN results in an uninteresting character, so we have to have at least a trained base preset for a brain. This is consistent with biological brains, because our brains have been pre-configured through evolutionary processes and come pre-wired with certain regions universally responsible for processing certain input types. The training method will be rudimentary at first, just to get something passable, and it can be done as part of the development process. When we release the game to the public, the creatures will still be training. The creatures which had the most "success" will become part of the next generation. These brain configurations can be stored in a central database somewhere in the cloud. When a player begins a new game, we download the most recent generation of brain configurations. Each newly instanced character may have a chance of a random mutation. When the game completes, if any particular brains were more successful than the current strain, we select them for "breeding" with other successful strains, so that the next generation is an amalgamation of the most successful previous generations. We'll probably begin to see some divergence and brain "species" over time?

    Predisposition towards Behavior Patterns via Bias: Characters will also have slight predispositions which are assigned at birth. 50% of their predisposition is innate to their creature class, 25% is genetically passed down by parents, and 25% is randomly chosen. A predisposition causes some pleasures and displeasures to be more or less intense. This will skew the weightings of a developing ANN a bit more heavily towards particular actions. This is what will create a variety of interests between characters, and will ultimately lead to a variety of personalities. We can create very different behavior patterns in our ABs by tweaking the amount of pleasure and displeasure various outputs generate for a creature. The brain of a goblin could derive much more pleasure from getting gold, so it will have strong neural pathways which result in getting gold.

    AI will be able to interact with interactable objects. An interactable object has a list of ways it can be interacted with. Interactable objects can be used to interact with other interactable objects, and characters are themselves considered interactable objects. The AI has a sense of ownership of various objects. When it loses an object, that is a displeasurable feeling; when it gains an object, that is a pleasurable feeling. Stealing from an AI will make it unhappy, and it will learn about theft and begin trying to avoid it. Giving a gift to an AI makes it very happy. Trading one object for another will transfer ownership of the objects. There is no "intrinsic value" to an object.
    The value of an object is based on how much the AI wants it compared to how much it wants the other object in question.

    Learning through Socialization: AIs will socialize with each other. This is the primary mechanism for knowledge transfer. They will generally tell each other about recent events or interests, choosing to talk about the most interesting events first. If an AI doesn't find a conversation very interesting, it will stop the conversation and leave (a terminating condition). If a threat is nearby, the AI will be very interested in it and will share it with nearby AIs. If a player has hurt or killed a townsperson, all of the nearby townsfolk will be very upset and may attack the player on sight. If enough players attack the townsfolk, the townsfolk AI will start to associate all players with negative feelings and may attack a player on sight even if that player didn't do anything to aggravate them.
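    A hedged sketch of Rules 3-5 from the design above (motive as the CS/DS delta, action choice by greatest total relief); all names and numbers are illustrative, and the real design feeds this through an ANN rather than picking directly:

        using System.Collections.Generic;
        using System.Linq;

        // Motive = |desired - current| per state; relief = how much an action
        // shrinks those deltas; the creature picks the greatest-relief action.
        class Creature
        {
            public Dictionary<string, float> Current = new() { ["Hunger"] = 80f, ["Warmth"] = 60f };
            public Dictionary<string, float> Desired = new() { ["Hunger"] = 10f, ["Warmth"] = 70f };

            // An action's expected effect on one or more states (Rule 2).
            public record Action(string Name, Dictionary<string, float> Effects);

            public Action Choose(IEnumerable<Action> actions) =>
                actions.OrderByDescending(Relief).First();

            // Rule 4: relief is the total reduction in |CS - DS| across states.
            float Relief(Action a) =>
                a.Effects.Sum(e =>
                {
                    float before = System.Math.Abs(Current[e.Key] - Desired[e.Key]);
                    float after  = System.Math.Abs(Current[e.Key] + e.Value - Desired[e.Key]);
                    return before - after;
                });
        }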
  20. Hello, I have designed an AI system for games that replicates cognitive psychology models and theories, so that NPCs and other virtual characters can behave in more human-like and interesting ways. I have built a prototype in the Unity game engine, and it can produce quite complex behaviour, including learning and creativity. I now want to develop it further and am looking for people or organisations to help. I am thinking about how I could present my AI system and what would be a good way of demonstrating it; if you have any suggestions, it would be great to hear them. I have a website that explains my AI system in detail: www.electronicminds.co.uk If you have any comments about the AI system, or know anyone who might be interested in helping to develop it, I would really appreciate hearing from you. Thanks for the help.
  21. AlphaSilverback

    AI and Machine Learning

    So, for the last couple of weeks I have been working on building a framework for some AI. In a game like the one I'm building, this is rather important; I estimate 40% of my time is going to go into the AI. What I want is a hunting game where the AI learns from the player's behaviour. This is actually what is going to make the game fun to play. It will require some learning by the creatures that the player hunts, and some collective intelligence per species. Since I am not going to spend oceans of time creating dialogue, tons of cut-scenes, an epic story-line, and multiple levels (I can't make something interesting enough to be worth the time; I'd need more man-power for that), what I can do is create some interesting AI and the feeling of being an actual hunter, one who has to depend on analysis of the animals and experimentation with where to attack from. SO... to make it as generic as possible, I mediated everything, using as many interfaces as possible in the system. You can see the general system here in the UML diagram. I customized it for Unity, so all the scripts have to be added to GameObjects in the game world. This gives a better overview, but requires some setup; not that bothersome. If you add some simple game objects and some colors, it could look like this in Unity3D: Now, this system works beautifully. The abstraction of the Animation Controller and Movement Controller assumes some standard things that apply to all creatures, for example that they all can move, have eating, sleeping, and drinking animations, and have a PathFinder script attached somewhere in the hierarchy. It's very generic and easy to customize. At some point I'll upload a video of the flocking behavior and general behavior of this creature. For now, I'm going to concentrate on finishing the player model and creating a partitioned terrain for everything to exist in. Finally, and equally important, I have to design a learning system for all the creatures. This will be integrated into the Brain of every creature, but I might separate the collective intelligence between the species. It's taking shape, but I still have a lot of modelling to do, generating terrain and modelling/generating trees and vegetation. Thanks for reading, Alpha-
  22. Hello, I'm using BMFont to create a bitmap font for my text rendering. I noticed some strange behaviour when I use the option "force offsets to zero". If I use this option my rendering result looks OK, but without it, there is some missing space between some characters. I attached the BMFont configuration files and the font that I used. In the rendering result with variable offsets you can see that there is missing space right next to the "r" letter. To get the source and destination of my render rectangles I basically do the following:

        void getBakedQuad(const fontchar_t* f, int* x_cursor, int* y_cursor, SDL_Rect* src, SDL_Rect* dest)
        {
            /* Destination: where the glyph lands on screen, shifted by its offsets */
            dest->x = *x_cursor + f->xoffset;
            dest->y = *y_cursor + f->yoffset;
            dest->w = f->width;
            dest->h = f->height;
            /* Source: the glyph's rectangle inside the font atlas */
            src->x = f->x;
            src->y = f->y;
            src->w = f->width;
            src->h = f->height;
            /* Advance the pen position for the next character */
            *x_cursor += f->xadvance;
        }

    Has anybody noticed similar behaviour? orbitron-bold.otf SIL Open Font License.txt variable_offset.bmfc variable_offset.fnt zero_offset.bmfc zero_offset.fnt
  23. What worries me is fairness; I don't want an AI with "god eyes". In a first attempt, the AI wasn't smart at all. It had a set of rules, which could be customized per possible enemy formation, that made the AI act as if it had personality. For example: wolves had an 80% probability of using Bite and 20% of using Tackle. Then I tried to make the AI do things with sense, by coding a reusable algorithm I could combine with rules to give different enemy formations different personalities, like this one being more raw-damage focused and this other one more status-ailment focused. To achieve this I was trying to make the AI player collect stats about the human player's skills and characters by looking at the log; my game has a console that logs everything, like most RPGs (think Baldur's Gate). An attack looks like this in the console:

        Wolf uses Bite
        Mina takes 34 points of damage
        Mina uses Fire Ball
        Wolf takes 200 points of damage

    The AI player has a function, run(), that is triggered each time a character it controls is ready to act. It analyzes the log and collects stats into two tables. One has stats per skill, like how many times it was used and how many times it hit or was dodged, so the AI player knows which skills are more effective against the human player's team. These skill stats are per target. The second table is for the human player's character stats: the AI player is constantly trying to guess the armor, attack power, etc., the enemies have. But coding that is quite difficult, so I switched to a simpler method. I gave the AI player "god eyes". It now has full access to the human player's stats; the two tables aren't required anymore, as the real character structure is accessible to the AI player. But the AI player pretends that it doesn't have "god eyes": for example, it acts as if it ignores a character's armor against fire until a fire attack finally hits that character; then a boolean flag is set, and the AI player can stop pretending it doesn't know the target's exact armor against fire.

    Currently, the AI player can assign one of two roles to the characters it controls: Attacker and Explorer. More will be developed, like Healer, Tank, etc., but for now there are Attackers and Explorers. These roles have nothing to do with character classes; they are only for the use of the AI player.

    Explorer: will try to hit different targets with different skills to "reveal" their stats, like dodge rate and armor against different types of damage. Explorers should choose which skills to use by considering all skills of all characters in the team, but I'm still figuring out the algorithm, so right now they are random; at least they take care not to retry things they have already tried on targets they have already hit. Explorers will switch to Attackers at some point during a battle.

    Attacker: will try to hit the targets that can be eliminated in the fewest turns. When no stats are known they will choose their own skills based on raw power; once the different types of armor are known to the AI, they will switch to skills that exploit this info, if such skills are available to them. Attackers will try different skills if a skill chosen on raw power was reduced by half or more and all enemy armors aren't known yet, but won't hunt for the best possible skill if one that isn't reduced by as much as half its power is already known. Attackers may be lucky and select the best possible skill on the first attempt, but I expect a formation to work better if there are some Explorers. This system opens up interesting gameplay possibilities.
    If you let a wolf escape, the next pack may arrive already knowing your stats, which implies that their Attackers will make better decisions sooner, as the AI now requires less exploration. So the description of an enemy formation that you can encounter during the game may be:

        PackOfWolves1: {
            formation: ["wolf", null, "wolf", null, "wolf", null, "wolf", null, "wolf", null, null, null, null, null, null],
            openStrategy: ["Explorer", null, "Explorer", null, "Explorer", null, "Attacker", null, "Attacker", null, null, null, null, null, null]
        }

    What do you think about these ideas? Has something similar been tried before? Would you consider an AI with "god eyes" unfair?
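    The "pretend not to know" bookkeeping described above might be sketched like this: the AI can read the true stat, but only after the matching reveal flag has been set by an observed hit. Names are illustrative, not from the poster's code:

        using System.Collections.Generic;

        // The AI has "god eyes" (direct access to true stats) but pretends not to:
        // a stat is usable only once its reveal flag is set, e.g. after a fire
        // attack has actually hit the character.
        class RevealedStats
        {
            readonly Dictionary<string, float> trueStats;  // real character data
            readonly HashSet<string> revealed = new();     // flags set on observed hits

            public RevealedStats(Dictionary<string, float> trueStats) =>
                this.trueStats = trueStats;

            public void OnHitObserved(string stat) => revealed.Add(stat);

            // True value once revealed; otherwise null, so the caller falls back
            // to raw-power heuristics (the Explorer/Attacker logic above).
            public float? Get(string stat) =>
                revealed.Contains(stat) ? (float?)trueStats[stat] : null;
        }

        // Usage: after "Mina takes 34 points of fire damage":
        //   stats.OnHitObserved("FireArmor");
        //   float? fireArmor = stats.Get("FireArmor");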
  24. Hey all, as the heading says, I'm trying to get my head around Goal-Oriented Action Planning (GOAP) AI. However, I'm having some issues reverse-engineering Brent Owens' code line by line (mainly around the recursive graph building in the GOAPPlanner). I'm assuming that reverse-engineering this is the best way to get a comprehensive understanding... thoughts? Does anyone know of an in-depth explanation of this article (found here: https://gamedevelopment.tutsplus.com/tutorials/goal-oriented-action-planning-for-a-smarter-ai--cms-20793), or another article on the subject? I'd gladly post my specific questions here (on this post, even), but I'm not too sure how much I'm allowed to reference other sites... Any pointers, help or comments would be greatly appreciated. Sincerely, Mike
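    For orientation, the general shape of the recursive graph building in a GOAP planner is: from each world state, try every remaining action whose preconditions hold, apply its effects, and recurse until the goal is satisfied, keeping the cheapest action sequence. A paraphrase for illustration in C#, not Brent Owens' actual code:

        using System.Collections.Generic;
        using System.Linq;

        class GoapAction
        {
            public string Name = "";
            public float Cost = 1f;
            public HashSet<string> Preconditions = new();
            public HashSet<string> Effects = new();
        }

        static class GoapPlanner
        {
            // Depth-first recursion: returns the cheapest action sequence that
            // transforms 'state' into a state satisfying 'goal', or null.
            public static List<GoapAction> Plan(HashSet<string> state, HashSet<string> goal,
                                                List<GoapAction> actions)
            {
                if (goal.IsSubsetOf(state)) return new List<GoapAction>();

                List<GoapAction> best = null;
                foreach (var a in actions.Where(a => a.Preconditions.IsSubsetOf(state)))
                {
                    var nextState = new HashSet<string>(state);
                    nextState.UnionWith(a.Effects);
                    var rest = actions.Where(x => x != a).ToList(); // each action once per branch

                    var sub = Plan(nextState, goal, rest);
                    if (sub != null)
                    {
                        sub.Insert(0, a);
                        if (best == null || sub.Sum(x => x.Cost) < best.Sum(x => x.Cost))
                            best = sub;
                    }
                }
                return best;
            }
        }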