About WireZapp

  1. WireZapp

    Data Oriented Design for AI?

    Glad that worked out for you! :D
  2. Edit: I should say that this is for something open source. The code I'm sharing here is for a version of DOT that I won't be releasing until January/February.

  Once again, this method can be extremely crude. It's just there to give the AI a general path that it can fix at runtime.

  All these methods sound interesting, but the conditions have changed a bit now. Islands offer both negative and positive aspects, and entities can jump backwards. So jumping forward to an island may give 3 gold and 4 water per turn, but it takes away 2 food. If a resource ever hits zero, the entity dies. Entities can backtrack as far as they want in order to stay alive, so they need to remember what is behind them.

  After working with what Andy said, I derived a quite nice method. The entity looks at the straight-line path between where it is right now and its final objective. From there it sums all the (positive and negative) costs. Working backwards, it figures out where blockages may be (that is to say, islands it can't jump between with the attributes it would have at the time), and saves those as critical points. Blockages are found by comparing the positive sum from Start to Point X against the negative sum from End to Point X. PositiveScore is basically the A* score function.

  The first iteration returns the critical points between where the entity is now and where it wants to be. Every iteration after that is computed over the critical points:

  Start -> Objective
  Start -> Critical_1 -> Objective
  Start -> Critical_2 -> Critical_1 -> Objective

  It's still just normal A*, but slightly twisted.
Some source code:

    TContainer<IndirectAd> IEntityGroup::Plan(short Index, AdvertisementBase *Goal, TContainer<UNibble> ReferenceTableSum)
    {
        // The summation of all advertisements we've confirmed so far
        TContainer<IndirectAd> SolutionSet;
        TContainer<CostBase> NeededAttributes = TContainer<CostBase>(ReferenceTableSum.size());
        int m = 0;
        for (int i = BlockSize * Index; i < BlockSize * (Index + 1); i++)
        {
            NeededAttributes[m].Value = RuntimeBody[i].Value;
            m++;
        }

        // TODO: Get local islands from the world structure (in this case, the islands along the path)
        TContainer<AdvertisementBase*> LocalAds;

        bool Finished = false; // When this is true, all of the needs for my objective have been met
        while (!Finished)
        {
            int i = 0;
            float Lowest = 1; // Scores stay at or below 1, so this starts above any candidate
            UNibble index = 0;

            // Cycle through all the local goals and see which score is the lowest from after
            // this advertisement -> destination. Basic pathfinding.
            for (AdvertisementBase *CurrentAd : LocalAds)
            {
                float AverageScore = 0;
                auto NeededTemp = NeededAttributes;

                // Go through every positive cost. Do not worry about negative effects right now;
                // that's managed as error correcting.
                for (const CostBase &CurrentCost : CurrentAd->GetPositiveEffects())
                {
                    // How much does this ad increase our attributes by?
                    NeededTemp[ReferenceTableSum[CurrentCost.GlobalAdr]].Value += CurrentCost.Value;
                }

                // Does this ad let us meet the requirements?
                for (const CostBase &DestinationCost : Goal->GetNegativeEffects())
                {
                    Gradient grad = TemplateBody[ReferenceTableSum[DestinationCost.GlobalAdr]];
                    grad.Q3 = DestinationCost.Value; // Anything that would throw us over the value is rated negative

                    // Find the score between our destination cost and our temporary attribute value
                    AverageScore += Scoring::PositiveScore_Linear(
                        (float)NeededTemp[ReferenceTableSum[DestinationCost.GlobalAdr]].Value,
                        (float)DestinationCost.Value, grad);
                }

                // Divide by the full size
                AverageScore /= Goal->GetNegativeEffects().size();

                if (AverageScore < Lowest)
                {
                    // It's our new lowest score, so update Lowest
                    Lowest = AverageScore;
                    index = i;
                }
                if (AverageScore <= 0)
                {
                    // We've finished the chain. More than half of the needs are met. The rest are
                    // computed during runtime to allow for some dynamic response.
                    index = i;
                    Finished = true;
                    break;
                }
                i++;
            }

            /* Queue is not an order-dependent list. In fact, after running the Plan function, it is
               suggested that Queue be run through a more extensive scheduling function in order to
               retain realism. */
            IndirectAd ad;
            ad.Address = LocalAds[index]->ReturnAddress();
            ad.TimeConstant = LocalAds[index]->TimeConstant;
            SolutionSet.push_back(ad);

            // Make sure that our summation is kept up to date
            for (auto Cost : LocalAds[index]->GetPositiveEffects())
            {
                NeededAttributes[ReferenceTableSum[Cost.GlobalAdr]].Value += Cost.Value;
            }
        }
        return SolutionSet;
    }

    // Note that this approximation is VERY VERY crude. It's used for distant scheduling among AIs.
    static float PositiveScore_Linear(float CurrentValue, float FutureValue, Gradient grad)
    {
        float Q3_Factor = (float)grad.Q3 / (float)grad.Max;
        Q3_Factor *= 2;

        CurrentValue /= (float)grad.Max; // Scale the number to a 0 - 2 range
        CurrentValue *= 2;
        FutureValue /= (float)grad.Max;  // Scale the number to a 0 - 2 range
        FutureValue *= 2;

        return (CurrentValue + FutureValue) - Q3_Factor;
    }

Any opinions?
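The blockage/critical-point scan described in the post can be sketched in isolation. This is a hypothetical, minimal version (the names `FindCriticalPoints`, `gains`, and `costs` are mine, not DOT's): accumulate the positive sum from Start forwards, precompute the negative sum from End backwards as a suffix array, and flag any point where the resources gathered so far can't cover what the rest of the chain demands.

```cpp
#include <vector>
#include <cstddef>

// Hypothetical sketch of the critical-point scan. gains[x] is the net positive
// resource collected at step x along the straight-line path; costs[x] is the
// resource the jump at step x demands. A point is "critical" when the sum of
// gains from Start through that point cannot cover the sum of costs from there
// to the End -- i.e. the entity couldn't pass with the attributes it would
// have at the time.
std::vector<std::size_t> FindCriticalPoints(const std::vector<int>& gains,
                                            const std::vector<int>& costs)
{
    std::vector<std::size_t> critical;
    const std::size_t n = gains.size();

    // Suffix sums: costFromHere[x] = costs[x] + costs[x+1] + ... + costs[n-1]
    std::vector<int> costFromHere(n + 1, 0);
    for (std::size_t x = n; x-- > 0;)
        costFromHere[x] = costFromHere[x + 1] + costs[x];

    int gainedSoFar = 0;
    for (std::size_t x = 0; x < n; ++x)
    {
        gainedSoFar += gains[x];
        if (gainedSoFar < costFromHere[x + 1]) // can't afford the remaining jumps
            critical.push_back(x);
    }
    return critical;
}
```

Subsequent iterations would then re-run the same scan over each Start -> Critical and Critical -> Objective sub-segment, as the post outlines.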
  3. I should have added that there can be up to 256 different types of costs that the agent would need to take into account (ie: wealth, hunger, water, etc).

  The A* is what I'm doing now, but your approximation method sounds much better than mine :)
  4. http://www.reddit.com/r/gamedev/comments/2rjgwi/path_finding_question/

  "Let's say I have a very large number of islands, and my agent wants to get to a certain island. Islands usually allow the agent to jump to the 6 closest other islands; however, sometimes a path may be blocked (ie: 2 islands are fighting and the agent cannot easily pass between them) and the agent may either be required to go around, or wait until it has the required resources to pass.

  There are some things that make this really tricky for me: The islands are not grid-based; rather, their distribution is random. The agent cannot travel back to an island it's already been at; however, it can stay at an island for a finite number of turns. Every turn it collects more resource, and it can pass through island barriers if the resource requirement is met.

  This is just an analogy for an issue I'm having with my engine, but the solution to this problem should suffice very well. Broken down: Agent starts at island A with 5 resource. Island B requires 4. Agent jumps to island B. Agent now has 1 resource. Island C requires 7 and is the next obvious destination for the agent in order to reach its final goal. Agent waits at B for X turns until it has the required resource. Agent jumps to island C. Etc.

  I don't want to use anything more complicated than basic arithmetic, since this method will only be used for approximation of actual paths. Any help on how to approach this?"
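The "wait until you can afford the jump" arithmetic in the broken-down example stays within basic arithmetic, as the question asks. A minimal sketch, with assumed names (`TurnsForChain`, `incomePerTurn`) and the assumption that resource is collected only while waiting, not during the jump itself:

```cpp
#include <vector>

// Hypothetical helper: walk a fixed chain of islands, where each entry in
// `requirements` is the resource cost to jump to the next island. The agent
// gains incomePerTurn resource per turn spent waiting; a jump takes one turn
// and collects nothing. Returns total turns for the whole chain.
// (Assumes incomePerTurn > 0 whenever waiting is needed.)
int TurnsForChain(const std::vector<int>& requirements, int startResource, int incomePerTurn)
{
    int resource = startResource;
    int turns = 0;
    for (int need : requirements)
    {
        while (resource < need) // wait at the current island until the jump is affordable
        {
            resource += incomePerTurn;
            ++turns;
        }
        resource -= need; // pay the toll and jump
        ++turns;
    }
    return turns;
}
```

On the example chain (start with 5; B costs 4, C costs 7; 1 resource per turn) this gives one turn to jump to B, six turns waiting, and one turn to jump to C.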
  5. WireZapp

    Data Oriented Design for AI?

    First of all, this is a very nice compilation of sources! Awesome!

    Secondly, it just comes down to how you design it, like any normal system. I've been working on a DOD-based multiagent fuzzy logic SDK for a while now. While my system is more oriented around prediction modeling than the average AI engine for a game, DOD is a near perfect match for most things.

    However, there are still times that I can't use Data Oriented Design in my system. While I'd love to have everything based around it, lookup tables for entity attributes, as well as gradients for the NNs, are stored hierarchically.

    I have a wiki page for my engine summing this up quite well: https://github.com/MatrixCompSci/DOT/wiki/Basics-to-DOT%27s--AI (don't bother looking through the source; it's all pre-alpha and was mostly a proof of concept for the editor. The alpha releases sometime in January.)

    The hierarchies are just for specifying the template objects that instances of entities are copied from. Tagging, on the other hand, is nearly 100% DOD, and the performance increase from caching is highly evident. After an entity is instanced, there is no further need in the system for OOP. Everything gets handled through the tag system: updates, LOD shifts, blackboards. Even machine learning and scheduling get handled through tags.

    One other thing: pre-DOD, the memory requirement of an entity was 117 bytes. I took a new measurement 15 minutes ago, and it's now down to about 30 bytes; from DOD alone, we dropped from 117 bytes to 44 bytes.

    DOD is awesome for AI, even if your entire system isn't done with it. You'll often end up using less memory, and be generally more efficient, than when sticking to OOP.
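The memory-layout shift behind numbers like 117 bytes down to 44 can be illustrated with the classic array-of-structs vs struct-of-arrays contrast. A minimal sketch, assuming hypothetical attribute names (this is not DOT's actual layout):

```cpp
#include <vector>
#include <cstdint>

// Struct-of-arrays entity block: instead of an array of fat Entity objects,
// each attribute lives in its own tightly packed, contiguous array. A pass
// that only reads one attribute then streams through contiguous memory,
// which is what makes the caching win so evident.
struct EntityBlock
{
    std::vector<uint8_t> hunger; // one byte per entity, contiguous
    std::vector<uint8_t> wealth; // parallel array: index i is entity i

    void Add(uint8_t h, uint8_t w)
    {
        hunger.push_back(h);
        wealth.push_back(w);
    }

    // An update pass touches only the attribute it needs.
    int TotalHunger() const
    {
        int sum = 0;
        for (uint8_t h : hunger) sum += h;
        return sum;
    }
};
```

The OOP equivalent (a `std::vector<Entity>` where each `Entity` carries every attribute plus vtable and padding) drags all the unused fields through the cache on every pass, which is where the per-entity byte counts and the cache misses come from.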
  6. For dynamic points, they can basically travel to any point that has a viable path to it (just a normal A* implementation). The trajectories can be predicted, but not that far beforehand. If an NPC grabs food on the way to work, there's a pretty good chance that it will actually get to its job before it eats it.

  Some could be confined; most couldn't, though. I could say that an advertisement causes strain on an NPC after he's held it for a while (that would make a really nice bounding sphere around cities and such), but as you'd imagine, the results would vary.
  7. So this is for an AI engine. By "player" I mean both NPCs and players, and "processes" refers to advertisements. I just didn't want to go into extraneous detail, since I was hoping for a general answer. (I'm seeing now that this isn't a problem where I'd end up getting a general answer.)

  The distribution is a bit interesting. It depends on the scenario, but I would imagine that most of the time advertisements are close together, ie: in a kitchen there would be 6 advertisements all relating to cooking in one way or another.

  The total number of ads is around 50K. Not all of these advertisements are dynamic; only about 20% of them move about. The rest are stationary.

  What I am using this for is finding a collection of advertisements around the player/NPC to filter through and score. I only care about those within its general proximity (usually about 100 - 200 of them); however, I don't have time to score all 200 of them. If I try, the system just goes to crap.
  8. http://stackoverflow.com/questions/27204935/unsure-of-which-multidimensional-nearest-neighbour-algorithm-to-use

  "I'm currently working on a project that requires a thread to construct a queue of the 30(ish) processes closest to the player within a 3D environment. All of these processes can move about the environment, as well as leave the starting nodes they were placed in. I have considered using R-trees, but due to their ludicrously high insert times, they do not seem very viable. KD-trees would not work, since they tend to only work for static environments. Note also that this will be running async to the main update thread, so an atomic approach would work best. Can someone suggest an approach?"
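A common answer to this kind of query over fast-moving objects is a uniform grid that is simply rebuilt each frame: insertion is O(1) per object (unlike R-tree inserts), and a proximity query only scans the block of cells around the player. A hypothetical 2D sketch of the idea (names and layout are mine, not from the thread; 3D just adds a third cell coordinate):

```cpp
#include <vector>
#include <unordered_map>
#include <cmath>
#include <cstdint>

// Uniform grid over 2D positions. Rebuild it each frame for moving objects;
// Nearby() returns all ids in the 3x3 block of cells around a point, which
// serves as the candidate set for an exact nearest-neighbour pass afterwards.
struct UniformGrid
{
    float cellSize;
    std::unordered_map<uint64_t, std::vector<int>> cells; // packed cell coord -> ids

    explicit UniformGrid(float cell) : cellSize(cell) {}

    static uint64_t Pack(int32_t cx, int32_t cy)
    {
        // Pack the two 32-bit cell coordinates into one 64-bit key.
        return ((uint64_t)(uint32_t)cx << 32) | (uint64_t)(uint32_t)cy;
    }

    void Insert(int id, float x, float y)
    {
        int32_t cx = (int32_t)std::floor(x / cellSize);
        int32_t cy = (int32_t)std::floor(y / cellSize);
        cells[Pack(cx, cy)].push_back(id);
    }

    std::vector<int> Nearby(float x, float y) const
    {
        int32_t cx = (int32_t)std::floor(x / cellSize);
        int32_t cy = (int32_t)std::floor(y / cellSize);
        std::vector<int> out;
        for (int32_t dx = -1; dx <= 1; ++dx)
            for (int32_t dy = -1; dy <= 1; ++dy)
            {
                auto it = cells.find(Pack(cx + dx, cy + dy));
                if (it != cells.end())
                    out.insert(out.end(), it->second.begin(), it->second.end());
            }
        return out;
    }
};
```

For the async requirement, one thread can fill a fresh grid while the query thread reads the previous frame's grid, swapping a pointer between frames rather than locking per insert.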
  9. I made various other optimizations, to the point where calculating a score takes so little time that it isn't really worth it for me to implement this anymore.
  10. WireZapp

    Crowd simulation

    I wasn't stating "this is the way that advanced AI works"; I was making the comparison that a system with various features will most likely be more complex than a system consisting only of a roaming-algorithm implementation. But yeah, I see what you mean; the reply I made was worded poorly. Obviously, extremely complicated behavior can be made with finite state systems and/or various other methods.

    Also, I completely forgot there was a difference between accuracy and detail (my physics professor would have killed me, haha).
  11. WireZapp

    Crowd simulation

    Well, how accurate are you going for? On one end of the spectrum, you have a fuzzy logic control system with clustering, opinions, and remedial information propagation (whether through backpropagation with a neural network, or something super basic: AI one approaches AI two, AI one says "the player did this", AI two now dislikes the player). On the other end of the spectrum, you have a super elementary roaming algorithm. It all comes down to how detailed you want it to be.
  12. I did some research into stability theory. I didn't realize how huge this particular field of research is; however, I still don't know how to approach this. There are countless different theorems and algorithms, all of which I imagine have varying complexities for a C++ implementation
  13. I hadn't fully considered that for the advertisements. Jeez.

  And yeah, I've considered a lot more optimizations that can be made. We're trying to redesign the scoring system to be a lot more parallel, but that alone won't give the full performance increase we want.

  We're using the engine to create a political simulator (just as a tech demo, not a final game). The player tries to design new laws (through a flow-graph system), raise taxes, start new cities, etc. to keep his citizens happy. We also figured out that for this to be enjoyable we would need a few hundred thousand entities, and perhaps anywhere from 50k - 60k advertisements. We're aiming for a 10-second delay between turns. Where our AI engine stands right now, our in-house testing rig can run through all those entities in about 98 seconds. We're hoping adding GPGPU will bring that time down to a more reasonable 50 seconds, but that's still too long.

  We also noticed that close to 80% of the time (when the player is playing well) the system was just creating repetitive patterns and was highly stable. When the player didn't do well, that number was closer to 50%.

  We've tried various prebaking methods, and while they greatly reduce computation time, they don't give as enjoyable a result.

  On a side note: we made various other optimizations as well. For instance, originally each entity held its own copy of the AI events relating to its nearest neighbors. Neighbor A would remember if neighbor B committed some crime, and would view the event in a different light based on its opinion of the player and that particular neighbor. When we first implemented this, it was brutally slow, increasing computation time by a huge factor. After various optimizations it no longer affects performance (well... it adds a second or two).
  14. Huh... that's a really good point. People believed they were stable right up until the recession, though, so I imagine that you can also tell whether you're stable from a history of data. Obviously that would be an approximation, and a false positive at that, but at least it could be correct over a constant time interval.
  15. I didn't actually mean the graph. I meant the "values" for the next iteration.

  But I solved it. (Well, I didn't; I sent the code to a friend who is a specialist in econometrics.) I'm storing Markov chains of the advertisements the agent picks. So if it decides to eat, then from its track record there is, say, a 15% chance that it'll want to sleep next. Roll a random number, and execute the action it lands on. The law of large numbers lets these probabilities approach their true values as the sample size tends to infinity, so I imagine this would produce quite accurate results. I could even throw an if statement or two in there for good measure.
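The Markov-chain action picker described above can be sketched like this (hypothetical structure and names, not the poster's actual code): record transition counts as the agent acts, then select the next action by walking the cumulative distribution with a uniform sample u in [0, 1). Observed frequencies converge on the true transition probabilities as the history grows, per the law of large numbers.

```cpp
#include <map>
#include <string>

// counts[a][b] = how many times action b followed action a.
struct ActionChain
{
    std::map<std::string, std::map<std::string, int>> counts;

    void Record(const std::string& from, const std::string& to)
    {
        counts[from][to]++;
    }

    // Pick the next action after `current`, given a uniform sample u in [0, 1).
    // Returns "" if we have no history for `current`.
    std::string Next(const std::string& current, double u) const
    {
        auto it = counts.find(current);
        if (it == counts.end() || it->second.empty()) return "";

        int total = 0;
        for (const auto& kv : it->second) total += kv.second;

        double acc = 0.0;
        for (const auto& kv : it->second)
        {
            acc += (double)kv.second / total;
            if (u < acc) return kv.first; // u landed in this action's probability band
        }
        return it->second.rbegin()->first; // guard against rounding when u is near 1.0
    }
};
```

In game code, `u` would come from a random generator each decision; the hand-tuned "if statement or two" mentioned above would simply override the sampled action for special cases.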