WilliamvanderSterren

About WilliamvanderSterren

  1. Multiagent Planning in RTS

    I'd appreciate it if you could clarify whether your focus is mainly on (1) creating plans by multiple agents, (2) creating plans for multiple agents, or (3) both. Most commercial RTS games don't employ planners. Several academic RTS AIs use planners, but these are rarely benchmarked against commercial RTS AIs or graded by experienced RTS players. As a result, it is unclear how much planners can do for commercial RTS games and where the state of the art is. If you open up the scope of your question slightly to real-time (or "we-go" style) war games, then you should look into:
    - Creative Assembly's 'Total War' series, which from 'Empire: Total War' onwards uses a two-level GOAP-based planner (see this interview)
    - Panther Games' Command Ops: Battles from the Bulge and the earlier Airborne Assault game, which use a planner to direct large battalion-scale military operations (see an old presentation)
    - my presentation on an off-line HTN planner for coordinated multi-agent maneuvers (Multi-Unit Planning with HTN and A*)
    Note that game AI developers in the industry don't have much incentive to publish the inner workings of their AI, so you might not find much detail to work with. Lack of info doesn't mean that RTS and wargame AIs aren't efficiently handling problems with large unit counts, large move sets per unit, multiple objectives per side, and subtle qualitative differences between potential plans. For more information on AI for commercial RTS games in general, use this query to find the right articles from the AI Game Programming Wisdom series.
  2. What's the absolute state of the art for planning in games?

    IMO, the most advanced planning is being implemented in Panther Games' Airborne Assault series of war games. In these games, the planner is able to set up and execute complex division and corps maneuvers with unit counts and complexity exceeding most academic problems. Dave O'Connor shows a little bit of the planner's implementation in a presentation for Canberra AIE in 2007. See:
    - ftp://ftp.wargamer.com/pub/Dropzone/depot/CreativeAI.zip (presentation in zip file)
    - http://www.youtube.com/watch?v=HVIOfl7Gt8g (fan-made YouTube guide for a small scenario; it gets interesting from 2:30 on. The game's scenarios typically are much larger)
    Their new game 'Command Ops: Battles from the Bulge' is expected at the beginning of May, with richer AI behavior. Another game series to look at is Creative Assembly's Total War series. The 'Empire' and presumably 'Napoleon' games use GOAP for land battles.
  3. Combat AI: navigating/covering

    Quote:Original post by spek
    ... That would indeed discard quite some situations, though walls with windows and doors can still deliver problems sometimes...
    The system should work just fine with windows and doors (since it was designed for those frequent situations), provided you address the following:
    - waypoint placement / mesh creation should recognize the importance of doors and windows in combat, and put waypoints / mesh cells in the door and in front of / behind the window. This might increase the waypoint / mesh density beyond what is required for path-finding, but since this approach aims to reduce ray cast consumption, and ray casts typically consume more CPU than path-finding does, this will reduce CPU load despite path-finding becoming slightly more expensive.
    - during the LoF table construction, perform multi-sampling on each waypoint by testing LoF from another waypoint to the waypoint at various offsets. You want the LoF value to be robust against characters being at minor offsets from waypoints.
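A minimal sketch of such an offline multi-sampled LoF-table pass. The grid world, names, and offset values below are my own illustration, not the Killzone implementation; the point is that a table entry records "potential LoF" if *any* sampled offset around the source has a clear line, so the table never wrongly reports "guaranteed no LoF" for a character standing slightly off the waypoint.

```cpp
#include <cassert>
#include <cmath>

// Toy world: a 16x16 grid where blocked cells represent LoF-blocking cover.
struct World {
    bool blocked[16][16] = {};
    bool Blocks(float x, float y) const {
        int ix = (int)x, iy = (int)y;
        if (ix < 0 || iy < 0 || ix > 15 || iy > 15) return true;
        return blocked[iy][ix];
    }
};

// Straight-line sample walk standing in for the engine's real ray cast.
bool HasLoF(const World& w, float ax, float ay, float bx, float by) {
    float dx = bx - ax, dy = by - ay;
    float len = std::sqrt(dx * dx + dy * dy);
    int steps = (int)(len * 8.f) + 1;
    for (int i = 0; i <= steps; ++i) {
        float t = (float)i / steps;
        if (w.Blocks(ax + dx * t, ay + dy * t)) return false;
    }
    return true;
}

// Multi-sampled test used while building the table: try several offsets
// around the source waypoint; record "potential LoF" if any of them is clear.
bool PotentialLoF(const World& w, float ax, float ay, float bx, float by) {
    static const float off[5][2] = {
        {0, 0}, {0.4f, 0}, {-0.4f, 0}, {0, 0.4f}, {0, -0.4f}};
    for (const auto& o : off)
        if (HasLoF(w, ax + o[0], ay + o[1], bx, by)) return true;
    return false;  // no offset sees the target: safe to store "no LoF"
}
```

The offsets make the stored value conservative: the table may claim a LoF that a character at the exact waypoint doesn't have, but never the reverse.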
  4. Combat AI: navigating/covering

    Quote:Original post by spek
    Hi again, I've been playing around with the "distance radials" that are stored for each waypoint. So each point calculates the maximum "threat distance" for 8 radials. It works OK for wider/outdoor areas, but in my (indoor) test scene there are quite a few situations where the radials overlap other sectors. Also, height differences can bring problems. [... snip ...] How did Killzone handle complex indoor situations like these? Rick
    Great to see you experiment with these ideas! Wrt the effectiveness of this approach in Killzone, there's a subtle detail in the paper (bottom of page 9) that's tough to explain but quite essential. With an attacker at position A and target at position C, there can only be a LoF in the game world if the table indicates a potential LoF from A to C AND a potential LoF from C to A. And there is guaranteed to be NO LoF if there's no LoF from A to C OR no LoF from C to A. (The actual system takes postures into account as well.) This addresses the problem you sketched in your post (and many other situations as well):
    - the table would indicate a potential LoF from B to A, since there is a true LoF from C to A, and B and C are at a similar distance from A
    - the table would indicate no potential LoF from A to B, since no position near A will have a true LoF through the cover separating A and B.
    Nevertheless, the approximation of LoF in the table may consistently yield false positives (LoF where there isn't a true LoF). For Killzone, these were rare and typically involved balconies. One way to avoid them is to add more resolution to the look-up table (and increase its size). Arjen Beij at Guerrilla Games took another approach for Killzone 2, but I can't tell you about it...
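A tiny sketch of that symmetric look-up; the struct and names are mine, not the paper's. Each directed entry over-approximates true LoF, so a real LoF is only possible when both directions agree, and a single "no" in either direction guarantees no LoF.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Directed "potential LoF" table between n waypoints (illustrative layout).
struct LoFTable {
    int n;                            // number of waypoints
    std::vector<std::uint8_t> bits;   // bits[a * n + c] = 1: potential LoF a -> c
    bool Potential(int a, int c) const { return bits[a * n + c] != 0; }
    // A true LoF can only exist if BOTH directed entries say "potential";
    // if either says no, there is guaranteed to be no LoF.
    bool MayHaveLoF(int a, int c) const {
        return Potential(a, c) && Potential(c, a);
    }
};
```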
  5. Combat AI: navigating/covering

    Quote:Original post by Leartes
    How expensive is this in comparison to usual approaches?
    As the paper indicates, performance was good enough to run the game with 14 autonomously operating bots on a PlayStation 2. That's due to the AI design and due to Guerrilla Games' game engine, which left a good amount of CPU for the AI. I'm not sure there is a 'usual' approach. Wrt path-finding in FPS combat situations, you can:
    - use shortest paths (which is what most games used to do at that time);
    - fake tactical movement by preferring paths next to walls and cover, which looks OK most of the time but may result in some weird choices. I suspect Company of Heroes does this. 'Nearby wall/cover' checks are cheap.
    - do tactical movement based on approximated pre-computed LoF, which is what Killzone does in response to land-based threats. Not cheap, but doable on a PS2.
    - do tactical movement based on real-time ray casts, which is what some games do, and what Killzone did in response to (rare) flying threats. Typically too expensive to use for everything all the time (LoF checks are generally more expensive than path-finding).
    The paper mentions an additional advantage of approximated pre-computed LoF: the error from approximating the LoF was chosen to coincide with possible movement by the threat. The look-up table size of 64kB was key to using it on a PS2 (32MB RAM). Small tables are still relevant for today's processors, since they fit easily within the cache and don't kick too much other relevant data out of the cache.
    Quote:
    Anyway, would it be easier to leave out units that are far away or that haven't been seen for some time? Though the latter would require checking and remembering where you last saw your opponents.
    Something along those lines was done, primarily for reasons other than LoF checks: we humans don't base our decisions on more than a few threats (even when playing something less lethal such as soccer or football, we don't track all 11 opponents in detail).
    Quote:
    I am just asking myself if something like this is viable in an RTS, because you have many more units but you can make lots of stuff much simpler. Like store a threat zone for a whole army (like an overlay on the whole map), adjusting it whenever you get new information, but leaving out all the LoF calculations for individuals.
    Check Chris Jurney & Shelby Hubick's GDC 2007 presentation 'Dealing with Destruction: AI From the Trenches of Company of Heroes', where they describe how they tackled path-finding, LoF and cover in Company of Heroes, a WWII RTS with destructible terrain/buildings (great game!). My interpretation is that they don't do too many LoF checks but simply put soldiers in positions that offer generally good cover. Battlefront's Combat Mission: Shock Force is a newer PC game (2007) which uses a somewhat similar approach for a real-time tactical modern combat wargame, featuring hundreds of individuals and vehicles on a 12km^2 piece of open terrain. It uses a pre-computed LoF check (at different height levels) for the 8x8m tiles occupied by the attacker and target, and only performs a real LoF ray cast if that check passes. My understanding (from Steve Grammont's explanations at their forum) is that LoF checks dominate the AI performance. The game's AI is done by Charles Moylan. (Btw, I recommend the game - it's way better now (v1.10) than when originally released and reviewed.)
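The "tactical movement based on pre-computed LoF" option above boils down to biasing path costs by threat exposure. A sketch of such an edge-cost function, with my own names and the two-directional table check; this is an illustration of the idea, not Killzone's exact formulation:

```cpp
#include <cassert>
#include <vector>

// Travel cost to a destination waypoint, plus a penalty per threat that has
// a potential LoF to it (checked in both directions, since a real LoF
// requires both directed table entries to agree). A* fed with this cost
// prefers covered routes when they exist.
float TacticalCost(float travelCost,
                   int toWaypoint,
                   const std::vector<int>& threatWaypoints,
                   const std::vector<std::vector<bool>>& potentialLoF,
                   float exposurePenalty) {
    float cost = travelCost;
    for (int t : threatWaypoints)
        if (potentialLoF[t][toWaypoint] && potentialLoF[toWaypoint][t])
            cost += exposurePenalty;  // waypoint is exposed to this threat
    return cost;
}
```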
  6. Combat AI: navigating/covering

    Quote:Original post by spek
    If I understand right, to test visibility, each waypoint stores the maximum potential "threat distance" for 8 global directions and 2 poses (stand / crouch). And then it comes, only 64K needed for 4,000 waypoints. If I store 2 sets of 8 bytes (2 poses, 8 directions, each holding a byte for the maximum distance), I would need 500 KB for 4,000 waypoints. It's not much, but a whole lot more than 64K! 64K = 2 bytes per waypoint. How can you store 16 directions into that? Probably I understood something wrong there...
    64KB / 4,000 waypoints is 16 bytes per waypoint.
    Quote:
    When A* generates a path, I assume it will calculate its costs depending on coverage. If 1 or more enemies can see you at a certain waypoint, it means no bonus points for that piece of path. The look-up table can speed up this test greatly, but still... I mean, you must then check for each potential enemy from which global direction he comes, calculate his distance towards the player, and then compare it with the look-up table. In themselves quite easy tasks, but when doing it for hundreds of points (on longer routes) it still might be much, right? Or do you guys split long paths into smaller ones? Or is the way of testing I wrote incorrect?
    It (obviously) turned out not to be a problem: in the presence of threats, the individual AI on average performs short moves (to a nearby cover/attack position). Longer routes are rare but may happen for squad maneuvers. I don't remember all the details, but cutting off all LoF checks once you're more than 15s of travel into finding a potential path would be a viable performance improvement (assumptions about the current enemy positions will not be valid 15s into the future against smart opponents).
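The arithmetic in the correction above can be made concrete: 2 postures x 8 compass directions x 1 byte of quantized maximum threat distance is 16 bytes per waypoint, so 4,000 waypoints fit in 64,000 bytes. The struct name and layout below are illustrative; the actual Killzone record layout isn't public beyond the paper.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Per-waypoint radial threat-distance record: [posture][direction], one
// byte of quantized distance each (capped at 255 units).
struct WaypointThreatRadials {
    std::uint8_t maxThreatDist[2][8];
};

static_assert(sizeof(WaypointThreatRadials) == 16,
              "16 bytes per waypoint, as stated in the post");

// Total look-up table size for a level with 'waypointCount' waypoints.
constexpr std::size_t TableBytes(std::size_t waypointCount) {
    return waypointCount * sizeof(WaypointThreatRadials);
}
```

A table this small fits comfortably in cache, which is part of why the approach was viable on a 32MB PS2.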
  7. Combat AI: navigating/covering

    Hi Rick, this paper/presentation probably answers some of your questions (and raises a few new ones): Killzone's AI: Dynamic Procedural Combat Tactics, Beij, Straatman, Van der Sterren (Game Developers Conference 2005) - slides - paper. Have fun, William
  8. Wargame AI

    Two (concrete) suggestions:
    - pick up Chris Crawford's book 'Chris Crawford on Game Design' (2003), and read the chapters describing the design of his wargames (Tanktics, Eastern Front 1941, Patton vs Rommel, Patton Strikes Back). These are games from the late '70s, '80s and early '90s. Chris describes how to use position evaluation to generate moves. Although you can find some of Chris' articles on the internet (and even source code for Eastern Front), the book covers the wargames in a bit more detail, and is an inspiring read.
    - download LGeneral and study its source code. LGeneral is an open-source reimplementation of SSI's Panzer General wargame (1994). The AI consists of some 6 C files. I haven't compared LGeneral's AI to the original game's AI, but the original game was quite fun.
    Keep in mind that part of the 'AI' in a wargame is the result of scenario designers carefully tweaking and setting objective values and scheduling reinforcements. William
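A minimal position-evaluation move generator in the spirit described above: score every reachable tile and move the unit to the best-scoring one. The weights and scoring terms here are invented for illustration, not taken from Crawford's games.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Candidate tile: position plus a terrain defense bonus (illustrative).
struct Tile { float x, y, defenseBonus; };

// Heuristic position score: closer to the objective is better, defensible
// terrain is better, standing next to an enemy is penalized.
float Evaluate(const Tile& t, float objX, float objY,
               const std::vector<Tile>& enemies) {
    float score = -std::hypot(t.x - objX, t.y - objY);
    score += 2.f * t.defenseBonus;
    for (const Tile& e : enemies)
        if (std::hypot(t.x - e.x, t.y - e.y) < 1.5f) score -= 5.f;
    return score;
}

// Pick the index of the best reachable tile for this turn's move.
int BestMove(const std::vector<Tile>& reachable, float objX, float objY,
             const std::vector<Tile>& enemies) {
    int best = 0;
    for (int i = 1; i < (int)reachable.size(); ++i)
        if (Evaluate(reachable[i], objX, objY, enemies) >
            Evaluate(reachable[best], objX, objY, enemies))
            best = i;
    return best;
}
```

The appeal of this scheme, as Crawford describes it, is that all the "smarts" live in one tunable evaluation function rather than in search.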
  9. Soccer Defense

    Jedd, have a look at the work done by Peter Stone (while at CMU) on simulated robot soccer (RoboCup). Although the game differs a bit from real soccer (no vertical dimension), they share many of the tactics. Peter has applied learning techniques to create winning RoboCup AI. http://www.cs.cmu.edu/afs/cs/usr/pstone/mosaic/thesis/index.html In addition, Jack van Rijswijck presented his work on using force fields to improve EA (Toronto's?) FIFA AI at GDC 2003: http://www.gamasutra.com/features/gdcarchive/2003/Van_Ryswyck_Jack.doc
  10. A*, best path when there is no path?

    Quote:Original post by Ralph Trickey
    I'm interested [in tactical pathfinding for UAV references by Timkin], thanks!
    Ralph, you might also be interested in toying with my "tactical A* explorer". It's a small app that lets you experiment with A*, cover, threat placement, and visibility/influence maps, and gives you the path for your selection of cost functions (plus the nodes in the open and closed lists). Comes with source code (C++/MFC). Is tile-based. Described in more detail in Game Programming Gems 3 (2002). For a game such as TOAW, I'd consider introducing some phase lines / divisional sectors to prevent units from straying too far from their zone (wide detours through positions occupied by other divisions on their flanks). Boxing the movement of a unit (through artificial cost measures for visiting any hex outside the box) has two advantages: it reduces the search space (except in worst-case situations), and it results in more realistic movements. William
    Btw, what is your opinion of the original TOAW AI (by Norm Koger)?
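The boxing idea can be sketched as a step-cost wrapper around whatever terrain cost the game already uses; the names and the penalty factor below are my own. Hexes outside the unit's assigned sector stay reachable (so worst-case detours remain possible), but become so expensive that A* rarely expands them.

```cpp
#include <cassert>

// Axis-aligned sector assigned to a unit (in hex/tile coordinates).
struct SectorBox { int minX, minY, maxX, maxY; };

// Step cost into tile (x, y): unchanged inside the box, heavily multiplied
// outside it. This both shrinks the effective A* search space and keeps
// divisions in their own lanes.
float BoxedStepCost(float baseCost, int x, int y, const SectorBox& box,
                    float outsidePenalty = 20.f) {
    bool inside = x >= box.minX && x <= box.maxX &&
                  y >= box.minY && y <= box.maxY;
    return inside ? baseCost : baseCost * outsidePenalty;
}
```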
  11. Adaptive Game AI

    Quote:Original post by circuitboarding
    Also, just to add, I am looking at squad-based allied AI at the moment, rather than enemy AI. Generally allied AIs are around longer than enemies, and therefore adaptation seems more appropriate for this aspect of the game. Also, the player has more exposure to allied AI and as such is more likely to notice any unrealistic behaviours.
    Just quoted the part of your message that I have trouble understanding (I agree with your main message). In my experience, dealing with allied (squad-based) AI involves:
    a. showing exciting and appropriate behavior to immerse the player (for example, authentic tactics, witty sound bites, coordination within the squad)
    b. switching smoothly between scripted storytelling and autonomous behavior
    c. balancing allied AI kills versus player kills, so the allied AI assists the player without taking away too much of the fun.
    I would expect adaptation to bring little benefit to (a) and (b), and (c) typically is dealt with by aim profiles rather than behavior. Additionally, if I were able to create an evaluation function capable of adapting the behavior to be more 'realistic', I probably would also be able to get the behavior 'right' during development, without adaptive behavior. Any different thoughts? William
  12. Turn-Based Operational Wargame AI (Part 2)

    An old source discussing strategic turn-based AI is Chris Crawford's "Chris Crawford on Game Design". Much of the material referred to earlier has probably built on these and similar ideas. However, the nice thing about Chris' descriptions is that they describe which ideas and mechanisms worked or didn't work, and why. Specifically, this book discusses:
    - Tanktics (1979): map representation, LoS/LoF and position-evaluation heuristics for tank moves; 7p
    - Eastern Front (1981): front line selection, AI play analysis and tuning; 7p
    - Patton vs Rommel (1986): front line selection; 10p
    Some of this material is available on the internet, but the book contains a bit more detail.
    http://www3.sympatico.ca/maury/other_stuff/eastern_front.html
    http://www.erasmatazz.com/library/JCGD_Volume_2/Geometric_AI.html
    http://en.wikipedia.org/wiki/Chris_Crawford_%28game_designer%29
    @Timkin: I'm aware there's literature describing how hierarchical planning theoretically should be able to deal with 'obviously hierarchical problems' such as controlling an army. However, I cannot find any literature describing how hierarchical planners have dealt with real problems similar to those in turn-based wargames (TOAW may have 500 units per side, with complex support and supply relations, which together should accomplish multiple objectives in complex 3000+ cell terrain). At best, I've read about HTN planners running a few Unreal bots, a single tank platoon or chopper squadron, or solving a cargo-loading problem for planning domains much less complex than a turn-based wargame. Do you happen to have any specific pointers? William
  13. CTF

    See:
    - The Quake III Arena Bot, Jan Paul van Waveren, Master's Thesis, Delft University of Technology, June 2001, http://www.kbs.twi.tudelft.nl/docs/MSc/2001/Waveren_Jean-Paul_van/thesis.pdf, pages 81-86
    - Unreal's bot AI for CTF, as documented at http://wiki.beyondunreal.com/wiki/CTFSquadAI
    William
  14. Articles on the Sims AI?

    Have a look at Ken Forbus' 2001 Computer Game Design class and the corresponding lecture notes:
    - http://www.cs.northwestern.edu/~forbus/c95-gd/index.html (down at the bottom)
    In his lecture notes, there are slides originating from Wright and Doornbos:
    - http://www.cs.northwestern.edu/~forbus/c95-gd/lectures/The_Sims_Under_the_Hood_files/v3_document.htm
    Furthermore, Maxis/EA's Jake Simpson did a talk on scripting and The Sims 2, 'Coding the Psychology of Little People', at this year's GDC. Contact Jake or any other attendee to get hold of the slides.
  15. Black & White learning

    The AI Game Programming Wisdom series contains four articles by the creators of Black & White: http://www.aiwisdom.com/bygame.html (at the bottom of the page)