# Talyssan

1. ## AI for a roguelike game

Two ways spring to mind:

1) Have the monsters "cheat" by always knowing the direction the player is in, and have them move towards that point.
2) Give each monster a line of sight (to save time, perhaps only compute it within a certain distance of the player) and have them move toward anything interesting they see.

Doing 2) is extremely tricky in practice, I've found, especially when dealing with doors. What I came up with to "solve" it was a complete hack that will only work for simple roguelikes, but it is very fast. In truth it doesn't make for more or less interesting behaviour unless you put a lot of effort into making it use the LOS information. Hope that helps, Cam
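A minimal sketch of option 1), assuming a simple square grid with integer (x, y) positions and diagonal movement allowed (all names here are illustrative):

```python
def step_toward(monster_pos, player_pos):
    """Move one tile toward the player, 'cheating' with perfect knowledge.

    Positions are (x, y) tuples on a square grid.
    """
    mx, my = monster_pos
    px, py = player_pos
    # Sign of the difference gives -1, 0 or 1 on each axis.
    dx = (px > mx) - (px < mx)
    dy = (py > my) - (py < my)
    return (mx + dx, my + dy)

# The monster closes in one tile per turn:
print(step_toward((0, 0), (3, -2)))  # (1, -1)
```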
2. ## AI using instruction sets

No, I'd do it based on a hash table. Perhaps like how a transposition table is used? I don't know much about them, though, so I might be wrong. Rather than have if-then behaviour, they could simply be used as a generalised sub-tree optimisation or heuristic, possibly improving search. Like I said, it's something to play with in the future.
3. ## Finding the N best moves in a Minimax tree

My bad - yes, Alvaro is right, my method wouldn't help you much, as the algorithm ends at a depth, not at a total number of nodes searched. Rather than trace back up the tree, it might be better to just send the name of the initial move down the tree - then you could return that move along with the final score. Sorry for any confusion. :( -Cam
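Carrying the root move alongside its score might look like this bare-bones negamax sketch; the game callbacks are hypothetical placeholders:

```python
def best_root_move(position, depth, moves, apply_move, evaluate):
    """Search each root move to `depth` and return (move, score).

    Instead of tracing parent pointers back up, the root move is simply
    kept next to its score at the top level of the search.
    """
    def negamax(pos, d):
        legal = moves(pos)
        if d == 0 or not legal:
            return evaluate(pos)
        return max(-negamax(apply_move(pos, m), d - 1) for m in legal)

    return max(((m, -negamax(apply_move(position, m), depth - 1))
                for m in moves(position)),
               key=lambda ms: ms[1])
```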
4. ## Finding the N best moves in a Minimax tree

The way I'd do it is the same as my previous suggestion - have a parent pointer in each node. When the iterative deepening finishes a branch, have it search its way back up to the original move that spawned it, and save that move and the final score found at that depth in a table. Each time the search returns, it checks against the value in the table (for that move) and replaces it if it's lower. At the end of your searching you can go through the table and do what you like with the answers. It may be useful to use a priority queue in place of a simple map (table). The only downside is that this adds O(m log n) to the runtime of the search, as it has to iterate up log n nodes m times. Best of luck - I'll be keen to see the finished product. :) Cam
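The table idea might be sketched like this, with a plain dict as the table and `heapq.nlargest` standing in for the priority queue; for brevity the root moves are identified by iterating at the root rather than via parent pointers, and the game callbacks are placeholders:

```python
import heapq

def n_best_moves(position, depth, n, moves, apply_move, evaluate):
    """Score every root move into a table and return the N best."""
    def negamax(pos, d):
        legal = moves(pos)
        if d == 0 or not legal:
            return evaluate(pos)
        return max(-negamax(apply_move(pos, m), d - 1) for m in legal)

    table = {}  # root move -> score found at this depth
    for m in moves(position):
        table[m] = -negamax(apply_move(position, m), depth - 1)

    # heapq.nlargest plays the role of the priority queue over the table.
    return heapq.nlargest(n, table.items(), key=lambda kv: kv[1])
```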
5. ## MTD(f) : how can I retrieve the best first move ?

Couldn't you follow the tree back up to the origin by giving each node a parent pointer? Simply check whether the parent is the origin, and if so, take this node. Hope this helps, Cam
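The parent-pointer walk itself is cheap to sketch; the node layout here is invented for illustration:

```python
class Node:
    def __init__(self, move, parent=None):
        self.move = move      # the move that led to this node
        self.parent = parent  # None for the root position

def root_move(node):
    """Walk parent pointers up until the node whose parent is the root."""
    while node.parent is not None and node.parent.parent is not None:
        node = node.parent
    return node.move

root = Node(None)
a = Node("e4", root)
b = Node("e5", a)
c = Node("Nf3", b)
print(root_move(c))  # "e4"
```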
6. ## Where's the AI FAQ?

Perhaps you don't need a general FAQ, but since this is Gamedev.net a lot of people will come here first expecting to find game AI info specifically. A FAQ on at least pathfinding would save a lot of new programmers a lot of time and angst. It doesn't need to be completely comprehensive, and you don't even need to write it! Just ask existing article writers if their work can be quoted and linked to - most would be only too happy to have their work showcased. -Cam
7. ## AI using instruction sets

Being an MPU enthusiast I can see the appeal of the concept - essentially you are building a macro engine by defining the fundamental "moves" of the AI. For example, I suppose you could define the classic Zork directional movements East, West, North and South, and by calling a North and then an East instruction you will have invented "NorthEast"... and yet you haven't, because the actual instructions will be followed one after the other, unless you have some nice way of combining instructions based on, say, vector maths. That of course means that if you have anything other than movement in mind, then anything that can be combined must share the same dimensions too (not that it couldn't be done, of course - just more to think about). I actually intend to try this out myself if I get that far with my game, based on a learned table of instruction macros built from successful outcomes of traditional searching. At least in a limited-knowledge situation these may be somewhat applicable even if the initial situation is not identical. -Cam
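Combining movement instructions with vector maths, as mentioned, could look like this (the instruction set here is invented for illustration):

```python
# Each primitive instruction is a displacement vector.
INSTRUCTIONS = {
    "North": (0, 1),
    "South": (0, -1),
    "East":  (1, 0),
    "West":  (-1, 0),
}

def combine(*names):
    """Sum instruction vectors, so North + East becomes a true diagonal
    rather than two sequential steps."""
    x = sum(INSTRUCTIONS[n][0] for n in names)
    y = sum(INSTRUCTIONS[n][1] for n in names)
    return (x, y)

print(combine("North", "East"))  # (1, 1) - a single "NorthEast" step
```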
8. ## AI priority theory

I think most people can work out that much; the problem is in the implementation. You need effective and efficient ways to:

1) Describe knowledge
2) Store/retrieve knowledge
3) Evaluate input based on knowledge

Easy to say, damned hard to do. Personally I think #2 (closely related to #1) is needed the most right now, as the current "memory" structures I've seen don't scale very well. -Cam
9. ## Strong AI without Neural Networks

If you can come up with a better way to store knowledge than either augmented slot-and-filler structures or Hopfield relaxation networks, that would be a good start. As far as my limited understanding of human neurophysiology goes, human memory is somewhat akin to a "Processor In Memory" (PIM) system, in that memories are organised and associated with other memories that should elicit a similar response. Slot-and-filler structures work well for limited systems where the number of "memories" is relatively low (thousands at most, I imagine). Hopfield networks I haven't looked into much; while I know they work well with images or shapes, I have no idea how they perform with standard "rules". Once you get that solved, you can start looking at ways of "training" your memory. ;) -Cam
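For reference, a minimal Hopfield network sketch - Hebbian training plus synchronous relaxation - showing recall of a stored +/-1 pattern from a corrupted copy (the patterns here are arbitrary toy data):

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weights for a Hopfield net; patterns are +/-1 vectors."""
    n = len(patterns[0])
    W = np.zeros((n, n))
    for p in patterns:
        p = np.asarray(p, dtype=float)
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W

def recall(W, state, steps=10):
    """Synchronously relax a (possibly corrupted) state toward a stored one."""
    s = np.asarray(state, dtype=float)
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return s

stored = [[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]]
W = train_hopfield(stored)
noisy = [1, -1, 1, -1, 1, 1]  # first pattern with its last bit flipped
print(recall(W, noisy))  # recovers the first stored pattern
```

Capacity is the catch: a classic Hopfield net stores only about 0.14 N patterns reliably for N neurons, which is part of why scaling to rule-like knowledge is an open question.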
10. ## A Challenge For You All

I think you should do a text-based RTS - think notepad. j/k