Last year, the Halite competition was pretty fun! I didn't have as much time to spend on it as I would have liked, but I enjoyed it nevertheless.
This year, it looks even more fun, although I have even less time to spend on it. If you all want to give it a go, I can highly recommend it.
The competition consists of a simple game with well-defined rules, plus a "starter kit" that lets you write a bot to play the game.
The game servers then run the bots against each other and score your bot based on how well it does.
In the beginning, getting a high score is not that hard, but as the competition progresses, the well-ranked competitors analyze how their bots are doing and improve their algorithms, so by the end it's quite tough!
This is a great way to sharpen your programming skills in Python, C++, Java, Rust, or one of the many other supported languages. Let's show them some gamedev.net colors!
I am developing an RTS game involving a robber and cops. I have designed a simple rule-based AI system for the robber.
Goals of the robber AI:
Escape from the cops (robber should not get caught)
The robber AI uses a rule-based system to escape from the cops. I have written IF-THEN clauses to build it, and the AI uses knowledge of the cops' current positions.
I am worried about the current architecture of this robber AI system. I have read that rule-based systems are inefficient and difficult to maintain. For the future,
I want the robber AI to use some power-ups (to escape)
The AI should consider the power-ups used by the cops before making decisions.
To conduct the robbery, the robber has to reach a specific location while escaping from the cops.
Will the current rule-based AI system be flexible enough (and easy to maintain) to meet these future requirements?
What would be better alternatives to the existing rule-based system (decision trees, state machines, etc.)?
I have read people suggesting that rule-based systems are difficult to debug. Can anyone explain why?
I would appreciate any thoughts/suggestions on this topic. Thank you.
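For concreteness, here is a minimal sketch contrasting the two styles being asked about. All names are hypothetical and positions are simplified to a 1D line, not the poster's actual game; the point is that a state machine groups transition logic per state, which tends to stay maintainable as power-ups and new behaviours are added, whereas a flat IF-THEN chain grows every branch at once:

```python
def rule_based_move(robber, cops, safe_distance=5):
    """IF-THEN rules: flee the nearest cop if it is too close,
    otherwise head toward the robbery target."""
    nearest = min(cops, key=lambda c: abs(c - robber))
    if abs(nearest - robber) < safe_distance:
        # Cop to the right -> flee left, and vice versa.
        return "flee_left" if nearest > robber else "flee_right"
    return "move_to_target"


class RobberFSM:
    """The same logic as a tiny finite state machine. Adding a
    'USE_POWER_UP' state later only touches this class, not every
    IF-THEN chain in the codebase."""

    def __init__(self, safe_distance=5):
        self.state = "SEEK_TARGET"
        self.safe_distance = safe_distance

    def update(self, robber, cops):
        nearest = min(cops, key=lambda c: abs(c - robber))
        threatened = abs(nearest - robber) < self.safe_distance
        if self.state == "SEEK_TARGET" and threatened:
            self.state = "ESCAPE"
        elif self.state == "ESCAPE" and not threatened:
            self.state = "SEEK_TARGET"
        return self.state
```

Either version can later consult cop power-ups by widening the `threatened` test; the FSM keeps that change localized to one state's transition.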
I'm trying to build an effective AI for the Buraco card game (2 and 4 players).
I want to avoid the heuristic approach: I'm not an expert at the game, and the last games I developed that way gave mediocre results.
I know the Monte Carlo tree search algorithm; I've used it for a checkers game with decent results, but I'm really confused by the recent success of other Machine Learning options.
For example, I found this answer on Stack Overflow that really puzzles me. It says:
"So again: build a bot which can play against itself. One common basis is a function Q(S,a) which assigns to any game state and possible action of the player a value -- this is called Q-learning. And this function is often implemented as a neural network ... although I would think it does not need to be that sophisticated here."
I'm very new to Machine Learning (this should be Reinforcement Learning, right?) and I only know a little about Q-learning, but it sounds like a great idea: I take my bot, make it play against itself, and it learns from its results… the problem is that I have no idea how to start! (and no idea whether this approach is a good fit at all).
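To make the Q(S,a) idea from that quote concrete, here is a tabular Q-learning sketch on a deliberately tiny toy problem (a hypothetical 5-cell corridor, nothing Buraco-specific; a real card game has far too many states for a table, which is why the quoted answer mentions a neural network). It only demonstrates the core update rule:

```python
import random


def q_learning_demo(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning on a toy 5-cell corridor: start at cell 0,
    action 0 = step left, action 1 = step right, reward 1 for
    reaching cell 4 (which ends the episode)."""
    n_states, n_actions, goal = 5, 2, 4
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != goal:
            # Epsilon-greedy: mostly exploit, occasionally explore.
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2 = max(0, s - 1) if a == 0 else min(goal, s + 1)
            r = 1.0 if s2 == goal else 0.0
            # Core Q-learning update:
            # Q(S,a) <- Q(S,a) + alpha * (r + gamma * max_a' Q(S',a') - Q(S,a))
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

After training, the learned values prefer "right" in every cell, since that action leads toward the reward. Scaling this to a card game means replacing the table with a function approximator and generating episodes through self-play.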
Could you help me to get the right direction?
Is Q-learning a good strategy for my domain?
Is Monte Carlo tree search still the best option for me?
Would it work well in a 4-player game like Buraco (2 opponents and 1 teammate)?
Is there any other method that I'm ignoring?
PS: My goal is to develop an enjoyable AI for a casual application; I can even consider letting the AI cheat, for example by looking at the players' hands or the deck. Even with this, ehm, permission, I don't think I could build a good heuristic.
Thank you guys for your help!
Unity has jumped into the world of machine learning solutions with their open beta release of the Unity Machine Learning Agents SDK (ML-Agents SDK).
Released on GitHub, the SDK brings machine learning to games, allowing researchers and developers to create intelligent agents that can be trained using reinforcement learning, neuroevolution, or other machine learning methods through a Python API.
Integration with Unity Engine
Flexible multi-agent support
Discrete and continuous action spaces
Python 2 and 3 control interface
Visualizing network outputs in the environment
TensorFlowSharp Agent Embedding (Experimental)
Learn more about these features and more at the Unity blog announcement.