
C++ Problems rotating objects using eulers & quaternions
alvaro replied to Orella's topic in Math and Physics
If you rotate some amount around the X axis, some amount around the Y axis, undo the X-axis rotation and then undo the Y-axis rotation, you won't get back to the same place. Take off a shoe and try it. In other words, what you are describing is just not compatible with how 3D rotations work. Perhaps you can describe what kind of game you are trying to build, or give us the name of some game whose mechanics you would like to imitate. 
C++ Problems rotating objects using eulers & quaternions
alvaro replied to Orella's topic in Math and Physics
Well, you are doing two different things. If you compose rotations around the axes in an arbitrary order, you'll observe the noncommutativity. If you keep track of two numbers like yaw and roll, you can construct rotations from them, but you'll encounter gimbal lock. What are you trying to accomplish? 
C++ Problems rotating objects using eulers & quaternions
alvaro replied to Orella's topic in Math and Physics
I don't see a problem anywhere. Rotations are not commutative. If you rotate around the X axis first and then around the Y axis, in order to undo the rotations you need to undo the Y-axis rotation first and then the X-axis rotation. If you put on your socks and then your shoes, the correct way to undo the operation is to take off your shoes first and then your socks. This is a matter of how 3D rotations work. There doesn't seem to be anything wrong with your code. 
Game of the General AI Implementation
alvaro replied to Henry Fernandez's topic in Artificial Intelligence
You don't assign probabilities to pieces, but to moves. As I said before, you can start with a policy function that simply generates all the legal moves and assigns them equal probabilities. Later on you can refine this with some heuristics so good moves have higher probability, or you can train a neural network to produce the probability distribution for you. 
Game of the General AI Implementation
alvaro replied to Henry Fernandez's topic in Artificial Intelligence
I don't understand your question. 
Game of the General AI Implementation
alvaro replied to Henry Fernandez's topic in Artificial Intelligence
You can't use the traditional methods for chess AI because they only apply to full-information games. That's why I suggested Monte Carlo methods. Have you looked into that at all?

Here's my very brief recipe. The main ingredient is a "policy function": a [quick] function that, given a game situation, returns a probability distribution over the moves. If you had full information, you would try each of the moves available to you, run 1,000 "playouts" in which both players use the policy function to pick moves for the rest of the game, and pick the move with the best results. A few modifications of this very basic method would get you to Monte Carlo Tree Search (MCTS).

Since you don't have full information, you first have to "read" what the opponent may have. You can do that by generating random configurations, assigning them weights, and then running the playouts from a variety of configurations. The configurations should be weighted by their probability, which can be computed as the product of the probabilities of all the opponent's moves up to this point in the game, using the policy function (this can be justified using Bayes' theorem). You also have to zero out the weight of any configuration that is incompatible with information that has already been revealed (like the identity of some pieces after they have been captured, if I remember the game correctly (sorry, I've never played)). The tricky part is that once the game has gone on for a while and you have learned the identity of some of the opponent's pieces, the vast majority of random configurations will be incompatible with the game history. So you should generate only plausible configurations, and in doing so you should be careful not to bias the probability distribution in any way.

The fact that you will be running playouts on different configurations means that you can't reasonably accumulate statistics for future positions the way you do in MCTS, so you will probably have to use a basic scheme without tree search. One more thing: it's possible to use a neural network to play the role of the policy function, and this might be feasible these days. If I were you, I would start with a hand-crafted policy function. Something that returns the uniform distribution over all legal moves would be a good place to start. You can then up the probability of obvious good moves. Eventually you can train a neural network to maximize the likelihood of the moves in some database of games (which could be generated by your own program playing itself) and use it as the policy network.
Game of the General AI Implementation
alvaro replied to Henry Fernandez's topic in Artificial Intelligence
I haven't written AI for this game, but the last time I thought about it I could see a way to implement a decent AI using MCTS. The hidden-information part makes it trickier, but I think the difficulties are surmountable. I'll be happy to discuss some details, but I would like to know what you have thought of so far. DeepMind's AlphaZero approach could probably be applied here as well, but I'd have to think about it a bit, and it might take too many resources for a thesis. 
"Occupancy maps" seems to be a rediscovery of hidden Markov models by the game industry. The math involved is exactly the same.

Algorithm How to write code for winning probabilities
alvaro replied to SinnedB's topic in General and Gameplay Programming
Your "probabilities" don't add up to 1, so perhaps I didn't understand the question. Assuming you want probabilities 10/55, 20/55, 15/55 and 10/55, you can have an array with the partial sums: {0, 10, 30, 45}. Then generate a random integer between 0 and 54 and use binary search to find the index of the largest element that is less than or equal to it. If that's not what you want, please clarify your question. 
I haven't played the game and I didn't bother reading the rule book, but if the problem is chasing some hidden target through a graph, you should probably look into hidden Markov models. The short description is that your current understanding of where the target might be is a probability distribution over the nodes of the graph, and then two types of things can happen:

Time elapses: a transition matrix describes the probability of moving from one node to another, and you update your probability distribution with a matrix-vector multiplication.

An "observation" occurs: if you know the conditional probability of the observation given that the target is at node N, for all possible values of N, you update your probability distribution using Bayes' formula.

I don't know what kinds of decisions are made in that game, but it's likely that this way of thinking about the situation is helpful.

Neural Network with varying amount of input data?
alvaro replied to JimboC's topic in Artificial Intelligence
Your description matched something I read about a few months ago, although I never read the details carefully: https://arxiv.org/abs/1706.01427 Is that the kind of thing you were looking for? 
You can turn this into an optimization problem. It looks like a reasonable loss function could be the correlation [or the cosine] between the input image and an image that has white inside the quad and black outside. Once you have a loss function, you need some way to optimize it. I would try gradient descent (which requires you to be able to compute the gradient of the loss function; this might not be trivial). How fast do you need this to be?

If you were referring to my code, the iteration over all possibilities is for demonstration purposes only. The function print_combination would work for large N and K (as long as you can handle numbers as big as C[N][K], which is assumed as part of the original post). EDIT: It can also be done non-recursively, but I think the code is easier to understand in this version.

#include <iostream>

int C[20][20] = {0};

void print_combination(int r, int n, int k) {
    if (k == 0)
        return;
    bool selected = (r < C[n - 1][k - 1]);
    if (selected) {
        print_combination(r, n - 1, k - 1);
        std::cout << n << ' ';
    } else {
        print_combination(r - C[n - 1][k - 1], n - 1, k);
    }
}

int main() {
    // Initialize combinatorial numbers table
    for (int i = 0; i < 20; ++i) {
        C[i][0] = 1;
        for (int j = 1; j <= i; ++j)
            C[i][j] = C[i - 1][j - 1] + C[i - 1][j];
    }

    // Loop over all combinations
    for (int x = 0; x < C[6][3]; ++x) {
        print_combination(x, 6, 3);
        std::cout << '\n';
    }
}

For your original question, call `print_combination(random_number, 6, 3)`.

How can I optimize this flood fill pathfinding algorithm?
alvaro replied to EddieK's topic in Artificial Intelligence
Oh, I didn't realize that's what you were using it for. In that case I can't think of anything better than Dijkstra's algorithm.
