alvaro

Members
  • Content count: 7810

Community Reputation

21272 Excellent

1 Follower

About alvaro

  • Rank
    Contributor

Personal Information

  • Interests
    Art
    Audio
    Programming

Recent Profile Visitors

38995 profile views
  1. Why is the AI concerned with world transformations? I would think the AI would determine things like picking actions, or what animation to play next...
  2. 3D 360 degrees rotation around x axis

    You should have a notion of what the current attitude of your object is (i.e., the rotation that brings it from its initial position to its current position). When the mouse is clicked you create a new rotation, which depends only on the point where the mouse was initially clicked and its current position (perhaps it just depends on the vector between those). While the mouse button is down, you display the object as having attitude (current_attitude * mouse_rotation). When the mouse button is released, you bake the mouse rotation into the attitude (`current_attitude *= mouse_rotation`).

    In the paragraph above I have used multiplicative notation to represent composition of rotations. Composition of rotations is hard if you are using Euler angles. But if you are using matrices or quaternions, it literally is a multiplication (be careful to get the order right, because it matters).
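
    As a sketch of that bookkeeping with quaternions (the `Quat` type, the function names, and the way a drag maps to a rotation are all made up for illustration):

    #include <cmath>
    #include <cstdio>

    // Minimal quaternion type (w + xi + yj + zk).
    struct Quat {
        double w = 1.0, x = 0.0, y = 0.0, z = 0.0;

        // Hamilton product: composition of rotations (order matters).
        Quat operator*(Quat const &q) const {
            return { w*q.w - x*q.x - y*q.y - z*q.z,
                     w*q.x + x*q.w + y*q.z - z*q.y,
                     w*q.y - x*q.z + y*q.w + z*q.x,
                     w*q.z + x*q.y - y*q.x + z*q.w };
        }
    };

    // Rotation by `angle` radians around a unit axis (ax, ay, az).
    Quat from_axis_angle(double ax, double ay, double az, double angle) {
        double s = std::sin(angle * 0.5);
        return { std::cos(angle * 0.5), ax * s, ay * s, az * s };
    }

    Quat current_attitude; // rotation from initial pose to current pose
    Quat mouse_rotation;   // rotation implied by the drag in progress

    // While the mouse button is down, display the object with this attitude.
    Quat attitude_for_display() {
        return current_attitude * mouse_rotation;
    }

    // When the mouse button is released, bake the drag into the attitude.
    void end_drag() {
        current_attitude = current_attitude * mouse_rotation;
        mouse_rotation = Quat{}; // back to identity
    }

    int main() {
        // Simulate a drag of 90 degrees around the z axis, then a release.
        mouse_rotation = from_axis_angle(0.0, 0.0, 1.0, 1.5707963);
        end_drag();
        Quat a = attitude_for_display();
        std::printf("attitude: %.3f %.3f %.3f %.3f\n", a.w, a.x, a.y, a.z);
    }
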
  3. Yes, it's the one with the confusing syntax.
  4. You don't need to inherit to share functionality. Composition will be better in the vast majority of situations. So both VertexBuffer and ConstantBuffer can have a member of type Buffer (or MemoryBuffer, or RawBuffer... Using good names is important) and share functionality that way.
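
    A quick sketch of that composition (all the names here are illustrative):

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // The shared functionality lives in a plain member, not in a base class.
    struct Buffer {
        std::vector<unsigned char> data;
        std::size_t size() const { return data.size(); }
        void resize(std::size_t n) { data.resize(n); }
    };

    struct VertexBuffer {
        Buffer buffer; // has-a, not is-a
        std::size_t vertex_stride = 0;
    };

    struct ConstantBuffer {
        Buffer buffer; // same shared code, without inheritance
        int bind_slot = 0;
    };

    int main() {
        VertexBuffer vb;
        vb.buffer.resize(1024);
        std::printf("vertex buffer holds %zu bytes\n", vb.buffer.size());
    }
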
  5. Why are you using inheritance here? Do you have anywhere in your code a pointer or reference to a buffer that might be a vertex buffer or a different type of buffer, and you'll only know at run time? That would be about the only situation in which I'd use inheritance. Inheritance is easily abused. Don't use it until you really need it.
  6. 3D 360 degrees rotation around x axis

    I would suggest using a parametrization of rotations that doesn't have a region where things get wacky. Either orthogonal matrices or quaternions would do.
  7. The `const` should go before the curly bracket. The way you wrote it, the `const` is modifying whatever comes next (`const bool ...`). And I don't think you should be using `inline` there (it most likely has no effect, so your code is cleaner if you remove it).
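
    To illustrate the placement (the type and function names are made up):

    struct Thing {
        bool flag = false;

        // Intended: `const` after the parameter list, right before the curly
        // bracket, makes this a const member function (it can't modify members).
        bool get_flag() const { return flag; }

        // Misplaced: here `const` qualifies the return type (`const bool`),
        // and the function itself is non-const.
        const bool get_flag_wrong() { return flag; }
    };

    int main() {
        Thing const t{};
        bool ok = t.get_flag(); // fine: const member function on a const object
        // t.get_flag_wrong();  // error: non-const member function on a const object
        return ok ? 0 : 1;
    }
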
  8. The most obvious thing to do is to use 3 noise functions, for x, y and z. Use three different seeds to generate them. At least logically you would do that, but the code will be faster if you implement Perlin noise that produces vectors directly. If this second paragraph confuses you, ignore it and go with the first paragraph solution.
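
    As a sketch of the first-paragraph solution: `value_noise` below is a toy hash-based stand-in (not real Perlin noise), but any smooth noise function that takes a seed would do.

    #include <cmath>
    #include <cstdint>
    #include <cstdio>

    // Integer hash mapped to [-1, 1].
    double hash_to_unit(std::uint32_t n) {
        n = (n << 13) ^ n;
        n = n * (n * n * 15731u + 789221u) + 1376312589u;
        return (n & 0x7fffffffu) / double(0x7fffffff) * 2.0 - 1.0;
    }

    // Smooth 1D value noise; the seed selects an independent channel.
    double value_noise(double t, std::uint32_t seed) {
        int i = int(std::floor(t));
        double f = t - std::floor(t);
        double a = hash_to_unit(std::uint32_t(i) * 2654435761u + seed);
        double b = hash_to_unit(std::uint32_t(i + 1) * 2654435761u + seed);
        double s = f * f * (3.0 - 2.0 * f); // smoothstep interpolation
        return a + s * (b - a);
    }

    struct Vec3 { double x, y, z; };

    // Three different seeds give three independent components.
    Vec3 noise3(double t) {
        return { value_noise(t, 1u), value_noise(t, 2u), value_noise(t, 3u) };
    }

    int main() {
        for (double t = 0.0; t < 2.0; t += 0.25) {
            Vec3 v = noise3(t);
            std::printf("t=%.2f -> (%+.3f, %+.3f, %+.3f)\n", t, v.x, v.y, v.z);
        }
    }
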
  9. Game AI architecture for a RTS game

    No, it's not "goal oriented AI". In the classification of this article it's the "utility-based salad".
  10. Game AI architecture for a RTS game

    I think the main weakness of rule-based systems is a different one: They are fragile. There will be circumstances you have not considered, where following the rules looks dumb. You will have some rule that seems perfectly sensible, like "if you are hungry, go to the fridge", and then there will be a fire in the house and some idiot will go to the fridge because he was hungry.

    My favorite architecture is a utility maximization paradigm, where you assign a utility value to each action the agent can perform (i.e., how happy it makes the agent), then pick the action for which this value is maximum. You still have rules, but they are expressed as terms in the utility function that indicate priorities, so staying in a burning house has a negative contribution to utility that is much larger than the positive contribution of going to the fridge when you have the munchies.
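
    A sketch of that paradigm using the fridge/fire example (the actions, terms, and weights are invented for illustration):

    #include <iostream>
    #include <string>
    #include <vector>

    struct WorldState {
        double hunger = 0.8;      // 0 = full, 1 = starving
        bool house_on_fire = true;
    };

    struct Action {
        std::string name;
        double (*utility)(WorldState const &);
    };

    // Each "rule" is just a term in the utility, not a hard trigger.
    double go_to_fridge(WorldState const &s) {
        return 10.0 * s.hunger + (s.house_on_fire ? -1000.0 : 0.0);
    }
    double flee_house(WorldState const &s) {
        return s.house_on_fire ? 500.0 : -5.0;
    }

    int main() {
        std::vector<Action> actions = { {"go to fridge", go_to_fridge},
                                        {"flee house", flee_house} };
        WorldState s;
        Action const *best = &actions[0];
        for (Action const &a : actions)
            if (a.utility(s) > best->utility(s))
                best = &a;
        std::cout << "Chosen action: " << best->name << '\n'; // "flee house"
    }
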
  11. I would start by enumerating the possible actions. Then write a heuristic function that assigns utility values (i.e. how happy the agent will be with the result) to the actions. Pick the action that maximizes the utility. If you want some variety, use SoftMax instead of picking the maximum, or add some randomness to your utility evaluation. Assigning reasonable utility values to actions in a game with very complicated state can be challenging, but you can start fairly simple, decide what units the utility will have (this is fairly arbitrary) and tweak the heuristics whenever you observe behavior you don't like. There is a certain art to getting this to work well.

    One piece of advice: If your utility function ends up being the sum of a bunch of terms (which is reasonable), make sure you build debugging tools to see what the value of each term was for each of the available actions. That will make your life much easier down the road.
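
    A sketch of the SoftMax variant mentioned above (the utilities are made up): instead of picking the maximum, sample each action with probability proportional to exp(utility / temperature).

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <random>
    #include <vector>

    // Sample an index with probability proportional to exp(utility[i] / temperature).
    int softmax_pick(std::vector<double> const &utility, double temperature,
                     std::mt19937 &rng) {
        double max_u = *std::max_element(utility.begin(), utility.end());
        std::vector<double> weights;
        for (double u : utility)
            weights.push_back(std::exp((u - max_u) / temperature)); // shift for stability
        std::discrete_distribution<int> dist(weights.begin(), weights.end());
        return dist(rng);
    }

    int main() {
        std::mt19937 rng(42);
        std::vector<double> utility = {3.0, 2.5, -1.0}; // made-up utilities
        int counts[3] = {0, 0, 0};
        for (int i = 0; i < 10000; ++i)
            ++counts[softmax_pick(utility, 1.0, rng)];
        // Higher-utility actions get picked more often, but not always: variety.
        std::printf("picked: %d %d %d\n", counts[0], counts[1], counts[2]);
    }
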
  12. I only read the vague initial description of the game, but I think I got enough details to tell you what I think. I think using reinforcement learning is quite reasonable for games with randomness and hidden state. I don't have much experience here, but I think I know how I would do it.

    First you need a fast simulator for the game. You can start with a strategy that picks an action at random to make sure that the simulator is working. You then define a "policy" to be a mapping from the state known to the agent (including the history of previous actions by all players) to a probability distribution over actions. This policy can be implemented as some sort of neural network, with a final SoftMax operation to make sure what is produced is indeed a probability distribution. If you initialize the weights so the neural network produces small outputs before the SoftMax, the SoftMax will in turn produce close to a uniform distribution.

    Start playing games. After each game, you can tweak the weights of the neural network to increase the probability of the actions taken by the winners and decrease the probability of the actions taken by the losers. In order to do this, you would need to be able to compute the derivative of the probability of producing a move with respect to each of the weights in the neural network, which is what backpropagation does.

    If you had a database of games played by experts, you could train a policy to simply imitate their playing. In that case you would be using supervised learning, which is easier. You could even use supervised learning first and then tweak the resulting network using reinforcement learning. At some stage AlphaGo used this (although it's not the primary mechanism of how AlphaGo works; they just used this to generate a database of labeled positions that could be used to train their value network).

    Monte Carlo search would be difficult to get to work, but not impossible. I can give you more details if you want to go down this route, but I don't recommend it. I will just say that using Monte Carlo search would still require a "policy", as described above. So you have to start there anyway.

    This sounds like a fun project. I should do something like this with some other card game.
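
    To make the "tweak the weights after each game" step concrete, here is a minimal sketch of that kind of update on a toy one-decision game (a three-action bandit with invented payoffs). A table of preferences stands in for the neural network, and the derivative of the log-probability is written out by hand instead of coming from backpropagation:

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <random>
    #include <vector>

    int main() {
        std::mt19937 rng(12345);
        std::normal_distribution<double> noise(0.0, 1.0);

        // True mean payoff of each action (hidden from the agent).
        std::vector<double> true_mean = {0.1, 0.5, 0.9};

        // Policy: SoftMax over learned preferences (stand-in for a neural net).
        std::vector<double> pref(3, 0.0);
        double const learning_rate = 0.05;

        for (int game = 0; game < 20000; ++game) {
            // Action probabilities (shift by the max for numerical stability).
            double max_p = *std::max_element(pref.begin(), pref.end());
            std::vector<double> prob(3);
            double sum = 0.0;
            for (int a = 0; a < 3; ++a)
                sum += prob[a] = std::exp(pref[a] - max_p);
            for (int a = 0; a < 3; ++a)
                prob[a] /= sum;

            // Play one "game": sample an action, observe a noisy reward.
            std::discrete_distribution<int> dist(prob.begin(), prob.end());
            int action = dist(rng);
            double reward = true_mean[action] + noise(rng);

            // Policy-gradient update: d log(prob[action]) / d pref[a] is
            // (1 - prob[a]) for the action taken and -prob[a] otherwise.
            for (int a = 0; a < 3; ++a) {
                double grad = (a == action) ? 1.0 - prob[a] : -prob[a];
                pref[a] += learning_rate * reward * grad;
            }
        }
        // The preference of the best action (the third one) should dominate.
        std::printf("preferences: %.2f %.2f %.2f\n", pref[0], pref[1], pref[2]);
    }
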
  13. I had a bit of time, so I programmed approach (2):

    #include <iostream>
    #include <vector>
    #include <cmath>
    #include <cstdlib>

    double const gravity = 9.81;

    struct Polynomial {
        std::vector<double> c; // coefficients in decreasing degree order

        size_t degree() const { return c.size() - 1; }
        double &operator[](size_t i) { return c[i]; }
        double operator[](size_t i) const { return c[i]; }

        Polynomial(std::vector<double> const &c) : c(c) {
        }
    };

    bool extract_root(Polynomial &polynomial, double &root) {
        size_t degree = polynomial.degree();
        for (int attempt = 0; attempt < 5; ++attempt) {
            // Start with a random guess
            root = -1.0 + 2.0 * double(std::rand()) / RAND_MAX;

            // Refine the guess several times
            double value, derivative;
            for (int step = 0; step < 30; ++step) {
                value = derivative = 0.0;
                for (size_t i = 0; i <= degree; ++i) {
                    value = root * value + polynomial[i];
                    if (i < degree)
                        derivative = root * derivative + polynomial[i] * (degree - i);
                }
                double old_root = root;
                root -= value / derivative; // Newton-Raphson iteration
                if (root == old_root)
                    break;
            }
            if (std::abs(value) < 1e-6)
                goto DONE;
        }
        return false; // No root found

    DONE:;
        // Divide by (x - root)
        double carry = 0.0;
        for (size_t i = degree + 1; i --> 0;)
            carry = root * (polynomial[degree - i] += carry);
        polynomial.c.pop_back();
        return true; // Success!
    }

    std::vector<double> find_all_real_roots(Polynomial polynomial) {
        size_t degree = polynomial.degree();
        std::vector<double> roots;
        for (int i = 0; i < degree; ++i) {
            double root;
            if (extract_root(polynomial, root))
                roots.push_back(root);
            else
                break;
        }
        return roots;
    }

    double aimCannon(double cannonLength, double muzzleSpeed, double aimPoint_x, double aimPoint_y) {
        // I am going to use short names so the formulas below don't get too long
        double const l = cannonLength;
        double const s = muzzleSpeed;
        double const x = aimPoint_x;
        double const y = aimPoint_y;
        double const g = gravity;

        Polynomial p(std::vector<double>({0.25*g*g, 0.0, g*y - s*s, -2*l*s, x*x + y*y - l*l}));

        double t = 1e40;
        for (double r : find_all_real_roots(p)) {
            if (r > 0.0 && r < t)
                t = r;
        }
        if (t == 1e40)
            throw "No solution found!";

        return std::atan2(y + 0.5*g*t*t, x);
    }

    int main() {
        try {
            double const cannonLength = 1.0;
            double const muzzleSpeed = 15.0;
            double const aimPoint_x = 10.0;
            double const aimPoint_y = 5.0;
            std::cout << "Aiming for (" << aimPoint_x << ',' << aimPoint_y
                      << ") at speed " << muzzleSpeed << ".\n";
            double const angle = aimCannon(cannonLength, muzzleSpeed, aimPoint_x, aimPoint_y);
            std::cout << "The angle is " << angle << '\n';
            for (double t = 0.0; ; t += .1) {
                double x = std::cos(angle) * (cannonLength + muzzleSpeed * t);
                double y = std::sin(angle) * (cannonLength + muzzleSpeed * t) - 0.5 * gravity * t * t;
                std::cout << "t=" << t << " (" << x << "," << y << ")\n";
                if (x >= aimPoint_x)
                    break;
            }
        } catch (char const *msg) {
            std::cout << "ERROR: " << msg << '\n';
        }
    }
  14. (1) Think of how long it will take the ball to reach the target, given the angle. You can do this looking just at the x coordinate, which requires solving just a linear equation. Now you can compute the y coordinate at that time. If you fired at the right angle, the answer will be the y coordinate of the target. So this process gives you an equation to solve, which you can now do by your favorite method.

    (2) Another way to approach this problem is to compute the desired muzzle speed as a function of how long it takes you to hit the target. I know the muzzle speed is given, but bear with me. If you know the time to reach the target, you know how much the projectile is going to drop due to gravity. So you can just imagine the target is that much higher and then shoot in a straight line, ignoring gravity. That will give you the desired speed. Since you know what the muzzle speed is, you can solve for the time it takes to reach the target. Once you have that, use the same trick of moving the target up by the drop for that time and firing straight.

    (3) Yet another [perhaps less practical] way to think about it: Imagine the set of points that can be reached in time t with your bullet. At time 0 this set of points is a circle that describes all the possible positions for the end of the cannon. At time t the set will be a circle of radius (length_of_cannon + t * ball_speed). If there were no gravity, the circle would be centered at the base of the cannon. Because of gravity, all the points are lower by gravity*t^2/2. So you can think of gravity as acting on this growing circle normally. At some point this dropping circle will engulf the target. Compute at what time that happens. Then computing the angle from the time is easy, using the linear equation given by the horizontal coordinate, or the trick of moving the target up, from (2).

    I believe (1) is what most people would do, (3) is the first thing that comes to my mind --because I am weird--, and (2) is the way I would do it after thinking about this for the last 10 minutes or so. Notice how (2) and (3) don't really require trigonometry until the last step, when you express the direction of firing as an angle (using atan2), because that's the format that was asked for. You could just as well return the answer as a unit vector, and then you wouldn't have angles anywhere. Just the way I like it.
  15. OOP RPG programming principals

    The word you were trying to use is "principles". Principals run schools.