
Álvaro

Member Since 07 Mar 2002
Offline Last Active Today, 04:36 PM

#5295669 Golf Ball Physics (involving 3D slopes)

Posted by Álvaro on 08 June 2016 - 02:02 PM

A friend of mine is an expert in golf physics. He published a paper about putting on a planar green that has an appendix with the equations of motion: http://arxiv.org/abs/1106.1698


#5295618 Why is it so hard to do animations?

Posted by Álvaro on 08 June 2016 - 08:23 AM

Maybe it's hard because .obj files don't support animation. Or am I missing something?


#5295474 std::stringstream crash

Posted by Álvaro on 07 June 2016 - 06:07 AM

This is a sample test app that I've written, but it doesn't crash


I can write non-crashing code myself, but then I can't help you with your problem. :)

Can you take the original program and start removing anything that doesn't seem relevant, testing often, until whatever you remove makes the bug disappear? There is a decent chance you'll discover the problem yourself in that process. And if you don't, you'll have a neat little program to post here so we can help you.


#5294863 How does Runge-Kutta 4 work in games

Posted by Álvaro on 03 June 2016 - 03:10 PM

OK, thinking about it a bit more, the equations you need to solve are not necessarily linear, so it's a bit trickier than I thought. It's certainly not impossible.

Have you tried Verlet integration? You need to keep the last two positions, instead of a position and a velocity. But it's trivial to implement and generally much more stable than Euler's method.
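If it helps, here is a minimal sketch of a position-Verlet step for a single coordinate (the Particle struct and the placeholder accel() function are just made up for the example):

#include <cstdio>

// State is the current and previous position; the velocity is implicit.
struct Particle {
  double x, x_prev;  // one coordinate for brevity; apply the same formula per component in 3D
};

// Placeholder acceleration: constant gravity.
double accel(double /*x*/) { return -9.8; }

// One position-Verlet step: x_next = 2*x - x_prev + a(x)*dt^2
void verlet_step(Particle &p, double dt) {
  double x_next = 2.0 * p.x - p.x_prev + accel(p.x) * dt * dt;
  p.x_prev = p.x;
  p.x = x_next;
}

int main() {
  Particle p{100.0, 100.0};  // current == previous means the particle starts at rest
  for (int i = 0; i < 10; ++i) {
    verlet_step(p, 1.0 / 60.0);
    std::printf("%f\n", p.x);
  }
}

Notice there is no velocity variable anywhere: it is encoded in the difference between the two stored positions.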


#5294823 How does Runge-Kutta 4 work in games

Posted by Álvaro on 03 June 2016 - 12:18 PM

The Wikipedia page on RK methods seems pretty good. If you have trouble understanding that page, please ask a more concrete question.

To use RK4 you need to be able to recompute f after half a step, so you need to have f specified as a function of time and x, not a single number.

However, if you are trying to integrate a system with stiff springs you probably should consider an implicit method.
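For what it's worth, here is a minimal sketch of a single RK4 step for dx/dt = f(t, x); the toy f in main is just an example, not anything spring-related:

#include <cmath>
#include <cstdio>
#include <functional>

// One classical RK4 step for dx/dt = f(t, x). Note that f is re-evaluated at
// intermediate times and states, which is why a single precomputed number is not enough.
double rk4_step(const std::function<double(double, double)> &f,
                double t, double x, double h) {
  double k1 = f(t, x);
  double k2 = f(t + 0.5 * h, x + 0.5 * h * k1);
  double k3 = f(t + 0.5 * h, x + 0.5 * h * k2);
  double k4 = f(t + h, x + h * k3);
  return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4);
}

int main() {
  // Toy example: dx/dt = -x, whose exact solution is exp(-t).
  auto f = [](double /*t*/, double x) { return -x; };
  double x = 1.0;
  for (int i = 0; i < 10; ++i)
    x = rk4_step(f, i * 0.1, x, 0.1);
  std::printf("x(1) = %f, exact = %f\n", x, std::exp(-1.0));
}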


#5294727 implementation of neural network

Posted by Álvaro on 02 June 2016 - 05:11 PM

A chokepoint for GPUs is often complex functions which aren't easily handled by the simplified instruction sets used by the highly parallel processors. How well do the usual NN sigmoid activation functions work within the GPU instruction sets (and might some table lookup possibly be substituted to get around that)?


For any decently sized neural net, the vast majority of the time is spent doing matrix multiplications (matrix by column vector if you are feeding it a single sample, or matrix by matrix if you feed it multiple samples at a time, which is typically how training is done). Imagine a neuron with 1,000 inputs. Computing its activation takes 1,000 multiply-adds and a single call to the activation function.

Also, rectified linear units are increasingly common, so instead of 1/(1+exp(-x)), you simply need to compute max(0,x).
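To put some numbers behind that, here is a tiny illustrative sketch of a single neuron (none of this is from any particular library):

#include <cmath>
#include <cstdio>
#include <vector>

// The two activation functions mentioned above.
float sigmoid(float x) { return 1.0f / (1.0f + std::exp(-x)); }
float relu(float x)    { return x > 0.0f ? x : 0.0f; }

// A neuron with N inputs: N multiply-adds and then one activation call,
// so the dot product is where almost all of the time goes.
float neuron(const std::vector<float> &w, const std::vector<float> &in, float bias) {
  float sum = bias;
  for (std::size_t i = 0; i < w.size(); ++i)
    sum += w[i] * in[i];
  return relu(sum);  // swap in sigmoid(sum) to compare
}

int main() {
  std::vector<float> w(1000, 0.01f), in(1000, 1.0f);
  std::printf("%f\n", neuron(w, in, 0.0f));
}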


#5294140 Quaternion as angular velocity

Posted by Álvaro on 30 May 2016 - 06:35 AM

First, let me address the title of the thread. Quaternions represent attitudes and rotations, not angular velocities: Angular velocities are regular vectors.

So you have an attitude represented by quaternion q1 and you want to change it to q2 over time. Because the notation used in quaternions is multiplicative, you need to perform the rotation inverse(q1)*q2 over some time T. inverse(q1) is the same as conj(q1) because |q1|=1. If the first frame will move time forward by t, the fraction of the rotation you need to perform is t/T. This means you need to multiply your original q1 by (conj(q1)*q2)^(t/T). The relevant formulas are here.


EDIT: Here's some code for your enjoyment.
 
#include <cmath>
#include <iostream>
#include <boost/math/quaternion.hpp>

typedef boost::math::quaternion<float> Quaternion;

// This function assumes abs(q) = 1
Quaternion log(Quaternion q) {
  float a = q.real();
  Quaternion v = q.unreal();
  
  // In case some rounding error buildup results in a real part that is too large
  if (a > 1.0f) a = 1.0f;
  if (a < -1.0f) a = -1.0f;
  
  return v * (std::acos(a) / abs(v));
}

int main() {
  Quaternion q1(0.0f, 1.0f, 0.0f, 0.0f);
  Quaternion q2(0.0f, 0.0f, 1.0f, 0.0f);
  
  for (float t = 0; t <= 1.0; t += 0.125)
    std::cout << q1 * exp(log(conj(q1) * q2) * t) << '\n';
}



#5292762 how can neural network can be used in videogames

Posted by Álvaro on 21 May 2016 - 09:01 AM

https://arxiv.org/pdf/1409.3215.pdf

This is also fun: http://karpathy.github.io/2015/05/21/rnn-effectiveness/


#5292588 how can neural network can be used in videogames

Posted by Álvaro on 20 May 2016 - 01:51 AM

"You sound like someone that has never programmed either a checkers engine or a chess engine."

You sound like someone who hasn't programmed anything more complex than a "checkers engine or a chess engine".


Whatever.

 

NNs for basic classification of SIMPLE situational factors are fine, but once the situations are no longer simple (like spotting temporal cause and effect) they just don't work too well.


NNs can tell a Siberian Husky from an Alaskan Malamute by looking at their picture. They can translate sentences between any two languages, with very little additional machinery. Certain NNs (LSTMs in particular) can spot temporal relationships extremely well. It just sounds like you made up your mind about what NNs could do a decade ago and your opinion is impervious to new information.

 

Yeah, Go is actually a very simple game with a very large branching factor. That is NOT the case for most game AI needs.


This is a serious mischaracterization of the difficulties in writing go AI. The 9x9 game has a branching factor comparable to chess, but until a few years ago we couldn't write strong engines even for that version of the game, because the main problem is not the branching factor: It's the lack of a reasonable evaluation function. Now if you look at what AlphaGo has done, one of their key components is what they call their "value network", which is a NN used as an evaluation function. The problem of writing an evaluation function in go is so subtle and so complex that nobody knows any other way of writing a reasonably reliable evaluation function (this is not exactly true: Monte Carlo methods also kind of work for this, and AlphaGo actually blends the two approaches).


Look, I used to be very skeptical of what NNs could do, but they have gotten a lot better. I still don't think they are very useful for game AI, but it's probably a matter of time until they become useful, and it's probably not a waste of your time to learn about them. It seems plausible that, if you can define a reward system in some quantitative and automated way, you can implement good game AI using a utility-based architecture where the sum of future rewards is estimated using a neural network.


#5292523 implementation of neural network

Posted by Álvaro on 19 May 2016 - 12:03 PM

Consider your inputs to a feed-forward fully-connected neural network as a column vector with real-valued entries. The operation of a typical layer is this:

output = non_linearity(matrix * input + biases)

Here `output', `input' and `biases' are column vectors, and `non_linearity' is a function that applies a non-linear transformation to each coordinate in the vector (typically tanh(x) or max(0,x)).

For non-trivial neural networks the bulk of the work comes from the `matrix * input' operation, which can already be parallelized to some extent. However, you get much better parallelism if you compute your network on multiple data samples at the same time (a so-called "minibatch"). It turns out you can just replace the column vectors with matrices, so each column represents a separate data sample from the minibatch, and the formulas are essentially the same. This allows for much more efficient use of parallel hardware, especially if you are using GPUs. All you need to do is use a well-optimized matrix library.
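In case it helps to see the shapes, here is the same layer written out with plain loops (all names made up; a real implementation would hand this to an optimized BLAS or GPU routine instead):

#include <vector>

// A dense matrix in row-major order (rows x cols).
struct Matrix {
  int rows, cols;
  std::vector<float> a;
  Matrix(int r, int c) : rows(r), cols(c), a(r * c, 0.0f) {}
  float &at(int r, int c) { return a[r * cols + c]; }
  float at(int r, int c) const { return a[r * cols + c]; }
};

// output = non_linearity(matrix * input + biases) for a whole minibatch:
// each column of `input' (and of `output') is one sample.
Matrix layer_forward(const Matrix &weights, const Matrix &input,
                     const std::vector<float> &biases) {
  Matrix output(weights.rows, input.cols);
  for (int i = 0; i < weights.rows; ++i)
    for (int j = 0; j < input.cols; ++j) {
      float sum = biases[i];
      for (int k = 0; k < weights.cols; ++k)
        sum += weights.at(i, k) * input.at(k, j);  // the `matrix * input' part
      output.at(i, j) = sum > 0.0f ? sum : 0.0f;   // max(0,x) non-linearity
    }
  return output;
}

int main() {
  Matrix weights(16, 8), input(8, 32);  // 8 inputs, 16 outputs, minibatch of 32 samples
  std::vector<float> biases(16, 0.0f);
  Matrix output = layer_forward(weights, input, biases);
  return (output.rows == 16 && output.cols == 32) ? 0 : 1;
}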

I know nothing about C# or Unity, sorry.


#5292460 how can neural network can be used in videogames

Posted by Álvaro on 19 May 2016 - 05:15 AM

Checkers as an equivalent to Chess? OK .........


You sound like someone that has never programmed either a checkers engine or a chess engine. I have done both. And nobody said "equivalent". But they are in the same class.

 

'Chess' evaluation function (as in 'tool'??) .... but is it the fundamental core of the decision logic? Which is what I'm talking about being a problematic thing for NN usage.

Yes, NNs are only a tool. I don't see who you are arguing with here. The main search algorithms to use in boardgames are alpha-beta search and MCTS. Both of them have two parts that can be implemented using neural networks: an estimate of the probability of each move and an estimate of the result of the game.

 

'possible'   -- Where AI is concerned I recall that little situation in the 50s where they thought AI was just around the corner, and all kinds of Computer AI goodness was just about solved.   Here we are 60 years later.   'Complexity' has proven to be quite perplexing.

I am not old enough to remember what people were thinking in the 50s. But I know what AlphaGo just did to the game of go using NNs. And it's hard to argue that go is not a complex game.


#5292284 how can neural network can be used in videogames

Posted by Álvaro on 18 May 2016 - 08:48 AM

I'm not that scared by your FUD about how complex things can get. :)


EDIT - a simple way to contemplate what I'm talking about is --- try to program Chess via a NN-based solution.


I already mentioned I have used a NN as an evaluation function in checkers. Using one as an evaluation function in chess is not [much] harder: http://arxiv.org/abs/1509.01549

Other uses of NNs for chess are possible: http://erikbern.com/2014/11/29/deep-learning-for-chess/


#5292170 how can neural network can be used in videogames

Posted by Álvaro on 17 May 2016 - 04:45 PM

The problem with neural nets is that the inputs (the game situation) have to be fed to them as a bunch of numbers.
That means that there usually is a heck of a lot of interpretive pre-processing required to generate this data first.


You can feed images to a CNN these days.


Another problem is that the process of 'training' the neural nets is usually only understood from the outside - the logic is NOT directly accessible to the programmer. A lot of 'test' game situational data needs to be built up and maintained, and connected with a CORRECT action (probably provided by a human) to force the neural net into producing what is required. Again, a lot of indirect work.


You can make the network return an estimate of future rewards for each possible action: Read the DQN paper I linked to earlier. There are mechanisms to look into what the neural network is doing, although I think it's best to use NNs in situations where you don't particularly care how it's doing it.


Neural nets also generally don't handle complex situations very well; too many factors interfere with the internal learning patterns/processes, usually requiring multiple simpler neural nets to be built to handle different strategies/tactics/solutions.


That's not my experience.


Usually with games (and their limited AI processing budgets), after you have already done the interpretive preprocessing, it just takes simple hand-written logic to use that data -- and that logic CAN be directly tweaked to get the desired results.


That is the traditional approach, yes: You define a bunch of "features" that capture important aspects of the situation, and then write simple logic to combine them. When you do things the NN way, you let the network learn the features and how they interact.


It might be that practical neural nets are just a 'tool' that the main logic can use for certain analyses (and not for many others).


I think you should give NNs an honest try. In the last few years there has been a lot of progress and most of your objections don't apply.


If you can define a reward scheme by which the quality of an agent's behavior is evaluated, you can probably use unsupervised learning to train a NN to do the job. I don't know if this is practical yet, but with the right tools, this could be a very neat way of writing game AI.


#5291936 Low level Resources regarding Convolutional Neural Networks

Posted by Álvaro on 16 May 2016 - 02:38 PM

There is a general book on deep learning by MIT that looks interesting. It hasn't been published yet, but you can read it online: http://www.deeplearningbook.org/ . Disclaimer: I haven't read the whole thing.

You can find an example of training a CNN for digit recognition in the TensorFlow tutorials. Implementing that and playing with it should be quite illuminating.


#5291926 Neural network for ultimate tic tac toe

Posted by Álvaro on 16 May 2016 - 01:36 PM

The natural place to use a neural network in that game is in writing an evaluation function. This NN will be a function that takes the current game situation as input and outputs a single real number, which is something like the expected result of the game, where +1 means X wins and -1 means O wins.

You can use a regular feed-forward neural network, which is the easiest type to understand. You can give it something like 81 inputs describing what's on the board, 9 describing which sub-boards are closed, and 9 indicating which sub-board the next move should be played in. Have a few hidden layers using rectified linear units. The last layer can use a tanh activation function to bring the result to the range [-1,+1].
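Here is a rough sketch of what that network's forward pass could look like (layer sizes and names are placeholders, not a recommendation):

#include <cmath>
#include <vector>

// A fully-connected layer: `out x in' weights plus `out' biases, row-major.
struct Layer {
  int in, out;
  std::vector<float> w, b;
  Layer(int i, int o) : in(i), out(o), w(i * o, 0.0f), b(o, 0.0f) {}
};

// Apply one layer: ReLU on hidden layers, tanh on the last one.
std::vector<float> apply(const Layer &L, const std::vector<float> &x, bool last) {
  std::vector<float> y(L.out);
  for (int o = 0; o < L.out; ++o) {
    float s = L.b[o];
    for (int i = 0; i < L.in; ++i)
      s += L.w[o * L.in + i] * x[i];
    y[o] = last ? std::tanh(s) : (s > 0.0f ? s : 0.0f);
  }
  return y;
}

// 99 inputs = 81 board cells + 9 closed-sub-board flags + 9 next-sub-board flags.
// The output is a single number in [-1,+1], the expected result of the game.
float evaluate(const std::vector<Layer> &net, std::vector<float> x) {
  for (std::size_t l = 0; l < net.size(); ++l)
    x = apply(net[l], x, l + 1 == net.size());
  return x[0];
}

int main() {
  std::vector<Layer> net{Layer(99, 64), Layer(64, 64), Layer(64, 1)};
  std::vector<float> position(99, 0.0f);  // with tiny (here zero) weights the output is ~0
  return evaluate(net, position) == 0.0f ? 0 : 1;
}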

Write an alpha-beta search that uses the NN plus a small amount of noise for evaluation. Initialize the NN using small weights, so the output for any input will be very close to 0 (so the alpha-beta search will be using a random evaluation function at this point). Now you can play games where the program plays itself. After you have a few thousand games, you can start training the neural network to predict the result of the game. You can alternate generation of games and training of the network for a few iterations (say 4).

I did something like this for Spanish checkers at the beginning of 2015, and it worked great.



