
# Álvaro

Member Since 07 Mar 2002

### #5294140 Quaternion as angular velocity

Posted by on Yesterday, 06:35 AM

First, let me address the title of the thread. Quaternions represent attitudes and rotations, not angular velocities: Angular velocities are regular vectors.

So you have an attitude represented by quaternion q1 and you want to change it to q2 over some time T. Because quaternion composition is multiplicative, the rotation that takes q1 to q2 is inverse(q1)*q2, and inverse(q1) is the same as conj(q1) because |q1| = 1. If the next frame advances time by t, the fraction of the rotation you need to perform is t/T. This means you need to multiply your original q1 by (conj(q1)*q2)^(t/T). The relevant formulas are here.

EDIT: Here's some code for your enjoyment.

```cpp
#include <cmath>
#include <iostream>
#include <boost/math/quaternion.hpp>

typedef boost::math::quaternion<float> Quaternion;

// This function assumes abs(q) = 1
Quaternion log(Quaternion q) {
    float a = q.real();
    Quaternion v = q.unreal();

    // In case some rounding-error buildup results in a real part
    // slightly outside [-1, 1]
    if (a > 1.0f) a = 1.0f;
    if (a < -1.0f) a = -1.0f;

    return v * (std::acos(a) / abs(v));
}

int main() {
    Quaternion q1(0.0f, 1.0f, 0.0f, 0.0f);
    Quaternion q2(0.0f, 0.0f, 1.0f, 0.0f);

    for (float t = 0.0f; t <= 1.0f; t += 0.125f)
        std::cout << q1 * exp(log(conj(q1) * q2) * t) << '\n';
}
```

### #5292762 how can neural networks be used in videogames

Posted by on 21 May 2016 - 09:01 AM

https://arxiv.org/pdf/1409.3215.pdf

This is also fun: http://karpathy.github.io/2015/05/21/rnn-effectiveness/

### #5292588 how can neural networks be used in videogames

Posted by on 20 May 2016 - 01:51 AM

"You sound like someone that has never programmed either a checkers engine or a chess engine."

You sound like someone who hasn't programmed anything more complex than a "checkers engine or a chess engine".

Whatever.

> NNs for basic classification of SIMPLE situational factors are fine, but once the situations are no longer simple (like spotting temporal cause and effect) they just don't work too well.

NNs can tell a Siberian Husky from an Alaskan Malamute by looking at their picture. They can translate sentences between any two languages, with very little additional machinery. Certain NNs (LSTMs in particular) can spot temporal relationships extremely well. It just sounds like you made up your mind about what NNs could do a decade ago and your opinion is impervious to new information.

> Yeah, Go is actually a very simple game with a very large branching factor. That is NOT the case for most game AI needs.

This is a serious mischaracterization of the difficulties in writing go AI. The 9x9 game has a branching factor comparable to chess, but until a few years ago we couldn't write strong engines even for that version of the game, because the main problem is not the branching factor: It's the lack of a reasonable evaluation function. Now if you look at what AlphaGo has done, one of their key components is what they call their "value network", which is a NN used as an evaluation function. The problem of writing an evaluation function in go is so subtle and so complex that nobody knows any other way of writing a reasonably reliable evaluation function (this is not exactly true: Monte Carlo methods also kind of work for this, and AlphaGo actually blends the two approaches).

Look, I used to be very skeptical of what NNs could do, but they have gotten a lot better. I still don't think they are very useful for game AI, but it's probably a matter of time until they become useful, and it's probably not a waste of your time to learn about them. It seems plausible that, if you can define a reward system in some quantitative and automated way, you can implement good game AI using a utility-based architecture where the sum of future rewards is estimated using a neural network.

### #5292523 implementation of neural network

Posted by on 19 May 2016 - 12:03 PM

Consider the inputs to a feed-forward, fully-connected neural network as a column vector with real-valued entries. A typical layer computes

output = non_linearity(matrix * input + biases)

Here `output`, `input` and `biases` are column vectors, and `non_linearity` is a function that applies a non-linear transformation to each coordinate of the vector (typically tanh(x) or max(0, x)).

For non-trivial neural networks the bulk of the work comes from the `matrix * input' operation, which can already be parallelized to some extent. However, you get much better parallelism if you compute your network on multiple data samples at the same time (a so-called "minibatch"). It turns out you can just replace the column vectors with matrices, so each column represents a separate data sample from the minibatch, and the formulas are essentially the same. This allows for much more efficient use of parallel hardware, especially if you are using GPUs. All you need to do is use a well-optimized matrix library.
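As a sketch of the minibatch idea (the names and sizes here are my own, not from any particular library): each column of `input` is one sample, and the same weight matrix and bias vector are reused across the whole batch.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Row-major matrix: Matrix[row][col]. Each column of `input` is one sample.
using Matrix = std::vector<std::vector<float>>;

// One dense layer applied to a whole minibatch at once:
// output = relu(weights * input + biases), column by column.
Matrix layer_forward(const Matrix &weights, const Matrix &input,
                     const std::vector<float> &biases) {
    std::size_t rows = weights.size();   // number of output units
    std::size_t cols = input[0].size();  // minibatch size
    std::size_t inner = input.size();    // number of input units
    Matrix output(rows, std::vector<float>(cols));
    for (std::size_t i = 0; i < rows; ++i)
        for (std::size_t j = 0; j < cols; ++j) {
            float sum = biases[i];
            for (std::size_t k = 0; k < inner; ++k)
                sum += weights[i][k] * input[k][j];
            output[i][j] = std::max(0.0f, sum);  // ReLU non-linearity
        }
    return output;
}
```

In a real program you would replace the inner loops with a call to an optimized matrix library, which is where the speedup from minibatching actually comes from.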

I know nothing about C# or Unity, sorry.

### #5292460 how can neural networks be used in videogames

Posted by on 19 May 2016 - 05:15 AM

> Checkers as an equivalent to Chess? ok .........

You sound like someone that has never programmed either a checkers engine or a chess engine. I have done both. And nobody said "equivalent". But they are in the same class.

> 'Chess' evaluation function (as in 'tool'??) .... but is it the fundamental core of the decision logic? Which is what I'm talking about being a problematic thing for NN usage.

Yes, NNs are only a tool. I don't see who you are arguing with here. The main search algorithms to use in boardgames are alpha-beta search and MCTS. Both of them have two parts that can be implemented using neural networks: an estimate of the probability of each move and an estimate of the result of the game.

> 'possible' -- Where AI is concerned I recall that little situation in the 50s where they thought AI was just around the corner, and all kinds of Computer AI goodness was just about solved. Here we are 60 years later. 'Complexity' has proven to be quite perplexing.

I am not old enough to remember what people were thinking in the 50s. But I know what AlphaGo just did to the game of go using NNs. And it's hard to argue that go is not a complex game.

### #5292284 how can neural networks be used in videogames

Posted by on 18 May 2016 - 08:48 AM

I'm not that scared by your FUD about how complex things can get.

> EDIT: a simple thing to contemplate what I'm talking about is: try to program Chess via a NN-based solution.

I already mentioned I have used a NN as evaluation function in checkers. Using one as evaluation function in chess is not [much] harder: http://arxiv.org/abs/1509.01549

Other uses of NNs for chess are possible: http://erikbern.com/2014/11/29/deep-learning-for-chess/

### #5292170 how can neural networks be used in videogames

Posted by on 17 May 2016 - 04:45 PM

> The problem with neural nets is that the inputs (the game situation) have to be fed to it as a bunch of numbers. That means that there usually is a heck of a lot of interpretation pre-processing required to generate this data first.

You can feed images to a CNN these days.

> Another problem is that the process of 'training' the neural nets is usually only understood from the outside - the logic is NOT directly accessible to the programmer. A lot of 'test' game situational data needs to be built up and maintained, and connected with a CORRECT action (probably done by a human) to force the neural net into producing what is required. Again, a lot of indirect work.

You can make the network return an estimate of future rewards for each possible action: Read the DQN paper I linked to earlier. There are mechanisms to look into what the neural network is doing, although I think it's best to use NNs in situations where you don't particularly care how it's doing it.

> Neural nets also generally don't handle complex situations very well; too many factors interfere with the internal learning patterns/processes, usually requiring multiple simpler neural nets to be built to handle different strategies/tactics/solutions.

That's not my experience.

> Usually with games (and their limited AI processing budgets), after you have already done the interpretive preprocessing, it usually just takes simple hand-written logic to use that data -- and that logic CAN be directly tweaked to get the desired results.

That is the traditional approach, yes: You define a bunch of "features" that capture important aspects of the situation, and then write simple logic to combine them. When you do things the NN way, you let the network learn the features and how they interact.

> It might be that practical neural nets may be just a 'tool' the main logic can use for certain analysis (and not for many others).

I think you should give NNs an honest try. In the last few years there has been a lot of progress and most of your objections don't apply.

If you can define a reward scheme by which the quality of an agent's behavior is evaluated, you can probably use reinforcement learning to train a NN to do the job. I don't know if this is practical yet, but with the right tools, this could be a very neat way of writing game AI.

### #5291936 Low level Resources regarding Convolutional Neural Networks

Posted by on 16 May 2016 - 02:38 PM

There is a general book on deep learning from MIT Press that looks interesting. It hasn't been published yet, but you can read it online: http://www.deeplearningbook.org/ . Disclaimer: I haven't read the whole thing.

You can find an example of training a CNN for digit recognition in the TensorFlow tutorials. Implementing that and playing with it should be quite illuminating.

### #5291926 Neural network for ultimate tic tac toe

Posted by on 16 May 2016 - 01:36 PM

The natural place to use a neural network in that game is in writing an evaluation function. This NN will be a function that takes the current game situation as input and outputs a single real number, which is something like the expected result of the game, where +1 means X wins and -1 means O wins.

You can use a regular feed-forward neural network, which are the easiest type to understand. You can give it something like 81 inputs describing what's on the board, 9 describing what sub-boards are closed and 9 indicating what sub-board the next move should be at. Have a few hidden layers using rectified linear units. The last layer can use a tanh activation function to bring the result to the range [-1,+1].
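A sketch of that kind of evaluation network (the layer sizes and names here are my own illustrative choices), including the point that small initial weights make the output close to 0 for any input:

```cpp
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// Evaluation network sketch: `inputs` board features -> one hidden ReLU
// layer -> a single tanh output in [-1, +1] (the expected game result).
struct EvalNet {
    std::vector<std::vector<float>> w1;  // hidden x input weights
    std::vector<float> w2;               // output weights, one per hidden unit

    EvalNet(int inputs, int hidden, unsigned seed) {
        // Small random initial weights, so the initial output is near 0
        // and an alpha-beta search on top of it is essentially random.
        std::mt19937 gen(seed);
        std::uniform_real_distribution<float> small(-0.01f, 0.01f);
        w1.assign(hidden, std::vector<float>(inputs));
        for (auto &row : w1)
            for (float &w : row) w = small(gen);
        w2.resize(hidden);
        for (float &w : w2) w = small(gen);
    }

    float evaluate(const std::vector<float> &x) const {
        float out = 0.0f;
        for (std::size_t h = 0; h < w1.size(); ++h) {
            float sum = 0.0f;
            for (std::size_t i = 0; i < x.size(); ++i) sum += w1[h][i] * x[i];
            out += w2[h] * std::max(0.0f, sum);  // ReLU hidden unit
        }
        return std::tanh(out);  // squash to [-1, +1]
    }
};
```

For ultimate tic-tac-toe you would use 99 inputs (81 board cells + 9 closed sub-boards + 9 next-sub-board indicators), as described above; biases and the training code are omitted to keep the sketch short.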

Write an alpha-beta search that uses the NN plus a small amount of noise for evaluation. Initialize the NN using small weights, so the output for any input will be very close to 0 (so the alpha-beta search will be using a random evaluation function at this point). Now you can play games where the program plays itself. After you have a few thousand games, you can start training the neural network to predict the result of the game. You can alternate generation of games and training of the network for a few iterations (say 4).

I did something like this for Spanish checkers at the beginning of 2015, and it worked great.

### #5291811 how can neural networks be used in videogames

Posted by on 16 May 2016 - 05:45 AM

I would start by reading the DeepMind paper on applying DQN to Atari 2600 games.

You can also check out some of the recent papers on using CNNs for computer go:
* http://arxiv.org/abs/1412.3409
* http://arxiv.org/abs/1412.6564
* http://arxiv.org/abs/1511.06410
* https://vk.com/doc-44016343_437229031?dl=56ce06e325d42fbc72

### #5291791 how can neural networks be used in videogames

Posted by on 16 May 2016 - 02:53 AM

We need to narrow down the type of game here.

For board games, neural networks are useful to come up with a probability distribution over the available moves, and to come up with a score that indicates something like the expected reward at the end of the game. These can be used as ingredients in either alpha-beta search or in MCTS.

For other games, you can still come up with something like an estimate of the sum of future rewards (usually with some exponential discount for rewards further away in the future) for each possible action taken. This can be trained using reinforcement learning, like DeepMind did for Atari 2600 video games.
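To make the "exponentially discounted sum of future rewards" concrete, here is a minimal sketch (my own, not from the DeepMind paper) of the quantity such a network is trained to estimate:

```cpp
#include <vector>

// Discounted return G = r0 + gamma*r1 + gamma^2*r2 + ...
// Computed backwards using the recurrence G_t = r_t + gamma * G_{t+1},
// which avoids computing powers of gamma explicitly.
double discounted_return(const std::vector<double> &rewards, double gamma) {
    double g = 0.0;
    for (auto it = rewards.rbegin(); it != rewards.rend(); ++it)
        g = *it + gamma * g;
    return g;
}
```

With gamma < 1, rewards further in the future contribute less, which is the "exponential discount" mentioned above.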

> next thing is: how can I make a good design of a neural network before just thinking about error and training weights?

I have no idea of how to think of a NN design without knowing what the game is or what the role of the NN is in that game. Give us some context here, and perhaps we can come up with interesting ideas.

### #5289796 Is using the Factory or Builder pattern necessary?

Posted by on 02 May 2016 - 05:41 PM

There are no mandates in programming. Do whatever will make the code more clear. Using a factory or builder is an attempt at restricting what parts of the code need to know about the difference between Domestic and International.

But it looks like your design is doomed to begin with. How are the documents going to be used? Since the base class doesn't expose any virtual functions that may make use of them, you are probably going to be needing to know the type of client in some other part of the code.

I would personally add a variable to Client indicating whether it's international or domestic, and leave the list of documents empty for domestic clients.

> Couldn't you argue that that breaks ISP? Pretty much breaks ISP verbatim actually.

I have a healthy disrespect for OOP rules.

If I understand correctly, you are complaining about Client having a list of documents which is not used in the case of domestic clients. If domestic clients having documents is something that doesn't make any sense, then I see your point. I would need to know more about the exact context in which this code is being written to make a judgement, but it seems to me it's more like "we don't currently have a use for the documents in the case of a domestic client", but that sounds like something that may change tomorrow.

### #5289643 Is using the Factory or Builder pattern necessary?

Posted by on 01 May 2016 - 05:59 PM

There are no mandates in programming. Do whatever will make the code more clear. Using a factory or builder is an attempt at restricting what parts of the code need to know about the difference between Domestic and International.

But it looks like your design is doomed to begin with. How are the documents going to be used? Since the base class doesn't expose any virtual functions that may make use of them, you are probably going to be needing to know the type of client in some other part of the code.

I would personally add a variable to Client indicating whether it's international or domestic, and leave the list of documents empty for domestic clients.

### #5288968 Lagrange multiplier in constrained dynamics

Posted by on 27 April 2016 - 01:28 PM

I read parts of the tutorial and I think that way of thinking of Lagrange multipliers is probably very useful. The part you quoted about minimizing C seems wrong, though.

### #5288909 Lagrange multiplier in constrained dynamics

Posted by on 27 April 2016 - 07:46 AM

I haven't clicked on the link to the tutorial, but I can explain how Lagrange multipliers work.

You want to minimize f(x,y) subject to g(x,y) = 0. We'll introduce an additional variable l (usually "lambda", but I don't have that letter available). Let's look at the function

L(x,y,l) = f(x,y) - l*g(x,y)

and imagine you've found a point where the derivative of L with respect to each of the three variables is 0. You then have

dL/dx = df/dx - l*dg/dx = 0
dL/dy = df/dy - l*dg/dy = 0
dL/dl = -g(x,y) = 0

The last condition guarantees that the constraint is satisfied. The other two say that the gradients of f and g are parallel, i.e., that the level sets of f and g are tangent at that point.

These are necessary conditions for a point (x,y) to be a solution to your problem. Sufficient conditions do exist, but they are a bit trickier to think about, and this may or may not be important in your case.
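A quick worked example (my own, not from the tutorial): minimize f(x,y) = x + y subject to g(x,y) = x^2 + y^2 - 1 = 0.

L(x,y,l) = x + y - l*(x^2 + y^2 - 1)

dL/dx = 1 - 2*l*x = 0
dL/dy = 1 - 2*l*y = 0
dL/dl = -(x^2 + y^2 - 1) = 0

The first two conditions give x = y = 1/(2l). Plugging into the constraint, 2*(1/(2l))^2 = 1, so l = ±1/sqrt(2) and (x,y) = ±(1/sqrt(2), 1/sqrt(2)). Checking both candidates, the minimum is at x = y = -1/sqrt(2), where f = -sqrt(2).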
