Álvaro

Member Since 07 Mar 2002
Offline Last Active Today, 02:19 AM

#5297357 What do these errors mean?

Posted by Álvaro on 20 June 2016 - 02:12 PM

Here: https://en.wikipedia.org/wiki/Segmentation_fault


#5297269 [Solved] Rotate towards a target angle

Posted by Álvaro on 19 June 2016 - 08:52 PM

Use complex numbers with modulus 1 instead of angles. That would be z = cos(alpha) + i * sin(alpha).

// Rotate `current' by a fixed step of 0.1 radians towards `desired'.
// The sign of imag(desired / current) picks the shorter turning direction.
if (imag(desired / current) > 0.0)
  current *= Complex(cos(0.1), sin(0.1));
else
  current *= Complex(cos(0.1), -sin(0.1));

It's not just this piece of code: Pretty much anything you want to do with angles is easier with complex numbers.
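
For reference, here is a small self-contained sketch of the idea using std::complex; the starting angles, the 0.1-radian step and the names current and desired are made up for the example:

#include <cmath>
#include <complex>
#include <iostream>

int main() {
  typedef std::complex<double> Complex;

  Complex current = std::polar(1.0, 0.2); // unit complex number at angle 0.2 rad
  Complex desired = std::polar(1.0, 2.5); // target orientation at angle 2.5 rad

  const double step = 0.1;                    // radians turned per update
  const Complex ccw = std::polar(1.0, step);  // counterclockwise step
  const Complex cw = std::polar(1.0, -step);  // clockwise step

  for (int i = 0; i < 50; ++i) {
    // If we are within one step of the target, snap to it and stop.
    if (std::abs(std::arg(desired / current)) <= step) {
      current = desired;
      break;
    }
    // The sign of the imaginary part picks the shorter turning direction.
    current *= (std::imag(desired / current) > 0.0) ? ccw : cw;
  }

  std::cout << "final angle: " << std::arg(current) << '\n';
}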




#5296162 Truncating a fraction when overflow occurs

Posted by Álvaro on 11 June 2016 - 08:37 PM

You can use mpq_class from GNU MP to represent rational numbers exactly. It uses arbitrary-precision integers for the numerator and denominator.
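
For example, assuming GMP's C++ interface (gmpxx.h, linked with -lgmpxx -lgmp) is available, exact rational arithmetic looks roughly like this; the numbers are just placeholders:

#include <gmpxx.h>
#include <iostream>

int main() {
  mpq_class a(30999998, 62000000); // numerator, denominator
  a.canonicalize();                // reduce to lowest terms

  mpq_class b(1, 3);
  mpq_class sum = a + b;           // exact: no rounding ever happens

  std::cout << sum << " ~= " << sum.get_d() << '\n';
}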

But the operation you are asking about is interesting to think about regardless of its practical application. I would consider using the continued fraction expansion of your number. See the section titled "Best rational approximations" in the Wikipedia article on continued fractions for details.

Informally, in your example:
30999998 / 62000000 = 1 / (2 + 1 / (7749999 + 1 / 2))

Whenever a large number pops up in the continued fraction expansion of a number, you can find a good rational approximation by replacing that number with infinity:

1 / (2 + 1 / Infinity) = 1 / (2 + 0) = 1 / 2


You can do that with irrational numbers as well:

pi = 3 + 1 / (7 + 1 / (15 + 1 / (1 + 1 / (292 + 1 / (1 + ...)))))

If you replace 292 with infinity, you get

pi ~= 3 + 1 / (7 + 1 / (15 + 1 / (1 + 0))) = 355 / 113 = 3.14159292035398230088...
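
If you want to play with this, here is a rough sketch of the computation in C++ (the helper names are my own, not from any library):

#include <iostream>
#include <vector>

// Continued fraction expansion of num/den (both positive).
std::vector<long long> continued_fraction(long long num, long long den) {
  std::vector<long long> terms;
  while (den != 0) {
    terms.push_back(num / den);
    long long r = num % den;
    num = den;
    den = r;
  }
  return terms;
}

// Rebuild a fraction from the first `count' terms (a convergent).
void convergent(const std::vector<long long> &terms, std::size_t count,
                long long &num, long long &den) {
  num = 1;
  den = 0;
  for (std::size_t i = count; i-- > 0;) {
    // Replace num/den with terms[i] + den/num, i.e. terms[i] + 1/(num/den).
    long long new_num = terms[i] * num + den;
    den = num;
    num = new_num;
  }
}

int main() {
  // 30999998 / 62000000 = [0; 2, 7749999, 2]
  std::vector<long long> terms = continued_fraction(30999998, 62000000);

  // Truncating just before the huge term 7749999 (i.e. replacing it with
  // infinity) gives the convergent 1/2.
  long long n, d;
  convergent(terms, 2, n, d);
  std::cout << n << "/" << d << '\n'; // prints 1/2
}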


#5295669 Golf Ball Physics (involving 3D slopes)

Posted by Álvaro on 08 June 2016 - 02:02 PM

A friend of mine is an expert in golf physics. He published a paper about putting on a planar green that has an appendix with the equations of motion: http://arxiv.org/abs/1106.1698


#5295618 Why is it so hard to do animations?

Posted by Álvaro on 08 June 2016 - 08:23 AM

Maybe it's hard because .obj files don't support animation. Or am I missing something?


#5295474 std::stringstream crash

Posted by Álvaro on 07 June 2016 - 06:07 AM

"This is a sample test app that I've written, but it doesn't crash."


I can write non-crashing code myself, but then I can't help you with your problem. :)

Can you take the original program and start removing anything that doesn't seem relevant, testing often, until whatever you remove makes the bug disappear? There is a decent chance you'll discover the problem yourself in that process. And if you don't, you'll have a neat little program to post here so we can help you.


#5294863 How does Runge-Kutta 4 work in games

Posted by Álvaro on 03 June 2016 - 03:10 PM

OK, thinking about it a bit more, the equations you need to solve are not necessarily linear, so it's a bit trickier than I thought. It's certainly not impossible.

Have you tried Verlet integration? You need to keep the last two positions, instead of a position and a velocity. But it's trivial to implement and generally much more stable than Euler's method.
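
A minimal sketch of position Verlet, with a made-up stiff-spring force just to have something to integrate:

#include <cstdio>

int main() {
  // Position Verlet: the state is the current and previous positions.
  // Velocity never appears explicitly; it is implied by (pos - prev_pos) / dt.
  const double dt = 1.0 / 60.0; // fixed time step
  double pos = 0.0;             // x(t)
  double prev_pos = 0.0;        // x(t - dt)

  for (int step = 0; step < 600; ++step) {
    // Example force: a spring pulling the particle toward 1.0 (illustrative only).
    double acceleration = 50.0 * (1.0 - pos);

    // x(t + dt) = 2 x(t) - x(t - dt) + a(t) dt^2
    double next_pos = 2.0 * pos - prev_pos + acceleration * dt * dt;
    prev_pos = pos;
    pos = next_pos;
  }

  std::printf("final position: %f\n", pos);
}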


#5294823 How does Runge-Kutta 4 work in games

Posted by Álvaro on 03 June 2016 - 12:18 PM

The Wikipedia page on Runge-Kutta methods seems pretty good: https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods. If you have trouble understanding that page, please ask a more concrete question.

To use RK4 you need to be able to recompute f after half a step, so you need to have f specified as a function of time and x, not a single number.
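
For concreteness, here is one generic RK4 step for dx/dt = f(t, x), with a scalar state and a toy f chosen just for the example:

#include <cmath>
#include <cstdio>

// One classic RK4 step for dx/dt = f(t, x). Note that f gets evaluated at
// t, t + h/2 and t + h, which is why it has to be available as a function of
// time and state, not as a single precomputed number.
template <class F>
double rk4_step(F f, double t, double x, double h) {
  double k1 = f(t, x);
  double k2 = f(t + 0.5 * h, x + 0.5 * h * k1);
  double k3 = f(t + 0.5 * h, x + 0.5 * h * k2);
  double k4 = f(t + h, x + h * k3);
  return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4);
}

int main() {
  // Toy example: dx/dt = -x, whose exact solution at t = 1 is exp(-1).
  double x = 1.0;
  double h = 0.1;
  for (double t = 0.0; t < 1.0 - 1e-9; t += h)
    x = rk4_step([](double, double x) { return -x; }, t, x, h);

  std::printf("RK4: %f   exact: %f\n", x, std::exp(-1.0));
}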

However, if you are trying to integrate a system with stiff springs you probably should consider an implicit method.


#5294727 implementation of neural network

Posted by Álvaro on 02 June 2016 - 05:11 PM

"The chokepoint for GPUs is often complex functions which aren't easily handled by the simplified instruction sets used by the highly parallel processors. How well do the usual NN sigmoid activation functions work within GPU instruction sets (and might some table lookup possibly be substituted to get around that)?"


For any decently sized neural net, the vast majority of the time is spent doing matrix multiplications (matrix by column vector if you are feeding it a single sample, or matrix by matrix if you feed it multiple samples at a time, which is typically how training is done). Imagine a neuron with 1,000 inputs. Computing its activation takes 1,000 multiply-adds and one single call to the activation function.

Also, rectified linear units are increasingly common, so instead of 1/(1+exp(-x)), you simply need to compute max(0,x).
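
As a toy illustration of that cost breakdown (all names made up for the example):

#include <algorithm>
#include <cstddef>
#include <vector>

// One neuron with N inputs: N multiply-adds, then a single activation call.
float neuron(const std::vector<float> &weights,
             const std::vector<float> &inputs, float bias) {
  float sum = bias;
  for (std::size_t i = 0; i < inputs.size(); ++i)
    sum += weights[i] * inputs[i]; // this loop is the bulk of the work
  return std::max(0.0f, sum);      // ReLU: one cheap call per neuron
}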


#5294140 Quaternion as angular velocity

Posted by Álvaro on 30 May 2016 - 06:35 AM

First, let me address the title of the thread. Quaternions represent attitudes and rotations, not angular velocities: Angular velocities are regular vectors.

So you have an attitude represented by quaternion q1 and you want to change it to q2 over some time T. Because quaternion composition is multiplicative, the rotation you need to perform is inverse(q1)*q2; inverse(q1) is the same as conj(q1) because |q1| = 1. If a frame moves time forward by t, the fraction of the rotation you need to perform is t/T. This means you need to multiply your original q1 by (conj(q1)*q2)^(t/T). The relevant formulas are the quaternion exponential and logarithm, used in the code below.


EDIT: Here's some code for your enjoyment.
 
#include <cmath>
#include <iostream>
#include <boost/math/quaternion.hpp>

typedef boost::math::quaternion<float> Quaternion;

// This function assumes abs(q) = 1
Quaternion log(Quaternion q) {
  float a = q.real();
  Quaternion v = q.unreal();
  
  // In case some rounding error buildup results in a real part that is too large
  if (a > 1.0f) a = 1.0f;
  if (a < -1.0f) a = -1.0f;
  
  return v * (std::acos(a) / abs(v));
}

int main() {
  Quaternion q1(0.0f, 1.0f, 0.0f, 0.0f);
  Quaternion q2(0.0f, 0.0f, 1.0f, 0.0f);
  
  for (float t = 0; t <= 1.0; t += 0.125)
    std::cout << q1 * exp(log(conj(q1) * q2) * t) << '\n';
}



#5292762 how can neural networks be used in videogames

Posted by Álvaro on 21 May 2016 - 09:01 AM

https://arxiv.org/pdf/1409.3215.pdf

This is also fun: http://karpathy.github.io/2015/05/21/rnn-effectiveness/


#5292588 how can neural networks be used in videogames

Posted by Álvaro on 20 May 2016 - 01:51 AM

"You sound like someone that has never programmed either a checkers engine or a chess engine."

You sound like someone who hasn't programmed anything more complex than a "checkers engine or a chess engine".


Whatever.

 

"NNs for basic classification of SIMPLE situational factors are fine, but once the situations are no longer simple (like spotting temporal cause and effect) they just don't work too well."


NNs can tell a Siberian Husky from an Alaskan Malamute by looking at their picture. They can translate sentences between any two languages, with very little additional machinery. Certain NNs (LSTMs in particular) can spot temporal relationships extremely well. It just sounds like you made up your mind about what NNs could do a decade ago and your opinion is impervious to new information.

 

"Yeah, Go is actually a very simple game with a very large branching factor. That is NOT the case for most game AI needs."


This is a serious mischaracterization of the difficulties in writing go AI. The 9x9 game has a branching factor comparable to chess, but until a few years ago we couldn't write strong engines even for that version of the game, because the main problem is not the branching factor: It's the lack of a reasonable evaluation function. Now if you look at what AlphaGo has done, one of their key components is what they call their "value network", which is a NN used as an evaluation function. The problem of writing an evaluation function in go is so subtle and so complex that nobody knows any other way of writing a reasonably reliable evaluation function (this is not exactly true: Monte Carlo methods also kind of work for this, and AlphaGo actually blends the two approaches).


Look, I used to be very skeptical of what NNs could do, but they have gotten a lot better. I still don't think they are very useful for game AI, but it's probably a matter of time until they become useful, and it's probably not a waste of your time to learn about them. It seems plausible that, if you can define a reward system in some quantitative and automated way, you can implement good game AI using a utility-based architecture where the sum of future rewards is estimated using a neural network.


#5292523 implementation of neural network

Posted by Álvaro on 19 May 2016 - 12:03 PM

Consider your inputs to a feed-forward, fully-connected neural network as a column vector with real-valued entries. A typical layer computes the following:

output = non_linearity(matrix * input + biases)

Here output, input and biases are column vectors, and non_linearity is a function that applies a non-linear transformation to each coordinate of the vector (typically tanh(x) or max(0, x)).

For non-trivial neural networks the bulk of the work comes from the `matrix * input' operation, which can already be parallelized to some extent. However, you get much better parallelism if you compute your network on multiple data samples at the same time (a so-called "minibatch"). It turns out you can just replace the column vectors with matrices, so each column represents a separate data sample from the minibatch, and the formulas are essentially the same. This allows for much more efficient use of parallel hardware, especially if you are using GPUs. All you need to do is use a well-optimized matrix library.
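
Here is a naive, self-contained illustration of that last point (real code would call an optimized BLAS/GPU library instead of these loops; all names are invented):

#include <algorithm>
#include <cstddef>
#include <vector>

// A matrix stored in row-major order.
struct Matrix {
  std::size_t rows, cols;
  std::vector<float> a; // a[r * cols + c]
  float &at(std::size_t r, std::size_t c) { return a[r * cols + c]; }
  float at(std::size_t r, std::size_t c) const { return a[r * cols + c]; }
};

// One layer applied to a minibatch: each column of `input' is one sample.
// output = non_linearity(weights * input + biases), with the bias vector
// added to every column.
Matrix layer_forward(const Matrix &weights, const Matrix &input,
                     const std::vector<float> &biases) {
  Matrix out = {weights.rows, input.cols,
                std::vector<float>(weights.rows * input.cols)};
  for (std::size_t r = 0; r < weights.rows; ++r)
    for (std::size_t c = 0; c < input.cols; ++c) {
      float sum = biases[r];
      for (std::size_t k = 0; k < weights.cols; ++k)
        sum += weights.at(r, k) * input.at(k, c); // matrix * matrix
      out.at(r, c) = std::max(0.0f, sum);         // max(0, x) non-linearity
    }
  return out;
}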

I know nothing about C# or Unity, sorry.


#5292460 how can neural networks be used in videogames

Posted by Álvaro on 19 May 2016 - 05:15 AM

"Checkers as an equivalent to Chess? OK..."


You sound like someone that has never programmed either a checkers engine or a chess engine. I have done both. And nobody said "equivalent". But they are in the same class.

 

"A 'chess' evaluation function (as in a 'tool'?)... but is it the fundamental core of the decision logic? That is what I'm talking about being problematic for NN usage."

Yes, NNs are only a tool. I don't see who you are arguing with here. The main search algorithms to use in board games are alpha-beta search and MCTS. Both of them have two parts that can be implemented using neural networks: an estimate of the probability of each move and an estimate of the result of the game.

 

"'Possible'? Where AI is concerned, I recall that little situation in the '50s where they thought AI was just around the corner and all kinds of computer AI goodness was just about solved. Here we are 60 years later. 'Complexity' has proven to be quite perplexing."

I am not old enough to remember what people were thinking in the 50s. But I know what AlphaGo just did to the game of go using NNs. And it's hard to argue that go is not a complex game.


#5292284 how can neural networks be used in videogames

Posted by Álvaro on 18 May 2016 - 08:48 AM

I'm not that scared by your FUD about how complex things can get. :)


"EDIT: A simple way to contemplate what I'm talking about is to try to program chess via an NN-based solution."


I already mentioned I have used an NN as an evaluation function in checkers. Using one as an evaluation function in chess is not [much] harder: http://arxiv.org/abs/1509.01549

Other uses of NNs for chess are possible: http://erikbern.com/2014/11/29/deep-learning-for-chess/



