Community Reputation: 132 Neutral

About Marmakoide

  1. AI for an RPG

    OK... * On topic: pseudo-random numbers are good for giving a "natural" feeling. * Off topic: creation versus Darwin... I don't think I'm able to discuss this. Just have a look at the links below: they show simulated Darwinian evolution applied to robotic structures.
  2. AI for an RPG

    Yes. Sorry, I don't understand your comment about the errors... :)
  3. Actual methods to train a backprop network?

    * A 3-layer perceptron with sigmoid or radial basis activation functions can approximate any function, provided the hidden layer has enough neurons. * The choice of these functions can greatly improve your training time and quality. * A GA could select them for you :), even if GAs are slow.
  4. Quite fun: me too! I'm currently designing a 2D environment for bots, where each bot runs the same script. They have energy, a field of vision, two wheels, a laser turret, energy that can be shared, a message transmitter/receiver, and they can be cloned. "destroy-all", "capture-the-flag" matches, and anything else fun. The whole environment will be in C++, but the scripting language will be Lua, I think.
  5. Computers and people

    I can give you some code to show you how to parse a sentence against a grammar.

    * What my code CAN'T do:
      - Handling a non-fixed vocabulary. An example: in French, the verb "to eat" is "manger". When your dog will eat a bone, English says "will eat", but French says "mangeras": "as" is added to "manger". The same goes for "do", "did", "does", etc. Every Western language has a non-fixed vocabulary. Chinese or Vietnamese are smarter languages in this respect ;) Only one word per lexical entity, that's all.
      - Handling non-LL(1) grammars. Google for the definition. My code is an LL(1) parser that reads an XML-like data file format. I can modify it into a "Tarzan-like English" parser next week (I have a full-time job...) that moves a cursor on the screen. Is commented C OK?

    For handling other kinds of grammars, like LALR(1), Bison is THE solution. It produces a fully functional parser from a grammar definition.
  6. You also have "Robocraft", which is really fun. You code your bot team in Java, and they must win a capture-the-flag match. Many possibilities are available, like messages between robots. You can even send the enemies' own messages back to them to trigger bugs in their message-handling code, which makes beautiful chain explosions ;)
  7. Computers and people

    * The classical approach is a layered one:
      - letters -> words (with things like do/does/did taken into account). This is the lexical layer, performed by a huge state machine (a transducer).
      - words -> sentence. This is the grammatical layer, performed by some flavour of Hidden Markov Model (aka HMM).
      - sentence -> meaning. Well... a good research subject ;)
      For all this there are software packages, but never free ones. Sad...
    * You can work with sentences that have a fixed grammatical structure, a "Tarzan-like speaking". The grammatical layer can be much easier this way; a simple HMM could handle it. The lexical layer can also be simplified, with only one word per lexical entry. A simplified grammar for an RPG game:
      1) SENTENCE -> ACTION [and] SENTENCE
      2) SENTENCE -> ACTION
      3) ACTION -> ACTION_TYPE ACTION_PARAMETER_LIST
      4) ACTION_PARAMETER_LIST -> ACTION_PARAMETER ACTION_PARAMETER_LIST
      5) ACTION_PARAMETER_LIST -> ACTION_PARAMETER
      6) ACTION_PARAMETER -> ITEM
      7) ACTION_PARAMETER -> BUDDY
      8) ACTION_PARAMETER -> LOCATION_DEF
      9) ITEM -> [sword], [crystal ball], ...
      ... and so on. An LL(1) grammar is really easy to parse. By following the derivation the grammar uses to build a given sentence, you can deduce the sentence's meaning.
    * An example of simplified English:
      Letters -> Take sword and attack monster in-front-of me.
      Lexical -> [Take][sword][and][attack][monster][in-front-of][me].
      Grammatical -> ACTION_TYPE ITEM ACTION_TYPE BUDDY LOCATION LOCATION_PARAMETER
  8. AI for an RPG

    (I was the anonymous poster, I just forgot to log in.) Could you explain more?
  9. Compiling ode

    Yes, OPCODE must be compiled first, then ODE will compile fine. ODE uses trimeshes if a switch is set somewhere in the makefile.
  10. AI Algorithms on GPU

    - Running A* on a GPU is silly: it needs random memory access, which is slow on a GPU.
    - A* is for finding paths...
    - An algorithm to find paths the 'matrix' way? Perhaps this:
      1) Build a matrix A, where A(i,j) = 1 if node(j) can be reached from node(i) in one step, A(i,i) = 0, and A(i,j) = infinity otherwise.
      2) In the matrix multiplications, replace + by min, and * by +.
      3) Compute A^n: A(i,j) will then give you the distance needed to reach node(j) from node(i), if this distance is <= n.
    - A^n is cheap to compute. Example:
      A^7 = (A^4) * (A^3)
      A^4 = (A^2) * (A^2)
      A^3 = (A^2) * A
      A^2 = A * A
      4 multiplications instead of 7, log2(n) instead of n ;)
    - It could help for pathfinding, but I'm not sure it's very interesting (complexity, efficiency). Anyway, with a special matrix (paths on a square grid, for example), I'm fairly sure there are tricks to reduce the computations.
  11. Making an artificial neural network

    As a little project, I used a neural network to control a virtual two-wheeled robot with distance and colour sensors: 12 inputs and 2 outputs (the wheel motors). The task for the robot was to find a black area. When it was in this area, a light was activated; then it had to go to a white area. The neural network was only 12 neurons, fully connected. It was trained by genetic algorithms, taking around 4 hours to get a result. It was quite fun to watch the robot... Another cool thing is to train a group of this kind of robot (two wheels and sensors) for a collective task, like pushing things into the middle of a room.
  12. Clipping 2d lines on a boundry rectangle

    No, it's not mine. The idea is from Cohen and Sutherland, with their clipping algorithm. When they found this nice trick, I wasn't even born ;) It's here:
  13. Compiling ode

    Some features of ODE, like trimeshes, are only available if you first compile the third-party library called OPCODE. It's distributed with ODE.
  14. OBBOX / OBBOX collision accuracy

    ... That's what I already did... But you've given me some ideas; I'll test them.
  15. Actual methods to train a backprop network?

    A very simple way to train a neural network, whatever the kind of network, is... genetic algorithms. 1/ You fix the structure of your neural network by hand. 2/ You encode each weight as, let's say, a 10-bit number, so real weight = 1024 / (encoded weight + 1). 3/ You encode the parameters of each neuron the same way. 4/ You evolve this bunch of bits with a genetic algorithm. Good and easy frameworks exist for this, so don't program your own package. My favourite is Open BEAGLE in C++, or ECJ in Java. It can be slow (each set of weights must be tested on every example, or on some randomly selected examples), but it is very robust, unlike backprop.