Genetic Algorithms ...

You misunderstood me... that was my reason for quitting regular UO. The calculations like damage and to-hit are done server-side, so we didn't nerf much unless we really had to (and unlike Origin, we actually listened to the people who play our server)

-fel

~ The opinions stated by this individual are the opinions of this individual and not the opinions of her company, any organization she might be part of, her parrot, or anyone else. ~
Hi felisandria (I know, I'm pretty late ;-)

Your www.uoos.com page is offline, as you surely know. But is your server code with the AI you described (the bunnies chasing you) still available?

greetings,
lord skeletor

me@lordskeletor.de, ICQ #83660758
What hasn't been mentioned is that Genetic Algorithms are a class of blind search algorithms. What most people understand as a Genetic Algorithm is John Holland's Canonical Genetic Algorithm, which has three operators: selection, single-point crossover and mutation... applied to a haploid (single-string) chromosome.

GAs are called a blind search method because the only information they have available to them is a function evaluation of the decoded chromosome. So, consider a chromosome that encodes values between -1 and 1 and an objective function of the form f(x) = 1 - x^2. Each chromosome has a fitness equal to its objective function value. Applying the GA operators (selection, crossover and mutation) to a population of chromosomes (each representing a value in [-1, 1]) would allow you to find the optimal (maximal) value of f(x), since the GA preferentially reproduces fit chromosomes.
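
To make that concrete, here is a minimal sketch of the canonical GA applied to the f(x) = 1 - x^2 example above, with binary chromosomes decoded into [-1, 1], roulette-wheel selection, single-point crossover and bit-flip mutation. All the names and parameter settings below are illustrative choices of mine, not something taken from this thread:

import random

CHROM_LEN   = 16      # bits per chromosome
POP_SIZE    = 50
CROSS_RATE  = 0.7
MUT_RATE    = 0.01
GENERATIONS = 100

def decode(bits):
    # Map a bit string to a real value in [-1, 1].
    as_int = int("".join(str(b) for b in bits), 2)
    return -1.0 + 2.0 * as_int / (2 ** CHROM_LEN - 1)

def fitness(bits):
    # Fitness = objective value; f(x) = 1 - x^2 is maximal at x = 0.
    x = decode(bits)
    return 1.0 - x * x

def select(pop, fits):
    # Fitness-proportionate (roulette-wheel) selection of one parent.
    return random.choices(pop, weights=fits, k=1)[0]

def crossover(a, b):
    # Single-point crossover.
    if random.random() < CROSS_RATE:
        point = random.randrange(1, CHROM_LEN)
        return a[:point] + b[point:], b[:point] + a[point:]
    return a[:], b[:]

def mutate(bits):
    # Bit-flip mutation.
    return [(1 - b) if random.random() < MUT_RATE else b for b in bits]

population = [[random.randint(0, 1) for _ in range(CHROM_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    fits = [fitness(c) for c in population]
    next_pop = []
    while len(next_pop) < POP_SIZE:
        c1, c2 = crossover(select(population, fits), select(population, fits))
        next_pop.extend([mutate(c1), mutate(c2)])
    population = next_pop[:POP_SIZE]

best = max(population, key=fitness)
print("best x =", decode(best), " f(x) =", fitness(best))

Run repeatedly, the population drifts towards chromosomes that decode near x = 0, which is exactly the "blind" hill-finding behaviour described above: the algorithm never sees f(x) itself, only the fitness values it gets back.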

In other words, the GA searches the state space of the encoding scheme - using genetic-like operations on an abstract encoding of state values - and returns an optimal value.

GAs are particularly useful in search problems where you can only obtain objective function values by evaluation (as opposed to an analytic function), in stochastic domains, multimodal domains (where hill-climbers fail) and discrete domains (step functions). GAs don't work well in domains where the region of the state space occupied by the dominant mode is small compared with the overall size of the state space.

As Mike suggested, you can apply this search technique to finding optimal parameter values in a neural network. In this situation, the objective function is the world in which the agent acts and the value returned is some measure of success of the agent. I won't go into the difficulties with this approach to parameter fitting in ANNs as they are many and detailed.
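
Mike's suggestion is easy to picture with a quick sketch: the chromosome is simply the network's weight vector, and the "objective function" is whatever score the game world hands back after letting the agent act. Everything below (run_agent_in_world, the parameter values, the simplification to truncation selection with mutation only and no crossover) is my own illustration, not anything described in this thread:

import random

NUM_WEIGHTS = 20           # depends entirely on your network topology
POP_SIZE    = 30
MUT_RATE    = 0.05
MUT_SCALE   = 0.1

def run_agent_in_world(weights):
    # Hypothetical stand-in: build the ANN from `weights`, let the agent act
    # in the game world, and return a success measure (score, survival time,
    # ...).  The dummy value below only keeps the sketch runnable.
    return -sum(w * w for w in weights)

def mutate(weights):
    # Gaussian perturbation of a real-valued chromosome.
    return [w + random.gauss(0.0, MUT_SCALE) if random.random() < MUT_RATE else w
            for w in weights]

def evolve(generations=50):
    pop = [[random.uniform(-1.0, 1.0) for _ in range(NUM_WEIGHTS)]
           for _ in range(POP_SIZE)]
    for _ in range(generations):
        ranked  = sorted(pop, key=run_agent_in_world, reverse=True)
        parents = ranked[:POP_SIZE // 2]               # truncation selection
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(POP_SIZE - len(parents))]
    return max(pop, key=run_agent_in_world)

The expensive part is always the evaluation: every fitness call means running the agent through the world, which is one of the practical difficulties Timkin alludes to.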

There are as many application domains for GAs as you can think of search problems. However, for many of those domains other search algorithms might be more appropriate.

Finally, canonical GAs only converge asymptotically to the optimal objective function value. This can be overcome, though, with some rather simple modifications to the implementation.
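
Timkin doesn't say which modifications he has in mind; the most common one I know of is elitism, where the best chromosome of each generation is copied unchanged into the next, so the best objective value found so far can never be lost. Grafted onto the generation loop of the earlier sketch it would look roughly like this:

for _ in range(GENERATIONS):
    fits  = [fitness(c) for c in population]
    elite = max(population, key=fitness)   # best individual this generation
    next_pop = [elite[:]]                  # carried over with no crossover/mutation
    while len(next_pop) < POP_SIZE:
        c1, c2 = crossover(select(population, fits), select(population, fits))
        next_pop.extend([mutate(c1), mutate(c2)])
    population = next_pop[:POP_SIZE]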

I hope this has helped!

Cheers,

Timkin

[edited by - Timkin on June 11, 2002 9:44:28 PM]
