Simulated Annealing


This is my simulated annealing (SA) implementation in C++:
#include <cmath>
#include <iostream>
#include <cstdlib>
#include <ctime>
using namespace std;

struct X {
	double x[10];
};
double f1(X* x);
const double PI = acos(-1.0);

X* createX(double UpperConstrain, double LowerConstrain)
{	
	X* x= new X;
	for(int i =0; i<10; i++)
	{		
		bool q=false;
		while(!q)
		{
			double number = (rand()- RAND_MAX/PI);
			for(int z = 0; z<10; z++)
				if(number == x->x[z]||number > UpperConstrain||number <LowerConstrain)
					continue;
				else{
					x->x[i] = number;
					q=true;
				}
		}
	}
	return x;
}

double Schedule(double t){
	//t = t /(1 + 1.5*t);
	t = 0.8*t;
	return t;
}

int main(){	
	srand(time(NULL));
	X* current = createX(5.12, -5.12);
	/*current->x[0]= 1.8605;
	current->x[1]= 3.4505;
	current->x[2]= 1.776;
	current->x[3]= 2.0405;
	current->x[4]= 4.0605;
	current->x[5]= 0.931;
	current->x[6]= -1.2945;
	current->x[7]= -5.1015;
	current->x[8]= 1.759;
	current->x[9]= 0.279;*/
	double t=10.0;
	bool q = false;	
	X* next;
	while(!q)
	{
		t = Schedule(t);
		if (t<=0.00005){
			q = true;			
			continue;
		}
		next = createX(5.12, -5.12);
		double e =0.0;
		e = f1(next) - f1(current);
		if(e < 0) current = next;
		else if((rand()%10)/10 < exp(e/t))
			current = next;
	}
	cout<<f1(current)<<"\n";
	for(int a = 0; a<10; a++)
		cout<<current->x[a]<<"\n";
	int i;
	cin>>i;
	delete current;
	delete next;
	return 0;
}

double f1(X* x){
	double total=0.0;	
	double n = 10.0;	
	for(int i = 0; i < 10; i++)	
		total = total + pow(x->x[i],2.0) - 10*cos(2*PI*x->x[i]);
	return 10*n+total;
}
Could anyone check whether I've implemented it correctly? Or is there any other way I can improve it? Cheers

---
Googling for Simulated Annealing, it seems like it has to do with crystals and stuff.

Is that the same SA you are referring to?

---
Well, what I'm trying to do is find a set of x values that minimizes the function f1(). I've been googling for SA, but all the articles are too general (for me at least). :(

---
Quote:
Original post by Tutukun
Well, what I'm trying to do is find a set of x values that minimizes the function f1(). I've been googling for SA, but all the articles are too general (for me at least). :(


Ok, so you want to do non-linear minimization.

I have two suggestions of algorithms for you:

-Levenberg-Marquardt, probably the most used

or

-Nelder-Mead, useful if f() is not differentiable

If you really want to do SA, then Wikipedia has the stuff for you, as usual:

http://en.wikipedia.org/wiki/Simulated_Annealing

They have pseudocode too!

Edit:

BTW, check out Numerical Recipes; they have fully tested source for LM and NM, and maybe SA too.

[Edited by - Steadtler on May 1, 2006 1:53:35 PM]

---
Back to the original question about your SA implementation.

It appears that what you're doing is to create your first viable solution (current) with hard-coded values. Usually this is a random starting point. Then you are creating a new solution at random. Typically, rather than creating a completely random new solution, you make random modifications to the existing solution: add a bit of noise to each of the x[i] values and re-evaluate. By making a completely random solution you are just jumping around the solution space and not getting any benefit from the current proposed solution.
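To make that concrete, here's a minimal sketch of a neighbour-generating function along those lines. The names makeNeighbour and randRange are mine, not from the posted code; the struct mirrors the poster's X, and the step size is a tuning parameter:

```cpp
#include <cstdlib>

struct X { double x[10]; };

// Uniform random double in [lo, hi].
double randRange(double lo, double hi) {
    return lo + (hi - lo) * (std::rand() / (double)RAND_MAX);
}

// Copy the current solution and nudge each coordinate by a small
// random amount, clamping to the box constraints [lo, hi].
X makeNeighbour(const X& current, double step, double lo, double hi) {
    X next = current;
    for (int i = 0; i < 10; ++i) {
        next.x[i] += randRange(-step, step);
        if (next.x[i] > hi) next.x[i] = hi;
        if (next.x[i] < lo) next.x[i] = lo;
    }
    return next;
}
```

Calling something like this with the current solution each iteration, instead of createX(), keeps the search local to the current proposal.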

I'm not sure what the innermost loop (indexed with z) is doing in createX. It appears that it is preventing any two of the x[i] values from being the same. Is this necessary?

Also, your cooling schedule is a bit unusual. Taking 80% of the current temperature each time will cool rapidly at first and then more slowly later. An exponential cooling, more like that which is commented out in your schedule function, might give you a slightly better result.

The random replacement also looks a bit suspect:

if(e < 0) current = next;
else if((rand()%10)/10 < exp(e/t))

This suggests to me that early in the simulation when t is large, you'll have a low probability of replacing the solution at random. Later, when t is small, you'll have a higher probability of replacing it. This is backwards. Am I reading the code correctly??
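For reference, here is a sketch of the acceptance test written the usual way round for minimisation (the function name acceptMove is mine). Note also that (rand()%10)/10 in the posted code is integer division, so it always evaluates to 0:

```cpp
#include <cstdlib>
#include <cmath>

// Metropolis acceptance for a minimisation problem.
// e = f(next) - f(current); t = current temperature.
bool acceptMove(double e, double t) {
    if (e < 0.0)
        return true;                            // downhill: always accept
    double p = std::rand() / (double)RAND_MAX;  // uniform in [0, 1]
    return p < std::exp(-e / t);                // uphill: accept with prob exp(-e/t)
}
```

With the negative exponent, uphill moves are likely while t is large and become rare as t shrinks, which is the behaviour SA needs.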

Are you having problems with the results?? What kinds of results do you get when you run it? How about running it multiple times - do you get wildly different results??

-Kirk

---
Kirk has identified the essential issues with your implementation. I'll try and elaborate just a little so that you hopefully have enough information to get a working, reliable version of SA going.

Quote:
Original post by kirkd
It appears that what you're doing is to create your first viable solution (current) with hard-coded values. Usually this is a random starting point.


If you expect the function surface you are trying to find an optimum on to be multi-modal, then a good strategy is to perform multiple optimisations with a random restart. So, your basic implementation should generate a random starting point on your function surface and then perform the optimisation for this starting point. You can then run this implementation several times, keeping the best result found so far as the solution to your problem. This provides a nice anytime optimisation algorithm (meaning that any time you stop the algorithm, you have a viable solution to the problem).


Quote:
Typically, rather than creating a completely random new solution, you make random modifications to the existing solution.


When you generate a candidate for the next solution, it should be drawn from a probability distribution conditioned on the current solution (and for more advanced forms of SA, on the shape of the surface of the local solution neighbourhood); that is, a conditional distribution of the form p(z(k+1)|x(k)). This should be a local distribution with a probability going to zero rapidly as the distance between z(k+1) and x(k) increases. If you allow z(k+1) to be any point in the state space, then you're just performing a random walk.

The simplest solution to the generation of a next candidate is to take a small step in the state space in a uniformly random direction from the current point. So, generate a random unit vector in your state space, scale it by your step size and add this to the vector defining the current point from your origin. This gives you a new candidate, which you accept according to your acceptance function.
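A sketch of that idea follows (the function name randomStep is mine). Drawing each component uniformly in [-1,1] slightly biases the direction toward the corners of the hypercube; a Gaussian draw per component would give a direction uniform over the sphere, but this keeps the example dependency-free:

```cpp
#include <cstdlib>
#include <cmath>
#include <vector>

// Take a step of length `step` from `point` in a random direction.
void randomStep(std::vector<double>& point, double step) {
    std::vector<double> dir(point.size());
    double norm = 0.0;
    for (std::size_t i = 0; i < dir.size(); ++i) {
        dir[i] = 2.0 * (std::rand() / (double)RAND_MAX) - 1.0;  // in [-1, 1]
        norm += dir[i] * dir[i];
    }
    norm = std::sqrt(norm);
    if (norm == 0.0) return;  // astronomically unlikely: all components zero
    for (std::size_t i = 0; i < point.size(); ++i)
        point[i] += step * dir[i] / norm;  // unit direction scaled by step size
}
```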

Speaking of which, your acceptance function doesn't look right to me. If you're using the Metropolis function (which you look to be trying to do), then your acceptance probability should be
A(x(k),z(k+1),t) = min(1, exp{- [ f(z(k+1))-f(x(k)) ] / t(k)})

and your next point x(k+1) will be


/ z(k+1) if p ≤ A(x(k),z(k+1),t(k))
x(k+1) = |
\ x(k) otherwise

where p is a uniform random number in [0,1]. (There is of course a slight corner case with the Metropolis function when A() = 0 and p = 0: the test p ≤ A() would still accept the move, so if A() = 0 we simply set x(k+1) = x(k).)

As for your cooling schedule, there are any number of ways you can choose this. A reasonable scheme is that of Bohachevsky:

t(k) = U(y(k)) = a * ( f(x(k)) - f* )^b

where a, b are constants and f* is the optimal value of f(). Since this is not known, the current best estimate f^ is used. If you're using a random-restart strategy, f^ is the best solution found from all previous runs of the algorithm. y(k) is the set of all known information found so far (the set of all points x(0),x(1),...,x(k) ). This choice of cooling schedule gives a diminishing probability of accepting ascent steps as we get close to the optimal value.
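As a rough sketch of that schedule (the function name and the constants a and b are placeholders to tune for your problem; fBest stands in for f^, the best value found so far):

```cpp
#include <cmath>

// Bohachevsky-style temperature: t(k) = a * (f(x(k)) - fBest)^b,
// where fBest approximates the unknown optimum f*.
double bohachevskyTemp(double fCurrent, double fBest,
                       double a = 1.0, double b = 1.0) {
    double gap = fCurrent - fBest;   // how far above the best known value we are
    if (gap < 0.0) gap = 0.0;        // guard: fBest should be a lower bound
    return a * std::pow(gap, b);
}
```

As fCurrent approaches fBest the temperature goes to zero, so ascent steps become ever less likely near the optimum, as described above.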

I hope this helps you to sort out the problems with your implementation.

Good luck,

Timkin

(PS: Kirk, good to see you around here again mate! ;) )

---
Timkin,

Thanks for the extra details, but isn't that what I said?? 8^) Just kidding. Very nice explanation, and to the OP, I think these details will improve your implementation significantly.

Good to see you floating around also, Timkin. We've both been skulking a bit as of late.

-Kirk

---
Quote:
Original post by kirkd
Thanks for the extra details, but isn't that what I said?? 8^) Just kidding.


Yes it is what you said. ;) Just trying to add a little more depth so that those who aren't familiar with SA gain a slightly better understanding of it (actually, I just wanted to show off my equation formatting capabilities really! ;) )

Quote:
We've both been skulking a bit as of late.

Actually I've been around a fair bit... I tend to read the boards most weekdays... I just don't post as much as I used to; too much work on my plate and there are enough other people around now that can handle most of the questions that I don't need to any more. I kind of miss the old days though when we had good meaty discussions on the deeper issues of game AI (as opposed to some of the sillier ones these days ;) ) and we had great threads that grew every day. Ah well, time marches on and so do people.

Cheers,

Timkin

---
Yeah, give me a couple of days and I'll see if I can come up with a deep (meaty) question. Stimulate some discussion/controversy. 8^)

-Kirk

---
