
greedy method....



Could anyone here shed some light on the epsilon-greedy method in the Q-learning algorithm? I read the material I have, but I could not figure out how to use it in this kind of algorithm. Any help, please? Regards,

I'm not sure if this is along the same lines as what you are looking for, but I found this link on the gameai.com website while looking for something else, and thought I'd let you check it out:

http://www.cc.gatech.edu/fac/Sven.Koenig/greedyonline/

Hope this is useful; if not, sorry.
CodeJunkie

Epsilon-greedy is a learning policy: it does not always take the best action (according to the Q-values), but sometimes tries something else. The greedy action is action = argmax_a Q(current state, a). In other words, for the current state you look at the Q values of all actions and pick the one with the highest value. Because the Q values do not have to be correct, especially in the beginning, you may want to try different actions. If you don't, you may take the same action in that state every time and update only that one value; the values of the other actions then never change, even if those actions are actually better. That's why a random component is added.

The epsilon in the epsilon-greedy policy is the probability that you pick one of the non-greedy actions at random, so with probability (1 - epsilon) you pick the greedy action. Normally you start with a high value for epsilon (like 0.5) and reduce it after each run. The result is that you start out exploring a lot, but later, when the values become more accurate, you take greedy actions more often.
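The policy described above can be sketched roughly like this (a minimal sketch, not from the original post; for simplicity it explores by picking uniformly among all actions rather than only the non-greedy ones, which is the most common formulation, and the decay factor is an illustrative choice):

```python
import random

def epsilon_greedy(q_values, epsilon):
    """Pick an action index using the epsilon-greedy policy.

    With probability epsilon, explore: pick an action uniformly at random.
    Otherwise, exploit: pick the action with the highest Q value (greedy).
    """
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# Illustrative use: start with a lot of exploration and decay epsilon
# after each episode, as suggested in the post above.
epsilon = 0.5        # initial exploration probability (value from the post)
decay = 0.99         # hypothetical per-episode decay factor
q = [0.0, 0.0, 0.0]  # Q values for one state's three actions

for episode in range(100):
    a = epsilon_greedy(q, epsilon)
    # ... take action a, observe the reward, update Q here ...
    epsilon *= decay   # explore less as the Q values become more accurate
```

With epsilon = 0 the function is purely greedy; with epsilon = 1 it is purely random, so decaying epsilon moves the policy smoothly from exploration toward exploitation.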
