Application of Q-learning based on adaptive greedy considering negative rewards in football match system
Online publication date: Fri, 24-May-2019
by Fei Xue; Juntao Li; Ruiping Yuan; Tao Liu; Tingting Dong
International Journal of Wireless and Mobile Computing (IJWMC), Vol. 16, No. 3, 2019
Abstract: To address the tendency of multi-robot task allocation methods in robot soccer systems to fall into local optima and to suffer from poor real-time performance, a new multi-robot task allocation method is proposed. First, to improve the speed and efficiency of finding optimal actions and to overcome the shortcoming that traditional Q-learning often fails to propagate negative values, a new way of propagating negative values is proposed: a Q-learning method based on negative rewards. Next, to adapt to a dynamic external environment, an adaptive ε-greedy method is proposed in which the mode of operation is determined by the ε value. This method builds on the classical ε-greedy strategy; during problem solving, ε changes adaptively as needed, yielding a better balance between exploration and exploitation in reinforcement learning. Finally, the method is applied to the robot football match system. Experiments show that dangerous actions can be avoided effectively by the Q-learning method that propagates negative rewards, and that the adaptive ε-greedy strategy adapts to the external environment better and faster, improving the speed of convergence.
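The abstract does not give the update rules, so the following is only a minimal illustrative sketch of tabular Q-learning combined with an adaptive ε-greedy policy and a penalty (negative reward) that lowers the Q-value of dangerous actions. The class name, hyperparameters (alpha, gamma, eps_min, eps_max, decay) and the environment interface are assumptions for illustration, not the method as specified in the paper.

```python
import random
from collections import defaultdict

class AdaptiveEpsilonGreedyQLearner:
    """Sketch: tabular Q-learning with adaptive epsilon-greedy exploration
    and explicit handling of negative rewards (assumed interface)."""

    def __init__(self, actions, alpha=0.1, gamma=0.9,
                 eps_min=0.05, eps_max=1.0, decay=0.995):
        self.Q = defaultdict(float)          # Q[(state, action)] -> value
        self.actions = actions
        self.alpha, self.gamma = alpha, gamma
        self.eps = eps_max                   # exploration rate, adapted online
        self.eps_min, self.decay = eps_min, decay

    def select_action(self, state):
        # Epsilon-greedy: explore with probability eps, otherwise pick the
        # action with the highest current Q-value for this state.
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.Q[(state, a)])

    def update(self, state, action, reward, next_state):
        # One-step Q-learning update. A large negative reward (e.g. for a
        # dangerous action) lowers Q(state, action) directly, and that lower
        # value is then propagated backwards through later updates of the
        # states that lead here.
        best_next = max(self.Q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.Q[(state, action)] += self.alpha * (td_target - self.Q[(state, action)])

    def adapt_epsilon(self):
        # Simple adaptation rule (an assumption): decay eps toward eps_min as
        # learning progresses, shifting the balance from exploration toward
        # exploitation.
        self.eps = max(self.eps_min, self.eps * self.decay)
```

In this sketch, calling adapt_epsilon() once per episode gradually reduces exploration; a more elaborate rule could raise ε again when the environment changes, which is the kind of adaptive balance the abstract describes.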