Distributed visual navigation based on neural Q-learning for a mobile robot
Online publication date: Sun, 28-Jan-2007
by Guosheng Yang, Zeng-Guang Hou, Zize Liang
International Journal of Vehicle Autonomous Systems (IJVAS), Vol. 4, No. 2/3/4, 2006
Abstract: This paper studies distributed visual navigation based on neural Q-learning for a mobile robot. First, a general distributed structure built on multiple processors is established for visual navigation, according to a decomposition of the mobile robot's visual navigation task. Second, within this distributed structure, a local environment description method based on Peer Group Filtering (PGF) and fuzzy techniques is proposed. Third, for each local environment description, a controller based on neural Q-learning is designed to guide the robot's navigation. Finally, simulation experiments are conducted to verify the effectiveness of the proposed distributed algorithm, covering image segmentation, environment description and the navigation policy.
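The controller described in the abstract combines Q-learning with a neural function approximator. The paper's own network architecture and environment encoding are not given here, so the following is only a minimal illustrative sketch of the general technique: a linear network over a one-hot state encoding learns Q(s, a) via temporal-difference updates on a small grid-navigation task (the grid size, reward scheme, and hyperparameters are assumptions for the example, not the authors' setup).

```python
import numpy as np

# Illustrative sketch of neural Q-learning for navigation (not the
# authors' implementation): a linear network over one-hot states
# approximates Q(s, a); the TD error drives gradient updates.

rng = np.random.default_rng(0)
GRID, ACTIONS = 5, 4                       # 5x5 grid; up/down/left/right
GOAL = (4, 4)

# Network weights: one-hot state input -> Q-value per action.
W = rng.normal(0.0, 0.1, (GRID * GRID, ACTIONS))

def q_values(state):
    """Return Q(s, .) and the one-hot input vector for (row, col)."""
    x = np.zeros(GRID * GRID)
    x[state[0] * GRID + state[1]] = 1.0
    return x @ W, x

def step(state, action):
    """Deterministic grid dynamics; reward 1 at the goal, 0 elsewhere."""
    dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][action]
    r = min(max(state[0] + dr, 0), GRID - 1)
    c = min(max(state[1] + dc, 0), GRID - 1)
    nxt = (r, c)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

alpha, gamma, eps = 0.5, 0.9, 0.2          # learning rate, discount, exploration
for episode in range(500):
    s = (0, 0)
    for _ in range(50):
        q, x = q_values(s)
        a = int(rng.integers(ACTIONS)) if rng.random() < eps else int(np.argmax(q))
        s2, reward, done = step(s, a)
        q2, _ = q_values(s2)
        target = reward if done else reward + gamma * np.max(q2)
        W[:, a] += alpha * (target - q[a]) * x   # TD-error gradient step
        s = s2
        if done:
            break

# After training, the greedy policy should reach the goal from the start.
s, steps = (0, 0), 0
while s != GOAL and steps < 20:
    q, _ = q_values(s)
    s, _, _ = step(s, int(np.argmax(q)))
    steps += 1
```

With a one-hot encoding this linear network reduces to tabular Q-learning; the paper's contribution is in distributing such controllers across local environment descriptions, which this single-environment sketch does not attempt to reproduce.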