Service Boy Robot Path Planner by Deep Q Network

Rawin Chaisittiporn

Abstract


This paper studies the use of a Deep Q Network (DQN) with ROS as the path planner of a service boy robot. We use the pymunk Python library to train the neural network, by the principle of reinforcement learning, to learn the best action for various input states, and the pygame library to simulate, observe, and evaluate the learning graphically. We design the input states, the output actions, and the hidden layers of the neural network. After training for a predefined number of episodes, we use the network to predict the actions of a real TurtleBot3 robot. We use ROS for robot operation, including slam_gmapping and amcl, but without the costmap and the global and local planners; instead, the trained Deep Q Network predicts the robot's actions. The results show that the well-trained Deep Q Network is more efficient than the original ROS planners: it can navigate to any place in the map, reach the goal while avoiding obstacles, move to any destination regardless of the goal orientation, and reduce the final distance error between the robot and the destination. It can therefore suitably perform the role of a service boy, for example in a restaurant.
Keywords: DQN; Reinforcement Learning; ROS Navigation; Path planner; Service boy robot
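
The abstract describes a feed-forward Q-network with designed input states, output actions, and hidden layers, trained by reinforcement learning in a pymunk/pygame simulation. The sketch below illustrates such a network in Keras; the state size (a few range readings plus the relative goal position), the action set, the layer widths, and the epsilon-greedy selection are assumptions for illustration, not the paper's exact design.

```python
# Minimal sketch of the kind of DQN described in the abstract (assumptions:
# the state is a small vector of range readings plus the relative goal
# position, and the actions are discrete steering commands; sizes are
# illustrative, not the paper's architecture).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

STATE_SIZE = 5      # e.g. 3 range readings + distance and angle to the goal
NUM_ACTIONS = 3     # e.g. turn left, go straight, turn right

def build_q_network():
    """Feed-forward network mapping a state vector to one Q-value per action."""
    model = keras.Sequential([
        layers.Input(shape=(STATE_SIZE,)),
        layers.Dense(128, activation="relu"),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_ACTIONS, activation="linear"),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

def choose_action(model, state, epsilon=0.1):
    """Epsilon-greedy action selection used during training."""
    if np.random.rand() < epsilon:
        return np.random.randint(NUM_ACTIONS)           # explore
    q_values = model.predict(state[np.newaxis, :], verbose=0)[0]
    return int(np.argmax(q_values))                     # exploit
```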


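For deployment, the abstract states that ROS slam_gmapping and amcl are kept while the costmap and the global and local planners are replaced by the trained network. Below is a minimal sketch of that substitution on a TurtleBot3, assuming standard /scan and /cmd_vel topics, a saved model file, and illustrative velocity values; none of these names or values are taken from the paper.

```python
# Hedged sketch: the trained Q-network picks a discrete motion command from
# the current laser scan and publishes it as a velocity command, standing in
# for the ROS local planner. Topic names, the model file name, and the
# velocity values are assumptions for illustration.
import numpy as np
import rospy
from geometry_msgs.msg import Twist
from sensor_msgs.msg import LaserScan
from tensorflow import keras

# (linear m/s, angular rad/s) for each discrete action: left, straight, right
ACTION_TO_TWIST = [(0.15, 0.8), (0.20, 0.0), (0.15, -0.8)]

class DQNPlanner:
    def __init__(self, model):
        self.model = model
        self.cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
        rospy.Subscriber("/scan", LaserScan, self.on_scan, queue_size=1)

    def on_scan(self, scan):
        # Reduce the full scan to the small state vector the network was
        # trained on; the goal distance and angle would normally come from
        # amcl and the current goal pose.
        ranges = np.nan_to_num(np.array(scan.ranges), posinf=scan.range_max)
        state = np.array([ranges[0], ranges[len(ranges) // 2], ranges[-1], 0.0, 0.0])
        q_values = self.model.predict(state[np.newaxis, :], verbose=0)[0]
        action = int(np.argmax(q_values))
        cmd = Twist()
        cmd.linear.x, cmd.angular.z = ACTION_TO_TWIST[action]
        self.cmd_pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("dqn_path_planner")
    planner = DQNPlanner(keras.models.load_model("dqn_turtlebot3.h5"))  # assumed file name
    rospy.spin()
```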


