Article title

Efficient learning variable impedance control for industrial robots

Authors
Publication languages
EN
Abstracts
EN
Compared with robots, humans can learn to perform various contact tasks in unstructured environments by modulating arm impedance characteristics. In this article, we consider endowing industrial robots with this compliant ability so that they can effectively learn to perform repetitive force-sensitive tasks. Current learning-based impedance control methods usually suffer from inefficiency. This paper establishes an efficient variable impedance control method. To improve learning efficiency, we employ a probabilistic Gaussian process model as the transition dynamics of the system for internal simulation, permitting long-term inference and planning in a Bayesian manner. The optimal impedance regulation strategy is then searched for using a model-based reinforcement learning algorithm. The effectiveness and efficiency of the proposed method are verified through force control tasks on a 6-DoF Reinovo industrial manipulator.
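The learning loop the abstract describes (fit a probabilistic Gaussian process model of the transition dynamics, simulate long-term rollouts through it, and search the impedance regulation strategy with model-based reinforcement learning) can be illustrated with a short sketch. The sketch below is an assumption-laden illustration, not the paper's implementation: the state layout (position and force errors), the linear state-to-stiffness policy, the quadratic cost, the scikit-learn GPs, and the particle-based rollouts are all choices made purely for the example.

```python
# Minimal, illustrative sketch (NOT the authors' code) of GP-model-based
# policy search for variable impedance control: learn a probabilistic
# transition model, roll it out as an "internal simulation", and pick the
# impedance policy that minimizes the simulated cost.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# 1. Fit one GP per state dimension from logged (state, action) -> next-state
#    data. Assumed state: (position error e, force error f); assumed action:
#    a scalar stiffness gain K commanded to the impedance controller.
X = rng.uniform(-1.0, 1.0, size=(200, 3))                    # inputs [e, f, K]
Y = (np.tanh(X[:, :2] - 0.3 * X[:, 2:3] * X[:, :2])          # toy dynamics
     + 0.01 * rng.standard_normal((200, 2)))
kernel = RBF(length_scale=[1.0, 1.0, 1.0]) + WhiteKernel(noise_level=1e-3)
gps = [GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, Y[:, d])
       for d in range(2)]

def policy(state, w):
    # Variable impedance policy: state-dependent stiffness gain (linear here).
    return float(np.clip(w @ state, 0.1, 2.0))

def expected_cost(w, horizon=25, n_particles=30):
    # Monte-Carlo rollout through the GP model; penalizes position/force
    # error at every step. (PILCO-style methods instead propagate Gaussian
    # state distributions analytically; particles keep this sketch short.)
    s = np.tile([0.5, 0.5], (n_particles, 1))                # initial states
    cost = 0.0
    for _ in range(horizon):
        a = np.array([[policy(si, w)] for si in s])
        preds = [gp.predict(np.hstack([s, a]), return_std=True) for gp in gps]
        mu = np.column_stack([m for m, _ in preds])
        sd = np.column_stack([d for _, d in preds])
        s = mu + sd * rng.standard_normal(mu.shape)          # sample next states
        cost += float(np.mean(np.sum(s ** 2, axis=1)))
    return cost

# 2. Gradient-free policy search over the simulated rollouts, standing in for
#    the gradient-based optimization used by model-based RL methods.
best_w, best_c = None, np.inf
for _ in range(200):
    w = rng.standard_normal(2)
    c = expected_cost(w)
    if c < best_c:
        best_w, best_c = w, c
print("best policy weights:", best_w, "expected cost:", best_c)
```

Because every policy candidate is evaluated inside the learned model rather than on the robot, the physical system only needs to supply the transition data, which is what makes this family of methods data-efficient compared with model-free alternatives.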
Pages
201–212
Physical description
Bibliography: 36 items; figures, tables, charts.
Contributors
author
  • College of Automation, Harbin Engineering University, Harbin 150001, China
author
  • College of Automation, Harbin Engineering University, Harbin 150001, China
author
  • College of Automation, Harbin Engineering University, Harbin 150001, China
author
  • College of Automation, Harbin Engineering University, Harbin 150001, China
author
  • College of Automation, Harbin Engineering University, Harbin 150001, China
Notes
EN
This work is supported by the National Natural Science Foundation of China and the China Academy of Engineering Physics (NSAF, Grant No. U1530119).
PL
Record developed under agreement 509/P-DUN/2018 with funds of the Polish Ministry of Science and Higher Education (MNiSW) allocated to science-promoting activities (2019).
YADDA identifier
bwmeta1.element.baztech-68fe11f3-e7ab-4407-912c-c4163e3d86a7