Article title

Adaptive controller design for electric drive with variable parameters by Reinforcement Learning method

Publication languages
EN
Abstracts
EN
The paper presents a method for designing a neural speed controller trained by reinforcement learning. The controlled object is an electric drive with a permanent magnet synchronous motor, having a complex mechanical structure and variable parameters. Several study cases of the control system with the neural controller are presented, focusing on changes of the object parameters. The influence of the critic's behaviour on the system is also investigated, where the critic is a function of the control error and the energy cost; this ensures long-term performance stability without the need to switch off the adaptation algorithm. Numerous simulation tests were carried out and confirmed on a real test stand.
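The error-plus-energy critic mentioned in the abstract can be illustrated, purely as a sketch and not as the authors' implementation, by a generic actor-critic loop in which the instantaneous cost combines the speed control error with a control-effort (energy) term. In the Python fragment below the one-mass drive model, the network sizes, the gains k_e and k_u, the torque limit and the learning rates are all assumptions made for the example.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simplified one-mass drive model (assumption): J*domega/dt = torque - B*omega
    J, B, dt = 0.01, 0.05, 1e-3

    def drive_step(omega, torque):
        return omega + dt * (torque - B * omega) / J

    # Single-hidden-layer networks for the actor and the critic (illustrative sizes)
    def net_init(n_in, n_hidden):
        return [rng.normal(0.0, 0.1, (n_hidden, n_in)), rng.normal(0.0, 0.1, n_hidden)]

    def net_forward(params, x):
        W, v = params
        h = np.tanh(W @ x)
        return float(v @ h), h

    actor, critic = net_init(2, 10), net_init(2, 10)
    alpha_a, alpha_c, gamma, sigma = 1e-3, 1e-2, 0.95, 0.2
    k_e, k_u = 1.0, 0.01      # weights of control-error and energy terms (assumed)
    u_max = 20.0              # torque limit (assumed)

    omega, omega_ref = 0.0, 100.0
    for k in range(20000):
        e = omega_ref - omega
        x = np.array([e, omega]) / omega_ref       # normalised controller input

        u_nom, h_a = net_forward(actor, x)
        noise = rng.normal(0.0, sigma)             # exploration around the actor output
        u = float(np.clip(u_max * (u_nom + noise), -u_max, u_max))

        omega_next = drive_step(omega, u)
        x_next = np.array([omega_ref - omega_next, omega_next]) / omega_ref

        # Instantaneous cost seen by the critic: control error plus control energy
        cost = k_e * (e / omega_ref) ** 2 + k_u * (u / u_max) ** 2

        # Temporal-difference error of the cost-to-go estimate
        V, h_c = net_forward(critic, x)
        V_next, _ = net_forward(critic, x_next)
        delta = cost + gamma * V_next - V

        # Critic update: semi-gradient descent on 0.5*delta^2
        W_c, v_c = critic
        critic = [W_c - alpha_c * delta * np.outer(v_c * (1.0 - h_c ** 2), x),
                  v_c - alpha_c * delta * h_c]

        # Actor update (classic ASE/ACE-style rule): if the explored action turned
        # out better than predicted (delta < 0), shift the actor output towards it
        W_a, v_a = actor
        scale = -delta * noise
        actor = [W_a + alpha_a * scale * np.outer(v_a * (1.0 - h_a ** 2), x),
                 v_a + alpha_a * scale * h_a]

        omega = omega_next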
Pages
1019–1030
Physical description
Bibliography: 29 items, figures.
Authors
  • Poznan University of Technology, Institute of Robotics and Machine Intelligence, Piotrowo 3A, 60-965 Poznań
author
  • Poznan University of Technology, Institute of Robotics and Machine Intelligence, Piotrowo 3A, 60-965 Poznań
author
  • Poznan University of Technology, Institute of Robotics and Machine Intelligence, Piotrowo 3A, 60-965 Poznań
Bibliography
  • [1] A. Babiarz, J. Klamka, R. Bieda, and K. Jaskot, “The dynamics of the human arm with an observer for the capture of body motion parameters”, Bull. Pol. Ac.: Tech. 61 (4), 955–971 (2013).
  • [2] J. Ikaheimo, “Permanent magnet motors eliminate gearboxes”, ABB Review 4, 22–25 (2002).
  • [3] K. Wróbel, K. Szabat, and P. Serkies, “Long-horizon model predictive control of induction motor drive”, Arch. Electr. Eng. 68 (3), 579–593 (2019).
  • [4] T.C. Chen and T.T. Sheu, “Model reference robust speed control for induction-motor drive with time delay based on neural network”, IEEE Trans. Syst., Man, Cybern. Syst. 31 (6), 746–753 (2001).
  • [5] J. Kabziński, “Adaptive, compensating control of wheel slip in railway vehicles”, Bull. Pol. Ac.: Tech. 63 (4), 955–963 (2015).
  • [6] B.K. Bose, “Neural Network applications in power electronics and motor drives—An introduction and perspective”, IEEE Trans. Ind. Electron. 54 (1), 14–33 (2007).
  • [7] M. Kaminski and T. Orlowska-Kowalska, “FPGA implementation of ADALINE-based speed controller for two-mass system”, IEEE Trans. Ind. Informat. 9 (3), 1301–1311 (2013).
  • [8] B. Ufnalski and L.M. Grzesiak, “Repetitive neurocontroller with disturbance feedforward path active in the pass-to-pass direction for a VSI inverter with an output LC filter”, Bull. Pol. Ac.: Tech. 64 (1), 115–125 (2016).
  • [9] L.M. Grzesiak, V. Meganck, J. Sobolewski, and B. Ufnalski, “On-line trained neural speed controller with variable weight update period for direct-torque-controlled AC drive”, 12th International Power Electronics and Motion Control Conference, Portoroz, 1127–1132 (2006).
  • [10] T. Orlowska-Kowalska and K. Szabat, “Control of the drive system with stiff and elastic coupling using adaptive neurofuzzy approach”, IEEE Trans. Ind. Electron. 54 (1), 228–240 (2007).
  • [11] M.A. Rahman and M.A. Hoque, “On-line adaptive artificial neural network based vector control of permanent magnet synchronous motors”, IEEE Trans. Energy Convers. 13 (4), 311–318 (1998).
  • [12] D. Chen and M. York, “Adaptive neural inverse control applied to power systems”, IEEE PES Power Systems Conference and Exposition, Atlanta, GA, 2109–2115 (2006).
  • [13] E. Colina-Morles and N. Mort, “Inverse model neural network-based control of dynamic systems”, International Conference on Control – Control ’94, Coventry, UK, 955–960 vol. 2 (1994).
  • [14] Y. Li, B. Zhang, and X. Xu, “Decoupling control for permanent magnet in-wheel motor using internal model control based on back-propagation neural network inverse system”, Bull. Pol. Ac.: Tech. 66 (6), 961–972 (2018).
  • [15] T.M. Mitchell, Machine Learning, McGraw-Hill Science/Engineering/Math, 1997.
  • [16] D.J. Lee and H. Bang, “Model-free LQ control for unmanned helicopters using reinforcement learning”, 11th International Conference on Control, Automation and Systems, Gyeonggi-do, 117–120 (2011).
  • [17] D. Lee, M. Choi, and H. Bang, “Model-free linear quadratic tracking control for unmanned helicopters using reinforcement learning”, The 5th International Conference on Automation, Robotics and Applications, Wellington, 19–22 (2011).
  • [18] X. Cui and X. Liu, “Fuzzy Neural Control of Satellite Attitude by TD Based Reinforcement Learning”, 6th World Congress on Intelligent Control and Automation, Dalian, 3983–3986 (2006).
  • [19] J. Xue, Q. Gao, and W. Ju, “Reinforcement Learning for Engine Idle Speed Control”, International Conference on Measuring Technology and Mechatronics Automation, Changsha City, 1008–1011 (2010).
  • [20] E. Bejar and A. Moran, “Deep reinforcement learning based neuro-control for a two-dimensional magnetic positioning system”, 4th International Conference on Control, Automation and Robotics (ICCAR), Auckland, 268–273 (2018).
  • [21] B. Subudhi and S.K. Pradhan, “Direct adaptive control of a flexible robot using reinforcement learning”, International Conference on Industrial Electronics, Control and Robotics, Orissa, 129–136 (2010).
  • [22] H. Li, “The implementation of reinforcement learning algorithms on the elevator control system”, IEEE 20th Conference on Emerging Technologies & Factory Automation (ETFA), Luxembourg, 1–4 (2015).
  • [23] S. Zhipeng, G. Chen, and S. Jianbo, “Reinforcement learning control for ship steering based on general fuzzified CMAC”, 5th Asian Control Conference (IEEE Cat. No.04EX904), Melbourne, Victoria, Australia, 1552–1557 vol. 3 (2004).
  • [24] S. Bhasin, Reinforcement learning and optimal control methods for uncertain nonlinear systems, Ph.D. dissertation, University of Florida (2011).
  • [25] A.G. Barto, R.S. Sutton, and C.W. Anderson, “Neuronlike adaptive elements that can solve difficult learning control problems”, IEEE Trans. Syst., Man, Cybern. 13 (5), 834–846 (1983).
  • [26] T. Pajchrowski, K. Zawirski, and K. Nowopolski, “A Neural Speed Controller Trained On-Line by Means of Modified RPROP Algorithm”, IEEE Trans. Ind. Informat. 11 (2), 560–568 (2015).
  • [27] J.W. Umland and M. Safiuddin, “Magnitude and symmetric optimum criterion for the design of linear control systems: what is it and how does it compare with the others?”, IEEE Trans. Ind. Appl. 26 (3), 489–497 (1990).
  • [28] D. Łuczak, K. Nowopolski, K. Siembab, and B. Wicher, “Speed calculation methods in electrical drive with non-ideal position sensor”, 19th International Conference on Methods and Models in Automation and Robotics (MMAR), Międzyzdroje, 726–731 (2014).
  • [29] D. Svozil, V. Kvasnicka, and J. Pospichal, “Introduction to multi-layer feed-forward neural networks”, Chemom. Intell. Lab. Syst. 39 (1), 43–62 (1997).
YADDA identifier
bwmeta1.element.baztech-ee9f37ba-01d3-4fd5-b05c-84165faff8d4