Article title

Towards automatic facility layout design using reinforcement learning

Identifiers
Title variants
Conference
17th Conference on Computer Science and Intelligence Systems
Publication languages
EN
Abstracts
EN
The quality of a facility layout depends heavily on the designer's skill, and quick, near-optimal designs are difficult to produce by hand. In this study, we propose an automatic design mechanism that uses reinforcement learning to lay out various groups of units on various sites. To this end, we devised a mechanism that deploys units so as to fill the largest rectangular space available on the current site. The goal is to successfully place all given units within a given site while occupying only part of it. We applied the mechanism to three sets of units from benchmark problems and evaluated its performance while varying the learning parameters and the iteration count. As a result, the mechanism produced layouts that successfully deployed all units within a given one-floor site.
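The core idea described in the abstract, repeatedly placing a unit into the largest rectangular free space remaining on the site, can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes a binary occupancy grid (0 = free, 1 = occupied), the function names are hypothetical, and anchoring at the top-left corner of the largest empty rectangle is a fixed heuristic standing in for the paper's learned placement policy.

```python
def largest_empty_rectangle(grid):
    """Return (area, top, left, height, width) of the largest all-zero
    rectangle in a 0/1 grid, via the row-by-row histogram-stack method."""
    rows, cols = len(grid), len(grid[0])
    heights = [0] * cols          # height of the empty column ending at row r
    best = (0, 0, 0, 0, 0)
    for r in range(rows):
        for c in range(cols):
            heights[c] = heights[c] + 1 if grid[r][c] == 0 else 0
        stack = []                # (start_col, height), heights increasing
        for c in range(cols + 1):
            h = heights[c] if c < cols else 0   # sentinel flushes the stack
            start = c
            while stack and stack[-1][1] >= h:
                start, sh = stack.pop()
                area = sh * (c - start)
                if area > best[0]:
                    best = (area, r - sh + 1, start, sh, c - start)
            stack.append((start, h))
    return best

def place_unit(grid, unit_h, unit_w):
    """Place a unit_h x unit_w unit at the top-left corner of the largest
    empty rectangle; return True on success. (A unit may still fit in a
    smaller rectangle of different aspect ratio -- this sketch ignores that.)"""
    _, top, left, h, w = largest_empty_rectangle(grid)
    if h < unit_h or w < unit_w:
        return False
    for r in range(top, top + unit_h):
        for c in range(left, left + unit_w):
            grid[r][c] = 1
    return True
```

In an RL formulation along the lines the abstract suggests, the occupancy grid would form the state, the choice of which unit to place next (and its orientation) the action, and successful deployment of all units the reward; the rectangle search above would only constrain where a placement can land.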
Year
Volume
Pages
11–20
Physical description
Bibliography: 56 items, figures, charts
Authors
author
  • Graduate School of Information Science and Technology, Osaka University, Japan
  • Graduate School of Information Science and Technology, Osaka University, Japan
  • Graduate School of Information Science and Technology, Osaka University, Japan
Bibliography
  • 1. Andrew Kusiak, Sunderesh S. Heragu, 1987, The facility layout problem, European Journal of Operational Research 29, 229-251, https://doi.org/10.1016/0377-2217(87)90238-4
  • 2. Sunderesh S. Heragu, Andrew Kusiak, 1991, Efficient models for the facility layout problem, European Journal of Operational Research, https://doi.org/10.1016/0377-2217(91)90088-D
  • 3. S. P. Singh, R. R. K. Sharma, 2006, A review of different approaches to the facility layout problems, The International Journal of Advanced Manufacturing Technology, Volume 30, pages 425-433, https://doi.org/10.1007/s00170-005-0087-9
  • 4. Kar Yan Tam, 1992, Genetic algorithms, function optimization, and facility layout design, European Journal of Operational Research Volume 63 issue 2, https://doi.org/10.1016/0377-2217(92)90034-7
  • 5. Anita Thengade, Rucha Dondal, 2012, Genetic Algorithm - Survey Paper, MPGI National Multi Conference 2012, ISSN: 0975 – 8887.
  • 6. Pedro G. Espejo, Sebastian Ventura, Francisco Herrera, 2010, A Survey on the Application of Genetic Programming to Classification, IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), Volume 40, Issue 2, https://doi.org/10.1109/TSMCC.2009.2033566
  • 7. Venkatesh Dixit, Jim Lawlor, 2019, Modified genetic algorithm for automated facility layout design, International Journal of Advance Research, Ideas and Innovations in Technology, Volume 5, Issue 3, ISSN: 2454-132X.
  • 8. José Fernando Gonçalves, Mauricio G. C. Resende, 2015, A biased random-key genetic algorithm for the unequal area facility layout problem, European Journal of Operational Research, Volume 246, https://doi.org/10.1016/j.ejor.2015.04.029
  • 9. Stanislas Chaillou, 2019, AI and Architecture An Experimental Perspective, The Routledge Companion to Artificial Intelligence in Architecture, ISBN:9780367824259.
  • 10. Luisa Fernanda Vargas-Pardo, Frank Nixon Giraldo-Ramos, 2021, Firefly algorithm for facility layout problem optimization, Visión electrónica, https://doi.org/10.14483/issn.2248-4728
  • 11. Jingfa Liu, Jun Liu, 2019, Applying multi-objective ant colony optimization algorithm for solving the unequal area facility layout problems, Applied Soft Computing, Volume 74, https://doi.org/10.1016/j.asoc.2018.10.012
  • 12. Russell D. Meller, Yavuz A. Bozer, 1997, Alternative Approaches to Solve the Multi-Floor Facility Layout Problem, Journal of Manufacturing Systems, Volume 16, Issue 3, https://doi.org/10.1016/S0278-6125(97)88887-5
  • 13. Arthur R. Butz, 1969, Convergence with Hilbert's Space Filling Curve, Journal of Computer and System Sciences, https://doi.org/10.1016/S0022-0000(69)80010-3
  • 14. L. P. Kaelbling, M. L. Littman, A. W. Moore, 1996, Reinforcement Learning: A Survey, JAIR, https://doi.org/10.1613/jair.301
  • 15. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, Martin Riedmiller, 2013, Playing Atari with Deep Reinforcement Learning, NIPS Deep Learning Workshop, https://doi.org/10.48550/arXiv.1312.5602
  • 16. Yu-Jui Liu, Shin-Ming Cheng, Yu-Lin Hsueh, 2017, eNB Selection for Machine Type Communications Using Reinforcement Learning Based Markov Decision Process, https://doi.org/10.1109/TVT.2017.2730230
  • 17. Frank L. Lewis, Draguna Vrabie, 2009, Reinforcement learning and adaptive dynamic programming for feedback control, IEEE Circuits and Systems Magazine, Volume 9, Issue 3, https://doi.org/10.1109/MCAS.2009.933854
  • 18. F. Llorente, L. Martino, J. Read, D. Delgado, 2021, A survey of Monte Carlo methods for noisy and costly densities with application to reinforcement learning, https://doi.org/10.48550/arXiv.2108.00490
  • 19. P. Cichosz, 1995, Truncating Temporal Differences: On the Efficient Implementation of TD(lambda) for Reinforcement Learning, Journal of Artificial Intelligence Research 2, https://doi.org/10.1613/jair.135
  • 20. Mance E. Harmon, Stephanie S. Harmon, 1997, Reinforcement Learning: A Tutorial.
  • 21. E. N. Barron, H. Ishii, 1989, The Bellman equation for minimizing the maximum cost, Nonlinear Analysis, Theory, Methods and Applications, https://doi.org/10.1016/0362-546X(89)90096-5
  • 22. Christopher J. C. H. Watkins, Peter Dayan, 1992, Q-Learning, Machine Learning, 8, 279-292, https://doi.org/10.1007/BF00992698
  • 23. Ali Asghari, Mohammad Karim Sohrabi, Farzin Yaghmaee, 2021, Task scheduling, resource provisioning, and load balancing on scientific workflows using parallel SARSA reinforcement learning agents and genetic algorithm, The Journal of Supercomputing, https://doi.org/10.1007/s11227-020-03364-1
  • 24. Feng Ding, Guanfeng Ma, Zhikui Chen, Jing Gao, Peng Li, 2021, Averaged Soft Actor-Critic for Deep Reinforcement Learning, Complexity, vol.2021, https://doi.org/10.1155/2021/6658724
  • 25. Seyed Sajad Mousavi, Michael Schukat, Enda Howley, 2017, Traffic light control using deep policy-gradient and value-function-based reinforcement learning, IET Intelligent Transport Systems, https://doi.org/10.1049/iet-its.2017.0153
  • 26. Xinhan Di, Pengqian Yu, 2021, Deep Reinforcement Learning for Producing Furniture Layout in Indoor Scenes, Cornell University, https://doi.org/10.48550/arXiv.2101.07462
  • 27. Vincent Francois-Lavet, Peter Henderson, Riashat Islam, Marc G. Bellemare, Joelle Pineau, 2018, An Introduction to Deep Reinforcement Learning, Foundations and Trends in Machine Learning, Volume 11, https://doi.org/10.1561/2200000071
  • 28. Matthias Klar, Moritz Glatt, Jan C. Aurich, 2021, An implementation of a reinforcement learning based algorithm for factory layout planning, Manufacturing Letters, Volume 30, October, https://doi.org/10.1016/j.mfglet.2021.08.
  • 29. Richa Verma, Sarmimala Saikia, Harshad Khadilkar, Puneet Agarwal, Gautam Shroff, Ashwin Srinivasan, 2019, A Reinforcement Learning Framework for Container Selection and Ship Load Sequencing in Ports, Autonomous Agents and Multiagent Systems.
  • 30. Ruizhen Hu, Juzhan Xu, Bin Chen, Minglun Gong, Hao Zhang, Hui Huang, 2020, TAP-Net: Transport-and-Pack using Reinforcement Learning, ACM Transactions on Graphics, Volume 39, December, https://doi.org/10.1145/3414685.3417796
  • 31. Azalia Mirhoseini, Anna Goldie, Mustafa Yazgan, Joe Wenjie Jiang, Ebrahim Songhori, Shen Wang, Young-Joon Lee, Eric Johnson, Omkar Pathak, Azade Nazi, Jiwoo Pak, Andy Tong, Kavya Srinivasa, William Hang, Emre Tuncer, Quoc V. Le, James Laudon, Richard Ho, Roger Carpenter, Jeff Dean, 2021, A graph placement methodology for fast chip design, Nature, volume 594, pages 207-212, https://doi.org/10.1038/s41586-021-03544-w
  • 32. Xinhan Di, Pengqian Yu, 2021, Multi-Agent Reinforcement Learning of 3D Furniture Layout Simulation in Indoor Graphics Scenes, ICLR SimDL Workshop, https://doi.org/10.48550/arXiv.2102.09137
  • 33. Peter Burggraf, Johannes Wagner, Benjamin Heinbach, 2021, Bibliometric Study on the Use of Machine Learning as Resolution Technique for Facility Layout Problems, IEEE Access, Volume 9, http://dx.doi.org/10.1109/ACCESS.2021.3054563
  • 34. Christian E. López, James Cunningham, Omar Ashour, Conrad S. Tucker, 2020, Deep Reinforcement Learning for Procedural Content Generation of 3D Virtual Environments, Journal of Computing and Information Science in Engineering, https://doi.org/10.1115/1.4046293
  • 35. Niloufar Izadinia, Kourosh Eshghi, Mohammad Hassan Salmani, 2014, A robust model for multi-floor layout problem, Computers and Industrial Engineering 78, http://dx.doi.org/10.1016/j.cie.2014.09.023
  • 36. Junjie Li, Sotetsu Koyamada, Qiwei Ye, Guoqing Liu, Chao Wang, Ruihan Yang, Li Zhao, Tao Qin, Tie-Yan Liu, Hsiao-Wuen Hon, 2020, Suphx: Mastering Mahjong with Deep Reinforcement Learning, Cornell University, https://doi.org/10.48550/arXiv.2003.13590
  • 37. Matthew Lai, 2015, Giraffe: Using Deep Reinforcement Learning to Play Chess, partial fulfilment of the requirements for the MSc Degree in Advanced Computing of Imperial College, https://doi.org/10.48550/arXiv.1509.01549
  • 38. Adrian Goldwaser, Michael Thielscher, 2020, Deep Reinforcement Learning for General Game Playing, The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20), https://doi.org/10.1609/aaai.v34i02.5533
  • 39. David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, Demis Hassabis, 2018, A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play, Science, https://doi.org/10.1126/science.aar6404
  • 40. Guillaume Chaslot, Sander Bakkes, Istvan Szita, Pieter Spronck, 2008, Monte-Carlo Tree Search: A New Framework for Game AI, Proceedings of the Fourth Artificial Intelligence and Interactive Digital Entertainment Conference, https://ojs.aaai.org/index.php/AIIDE/article/view/18700
  • 41. Yahui Liu, Buyang Cao, Hehua Li, 2021, Improving ant colony optimization algorithm with epsilon greedy and Levy flight, Complex and Intelligent Systems, pages 1711-1722, https://doi.org/10.1007/s40747-020-00138-3
  • 42. Tailong Yang, Shuyan Zhang, Cuixia Li, 2021, A multi-objective hyper-heuristic algorithm based on adaptive epsilon-greedy selection, Complex and Intelligent Systems, https://doi.org/10.1007/s40747-020-00230-8
  • 43. Abbas Ahmadi, Mohammad Reza Akbari Jokar, 2016, An efficient multiple-stage mathematical programming method for advanced single and multi-floor facility layout problems, Applied Mathematical Modelling, Volume 40, Issues 9-10, Pages 5605-5620, https://doi.org/10.1016/j.apm.2016.01.014
  • 44. Seongwoo Lee, Joonho Seon, Chanuk Kyeong, Soohyun Kim, Youngghyu Sun, Jinyoung Kim, 2021, Novel Energy Trading System Based on Deep-Reinforcement Learning in Microgrids, https://doi.org/10.3390/en14175515
  • 45. Amine Drira, Henri Pierreval, Sonia Hajri-Gabouj, 2007, Facility layout problems: A survey, Annual Reviews in Control, Volume 31, Issue 2, https://doi.org/10.1016/j.arcontrol.2007.04.001
  • 46. Stefan Helber, Daniel Bohme, Farid Oucherif, Svenja Lagershausen, Steffen Kasper, 2015, A hierarchical facility layout planning approach for large and complex hospitals, Flexible Services and Manufacturing Journal, pp 5-29, https://doi.org/10.1007/s10696-015-9214-6
  • 47. Peter Hahn, J. MacGregor Smith, Yi-Rong Zhu, 2008, The Multi-Story Space Assignment Problem, Annals of Operations Research, pp 77-103, https://doi.org/10.1007/s10479-008-0474-3
  • 48. Yifei Zhang, 2021, The design of the warehouse layout based on the non-logistics analysis of SLP, E3S Web of Conferences 253, https://doi.org/10.1051/e3sconf/202125303035
  • 49. Yifei Zhang, 2020, Research on layout planning of disinfection tableware distribution center based on SLP method, MATEC Web of Conferences 325, https://doi.org/10.1051/matecconf/202032503004
  • 50. Zhiang Zhang, Adrian Chong, Yuqi Pan, Chenlu Zhang, Khee Poh Lam, 2019, Whole building energy model for HVAC optimal control: A practical framework based on deep reinforcement learning, Energy and Buildings, Volume 199, Pages 472-490, https://doi.org/10.1016/j.enbuild.2019.07.029
  • 51. Felipe Leno Da Silva, Anna Helena Reali Costa, 2019, A Survey on Transfer Learning for Multiagent Reinforcement Learning Systems, Journal of Artificial Intelligence Research 64, https://doi.org/10.1613/jair.1.11396
  • 52. Felipe Leno Da Silva, Matthew E. Taylor, Anna Helena Reali Costa, 2018, Autonomously Reusing Knowledge in Multiagent Reinforcement Learning, Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18).
  • 53. Wei Du, Shifei Ding, 2020, A survey on multi-agent deep reinforcement learning: from the perspective of challenges and applications, Artificial Intelligence Review, https://doi.org/10.1007/s10462-020-09938-y
  • 54. Ingy Elsayed-Aly, Suda Bharadwaj, Christopher Amato, Rüdiger Ehlers, Ufuk Topcu, Lu Feng, 2021, Safe Multi-Agent Reinforcement Learning via Shielding, Autonomous Agents and Multiagent Systems, https://doi.org/10.48550/arXiv.2101.11196
  • 55. Alfredo V. Clemente, Humberto N. Castejon, Arjun Chandra, 2017, Efficient Parallel Methods for Deep Reinforcement Learning, Cornell University, https://doi.org/10.48550/arXiv.1705.04862
  • 56. Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, Alessandro De Maria, Vedavyas Panneershelvam, Mustafa Suleyman, Charles Beattie, Stig Petersen, Shane Legg, Volodymyr Mnih, Koray Kavukcuoglu, David Silver, 2015, Massively Parallel Methods for Deep Reinforcement Learning, the Deep Learning Workshop, International Conference on Machine Learning, https://doi.org/10.48550/arXiv.1507.04296
Notes
Record developed with funds from the Polish Ministry of Education and Science (MEiN), agreement no. SONP/SP/546092/2022, under the "Społeczna odpowiedzialność nauki" (Social Responsibility of Science) programme, module: popularisation of science and promotion of sport (2022-2023).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-f7fa1226-b921-46d5-a8fc-f7e08bdb86d3