Title variants
Publication languages
Abstracts
In many popular, as well as scientific, discourses it is suggested that the “massive” use of Artificial Intelligence, including Machine Learning, and the reaching of the point of “singularity” through so-called Artificial General Intelligence (AGI) and Artificial Super-Intelligence (ASI), will completely exclude humans from decision making, resulting in total dominance of machines over the human race. In terms of manufacturing systems, this would mean that intelligent and total automation will be achieved (once humans are excluded). The hypothesis presented in this paper is that there is a limit to AI/ML autonomy capacity and, more concretely, that ML algorithms will not be able to become totally autonomous and, consequently, that the human role will remain indispensable. In this context, the authors introduce the notion of the manufacturing singularity and an intelligent machine architecture towards the manufacturing singularity, arguing that the intelligent machine will always be human dependent and that, concerning manufacturing, the human will remain at the centre of Cyber-Physical Systems (CPS) and of Industry 4.0 (I4.0). The methodology supporting this argument is inductive, similar to the methodology applied in a number of texts found in the literature, and is based on the computational requirements of inductive inference based machine learning. The argumentation is supported by several experiments that demonstrate the role of the human within the process of machine learning. Based on these considerations, a generic architecture of an intelligent CPS, with embedded ML functional modules in multiple learning loops, is proposed in order to evaluate the use of ML functionality in the context of CPPS/CPS.
Similarly to other papers found in the literature, and due to the (informal) inductive methodology applied, which does not provide an absolute proof in favour of, or against, the hypothesis defined, the paper represents a kind of position paper. The paper is divided into two parts. In the first part, a review of argumentation from the literature, both in favour of and against the thesis on the human role in the future, is presented. In this part, the concept of the manufacturing singularity is introduced and an intelligent machine architecture towards the manufacturing singularity is presented, arguing that the intelligent machine will always be human dependent and that, concerning manufacturing, the human will remain at the centre. The argumentation is based on a phenomenon inherent to the computational machine learning paradigm, as an intrinsic feature of AI/ML: inductive inference based machine learning algorithms, whose effectiveness is conditioned by human participation. In the second part, an architecture of Cyber-Physical (Production) Systems with multiple learning loops is presented, together with a set of experiments demonstrating the indispensable human role. This part also includes a discussion of the problem from the manufacturing community's point of view on the future of the human role in Industry 4.0 as the environment for advanced AI/ML applications.
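The abstract's core argument is that inductive inference based learning only works to the extent that a human supplies labeled evidence. A minimal illustrative sketch of this dependence (not taken from the paper; the threshold concept class, function names, and the specific target value are hypothetical), in which the learner can narrow its hypothesis space only through labels provided by a human oracle:

```python
import random

def human_oracle(x, true_threshold=0.6):
    """Stands in for the human: only the human knows the target concept
    and can label examples. The learner never sees true_threshold."""
    return 1 if x >= true_threshold else 0

def learn_threshold(n_examples, oracle, rng):
    """Inductive learner: infers a threshold hypothesis consistent with
    the human-labeled examples."""
    labeled = [(x, oracle(x)) for x in (rng.random() for _ in range(n_examples))]
    positives = [x for x, y in labeled if y == 1]
    # Without any human-provided labels the hypothesis space cannot be narrowed.
    if not positives:
        return None
    return min(positives)  # tightest hypothesis consistent with the labels

rng = random.Random(0)
h = learn_threshold(200, human_oracle, rng)
```

With enough human-labeled examples the inferred threshold `h` approaches the true concept; with none, the learner returns nothing at all, which is the point the paper's experiments make about the human remaining in the loop.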
Journal
Year
Volume
Pages
161–184
Physical description
Bibliography: 69 items, figures.
Authors
author
- University of Minho, School of Engineering, Department of Production and Systems Engineering, Portugal, putnikgd@dps.uminho.pt
- ALGORITMI Research Centre, Universidade do Minho, Portugal
author
- University of Minho, School of Engineering, Department of Information Systems, Portugal
- ALGORITMI Research Centre, Universidade do Minho, Portugal
author
- ALGORITMI Research Centre, Universidade do Minho, Portugal
author
- Polytechnic Institute of Cávado and Ave, School of Technology, Portugal
Bibliography
- [1] Wikipedia, 2020, Existential risk from artificial general intelligence, https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence
- [2] URBAN T., 2015, The AI Revolution: the Road to Superintelligence, Wait But Why, https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html.
- [3] BOSTROM N., 2014, Superintelligence: Paths, Dangers, Strategies, Oxford University Press.
- [4] CELLAN-JONES R., 2014, Stephen Hawking Warns Artificial Intelligence Could End Mankind, BBC News, https://www.bbc.com/news/technology-30290540.
- [5] MUSK E., 2017, Elon Musk at National Governors Association, 2017 Summer Meeting, https://www.cspan.org/video/?431119-6/elon-musk-addresses-nga&start=5049.
- [6] RUSSELL S., NORVIG P., 2016, Artificial Intelligence: A Modern Approach, 3rd edition, Prentice Hall.
- [7] GOERTZEL B., PENNACHIN C., (Eds.), 2007, Artificial General Intelligence, New York, Springer.
- [8] YAMPOLSKIY R.V., 2015, Artificial Superintelligence: a Futuristic Approach, CRC Press.
- [9] KURZWEIL R., 2005, The Singularity is Near: When Humans Transcend Biology, Penguin.
- [10] TEGMARK M., 2017, Life 3.0: Being Human in the Age of Artificial Intelligence, Alfred A. Knopf.
- [11] EDEN A. H., MOOR J.H., SØRAKER J.H., STEINHART E., (Eds.), 2012, Singularity Hypotheses: A Scientific and Philosophical Assessment, Springer.
- [12] MÜLLER V.C., BOSTROM N., 2016, Future Progress in Artificial Intelligence: A Survey of Expert Opinion, Fundamental issues of artificial intelligence, Springer, 553–571.
- [13] NILSSON N.J., 2005, Human-Level Artificial Intelligence? Be serious!, AI magazine, 26/4, 68–75.
- [14] BOSTROM N., YUDKOWSKY E., 2014, The Ethics of Artificial Intelligence, Frankish K., Ramsey W.M., (Eds.), The Cambridge Handbook of Artificial Intelligence, Cambridge University Press, 1, 316–334.
- [15] LIU H.Y., 2018, The Power Structure of Artificial Intelligence, Law, Innovation and Technology, 10/2, 197–229.
- [16] GOERTZEL B., 2013, The Structure of Intelligence: A New Mathematical Model of Mind, Springer Science & Business Media.
- [17] GOERTZEL B., 2014, Artificial General Intelligence: Concept, State of the Art, and Future Prospects, Journal of Artificial General Intelligence, 5/1, 1–48.
- [18] TURCHIN A., DENKENBERGER D., 2020, Classification of Global Catastrophic Risks Connected with Artificial Intelligence, AI & SOCIETY, 35/1, 147–163.
- [19] YUDKOWSKY E., 2008, Artificial Intelligence As a Positive and Negative Factor in Global Risk., Nick Bostrom and Milan M. Ćirković (Eds.) Global Catastrophic Risks, 308–345.
- [20] BOSTROM N., 2002, Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards, Journal of Evolution and technology, Vol. 9/1.
- [21] TORRES P., 2019, Existential Risks: a Philosophical Analysis, Inquiry, 1–26.
- [22] BEARD S., ROWE T., FOX J., 2020, An Analysis and Evaluation of Methods Currently Used to Quantify the Likelihood of Existential Hazards, Futures, 115, 102469.
- [23] BAUM S.D., 2020, Quantifying the Probability of Existential Catastrophe: A reply to Beard et al. Futures, 123, 102608.
- [24] MÜLLER V.C., 2014, Risks of General Artificial Intelligence (Editorial), Journal of Experimental & Theoretical Artificial Intelligence, 26/3, 297–301.
- [25] SOTALA K., YAMPOLSKIY R.V., 2015, Responses to Catastrophic AGI Risk: a Survey, Physica Scripta, 90/1, 018001.
- [26] YAMPOLSKIY R.V., SPELLCHECKER M.S., 2016, Artificial Intelligence Safety and Cybersecurity: A Timeline of AI Failures, arXiv preprint arXiv:1610.07997.
- [27] TORRES P., 2019, The Possibility and Risks of Artificial General Intelligence, Bulletin of the Atomic Scientists, 75/3, 105–108.
- [28] ĆIRKOVIĆ M. M., 2015, Linking Simulation Argument to the AI risk, Futures, 72, 27–31.
- [29] MILLER J.D., FELTON D., 2017, The Fermi Paradox, Bayes’ Rule, and Existential Risk Management, Futures, 86, 44–57.
- [30] PISTONO F., YAMPOLSKIY R.V., 2016, Unethical Research: How to Create a Malevolent Artificial Intelligence, https://arxiv.org/abs/1605.02817.
- [31] CRITCH A., KRUEGER D., 2020, AI Research Considerations for Human Existential Safety (ARCHES), arXiv Preprint arXiv:2006.04948.
- [32] CALO R., 2017, Artificial Intelligence Policy: a Primer and Roadmap. UCDL Rev., 51, 399.
- [33] WOGU I.A.P., 2017, Artificial Intelligence, Alienation and Ontological Problems of Other Minds: A Critical Investigation Into the Future of Man and Machines, 2017 International Conference on Computing Networking and Informatics (ICCNI), IEEE, 1–10.
- [34] BOYD M., WILSON N., 2020, Catastrophic Risk from Rapid Developments in Artificial Intelligence, Policy Quarterly, 16/1, 53–61.
- [35] ČERKA P., GRIGIENĖ J., SIRBIKYTĖ G., 2015, Liability for Damages Caused by Artificial Intelligence, Computer Law & Security Review, 31/3, 376–389.
- [36] RUSSELL S., DEWEY D., TEGMARK M., 2015, Research Priorities for Robust and Beneficial Artificial Intelligence, Ai Magazine, 36/4, 105–114.
- [37] CASTEL J.G., CASTEL M.E., 2016, The Road to Artificial Super-Intelligence: Has International Law a Role to Play? Canadian Journal of Law and Technology, 14/1.
- [38] Future of Life Institute, 2015, An Open Letter – Research Priorities for Robust and Beneficial Artificial Intelligence, Future of Life Institute, https://futureoflife.org/ai-open-letter/.
- [39] European Parliament, 2018, Should We Fear Artificial Intelligence, European Parliament – Directorate-General for Parliamentary Research Services – Scientific Foresight Unit (STOA), ISBN 978-92-846-2676-2.
- [40] AGRAWAL A., GANS J., GOLDFARB A., 2018, The Obama Administration’s Roadmap for AI Policy. Harvard Business Review. https://hbr.org/2016/12/the-obama-administrations-roadmap-for-ai-policy
- [41] VINGE V., 1993, The Coming Technological Singularity: How to Survive in the Post-Human Era. Proceedings of VISION-21 Symposium. NASA Conference Publication 10129. Westlake, Ohio. http://www.frc.ri.cmu.edu/∼hpm/book98/com.ch1/vinge.singularity.html.
- [42] BOSTROM N., 2005, A History of Transhumanist Thought. Journal of evolution and technology, 14/1, 1–25.
- [43] White House, 2018, Update From the National Science and Technology Council Select Committee on Artificial Intelligence, White House, Office of Science and Technology Policy.
- [44] HOADLEY D.S., LUCAS N.J., 2018, Artificial Intelligence and National Security, Congressional Research Service.
- [45] European Parliament, 2018, Should We Fear AI, Directorate-General for Parliamentary Research Services, Scientific Foresight Unit (STOA), EPRS_IDA(2018)614547_EN.
- [46] Nature, 2016, Anticipating Artificial Intelligence (Editorial), Nature, 532, 413.
- [47] KOCH C., 2015, Will Artificial Intelligence Surpass Our Own, Scientific American, https://www.scientificamerican.com/article/will-artificial-intelligence-surpass-our-own.
- [48] KENNEDY K., MIFSUD C., (Eds.), 2017, Artificial Intelligence – The Future of Humankind, TIME Special editions.
- [49] TAYLOR T., DORIN A., 2018, Past Visions of Artificial Futures: one Hundred and Fifty Years Under the Spectre of Evolving Machines, Artificial Life Conference Proceedings 91–98, MIT Press.
- [50] BUTLER S., 1863, Darwin Among the Machines, [to the Editor of the Press, Christchurch, New Zealand, 13 June, 1863.]. A First Year in Canterbury Settlement with Other Early Essays, 180-5. NZETC – New Zealand Electronic Texts Collection. http://nzetc.victoria.ac.nz/tm/scholarly/tei-ButFir-ButFir-f4.html.
- [51] BUTLER S., 1872, Erewhon, Edition 1974, Penguin UK.
- [52] TURING A.M., 1996, Intelligent Machinery, a Heretical Theory (c. 1951). Philosophia Mathematica, 4/3, 256–260, https://doi.org/10.1093/philmat/4.3.256.
- [53] COPELAND B.J., (Ed.), 2004, The essential Turing, Oxford University Press.
- [54] GOOD I.J., 1966, Speculations Concerning the First Ultraintelligent Machine, Advances in computers, 6, 31–88, Elsevier.
- [55] TURING A., 1950, I.–Computing Machinery and Intelligence, Mind, LIX(236), 433–460, https://doi.org/10.1093/mind/LIX.236.433
- [56] SEARLE J.R., 1980, Minds, Brains, and Programs, The Behavioral and Brain Sciences, 3, 417–457.
- [57] VINGE V., 2003, Technological Singularity, http://www8.cs.umu.se/kurser/5DV084/HT10/utdelat/vinge.pdf.
- [58] GUNNING D., AHA D.W., 2019, DARPA’s Explainable Artificial Intelligence (XAI) Program, AI Magazine. 40/2, 44-58, https://doi.org/10.1609/aimag.v40i2.2850.
- [59] NATARAJAN B.K., 2014, Machine Learning: a Theoretical Approach, Elsevier.
- [60] VALIANT L., 2013, Probably Approximately Correct: Nature’s Algorithms for Learning and Prospering in a Complex World, Basic Books (AZ).
- [61] MICLET L., 1990, Grammatical Inference, Bunke H., & Sanfeliu A., (Eds.) Syntactic and Structural Pattern Recognition – Theory and Applications, World Scientific, 237–290.
- [62] ANGLUIN D., SMITH C.H., 1983, Inductive Inference: Theory and Methods, ACM computing surveys (CSUR), 15/3, 237–269.
- [63] VALIANT L., 1984, A Theory of the Learnable, Communications of the ACM, 27/11, 1134–1142.
- [64] PUTNIK G., 1993, Application of the Inductive Learning Based on Automata Theory for Tooling Selection in Manufacturing Systems, Dr.Sci. Thesis, Mechanical Engineering Faculty, University of Belgrade, Belgrade, Serbia, (in Serbian).
- [65] PUTNIK G.D., ROSAS J.A., 1997, LEARN – A Prototype Software Tool for Machine Learning, Proceedings of the 2nd World Congress on Intelligent Manufacturing Processes and Systems, Budapest, (L. Monostori; Ed.), Springer, 587–592.
- [66] PUTNIK G.D., ROSAS J.A., 1997, Manufacturing System Simulation Model Synthesis: Towards Application of Inductive Inference, L.M. Camarinha-Matos (Ed.) Reengineering for Sustainable Industrial Production, Proceedings of OE/IEEE/IFIP International Conference on Integrated and Sustainable Industrial Production – ISIP ´97, Chapman & Hall, 259–272.
- [67] PUTNIK G.D., ROSAS J.A., 2001, Manufacturing System Design: Towards Application of Inductive Inference, Proceedings of the International Workshop on Emerging Synthesis – IWES 01, CIRP sponsored, Bled, Slovenia.
- [68] PUTNIK G.D., 2011, A Computational General Design Theory model as an interpretation of the Computational Inductive Inference, (unpublished manuscript).
- [69] DENNING P.J., DENNIS J.B., QUALITZ J.E., 1978, Machines, Languages, and Computation, Prentice-Hall.
Notes
Record developed with funds from the Ministry of Science and Higher Education (MNiSW), agreement No. 461252, under the programme "Social Responsibility of Science", module: Popularisation of Science and Promotion of Sport (2021).
Document type
Bibliography
Identifiers
YADDA identifier
bwmeta1.element.baztech-73ee018b-4435-4513-8ef4-0d8236691588