Results found: 620
Search results
Searched in keywords: machine learning
EN
In recent years, remote sensing has made great progress thanks to automation and artificial intelligence algorithms. The new approach has revealed relationships that were invisible to a human operator and enabled an analytical description of reality that had previously relied mainly on intuition. One of the challenges of soil remote sensing is monitoring soil condition at the national scale and updating soil databases, including the boundaries of soil valuation classes. The aim of this research was to assess the possibility of using machine learning methods to classify arable soils in accordance with the current Polish soil valuation regulations, using remote sensing data and a digital elevation model (DEM). The source data were Sentinel-2 optical and Sentinel-1 radar satellite images, together with four DEM-derived products describing features relevant to soil valuation classification. Classifications were carried out with random forests and convolutional neural networks (CNN) on a selected training area under several scenarios, and the trained models were then verified on a test set. Unfortunately, the random forest models performed far worse on the test set than on the training set (70% vs 10% accuracy). The CNN models scored similarly on both sets, but their accuracy was low (40%).
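The train/test accuracy gap reported for the random forest models is a classic overfitting symptom. A minimal sketch of how such a check looks (synthetic stand-in data, not the study's Sentinel/DEM features) might be:

```python
# Hypothetical sketch: measuring the train/test accuracy gap of a random
# forest, the symptom the study reports (70% train vs 10% test accuracy).
# The features and labels below are random stand-ins for per-pixel
# Sentinel-1/2 bands plus DEM derivatives and soil valuation classes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))    # stand-in for per-pixel features
y = rng.integers(0, 5, size=300)  # stand-in for soil valuation classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

train_acc = accuracy_score(y_tr, model.predict(X_tr))
test_acc = accuracy_score(y_te, model.predict(X_te))
gap = train_acc - test_acc  # a large gap signals memorization, not learning
```

Because the labels here are pure noise, the forest memorizes the training set while the test accuracy stays near chance, reproducing the gap pattern in miniature.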
EN
The article presents an analysis of the accuracy of three popular machine learning (ML) methods - Maximum Likelihood Classifier (MLC), Support Vector Machine (SVM) and Random Forest (RF) - as a function of training sample size. The analysis consisted of classifying the content of a Landsat 8 satellite image (into 6 basic land cover classes) in 10 variants of training sample size (from 2664 to 34711 pixels), evaluating the individual results, and comparing them. For each classification variant an error matrix was developed, and from it the accuracy metrics were calculated: F1-score, precision and recall (for individual classes) as well as overall accuracy and the kappa index of agreement (for the classification as a whole). The analysis showed a stimulating effect of training sample size on classification accuracy in all analyzed cases. MLC was the most sensitive to this factor, performing best with the largest training sample and worst with the smallest, while SVM was the least sensitive, achieving the highest accuracy of all the algorithms with the smallest training sample.
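The accuracy assessment described above (error matrix plus per-class and overall metrics) can be sketched as follows; the labels are illustrative, not the study's Landsat classes:

```python
# Hedged sketch of the described accuracy assessment: build an error
# (confusion) matrix and derive per-class precision/recall/F1 plus overall
# accuracy and the kappa index of agreement. Labels are made up.
import numpy as np
from sklearn.metrics import (confusion_matrix, precision_recall_fscore_support,
                             accuracy_score, cohen_kappa_score)

y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1, 0, 2])  # reference land cover
y_pred = np.array([0, 1, 1, 1, 2, 2, 1, 1, 0, 2])  # classified result

cm = confusion_matrix(y_true, y_pred)               # the error matrix
precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred)
overall = accuracy_score(y_true, y_pred)            # overall accuracy
kappa = cohen_kappa_score(y_true, y_pred)           # agreement beyond chance
```

Kappa discounts the agreement that two random labelings would reach by chance, which is why it is reported alongside plain overall accuracy.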
3. AI-supported reasoning in physiotherapy
EN
Artificial intelligence (AI)-based clinical reasoning support systems in physiotherapy, and in particular data-driven (machine learning) systems, can be useful in making and reviewing decisions regarding functional diagnosis and formulating/maintaining/modifying a rehabilitation programme. The aim of this article is to explore the extent to which the opportunities offered by AI-based systems for clinical reasoning in physiotherapy have been exploited and where the potential for their further stimulated development lies.
EN
This review article explores the historical background and recent advances in the application of artificial intelligence (AI) in the development of radiofrequency pulses and pulse sequences in nuclear magnetic resonance spectroscopy (NMR) and imaging (MRI). The introduction of AI into this field, which traces back to the late 1970s, has recently witnessed remarkable progress, leading to the design of specialized frameworks and software solutions such as DeepRF, MRzero, and GENETICS-AI. Through an analysis of literature and case studies, this review tracks the transformation of AI-driven pulse design from initial proof-of-concept studies to comprehensive scientific programs, shedding light on the potential implications for the broader NMR and MRI communities. The fusion of artificial intelligence and magnetic resonance pulse design stands as a promising frontier in spectroscopy and imaging, offering innovative enhancements in data acquisition, analysis, and interpretation across diverse scientific domains.
EN
Motivated by applications, we consider new operator-theoretic approaches to conditional mean embedding (CME). Our present results combine a spectral analysis-based optimization scheme with the use of kernels, stochastic processes, and constructive learning algorithms. For initially given non-linear data, we consider optimization-based feature selections. This entails the use of convex sets of kernels in a construction of optimal feature selection via regression algorithms from learning models. Thus, with initial inputs of training data (for a suitable learning algorithm), each choice of a kernel K in turn yields a variety of Hilbert spaces and realizations of features. A novel aspect of our work is the inclusion of a secondary optimization process over a specified convex set of positive definite kernels, resulting in the determination of “optimal” feature representations.
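As a rough illustration of the secondary optimization over a convex set of kernels, one might search a one-parameter convex combination of two positive definite kernels and keep the mixture whose kernel ridge regression generalizes best. Everything below (data, kernel choices, regularization) is an assumption for illustration, not the authors' construction:

```python
# Illustrative sketch: optimize over the convex combination
# K_a = a*K_rbf + (1-a)*K_lin (again positive definite for a in [0,1])
# by validation error of a kernel ridge regressor.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(120, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=120)  # nonlinear target
tr, va = np.arange(80), np.arange(80, 120)            # train/validation split

def val_error(a):
    # a convex combination of positive definite kernels is positive definite
    K = a * rbf_kernel(X, X) + (1 - a) * linear_kernel(X, X)
    model = KernelRidge(alpha=0.1, kernel="precomputed").fit(K[np.ix_(tr, tr)], y[tr])
    pred = model.predict(K[np.ix_(va, tr)])
    return np.mean((pred - y[va]) ** 2)

errors = {a: val_error(a) for a in np.linspace(0, 1, 11)}
best_a = min(errors, key=errors.get)  # the "optimal" kernel in the convex set
```

Each choice of kernel weight a induces a different reproducing kernel Hilbert space and hence a different feature representation, which is the point of the optimization.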
EN
This research addresses the growing complexity and urgency of climate change’s impact on water resources in arid regions. It combines advanced climate modelling, machine learning and hydrological modelling to gain insight into temperature variations and precipitation patterns and their impact on runoff. Notably, it predicts a continuous rise in both maximum and minimum air temperatures until 2050, with minimum temperatures increasing more rapidly, and it highlights a concerning trend of decreasing basin precipitation. Sophisticated hydrological models factor in land use, vegetation and groundwater, offering a detailed and comprehensive understanding of the factors affecting water availability, including spatial variability, temporal dynamics, land use effects, vegetation dynamics, groundwater interactions and the influence of climate change. The research integrates data from advanced climate models, machine learning and real-time observations, drawing on continuously updated data from weather stations, satellites, ground-based sensors, climate monitoring networks and stream gauges for accurate basin discharge simulations (for representative concentration pathway 2.6 (RCP2.6): Nash-Sutcliffe efficiency NSE = 0.99, root mean square error RMSE = 1.1, and coefficient of determination R² = 0.95). By uniting these approaches, the study offers valuable insights for policymakers, water resource managers and local communities adapting to and managing water resources in arid regions.
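The quoted goodness-of-fit measures are standard hydrology metrics; a minimal sketch of computing NSE, RMSE and R² for a simulated versus observed discharge series (values made up) is:

```python
# Minimal sketch of the goodness-of-fit metrics quoted above, computed for
# illustrative observed vs. simulated discharge values (not the study's data).
import numpy as np

obs = np.array([3.0, 4.2, 5.1, 4.8, 3.9, 4.4])  # observed discharge
sim = np.array([3.1, 4.0, 5.0, 4.9, 4.0, 4.3])  # simulated discharge

# Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means "no better
# than predicting the observed mean"
nse = 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
rmse = np.sqrt(np.mean((obs - sim) ** 2))        # root mean square error
r2 = np.corrcoef(obs, sim)[0, 1] ** 2            # coefficient of determination
```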
EN
In mining, where production is affected by several factors, including equipment availability, reliable models are needed to accurately predict mine production and improve operational efficiency. Hence, in this study, four machine learning algorithms - artificial neural network (ANN), random forest (RF), gradient boosting regression (GBR) and decision tree (DT) - were implemented to predict mine production. Multiple Linear Regression (MLR) analysis was used as a baseline for comparison. One hundred and twenty-six (126) datasets from an open-pit gold mine were used. The developed models were evaluated and compared using the coefficient of determination (R²), mean absolute percentage error (MAPE) and variance accounted for (VAF). The study shows that, compared to the other models, the ANN model best estimates open-pit mine production: R² was 0.8003, 0.7486, 0.7519, 0.6538 and 0.6044; MAPE was 4.23%, 5.07%, 5.44%, 6.31% and 6.15%; and VAF was 79.66%, 74.69%, 74.10%, 65.16% and 60.11% for ANN, RF, GBR, DT and MLR, respectively. Overall, this study has shown that machine learning algorithms predict mine production with higher accuracy than the regression baseline.
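The evaluation protocol above can be sketched as follows; VAF is not built into scikit-learn, so a small helper is defined, and the data is synthetic rather than the mine's 126 records:

```python
# Hedged sketch of the comparison protocol: fit several regressors and score
# them with R², MAPE and VAF on a hold-out set. Data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_percentage_error

def vaf(y_true, y_pred):
    # variance accounted for, in percent
    return (1 - np.var(y_true - y_pred) / np.var(y_true)) * 100

rng = np.random.default_rng(2)
X = rng.normal(size=(126, 4))                    # stand-in operational features
y = 5 + X @ np.array([2.0, -1.0, 0.5, 0.0]) + 0.2 * rng.normal(size=126)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scores = {}
for name, model in [("RF", RandomForestRegressor(random_state=0)),
                    ("MLR", LinearRegression())]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    scores[name] = (r2_score(y_te, pred),
                    mean_absolute_percentage_error(y_te, pred) * 100,
                    vaf(y_te, pred))
```

On this deliberately linear toy target the MLR baseline wins; the study's point is that on real production data the nonlinear learners came out ahead.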
EN
Additively manufactured components often show insufficient component quality due to the formation of different defects. Defects such as porosity result in material inhomogeneity and structural integrity issues. The integration of in-process monitoring in machining processes facilitates the identification of inhomogeneity characteristics in manufacturing, which is crucial for process adaptation. The incorporation of artificial defects in components has the potential to mimic and study the behaviour of real-world defects in a more controlled way. This study highlights the potential benefits of cutting force and vibration monitoring during machining operations with the goal of providing insights into the machining behaviours and the effects of the artificially introduced defects on the process. Detection of anomalies relies on identifying changes in force profiles or vibration patterns that might indicate the interaction between the tool and the defect. Machine learning algorithms were used to process and interpret the collected data. The algorithms are trained to recognize patterns, anomalies, or deviations from expected behaviours, which can aid in evaluating the effect of detected defects on the machining process and the resultant component quality. The main objective of this study is to contribute to enhancing quality control of machining processes for inhomogeneous materials.
EN
The study sought to use computer techniques to detect selected psychological traits from the character of a person's handwriting and to evaluate the effectiveness of the resulting software. Digital image processing and deep neural networks were used. The work is complex and multidimensional, and the authors wanted to demonstrate the feasibility of such a topic using image processing techniques, neural networks and machine learning. The main studies that allowed the attribution of psychological traits were based on two models known from the literature, KAMR and DA. The implemented evaluation algorithms allowed the subjects to be evaluated and psychological traits to be assigned to them. The DA model turned out to be more effective than the KAMR model.
EN
Growing awareness and legislative requirements related to the energy efficiency of water distribution systems, combined with ageing water supply infrastructure and water stress, drive the search for solutions that support more effective control and management of technical infrastructure. Achieving the standard of smart or intelligent water supply systems at every level of this key area remains an open issue both domestically and abroad. This also applies to the microscale of water supply networks, that is, to water consumers and the use of smart water meters with integrated machine learning algorithms. This article presents the results of research on the implementation of a short-term water consumption prediction model with anomaly detection for multi-family residential buildings. The prediction of water consumption, based on high-frequency measurements and deep neural networks, achieved a prediction error below 3.0%. Anomaly detection, built on the underlying prediction model, reached an anomaly detection effectiveness of up to 97.3%.
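One common way to build anomaly detection on top of a baseline forecast, sketched here under assumed details (the authors' actual predictor is a deep network), is to flag hours whose forecast residual exceeds a threshold learned from the residual distribution:

```python
# Illustrative residual-threshold anomaly detector for metered water
# consumption. The "forecast" here is the known synthetic daily cycle;
# in the described system it would be the trained prediction model.
import numpy as np

rng = np.random.default_rng(3)
hours = np.arange(24 * 7)                               # one week, hourly
normal = 10 + 4 * np.sin(2 * np.pi * hours / 24)        # daily demand cycle
observed = normal + 0.3 * rng.normal(size=hours.size)   # metered values
observed[50] += 8.0                                     # injected leak/anomaly

forecast = normal                                        # stand-in predictor
residuals = np.abs(observed - forecast)
threshold = residuals.mean() + 3 * residuals.std()       # 3-sigma rule
anomalies = np.flatnonzero(residuals > threshold)        # flagged hours
```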
EN
This paper researches various modelling approaches for website-related predictions, offering an overview of the field. With the ever-expanding landscape of the World Wide Web, there is an increasing need for automated methods to categorize websites. This study examines an array of prediction tasks, including website categorization, web navigation prediction, malicious website detection, fake news website detection, phishing website detection, and evaluation of website aesthetics.
EN
The article presents a statistical analysis of long-term data (hourly values of electricity demand) from the Polish National Power System (NPS) and an analysis of the possibility of using a self-organizing artificial neural network (Self-Organizing Map) to group the daily profiles of electricity demand in the NPS. The article concludes with a summary and conclusions from the conducted statistical analyses and from the studies on applying SOM to cluster electricity demand profiles.
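A minimal NumPy sketch of the SOM idea, mapping synthetic 24-hour demand profiles onto a small one-dimensional grid of prototype profiles (not the authors' configuration), might look like:

```python
# Minimal self-organizing map: each grid node holds a prototype 24-hour
# profile; the best matching unit (BMU) and its neighbours are pulled
# toward each presented profile. Data is synthetic.
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(24)
# two synthetic profile families: morning-peaking and evening-peaking demand
morning = 1 + np.exp(-((t - 8) ** 2) / 8) + 0.05 * rng.normal(size=(30, 24))
evening = 1 + np.exp(-((t - 19) ** 2) / 8) + 0.05 * rng.normal(size=(30, 24))
profiles = np.vstack([morning, evening])

n_nodes, sigma = 4, 1.0
weights = rng.normal(1.0, 0.1, size=(n_nodes, 24))       # prototype profiles
for epoch in range(50):
    lr = 0.5 * (1 - epoch / 50)                          # decaying learning rate
    for x in profiles[rng.permutation(len(profiles))]:
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
        # Gaussian neighbourhood on the 1-D grid around the BMU
        h = np.exp(-((np.arange(n_nodes) - bmu) ** 2) / (2 * sigma ** 2))
        weights += lr * h[:, None] * (x - weights)

labels = np.array([np.argmin(np.linalg.norm(weights - x, axis=1)) for x in profiles])
```

After training, profiles of the same shape land on the same or adjacent grid nodes, which is the grouping effect the article exploits.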
EN
Background: Continuous software engineering practices are currently considered state of the art in Software Engineering (SE). Recently, this interest in continuous SE has extended to ML system development as well, primarily through MLOps. However, little is known about continuous SE in ML development outside the specific continuous practices present in MLOps. Aim: In this paper, we explored continuous SE in ML development more generally, outside the specific scope of MLOps. We sought to understand what challenges organizations face in adopting all the 13 continuous SE practices identified in existing literature. Method: We conducted a multiple case study of organizations developing ML systems. Data from the cases was collected through thematic interviews. The interview instrument focused on different aspects of continuous SE, as well as the use of relevant tools and methods. Results: We interviewed 8 ML experts from different organizations. Based on the data, we identified various challenges associated with the adoption of continuous SE practices in ML development. Our results are summarized through 7 key findings. Conclusion: The largest challenges we identified seem to stem from communication issues. ML experts seem to continue to work in silos, detached from both the rest of the project and the customers.
EN
Background: Continuous modifications, suboptimal software design practices, and stringent project deadlines contribute to the proliferation of code smells. Detecting and refactoring these code smells is pivotal to maintaining complex and essential software systems. Neglecting them may lead to future software defects, rendering systems challenging to maintain and eventually obsolete. Supervised machine learning techniques have emerged as valuable tools for classifying code smells without needing expert knowledge or fixed threshold values. Classifier performance can be further enhanced through effective feature selection techniques and the optimization of hyperparameter values. Aim: The performance of multiple machine learning classifiers is improved by fine-tuning their hyperparameters using various types of meta-heuristic algorithms, including swarm-intelligence-, physics-, math- and biology-based methods. Their performance measures are compared to find the best meta-heuristic algorithm in the context of code smell detection, and its impact is evaluated with statistical tests. Method: This study employs sixteen contemporary and robust meta-heuristic algorithms to optimize the hyperparameters of two machine learning algorithms: Support Vector Machine (SVM) and k-Nearest Neighbors (k-NN). The No Free Lunch theorem underscores that the success of an optimization algorithm in one application may not necessarily extend to others. Consequently, a rigorous comparative analysis of these algorithms is undertaken to identify the best-fit solutions for code smell detection. A diverse range of optimization algorithms has been implemented, encompassing Arithmetic, Jellyfish Search, Flow Direction, Student Psychology Based, Pathfinder, Sine Cosine, Jaya, Crow Search, Dragonfly, Krill Herd, Multi-Verse, Symbiotic Organisms Search, Flower Pollination, Teaching Learning Based, Gravitational Search, and Biogeography-Based Optimization.
Results: In the case of optimized SVM, the highest attained accuracy, AUC, and F-measure values are 98.75%, 100%, and 98.57%, respectively. Remarkably, significant increases in accuracy and AUC, reaching 32.22% and 45.11% respectively, are observed. For k-NN, the best accuracy, AUC, and F-measure values are all perfect at 100%, with noteworthy hikes in accuracy and ROC-AUC values, amounting to 43.89% and 40.83%, respectively. Conclusion: Optimized SVM exhibits exceptional performance with the Sine Cosine Optimization algorithm, while k-NN attains its peak performance with the Flower Optimization algorithm. Statistical analysis underscores the substantial impact of employing meta-heuristic algorithms for optimizing machine learning classifiers, enhancing their performance significantly. Optimized SVM excels in detecting the God Class, while optimized k-NN is particularly effective in identifying the Data Class. This innovative fusion automates the tuning process and elevates classifier performance, simultaneously addressing multiple longstanding challenges.
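As a hedged illustration of the approach, here is a much-simplified Sine Cosine Algorithm (one of the sixteen meta-heuristics listed) tuning SVM hyperparameters against cross-validated accuracy on a synthetic dataset; the bounds, population size and iteration count are arbitrary assumptions, not the study's settings:

```python
# Simplified Sine Cosine Algorithm searching (log10 C, log10 gamma) of an
# SVM; candidate positions oscillate around the best solution found so far.
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
lo_b = np.array([-2.0, -4.0])   # lower bounds for log10(C), log10(gamma)
hi_b = np.array([3.0, 1.0])     # upper bounds

def fitness(pos):
    C, gamma = 10.0 ** pos
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

rng = np.random.default_rng(5)
pop = rng.uniform(lo_b, hi_b, size=(8, 2))      # candidate solutions
best, best_fit = pop[0].copy(), fitness(pop[0])
for p in pop[1:]:
    f = fitness(p)
    if f > best_fit:
        best, best_fit = p.copy(), f

T = 15
for t in range(T):
    r1 = 2.0 * (1 - t / T)                      # exploration amplitude decays
    for i in range(len(pop)):
        r2, r3, r4 = rng.uniform(0, 2 * np.pi), 2 * rng.random(), rng.random()
        step = r1 * (np.sin(r2) if r4 < 0.5 else np.cos(r2))
        pop[i] = np.clip(pop[i] + step * np.abs(r3 * best - pop[i]), lo_b, hi_b)
        f = fitness(pop[i])
        if f > best_fit:
            best, best_fit = pop[i].copy(), f

best_C, best_gamma = 10.0 ** best
best_score = best_fit
```

The same loop works for k-NN by swapping the estimator and searching over, say, the number of neighbours; only the fitness function changes.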
EN
Technological advancements in the field of deep learning have significantly contributed to the development of voice synthesis, enabling the creation of realistic audio recordings that can mimic the individual characteristics of human voices. While this innovation opens up new possibilities in the field of speech technology, it also raises serious security concerns, especially in the context of the potential use of deepfake technology for criminal purposes. Our study focuses on assessing the impact of synthetic voices on biometric speaker verification systems in Polish and the effectiveness of detecting deepfakes with publicly available tools, considering two main approaches to voice generation: text-to-speech conversion and speech conversion. One of the main findings of our research is the confirmation that synthetic voices are capable of retaining biometric characteristics, which could allow criminals unauthorized access to protected systems or data. The analysis showed that the greater the biometric similarity between the "victim's" voice and the "criminal's" synthetic voice, the more difficult it is for verification systems to distinguish between real and fake voices. This highlights the potential threats to individual users and institutions that rely on speaker recognition technologies as a method of authentication. Our study also provides a new perspective on the differences in the effectiveness of text-to-speech conversion methods versus speech cloning. It turns out that speech cloning methods may be more effective in conveying individual biometric characteristics than text-to-speech conversion methods, posing a particular problem from the security perspective of verification systems. The results of the experiments underscore the need for further research and development in the field of biometric security to effectively counteract the use of synthetic voices for illegal activities.
Increasing awareness of potential threats and continuing work on improving speaker verification technologies are crucial for protecting against increasingly sophisticated attacks utilizing deepfake technology.
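The core verification step discussed above can be sketched as a cosine-similarity threshold test on speaker embeddings. The embeddings below are random stand-ins (real systems derive them from audio, e.g. as x-vectors), constructed so that the synthetic "deepfake" retains more of the enrolled voice's characteristics than a stranger does:

```python
# Illustrative speaker-verification decision: accept a probe embedding when
# its cosine similarity to the enrolled embedding exceeds a threshold.
# All vectors are synthetic stand-ins for voice embeddings.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(7)
enrolled = rng.normal(size=256)                       # enrolled speaker
same_speaker = enrolled + 0.3 * rng.normal(size=256)  # genuine new recording
deepfake = enrolled + 0.6 * rng.normal(size=256)      # clone keeping traits
stranger = rng.normal(size=256)                       # unrelated speaker

threshold = 0.7
sim_genuine = cosine(enrolled, same_speaker)
sim_fake = cosine(enrolled, deepfake)
sim_stranger = cosine(enrolled, stranger)
# a naive threshold rejects the stranger but accepts BOTH the genuine
# attempt and the deepfake - the threat the study describes
```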
EN
This work focused on the analysis of various gene expression-based cancer subtype classification approaches. Correctly classifying cancer subtypes, by using gene expression data to categorize them, is critical for understanding cancer pathophysiology and effectively treating cancer patients. When dealing with limited samples and high-dimensional biological data, most classifiers suffer from overfitting and reduced precision. The goal of this research is to develop a machine learning (ML) system capable of classifying human cancer subtypes based on gene expression data in cancer cells. These issues can be addressed with ML algorithms such as Transductive Support Vector Machines (TSVM), Boosting Cascade Deep Forest (BCD Forest), Enhanced Neural Network Classifier (ENNC), Deep Flexible Neural Forest (DFN Forest), Convolutional Neural Network (CNN), and Cascade Flexible Neural Forest (CFN Forest). Weighing the benefits and drawbacks of these strategies, the best of them, DFN Forest and CFN Forest, reach results of about 95%.
EN
Parkinson’s disease is associated with memory loss, anxiety, and depression. In addition to symptoms of impaired posture and rigidity, problems such as poor balance and difficulty walking can be observed. Machine learning is the field dedicated to making computers capable of learning autonomously, without being explicitly programmed. This article discusses an artificial intelligence-based approach to the diagnosis of Parkinson’s disease. The input for this system is photographs of handwriting samples from Parkinson’s disease patients. The received photos are first preprocessed using the Relief feature-selection method, which helps select characteristics for the identification of Parkinson’s disease. After that, the linear discriminant analysis (LDA) algorithm is employed to reduce the number of dimensions present in the input data. The photos are then classified via radial basis function support vector machine (SVM-RBF), k-nearest neighbors (KNN), and naive Bayes algorithms, respectively.
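A sketch of the described pipeline under assumed details (scikit-learn's digits dataset stands in for the preprocessed handwriting images, and the Relief-based selection step is omitted):

```python
# Hedged sketch: LDA dimensionality reduction followed by the three
# classifiers named above. The digits dataset is a stand-in for
# preprocessed handwriting images, not the article's data.
from sklearn.datasets import load_digits
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scores = {}
for name, clf in [("SVM-RBF", SVC(kernel="rbf")),
                  ("KNN", KNeighborsClassifier()),
                  ("NB", GaussianNB())]:
    # LDA projects to at most (n_classes - 1) = 9 discriminant directions
    pipe = make_pipeline(LinearDiscriminantAnalysis(n_components=9), clf)
    scores[name] = pipe.fit(X_tr, y_tr).score(X_te, y_te)
```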
EN
By reviewing the current state of the art, this paper opens a Special Section titled “The Internet of Things and AI-driven optimization in the Industry 4.0 paradigm”. The topics of this section are part of the broader issues of integration of IoT devices, cloud computing, big data analytics, and artificial intelligence to optimize industrial processes and increase efficiency. It also focuses on how to use modern methods (i.e. computerization, robotization, automation, machine learning, new business models, etc.) to integrate the entire manufacturing industry around current and future economic and social goals. The article presents the state of knowledge on the use of the Internet of Things and optimization based on artificial intelligence within the Industry 4.0 paradigm. The authors review the previous and current state of knowledge in this field and describe known opportunities, limitations, directions for further research, and industrial applications of the most promising ideas and technologies, considering technological, economic, and social opportunities.
EN
This paper presents a study on applying machine learning algorithms to the classification of a two-phase flow regime and its internal structures. The results of this research may be used to adjust the optimal control of air pressure and liquid flow rate in pipelines and process vessels. To achieve this goal, an artificial neural network model was built and trained using measurement data acquired from a 3D electrical capacitance tomography (ECT) measurement system. Because the set of measurement data collected to build the AI model was insufficient, a novel data augmentation approach had to be developed. The main goal of the research was to examine the adaptability of the artificial neural network (ANN) model under emergency states and measurement system errors: to test whether it could withstand unforeseen problems and still correctly predict the flow type or detect these failures, which may help avoid serious damage. Finally, its accuracy was compared to that of a fuzzy classifier based on reconstructed tomography images, described in the authors’ previous work.
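One simple augmentation strategy for measurement vectors, sketched here as an assumption rather than the authors' actual method, is to replicate each frame with small Gaussian perturbations while keeping the labels:

```python
# Hedged sketch of noise-based data augmentation for measurement vectors.
# A 32-electrode ECT frame yields 32*31/2 = 496 independent capacitance
# measurements; the data and label assignment below are synthetic.
import numpy as np

def augment(X, y, copies=5, noise=0.01, seed=0):
    rng = np.random.default_rng(seed)
    X_rep = np.repeat(X, copies, axis=0)          # each frame 'copies' times
    y_rep = np.repeat(y, copies)                  # labels stay aligned
    # perturb each feature proportionally to its spread across the dataset
    X_aug = X_rep + noise * X_rep.std(axis=0) * rng.normal(size=X_rep.shape)
    return np.vstack([X, X_aug]), np.concatenate([y, y_rep])

X = np.random.default_rng(6).normal(size=(20, 496))  # stand-in ECT frames
y = np.arange(20) % 3                                 # stand-in flow-regime labels
X_big, y_big = augment(X, y)
```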
EN
This article examines in depth the most recent temperature estimation techniques for lithium-ion batteries (LIBs). Temperature estimation approaches can be divided into six categories based on their modeling and calculation methods: electrochemical computational modeling, equivalent electric circuit modeling (EECM), machine learning (ML), digital analysis, direct impedance measurement, and magnetic nanoparticle-based methods. In terms of complexity, accuracy and computational cost, EECM-based approaches are feasible. Estimates produced using ML have the potential to be highly accurate, usable and adaptable. However, none of these methods can run in real time on a low-cost integrated BMS due to their high computational costs. An appropriate solution might be a hybrid strategy that combines EECM and ML.