Results found: 48

Search results
Query:
keyword: nauczanie maszynowe (machine learning)
EN
By reviewing the current state of the art, this paper opens a Special Section titled “The Internet of Things and AI-driven optimization in the Industry 4.0 paradigm”. The topics of this section are part of the broader issue of integrating IoT devices, cloud computing, big data analytics, and artificial intelligence to optimize industrial processes and increase efficiency. It also focuses on how to use modern methods (e.g. computerization, robotization, automation, machine learning, and new business models) to integrate the entire manufacturing industry around current and future economic and social goals. The article presents the state of knowledge on the use of the Internet of Things and AI-based optimization within the Industry 4.0 paradigm. The authors review past and current knowledge in this field and describe known opportunities, limitations, directions for further research, and industrial applications of the most promising ideas and technologies, considering technological, economic, and social factors.
EN
This paper presents a study on applying machine learning algorithms to the classification of a two-phase flow regime and its internal structures. The results of this research may be used in adjusting the optimal control of air pressure and liquid flow rate in pipelines and process vessels. To achieve this goal, an artificial neural network (ANN) model was built and trained using measurement data acquired from a 3D electrical capacitance tomography (ECT) measurement system. Because the set of measurement data collected to build the AI model was insufficient, a novel approach to data augmentation had to be developed. The main goal of the research was to examine the adaptability of the ANN model in the case of emergency states and measurement system errors. Another goal was to test whether it could resist unforeseen problems and correctly predict the flow type or detect these failures, which may help to avoid serious damage. Finally, the model's accuracy was compared to that of a fuzzy classifier based on reconstructed tomography images, presented in the authors' previous work.
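The augmentation idea described above can be sketched in a few lines: when the measured set is too small, jittered copies of each sample enlarge the training set before fitting a neural classifier. This is a minimal illustration with synthetic data; the 16-value "capacitance frames" and the two flow-regime classes are hypothetical stand-ins, not the ECT data from the paper.

```python
# Sketch: augmenting scarce sensor data with noisy copies before training an
# MLP classifier. All data here is synthetic; the "flow regimes" are
# hypothetical class labels, not the authors' measurements.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Pretend each "measurement frame" is a 16-value capacitance vector; the two
# regimes differ in their mean pattern.
n_real = 40  # deliberately small: the real dataset was insufficient too
X_real = np.vstack([
    rng.normal(0.2, 0.05, size=(n_real, 16)),
    rng.normal(0.8, 0.05, size=(n_real, 16)),
])
y_real = np.array([0] * n_real + [1] * n_real)

def augment(X, y, copies=5, noise=0.02, rng=rng):
    """Create jittered copies of each sample (a simple augmentation scheme)."""
    Xs = [X] + [X + rng.normal(0, noise, X.shape) for _ in range(copies)]
    return np.vstack(Xs), np.tile(y, copies + 1)

X_aug, y_aug = augment(X_real, y_real)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_aug, y_aug)
acc = clf.score(X_real, y_real)
```

Real augmentation schemes for tomography data would respect the sensor geometry rather than add isotropic noise; this only shows the train-on-enlarged-set mechanic.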
EN
Purpose: The aim of the article is to review the literature on the risks and opportunities of implementing Industry 4.0 - Artificial Intelligence solutions in the chemical industry. Design/methodology/approach: The review was carried out using available scientific articles, popular science publications, and media reports from the world's largest companies in the chemical industry. Findings: The analysis indicates that there are more benefits than risks arising from the implementation of Artificial Intelligence solutions in the chemical industry. Research limitations/implications: The frequent lack of specific economic indicators makes it difficult to clearly assess the implementation potential of a specific solution for other companies in the chemical industry. Social implications: The implementation of AI in chemical industry companies can reduce environmental pollution and raw material consumption, and optimize production processes. Originality/value: The article, based on real data, is aimed at middle and senior management of companies in the chemical industry, presenting the advantages and disadvantages of implementing AI solutions in the chemical industry.
EN
A Convolutional Neural Network (CNN) is a special type of Artificial Neural Network which takes input in the form of an image. Like other Artificial Neural Networks, it consists of weights that are estimated during training, neurons (activation functions), and an objective (loss function). CNNs find various applications in image recognition, semantic segmentation, object detection, and localization. The present work deals with the prediction of the welding efficiency of Friction Stir Welded joints on the basis of microstructure images, with training carried out on 3000 microstructure images and testing on a further 300 microstructure images. The loss function decreased for both the training and testing sets with the increasing number of epochs. The obtained results showed an accuracy of 80% on the validation dataset.
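The building blocks the abstract names (convolution, activation, pooling) can be shown in a few lines of NumPy. This is a forward pass with a random, untrained filter on a random image, not the trained welding-efficiency model from the paper.

```python
# Minimal NumPy sketch of CNN building blocks: convolution, ReLU, max pooling.
# The weights are random stand-ins, not a trained model.
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def maxpool2d(x, size=2):
    """Non-overlapping max pooling; trims edges that don't fit a full window."""
    H, W = x.shape
    H, W = H - H % size, W - W % size
    x = x[:H, :W]
    return x.reshape(H // size, size, W // size, size).max(axis=(1, 3))

rng = np.random.default_rng(0)
image = rng.random((28, 28))          # stand-in for a microstructure image
kernel = rng.standard_normal((3, 3))  # one untrained 3x3 filter

features = maxpool2d(relu(conv2d(image, kernel)))
```

A 28×28 input with a 3×3 filter yields a 26×26 map, which 2×2 pooling reduces to 13×13; stacking such layers and ending with a dense classifier gives the architecture the abstract describes.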
EN
Background: The purpose of this article is to present the developed AdaBoost.M1 based on Ant Colony Optimization (hereby referred to as ACOBoost.M1 throughout the study) to classify the risk of delay in the pharmaceutical supply chain. This study investigates one research hypothesis, namely, that ACOBoost.M1 can be used to predict the risk of delay in the supply chain and is characterized by a high prediction performance. Methods: We developed a machine learning algorithm based on Ant Colony Optimization (ACO). The meta-heuristic ACO algorithm is used to find the best hyperparameters for AdaBoost.M1 to classify the risk of delay in the pharmaceutical supply chain. The study used a dataset from a 4PL logistics service provider. Results: The results indicate that ACOBoost.M1 may predict the risk of delay in the supply chain and is characterized by a high prediction performance. Conclusions: The present findings highlight the significance of applying machine learning algorithms, such as the AdaBoost.M1 model with Ant Colony Optimization for hyperparameter tuning, to manage the risk of delays in the pharmaceutical supply chain. These findings not only showcase the potential of machine learning for enhancing supply chain efficiency and robustness but also set the stage for future research. Further exploration could include investigating other optimization techniques, machine learning models, and their applications across various industries and sectors.
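The combination described above, a pheromone-guided search over AdaBoost hyperparameters, can be sketched in a heavily simplified form: keep a pheromone weight per discrete hyperparameter option, sample configurations proportionally to pheromone, score them by cross-validation, then evaporate and deposit. This is a toy stand-in for ACOBoost.M1 (real ACO also uses heuristic information and ant populations), and the synthetic data replaces the 4PL dataset.

```python
# Simplified pheromone-guided hyperparameter search for AdaBoost, a toy
# stand-in for the paper's ACOBoost.M1. Data is synthetic, not the 4PL dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

grid = {"n_estimators": [25, 50, 100], "learning_rate": [0.1, 0.5, 1.0]}
pheromone = {k: np.ones(len(v)) for k, v in grid.items()}
rho = 0.3  # evaporation rate

best_score, best_params = -np.inf, None
for _ in range(10):  # 10 "ants"
    # sample one option per hyperparameter, proportional to pheromone
    idx = {k: rng.choice(len(v), p=pheromone[k] / pheromone[k].sum())
           for k, v in grid.items()}
    params = {k: grid[k][i] for k, i in idx.items()}
    score = cross_val_score(AdaBoostClassifier(random_state=0, **params),
                            X, y, cv=3).mean()
    # evaporate, then deposit pheromone on the chosen options
    for k in grid:
        pheromone[k] *= (1 - rho)
        pheromone[k][idx[k]] += score
    if score > best_score:
        best_score, best_params = score, params
```

Over iterations the pheromone concentrates on options that scored well, biasing later "ants" toward promising configurations, which is the core mechanic the paper exploits for tuning.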
EN
The influence of artificial intelligence (AI) in smart cities has resulted in enhanced efficiency, accessibility, and improved quality of life. However, this integration has brought forth new challenges, particularly concerning data security and privacy due to the widespread use of Internet of Things (IoT) technologies. The article aims to provide a classification of scientific research relating to artificial intelligence in smart city issues and to identify emerging directions of future research. A systematic literature review based on bibliometric analysis of the Scopus and Web of Science databases was conducted for the study. The research query was TITLE-ABS-KEY (“smart city” AND “artificial intelligence”) in the case of Scopus and TS = (“smart city” AND “artificial intelligence”) in the case of the Web of Science database. For the purpose of the analysis, 3101 publication records were qualified. Based on bibliometric analysis, seven research areas were identified: safety, living, energy, mobility, health, pollution, and industry. Urban mobility has seen significant innovations through AI applications, such as autonomous vehicles (AVs), electric vehicles (EVs), and unmanned aerial vehicles (UAVs), yet security concerns persist, necessitating further research in this area. AI’s impact extends to energy management and sustainability practices, demanding standardised regulations to guide future research in renewable energy adoption and developing integrated local energy systems. Additionally, AI’s applications in health, environmental management, and the industrial sector require further investigation to address data handling, privacy, security, and societal implications, ensuring responsible and sustainable digitisation in smart cities.
EN
Raw data processing is a key business operation. Business-specific rules determine how the raw data should be transformed into business-required formats. When source data continuously changes its formats and has keying errors and invalid data, the effectiveness of the data transformation is a big challenge. The conventional data extraction and transformation technique produces a delay in handling such data because of continuous fluctuations in data formats and requires continuous development of a business rule engine. The best business rule engines require near real-time detection of business rules and data transformation mechanisms utilizing machine learning classification models. Since data is combined from numerous sources and older systems, it is challenging to categorize and cluster the data and apply suitable business rules to turn raw data into the business-required format. This paper proposes a methodology for designing ensemble machine learning techniques and approaches for classifying and segmenting registered numbers of registered title records to choose the most suitable business rule that can convert the registered number into the format the business expects, allowing businesses to provide customers with the most recent data in less time. This study evaluates the suggested model by gathering sample data and analyzing classification machine learning (ML) models to determine the relevant business rule. Experimentation employed Python, R, SQL stored procedures, Impala scripts, and Datameer tools.
EN
Skin disorders, a prevalent cause of illnesses, may be identified by studying their physical structure and the history of the condition. Currently, skin diseases are diagnosed using invasive procedures such as clinical examination and histology. These examinations are quite effective and beneficial. This paper describes an evolutionary model for skin disease classification and detection based on machine learning and image processing. The model integrates image preprocessing, image augmentation, segmentation, and machine learning algorithms. The experimental investigation makes use of a dermatology data set. The model employs three machine learning methods for image categorization and detection: the support vector machine (SVM), k-nearest neighbors (KNN), and random forest algorithms. The suggested methodology is beneficial for the accurate identification of skin disease using image analysis. The SVM algorithm achieved an accuracy of 98.8%. The KNN algorithm achieved a sensitivity of 91%. The specificity of KNN was 99%.
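Comparing the three named classifiers on an image dataset takes only a few lines with scikit-learn. Here the small public digits dataset stands in for dermatology images (which are not available in this listing); the hyperparameters are illustrative defaults, not the paper's settings.

```python
# The three classifiers named in the abstract, compared on a small public image
# dataset (scikit-learn's digits) as a stand-in for dermatology images.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "SVM": SVC(kernel="rbf"),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
}
accuracy = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
            for name, m in models.items()}
```

On real skin images the pipeline would first apply the preprocessing, augmentation, and segmentation steps the abstract lists, and report sensitivity/specificity per class rather than plain accuracy.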
EN
Birth defects affect 1 to 3 percent of the population and are mostly detected in pregnant women through double, triple, and quadruple testing. Ultrasonography helps to discover and define such anomalies in fetuses. Ultrasound pictures of nuchal translucency (NT) are routinely used to detect genetic disorders in fetuses. The NT area lacks identifiable local behaviors, and detection algorithms are required to classify the fetal head. On the other hand, explicit identification of other body parts comes at a higher cost in terms of annotations, implementation, and analysis. In circumstances of ambiguous head placement or non-standard head-NT relationships, it may potentially cause cascading errors. In this research work, a linear contour size filter is used to decrease noise from the image, and then the picture is scaled. Then, a novel hybrid maxpool matrix histogram analysis (HMMHA) is proposed to enhance the initiation and progression. The training and assessment were conducted using a dataset of 33 ultrasound pictures. Extensive testing shows that the direct method reliably identifies and measures NT. The suggested model may assist doctors in making decisions about pregnancies with fetal growth restriction, particularly for patients who have nuchal translucency or congenital anomalies and do not require induced labor due to these abnormalities. The performance of the proposed technique is analyzed in terms of error rate, sensitivity, Matthews correlation coefficient (MCC), accuracy, precision, recall, and F1-score. The error rate of the proposed model is 28.21% and it is found to be better when compared with the conventional approaches. Finally, the error prediction is compared with the existing models obtained from the medical dataset of pregnant women to identify fetal abnormality positions.
EN
Snow Water Equivalent (SWE) is one of the most critical variables in mountainous watersheds and needs to be considered in water resources management plans. As direct measurement of SWE is difficult and empirical equations are highly uncertain, the present study aimed to obtain accurate predictions of SWE using machine learning methods. Five standalone algorithms, namely tree-based (M5P and random tree (RT)), rule-based (M5Rules (M5R)) and lazy learners (IBK and Kstar), and five novel hybrid bagging-based algorithms (BA) combining bagging with the standalone models (i.e., BA-M5P, BA-RT, BA-IBK, BA-Kstar and BA-M5R) were developed. A total of 2550 snow measurements were collected from 62 snow and rain-gauge stations located in 13 mountainous provinces in Iran. Data including ice beneath the snow (IBS), fresh snow depth (FSD), length of snow sample (LSS), snow density (SDN), snow depth (SD) and time of falling (TS) were measured. Based on the Pearson correlation between inputs (IBS, FSD, LSS, SDN, SD and TS) and output (SWE), six different input combinations were constructed. The dataset was separated into two groups (70% and 30% of the data) by a cross-validation technique for model construction (training dataset) and model evaluation (testing dataset), respectively. Different visual and quantitative metrics (e.g., Nash–Sutcliffe efficiency (NSE)) were used for evaluating model accuracy. It was found that SD had the highest correlation with SWE in Iran (r=0.73). In general, the bootstrap aggregation (i.e., bagging) hybrid machine learning methods (BA-M5P, BA-RT, BA-IBK, BA-Kstar and BA-M5R) increased prediction accuracy when compared to each standalone method. While BA-M5R had the highest prediction accuracy (NSE=0.83) (considering all six input variables), BA-IBK could predict SWE with high accuracy (NSE=0.71) using only two input variables (SD and LSS). Our findings demonstrate that SWE can be accurately predicted through a variety of machine learning methods using easily measurable variables and may be useful for applications in other mountainous regions across the globe.
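The study's core comparison, a standalone tree learner versus its bagging hybrid scored with Nash-Sutcliffe efficiency, can be sketched as below. Synthetic regression data stands in for the Iranian snow measurements, and scikit-learn's decision tree replaces the Weka-style M5P/RT learners.

```python
# Standalone tree vs. its bagging hybrid, scored with Nash-Sutcliffe
# efficiency (NSE). Synthetic data; scikit-learn tree stands in for M5P/RT.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean predictor."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Six features mirror the six input variables of the study (IBS, FSD, ...)
X, y = make_regression(n_samples=500, n_features=6, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

tree = DecisionTreeRegressor(random_state=0).fit(X_tr, y_tr)
bagged = BaggingRegressor(DecisionTreeRegressor(random_state=0),
                          n_estimators=50, random_state=0).fit(X_tr, y_tr)

nse_tree = nse(y_te, tree.predict(X_te))
nse_bag = nse(y_te, bagged.predict(X_te))
```

Bagging averages many trees fit on bootstrap resamples, which reduces the variance of a single tree; that variance reduction is why the hybrid BA-* models outperformed their standalone counterparts in the study.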
EN
Significant research has been done on estimating reference evapotranspiration (ET0) from limited climatic measurements using machine learning (ML) to facilitate the acquisition of ET0 values in areas with limited access to weather stations. However, the spatial generalizability of ET0-estimating ML models is still questionable, especially in regions with significant climatic variation like Turkey. Aiming to explore this generalizability, this study compares two ET0 modeling approaches: (1) one general model covering all of Turkey, and (2) seven regional models, one for each of Turkey’s seven regions. In both approaches, ET0 was predicted using 16 input combinations and 3 ML methods: support vector regression (SVR), Gaussian process regression (GPR), and random forest (RF). A cross-station evaluation was used to evaluate the models. Results showed that the use of regional models created using SVR and GPR methods resulted in a reduction in root mean squared error (RMSE) in comparison with the general model approach. Models created using the RF method suffered from overfitting in the regional models’ approach. Furthermore, a randomization test showed that the reduction in RMSE when using these regional models was statistically significant. These results emphasize the importance of defining the spatial extent of ET0-estimating models to maintain their generalizability.
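The cross-station evaluation mentioned above can be expressed with grouped cross-validation: when the groups are weather stations, each fold holds out entire stations, so a model is always scored on stations it never saw, which probes spatial generalizability. The data and SVR settings below are illustrative, not the paper's.

```python
# Cross-station evaluation sketch: GroupKFold with stations as groups keeps a
# station's samples entirely out of the folds that train on it. Synthetic data.
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_stations, per_station = 8, 40
station = np.repeat(np.arange(n_stations), per_station)
X = rng.normal(size=(n_stations * per_station, 4))  # e.g. Tmax, Tmin, RH, wind
y = 2.0 * X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.3, len(X))

model = make_pipeline(StandardScaler(), SVR(C=10.0))
scores = cross_val_score(model, X, y, groups=station,
                         cv=GroupKFold(n_splits=4), scoring="r2")
mean_r2 = scores.mean()
```

A plain (ungrouped) k-fold would leak samples from every station into every training fold and overstate spatial generalizability, which is exactly the failure mode the cross-station design guards against.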
12
EN
The 2-D dipping dike model is often used in magnetic anomaly interpretations for mineral exploration and regional geodynamic studies. However, the conventional interpretation techniques used for modeling the dike parameters are quite challenging and time-consuming. In this study, a fast and efficient inversion algorithm based on machine learning (ML) techniques such as K-Nearest Neighbors (KNN), Random Forest (RF), and XGBoost is developed to interpret the magnetic anomalies produced by a 2-D dike body. The model parameters estimated by these methods include the depth to the top of the dike (z), half-width (d), amplitude coefficient (K), index angle (α), and origin (x0). Initially, the ML models are trained with optimized hyper-parameters on simulated datasets, and their performance is evaluated using mean absolute error (MAE), root mean squared error (RMSE), and squared correlation (R2). The applicability of the ML algorithms was demonstrated on synthetic data, including the effect of noise and nearby geological structures. The results obtained for synthetic data showed good agreement with the true model parameters. On the noise-free synthetic data, XGBoost predicts the model parameters of the dike better than KNN and RF; however, its performance decreases with increasing noise and geological complexity. Further, the validity of the ML algorithms was also tested on four field examples: (i) the Mundiyawas-Khera Copper deposit, Alwar Basin, (ii) the Pranhita–Godavari (P-G) basin, India, (iii) the Pima Copper deposit of Arizona, USA, and (iv) an iron deposit, Western Gansu province, China. The obtained results also agree well with previous studies and drill-hole data.
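The inversion recipe above (simulate anomaly profiles with a forward model, then train a regressor to map profile back to parameters) can be sketched as follows. The forward function here is a generic toy bell-shaped anomaly, not the actual 2-D dike equations, and KNN stands in for the paper's KNN/RF/XGBoost trio.

```python
# ML inversion sketch: train a regressor on simulated (parameters -> profile)
# pairs, then invert a profile back to its parameters. The forward model is a
# toy anomaly, not the real dike formula.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
xs = np.linspace(-50, 50, 101)  # observation points along the profile

def forward(depth, half_width, amp):
    """Toy bell-shaped anomaly whose shape depends on the model parameters."""
    return amp * half_width / ((xs ** 2 + (depth + half_width) ** 2) ** 0.75)

# Simulated training set: random parameters -> profiles
params = np.column_stack([rng.uniform(5, 20, 2000),     # depth z
                          rng.uniform(1, 10, 2000),     # half-width d
                          rng.uniform(50, 200, 2000)])  # amplitude K
profiles = np.array([forward(*p) for p in params])

model = KNeighborsRegressor(n_neighbors=5).fit(profiles, params)

# Invert one unseen profile
true = np.array([12.0, 4.0, 120.0])
estimate = model.predict(forward(*true).reshape(1, -1))[0]
```

The attraction of the approach is speed at inference time: once trained on simulations, the regressor inverts a field profile in one prediction instead of an iterative optimization.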
EN
Purpose: The main objective of this article is to identify areas for optimizing marketing communication via artificial intelligence solutions. Design/methodology/approach: To realise the assumptions made, an analysis and evaluation of exemplary implementations of AI systems in marketing communications was carried out. The case study method was chosen to achieve the research objective. The discussion analyses the considerations on the use of AI undertaken in the world literature, as well as three different practical projects. Findings: AI can contribute to the optimisation and personalisation of communication with the customer. Its application generates multifaceted benefits for both sides of the market exchange. Achieving them, however, requires a good understanding of this technology and the precise setting of objectives for its implementation. Research limitations/implications: The article contains a preliminary study. In the future, it is planned to conduct additional quantitative and qualitative research. Practical implications: The conclusions of the study can serve to better understand the benefits of using artificial intelligence in communication with the consumer. The results of the research can be used in market practice and can also serve as an inspiration for further studies of this topic. Originality/value: The article reveals the specifics of artificial intelligence in relation to business activities and, in particular, communication with the buyer. The research used examples from business practice.
EN
To improve the R&D process by reducing duplicated bug tickets, we used the idea of composing a BERT encoder into a Siamese network to create a system for finding similar existing tickets. We proposed several different methods of generating artificial ticket pairs to augment the training set. Two phases of training were conducted. The first showed that only approximately 9% of pairs were correctly identified as certainly similar, and only 48% of the test samples were found to be pairs of similar tickets. With fine-tuning, we improved that result to 81%, proving the concept viable for further improvement.
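The duplicate-detection mechanic (encode each ticket as a vector, then flag pairs whose similarity exceeds a threshold) can be shown with a deliberately lightweight stand-in for the Siamese BERT encoder: TF-IDF vectors plus cosine similarity. The tickets and the threshold below are made up for illustration.

```python
# Lightweight stand-in for a Siamese-BERT similarity scorer: TF-IDF vectors
# plus cosine similarity flag candidate duplicate tickets. Example tickets
# and threshold are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

tickets = [
    "App crashes on startup after installing the latest update",
    "App crashes at startup after the latest update is installed",
    "Dark mode setting is not saved between sessions",
]
vec = TfidfVectorizer().fit(tickets)
sims = cosine_similarity(vec.transform(tickets))

# Pairs above a similarity threshold are duplicate candidates
threshold = 0.5
duplicates = [(i, j) for i in range(len(tickets))
              for j in range(i + 1, len(tickets)) if sims[i, j] > threshold]
```

A trained Siamese encoder replaces the TF-IDF step with learned embeddings, so paraphrases with no shared words can still score as similar; the thresholded pairwise comparison stays the same.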
EN
In this paper, the performance of the Bayesian Optimization (BO) technique applied to various problems of microwave engineering is studied. Bayesian optimization is a novel, non-deterministic, global optimization scheme that uses machine learning to solve complex optimization problems. However, each new optimization scheme needs to be evaluated to find its best application niche, as there is no universal technique that suits all problems. Here, BO was applied to different types of microwave and antenna engineering problems, including matching circuit design, multiband antenna and antenna array design, and microwave filter design. Since each of the presented problems has a different nature and characteristics, such as different scales (i.e. numbers of design variables), we address the question of the generality of BO and identify the problem areas for which the technique is or is not recommended.
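The BO loop the abstract refers to can be sketched minimally: fit a Gaussian-process surrogate to the evaluations made so far, then pick the next point by maximizing expected improvement. The 1-D objective below is a toy stand-in for a microwave figure of merit, not one of the paper's design problems.

```python
# Minimal Bayesian optimization loop: GP surrogate + expected improvement,
# minimizing a toy 1-D objective (stand-in for a microwave cost function).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(x):
    """Toy cost to minimize."""
    return np.sin(3 * x) + 0.5 * x ** 2

rng = np.random.default_rng(0)
grid = np.linspace(-2, 2, 401).reshape(-1, 1)  # candidate points
X = rng.uniform(-2, 2, 3).reshape(-1, 1)       # small initial design
y = objective(X).ravel()

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=RBF(), alpha=1e-6,
                                  normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.min()
    # expected improvement for minimization
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = grid[np.argmax(ei)].reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

x_best = float(X[np.argmin(y), 0])
```

Because every new sample costs one objective evaluation, BO suits problems such as EM-simulation-driven design where each evaluation is expensive; the surrogate trades cheap model predictions for scarce true evaluations.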
EN
Water quality monitoring and assessment have been among the world’s major concerns in recent decades. This study examines the performance of three approaches based on the integration of machine learning and feature extraction techniques to improve water quality prediction in the Western Middle Chelif plain in Algeria during 2014–2018. The most dominant Water Quality Index parameters, extracted by neuro-sensitivity analysis (NSA) and principal component analysis (PCA) techniques, were used in multilayer perceptron neural network (MLPNN), support vector regression (SVR) and decision tree regression (DTR) models. Various combinations of input data were studied and evaluated in terms of prediction performance, using statistical criteria and graphical comparisons. According to the results, the MLPNN1 model with eight input parameters gave the highest performance for both the training and validation phases (R=0.98/0.95, NSE=0.96/0.88, RMSE=11.20/15.03, MAE=7.89/10.22 and GA=1.34) when compared with the multiple linear regression, DTR and SVR models. Generally, the prediction performance of models integrated with the NSA approach is significantly improved and outperforms models coupled with the PCA dimensionality reduction method.
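One of the compared pipelines, feature extraction feeding a multilayer perceptron regressor, can be sketched with PCA (the simpler of the two extraction techniques named). Synthetic correlated data replaces the Chelif water-quality records, and the layer sizes are illustrative.

```python
# Feature-extraction + MLP pipeline sketch: standardize, reduce with PCA,
# regress with a multilayer perceptron. Synthetic data, illustrative settings.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hidden "pollution drivers" make the 12 measured parameters correlated,
# so a few principal components carry most of the signal.
latent = rng.normal(size=(400, 3))
mix = rng.normal(size=(3, 12))
X = latent @ mix + rng.normal(0, 0.1, size=(400, 12))
y = latent @ np.array([3.0, -2.0, 1.0]) + rng.normal(0, 0.1, 400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), PCA(n_components=4),
                      MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                                   random_state=0))
model.fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)
```

The study's NSA alternative ranks inputs by sensitivity rather than variance; swapping the PCA step for a column-selection step would reproduce that pipeline shape.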
17
EN
The Zambezi watershed is essential for water supply, irrigation, fishing activities, and river transport of the populations of Southern Africa. The importance and variability of these water resources make it necessary to develop studies that may help understand and manage them. Despite this need, water resources studies for this region are still scarce. Therefore, the present work aims to present a strategy for forecasting the daily water flow of the Zambezi River in the Cahora Bassa dam, located in Mozambique, an important energy producer in the country and the fourth largest dam in Africa. Historical rainfall, evaporation, and humidity records collected from 2003 to 2011 are used for training and testing a model that forecasts water flow using the Group Method of Data Handling algorithm. The results achieved were compared, through error metrics, with those of other models to prove the effectiveness of the assembled model. They revealed that the proposed model achieves a satisfactory performance for the forecast horizon and could become a helpful tool in monitoring hydrographic basins and forecasting their daily streamflow values.
EN
Nowadays, machine learning algorithms are considered a powerful tool for analyzing big and complex data due to their ability to deliver accurate and fast results. The main objective of the present study is to prove the effectiveness of the extreme gradient boosting (XGBoost) method, as well as of the employed data types, for mapping the Saharan region. To reveal the potential of XGBoost, we conducted two experiments. The first used different combinations of airborne gamma-ray spectrometry data, airborne magnetic data, Landsat 8 data and a digital elevation model. The objective was to train 9 XGBoost models in order to determine each data type's sensitivity in capturing the lithological rock classes. The second experiment compared XGBoost to deep neural networks (DNN) to display its potential against other machine learning algorithms. Compared to the existing geological map, the application of XGBoost reveals a great potential for geological mapping, as it was able to achieve a correlation score of 78%, with igneous and metamorphic rocks more easily identified than sedimentary rocks. In addition, using different data combinations reveals the utility of airborne magnetic data for discriminating some lithological units. It also reveals the potential of the apparent density, derived from airborne magnetic data, to improve the algorithm's accuracy by up to 20%. Furthermore, the second experiment indicates that XGBoost is a better choice than the DNN for the geological mapping task. The obtained predicted map shows that the XGBoost method provides an efficient tool to update existing geological maps and to produce new geological maps in regions with well-outcropped rocks.
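The first experiment's logic, training one boosted-tree model per input-data combination and comparing scores, can be sketched as below. scikit-learn's gradient boosting stands in for XGBoost, and random feature blocks replace the geophysical rasters; the "gamma"/"magnetic"/"landsat" names are hypothetical stand-ins.

```python
# One boosted-tree model per data combination, to compare each data type's
# contribution. scikit-learn's GradientBoostingClassifier stands in for
# XGBoost; random feature blocks replace the geophysical rasters.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 600
gamma = rng.normal(size=(n, 3))      # stand-in: gamma-ray channels (K, Th, U)
magnetic = rng.normal(size=(n, 2))   # stand-in: magnetic field + derivative
landsat = rng.normal(size=(n, 4))    # stand-in: Landsat 8 bands
# class label depends on the "gamma" and "magnetic" stand-ins only
y = (gamma[:, 0] + magnetic[:, 0] > 0).astype(int)

combos = {
    "gamma": gamma,
    "gamma+magnetic": np.hstack([gamma, magnetic]),
    "gamma+magnetic+landsat": np.hstack([gamma, magnetic, landsat]),
}
scores = {name: cross_val_score(GradientBoostingClassifier(random_state=0),
                                X, y, cv=3).mean()
          for name, X in combos.items()}
```

The score gap between combinations is the "sensitivity" signal the study reads off: a data type that lifts accuracy when added is discriminative for the lithological classes.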
EN
Flooding is currently the most dangerous natural hazard. It can have heavy human and material impacts and, in recent years, flooding has tended to occur more frequently, due to changes our species has made to hydrological regimes, and due to climate change. It is of the utmost importance that new models are developed to predict and map flood susceptibility with high accuracy, to support decision-makers and planners in designing more effective flood management strategies. The objective of this study is the development of a new method based on state-of-the-art machine learning and remote sensing, namely random forest (RF), the dingo optimization algorithm, a weighted chimp optimization algorithm (WChOA), and particle swarm optimization, to build flood susceptibility maps in the Nghe An province of Vietnam. The CyGNSS system was used to collect soil moisture data to integrate into the susceptibility model. A total of 1650 flood locations and 14 conditioning factors were used to construct the model. These data were divided at a ratio of 60/20/20 to train, validate, and test the model, respectively. In addition, various statistical indices, namely root-mean-square error, receiver operating characteristic, mean absolute error, and the coefficient of determination (R2), were used to assess the performance of the model. The results for all the models were good, with AUC values above 0.9. The RF-WChOA model performed best, with an AUC value of 0.99. The proposed models can predict and map flood susceptibility with high accuracy.
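The susceptibility-modelling skeleton described above can be sketched as a random forest trained on conditioning factors with a 60/20/20 split and AUC evaluation. Synthetic factors replace the 14 real conditioning factors and the CyGNSS soil moisture data.

```python
# Susceptibility-model skeleton: RF on conditioning factors, 60/20/20 split,
# AUC evaluation. Synthetic data replaces the study's flood inventory.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1650                                  # same count as the paper's locations
X = rng.normal(size=(n, 14))              # 14 stand-in conditioning factors
logit = X[:, 0] + 0.8 * X[:, 1] - 0.5 * X[:, 2]
y = (logit + rng.normal(0, 0.5, n) > 0).astype(int)  # 1 = flood, 0 = non-flood

# 60/20/20 train/validation/test split, as in the study
X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_va, X_te, y_va, y_te = train_test_split(X_tmp, y_tmp, test_size=0.5,
                                          random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
```

In the paper, the metaheuristics (WChOA and the others) tune the RF hyperparameters against the validation split; here the defaults are used and the validation set is only carved out to mirror the 60/20/20 design.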
EN
The present study investigates the prediction accuracy of the standalone Reduced Error Pruning Tree model and its integration with Bagging (BA), Dagging (DA), Additive Regression (AR) and Random Committee (RC) for drought forecasting on time scales of 3, 6, 12 and 48 months ahead using the Standardized Precipitation Index (SPI), which is among the most common criteria for testing drought prediction, at the Kermanshah synoptic station in western Iran. To this end, monthly data from a 31-year record, including rainfall, maximum and minimum temperatures, and maximum and minimum relative humidity rates, were considered as the required input to predict SPI. In addition, different inputs were combined and constructed to determine the most effective parameter. Finally, the obtained results were validated using visual and quantitative criteria. According to the results, the best input combination comprised both meteorological variables and SPI along with lag time. Although the hybrid models enhanced the results of the standalone models, the accuracy of the best-performing models could vary on different SPI time scales. Overall, the BA, DA and RC models were much more effective than the AR models. Moreover, the RMSE value increased from SPI (3) to SPI (48), indicating that modeling becomes much more challenging and complex on higher time scales. Finally, the performance of the newly developed models was compared with that of the conventional and most commonly used Support Vector Machine (SVM) and Adaptive Neuro-Fuzzy Inference System (ANFIS) models, regarded as benchmarks. The results revealed that all the newly developed models were characterized by higher prediction power than the SVM and ANFIS benchmarks.
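The SPI target used throughout the study is itself derived from rainfall: aggregate precipitation over the time scale, fit a gamma distribution to the aggregated series, and map each cumulative probability to a standard normal deviate. The sketch below shows that recipe in simplified form (operational SPI additionally handles zero-rain months and fits each calendar month separately); the 31-year synthetic rainfall series is a stand-in for the Kermanshah record.

```python
# Simplified SPI computation: rolling precipitation sums -> gamma fit ->
# standard normal deviates. Real SPI fits per calendar month and treats
# zero-rain months specially.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
monthly_rain = rng.gamma(shape=2.0, scale=30.0, size=31 * 12)  # 31 years, mm

def spi(series, scale=3):
    """SPI on a `scale`-month window via a single gamma fit (simplified)."""
    agg = np.convolve(series, np.ones(scale), mode="valid")  # rolling sums
    a, loc, b = stats.gamma.fit(agg, floc=0)                 # fix location at 0
    cdf = stats.gamma.cdf(agg, a, loc=loc, scale=b)
    return stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))     # normal deviates

spi3 = spi(monthly_rain, scale=3)  # the study's SPI(3) analogue
```

By construction the resulting index is approximately standard normal, which is why fixed thresholds (e.g. SPI below -1 for moderate drought) are comparable across stations and time scales.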