This article offers informal reflections on artificial intelligence in the context of its public reception and the hopes placed in it. Various aspects are discussed, primarily those related to education and science. Drawing on tradition and pop culture, selected issues concerning the operation of artificial neural networks are explained, with particular emphasis on what is omitted from media discourse: the shortcomings and deficiencies of this technology. What currently existing artificial intelligence systems offer is still very far from anything that could be regarded as true artificial intelligence. In particular, there is currently no realistic prospect that any artificial intelligence system could prove, for example, the Riemann hypothesis, especially since it has remained an open mathematical problem for more than 150 years and its solution may well be beyond the capacity of the human intellect. Similarly, existing machine translation systems remain far from the desired ideal, and the machine learning applied in them is by no means capable of effectively solving all the problems that arise in such systems.
Growing environmental awareness is driving increasing demand for second-hand clothing and footwear in Europe. However, high labor costs in EU countries make manual sorting of clothing economically unviable, and effective classification of garment and footwear materials with traditional solutions, such as mono/RGB cameras, is not feasible. This article describes the development and deployment of an automated system for classifying shoe-upper materials on a sorting line, which uses artificial neural networks to analyze images from hyperspectral cameras in the NIR-SWIR band (900-1700 nm).
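The core idea of classifying material from a hyperspectral pixel can be sketched with a nearest-mean-spectrum rule. The deployed system uses neural networks on 900-1700 nm bands; the three-band spectra and class names below are purely illustrative stand-ins:

```python
import numpy as np

def classify_spectrum(pixel, class_means):
    """Assign a pixel's spectrum to the class with the nearest mean spectrum.
    A neural classifier replaces this rule in the actual system; the nearest
    mean keeps the idea of separating materials by spectral signature."""
    names = list(class_means)
    dists = [np.linalg.norm(pixel - class_means[n]) for n in names]
    return names[int(np.argmin(dists))]

# Toy 3-band "spectra" standing in for full NIR-SWIR hyperspectral bands.
means = {"leather": np.array([0.4, 0.3, 0.2]),
         "textile": np.array([0.2, 0.5, 0.6])}
print(classify_spectrum(np.array([0.38, 0.32, 0.25]), means))  # leather
```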
The parameters of wastewater treatment plants in Poland must meet standards that regulate effluent quality after the treatment process. The quantity and quality of incoming sewage depend on many factors, including weather conditions. Forecasting these parameters helps ensure optimal operation of the treatment plant, which reduces operating costs. To this end, using weather data, an attempt was made to estimate the volume of sewage flowing into the treatment plant in Rzeszów. Over 1000 machine learning (ML) models were evaluated, including statistical models such as ARIMA and SARIMAX and ML algorithms such as KNN and neural networks, in various configurations and time frames. The best results were a mean absolute error (MAE) of 3598 m³ and a root mean square error (RMSE) of 4808 m³. The study showed how the choice of parameters and of different types of predictive models (static, dynamic, machine learning) affects forecast accuracy, and that forecasting based solely on basic time-series data is a demanding process.
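The two reported error measures are the standard MAE and RMSE; a minimal sketch with hypothetical daily inflow volumes (not the study's data):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def rmse(y_true, y_pred):
    """Root mean square error (penalizes large misses more than MAE)."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

# Hypothetical daily inflow volumes in m³, purely for illustration.
actual   = [52000, 48000, 51000, 60000]
forecast = [50000, 49000, 53000, 56000]
print(mae(actual, forecast))   # 2250.0
print(rmse(actual, forecast))  # 2500.0
```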
The article discusses modern single-pixel imaging techniques. Different solutions of spatial light modulators (SLMs) used in infrared imaging are presented. The focus is on image reconstruction methods, in particular on the use of a modulator based on orthogonal codes, cyclic matrices, and neural networks for image reconstruction. The potential possibilities and limitations of these new imaging methods are described, emphasizing their usefulness in different ranges of the infrared spectrum. Moreover, the experimental implementation of a single-pixel infrared camera is presented. Possible applications and future development perspectives of this technology are indicated.
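In the noiseless case, reconstruction from orthogonal modulation codes reduces to inverting an orthogonal measurement matrix; a toy 4-pixel sketch with Sylvester-Hadamard patterns (the article's modulators and neural reconstructors are far more elaborate):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n×n Hadamard matrix (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def single_pixel_reconstruct(measurements, H):
    """Recover the scene from single-pixel detector readings y = H @ x taken
    with orthogonal patterns: since H Hᵀ = n I, we have x = Hᵀ y / n."""
    return H.T @ measurements / H.shape[0]

scene = np.array([0.0, 1.0, 0.5, 0.25])   # toy 4-pixel "image"
H = hadamard(4)
y = H @ scene                              # one detector reading per pattern
print(single_pixel_reconstruct(y, H))      # recovers the scene exactly
```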
This paper presents methods for detecting and eliminating artifacts in signals recorded by the FOS6 rotational seismograph based on the Sagnac effect. A combination of classical threshold-based techniques and artificial intelligence (AI) algorithms was employed, aimed not only at detecting artifacts but also at improving the overall quality of the recorded data. Particular emphasis was placed on the deliberate use of AI – not as a direct filtering tool, but as a means of identifying regions of the signal that can be effectively smoothed or removed while preserving waveform integrity. The threshold-based algorithm mainly functioned as a source of training data for the AI models, enabling effective learning and testing of the approaches developed. Training data were obtained from the earlier FOS5 device, and verification was performed using recordings from both FOS5 and FOS6, enabling evaluation of the proposed methods under real-world conditions. To suppress artifacts, a simple linear interpolation method was proposed that preserves signal continuity and morphology while minimising distortion. The results show that this combined approach significantly increases the usability of the measurement system, enabling a more reliable analysis of seismic events and reducing the number of false alarms.
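The combination of a threshold-based detector with interpolation-based suppression can be sketched as follows; this is a minimal illustration of the idea, not the FOS5/FOS6 processing code, and the threshold value is hypothetical:

```python
import numpy as np

def suppress_artifacts(signal, threshold):
    """Flag samples whose magnitude exceeds the threshold (the classical
    detector) and replace them with values linearly interpolated from the
    surrounding clean samples, preserving signal continuity."""
    signal = np.asarray(signal, dtype=float)
    bad = np.abs(signal) > threshold          # threshold-based artifact mask
    idx = np.arange(signal.size)
    cleaned = signal.copy()
    # np.interp fills flagged positions from the nearest clean neighbours.
    cleaned[bad] = np.interp(idx[bad], idx[~bad], signal[~bad])
    return cleaned

sig = [0.1, 0.2, 9.0, 0.4, 0.5]   # one spike artifact at index 2
print(suppress_artifacts(sig, threshold=1.0))
```

In the paper's pipeline the mask produced by such a detector also serves as training labels for the AI models, which learn to identify regions that can be smoothed safely.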
Brain-computer interfaces (BCIs) enable direct communication between the brain and information technologies by translating brain activity, recorded intracranially, into commands. Recent advances in BCIs have employed multimodal approaches, such as electroencephalography (EEG)-based systems combined with other biosignals, as well as deep learning, to improve the efficiency and reliability of such technologies. Because EEG patterns are inherently uncertain, traditional EEG diagnostic methods often face difficulties; for many neurological disorders, the main motivation is to overcome the limitations of existing methods, which cannot cope with the complex and overlapping nature of EEG signals. In this paper, the use of Karhunen-Loève decomposition functions for the analysis of spatiotemporal EEG signals under calm mental load is considered in healthy subjects and in patients with nervous disorders. Approaches in the time, frequency, and time-frequency domains are examined. The results show a relationship between EEG modulation during a cognitive task in healthy control subjects and the pathological mental state of patients, according to Karhunen-Loève decomposition in pre-selected EEG frequency ranges. These results improve the quality and speed of recognizing emotional states from the EEG signal in patients with emotional expression disorders, and also advance BCI technologies, including their combination with artificial intelligence.
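For multichannel signals, the Karhunen-Loève decomposition amounts to an eigen-decomposition of the spatial covariance matrix, ordering modes by energy; a small sketch on synthetic "EEG" channels (the channel count, sampling, and signals are illustrative only):

```python
import numpy as np

def kl_decomposition(X, k):
    """Karhunen-Loève decomposition of X (channels × samples): eigenmodes of
    the spatial covariance matrix, returned in decreasing order of energy."""
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = Xc @ Xc.T / Xc.shape[1]
    vals, vecs = np.linalg.eigh(cov)           # ascending eigenvalues
    order = np.argsort(vals)[::-1][:k]         # keep k most energetic modes
    return vals[order], vecs[:, order]

rng = np.random.default_rng(0)
# Toy 4-channel signal: two channels share one latent 10 Hz rhythm.
t = np.linspace(0, 1, 500)
latent = np.sin(2 * np.pi * 10 * t)
X = np.stack([latent, latent,
              rng.normal(0, 0.1, t.size), rng.normal(0, 0.1, t.size)])
vals, modes = kl_decomposition(X, k=2)
print(vals[0] > vals[1])   # True: dominant mode captures the shared rhythm
```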
The article presents and discusses the results of research on forecasting power demand in the Polish Power System with a time horizon of one hour ahead, under conditions of limited availability of forecasting model input data covering only three months. The prediction was carried out using deep LSTM (Long Short-Term Memory) neural networks combined into an ensemble, which performs considerably better than the individual networks working separately. The numerical experiments were conducted in the MATLAB computing environment. The accuracy of the predictions was assessed using statistical measures such as MAPE, MAE, RMSE, and the Pearson correlation coefficient R.
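The simplest form of such an ensemble averages the member forecasts, and MAPE is one of the measures listed; a sketch with hypothetical hourly demand values (the study itself works in MATLAB and the numbers below are invented):

```python
import numpy as np

def ensemble_forecast(member_preds):
    """Average the forecasts of individual networks into one ensemble output."""
    return np.mean(np.asarray(member_preds, dtype=float), axis=0)

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

# Hypothetical one-hour-ahead demand forecasts (MW) from three LSTM members.
members = [[20000, 21000], [20500, 20500], [21000, 22000]]
actual  = [20500, 21500]
ens = ensemble_forecast(members)
print(ens, mape(actual, ens))
```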
This study addresses the problem of robust Mittag-Leffler (ML) synchronization for generalized fractional-order reaction-diffusion networks (GFRDNs) with mixed delays and uncertainties. The proposed GFRDNs include local field GFRDNs and static GFRDNs as special cases. An impulsive controller is designed to achieve synchronization in GFRDNs, a problem previously unsolved even for integer-order generalized reaction-diffusion neural networks. Novel synchronization criteria in the form of linear matrix inequalities (LMIs) are developed to establish the ML synchronization under investigation. The resulting conditions can be solved efficiently with the MATLAB LMI toolbox. Finally, simulations are provided to demonstrate the effectiveness of the obtained results.
We address the well-known NP-hard problem of packing rectangular items into a strip, a problem of significant importance in electronics (e.g., packing components on printed circuit boards and macro-cell placement in Very-Large-Scale Integration design) and telecommunications (e.g., allocating data packets over transmission channels). Traditional heuristics and metaheuristics struggle with generalization, efficiency, and adaptability, as they rely on predefined rules or require extensive computational effort for each new problem instance. In this paper, we propose a neural-driven constructive heuristic that leverages a lightweight neural network trained via black-box optimization to dynamically evaluate item placement decisions. Instead of relying on static heuristic rules, our approach adapts to the characteristics of each problem instance, enabling more efficient and effective packing strategies. To train the neural network, we employ the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), a state-of-the-art derivative-free optimization method. Our method learns decision policies by optimizing fill-factor improvements over a large dataset of problem instances. Unlike conventional heuristics, our approach dynamically adapts placement decisions based on a broad set of features describing the current partial solution and the remaining items. Through extensive computational experiments, we compare our method against well-known strip packing heuristics, including MaxRects and Skyline-based algorithms. The results demonstrate that our approach consistently outperforms the best traditional heuristics, achieving up to 6.74 percentage points of improvement in packing efficiency. Furthermore, our method yields an improvement on 87.87% of tested instances. Our study highlights the potential of machine learning-driven heuristics in combinatorial optimization and opens avenues for further research into adaptive decision-making strategies in packing and scheduling problems.
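The constructive step of such a heuristic scores every candidate placement with a learned policy and greedily takes the best one. A minimal sketch follows, with a single linear layer standing in for the paper's small network and hand-picked weights standing in for the CMA-ES-optimized ones; the feature set is invented for illustration:

```python
import numpy as np

def score_placements(features, weights):
    """Score candidate (item, position) placements with a linear policy.
    In the paper's approach the scorer is a lightweight neural network whose
    weights are tuned by CMA-ES over many instances."""
    return np.asarray(features, dtype=float) @ np.asarray(weights, dtype=float)

def best_placement(features, weights):
    """Greedy constructive step: pick the highest-scoring candidate."""
    return int(np.argmax(score_placements(features, weights)))

# Hypothetical features per candidate: [fill-factor gain, resulting height, waste].
candidates = [
    [0.30, 0.8, 0.10],
    [0.45, 0.9, 0.05],
    [0.20, 0.5, 0.30],
]
weights = [1.0, -0.5, -1.0]   # illustrative, not CMA-ES-optimized
print(best_placement(candidates, weights))  # 1
```

CMA-ES would treat the flattened weight vector as its search variable and the average fill factor over a training set of instances as the black-box objective.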
Approximately 460 million individuals were living with diabetes globally in 2023. This study explores and contrasts methods for forecasting hospital readmissions among diabetic patients by integrating traditional approaches with modern deep learning frameworks. A variety of deep learning architectures, including recurrent models such as LSTM and GRU as well as CNNs and autoencoders, are examined alongside conventional machine learning approaches. Four essential metrics (accuracy, precision, recall, and F1-score) were employed to measure and compare the effectiveness of the models. The results revealed that deep neural network methods significantly outperformed classical machine learning algorithms. Among the traditional methods, the decision tree achieved the highest effectiveness. However, the LSTM network demonstrated superior performance, achieving scores of 0.74 for accuracy, 0.73 for precision, 0.74 for recall, and 0.73 for F1-score. Additionally, the GRU and vanilla LSTM models performed close to the best model, indicating that recurrent networks are better suited to this problem than traditional methods.
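The four metrics are all derived from the binary confusion counts; a self-contained sketch with hypothetical readmission labels (not the study's data):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for a binary readmission label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Hypothetical labels (1 = readmitted), purely for illustration.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(binary_metrics(y_true, y_pred))  # (0.75, 0.75, 0.75, 0.75)
```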
This paper analyzes existing convolutional neural networks and experimentally verifies the YOLO and U-Net architectures for identifying and classifying building materials from images of destroyed structures. The aim of the study is to determine the effectiveness of these models in recognizing materials suitable for reuse and recycling, which would help reduce construction waste and introduce a more environmentally friendly approach to resource management. Several modern deep learning models for image processing were examined, including Faster R-CNN, Mask R-CNN, FCN (Fully Convolutional Networks), and SegNet; however, the YOLO and U-Net architectures were ultimately chosen. YOLO is used for fast object identification in images, allowing quick detection and classification of building materials, while U-Net is used for detailed image segmentation, providing accurate determination of the structure and composition of building materials. Each of these models was adapted to the specific requirements of building material analysis in the context of collapsed structures. Experimental results show that these models achieve high segmentation accuracy on images of destroyed buildings, making them promising for use in automated resource control systems.
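A standard way to quantify the accuracy of detections and segmentation masks such as those produced by YOLO and U-Net is intersection over union; the abstract does not name its exact metric, so the sketch below is illustrative:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2),
    the usual score for matching predicted regions against ground truth."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 4, 4), (2, 2, 6, 6)))  # 4 / 28 ≈ 0.143
```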
This study stands out for its novelty, offering an alternative to traditional methods for analyzing failure modes and their effects. We used machine learning techniques, which have enabled a significant shift in the predictive maintenance of electric vehicles. We performed numerous tests and evaluations of advanced models such as random forests, decision trees, logistic regression, and neural networks; random forests and neural networks achieved an exceptional accuracy of 96.67%. This approach improves fault prediction accuracy, reduces operational costs, and minimizes downtime by combining numerical and categorical data. The study focuses on the transformative potential of machine learning, enhancing the reliability, lifespan, and maintenance of electric vehicles through a data-driven approach. The main innovation of this study lies in integrating multiple models, such as random forests and neural networks, to analyze failures in electric vehicles. While previous studies typically relied on traditional techniques such as decision trees or regression analysis, our research presents a multi-layered approach that enables the models to detect more complex patterns and improve prediction accuracy. Moreover, we incorporate real-world data collected from electric vehicle sensors, which allows the model to make precise predictions in real operational environments. This significantly advances previous studies, which relied primarily on simulated data or isolated models.
The high availability of all kinds of material published on the Internet creates a need for mechanisms to control content, so that it reaches only authorized persons who are willing recipients of such content. One particularly sensitive type of content is pornographic video material, access to which should be highly selective. Practical realization of this goal requires developing methods for the automatic classification of such content. Recognizing the pornographic nature of video material is a special case of the broader problem of Human Activity Recognition (HAR). This article presents information technologies that make it possible to classify video material, with a special focus on pornographic data. Both classical methods and the latest methods based on deep learning are presented, with emphasis on solutions using low-level (minimally processed) image features.
The stability of tailings storage facilities (TSFs) is crucial for preventing failures that can lead to severe environmental and economic consequences. This research was conducted as part of the SEC4TD project, which aims to enhance TSF safety through advanced technologies. The project integrates IoT sensors, finite element method (FEM) simulations, and neural networks to automate the calculation of the factor of safety (FoS). The system starts with real-time water level readings from piezometers, which are processed by a trained neural network to estimate the FoS. Initially, a direct FEM-based approach was tested but proved impractical due to computational complexity and frequent convergence issues, requiring constant engineering supervision. To address this, an alternative framework was developed: engineers first analyze TSF cross-sections and generate multiple FEM models with varying water levels. The results of these simulations serve as training data for a neural network, which then enables rapid and reliable FoS predictions without the need for real-time FEM computations. In this article, the complete framework for integrating FEM-based FoS calculations with neural networks is presented, detailing the methodology, training process, and implementation. This approach allows for real-time safety assessments, providing TSF management teams with both sensor data and automated risk analysis, ultimately improving decision-making and increasing TSF security.
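The surrogate idea, precomputed FEM results standing in for real-time simulation, can be sketched with a simple fit; the project trains a neural network, but a polynomial over hypothetical (water level, FoS) pairs conveys the same workflow:

```python
import numpy as np

def fit_fos_surrogate(water_levels, fos_values, degree=2):
    """Fit a polynomial surrogate to FEM-computed (water level, FoS) pairs.
    The SEC4TD system uses a neural network instead, but the principle is
    identical: offline simulations replace real-time FEM runs."""
    return np.polynomial.Polynomial.fit(water_levels, fos_values, degree)

# Hypothetical FEM results: FoS drops as the phreatic level rises.
levels = [1.0, 2.0, 3.0, 4.0, 5.0]
fos    = [1.80, 1.65, 1.45, 1.30, 1.10]
model = fit_fos_surrogate(levels, fos)
print(float(model(2.5)))   # instant FoS estimate for a new piezometer reading
```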
The article describes the author's experience with four recurrent neural network models, built in the TensorFlow environment, designed to predict the convergence of mine workings. It is shown that convergence measurements constitute sequential data, and simple preprocessing operations were performed on them. After formulating a learning-curve convergence criterion as a measure of correct network training, the models were constructed in four forecast variants, modelling both single and multiple outputs. Based on the learning curves, fulfilment of the criterion was demonstrated, and the applicability of recurrent networks to modelling simple convergence courses was shown.
NRLMSISE is an empirical model that predicts the temperatures and densities of the main atmospheric components. The model is widely used to evaluate atmospheric effects on satellite orbits and on the refraction of laser beams passing through the atmosphere, such as those used for Earth-satellite distance measurements. An atmospheric model is a valuable part of satellite laser ranging processing software such as Kyiv Geodynamics (Juliette). Juliette is written in C++ and uses a C++ clone of NRLMSISE written by the second author; the C++ version produces the same outputs as the official Fortran code. Accurate modeling of atmospheric influences on satellite motion requires numerous calculations along satellite orbits or laser beam paths, which are computationally intensive. Reducing the computation time of NRLMSISE would not only save modeling time but also open the way to wider application of the model by lowering its computational resource demands. Our work demonstrates how the traditional NRLMSISE model can be effectively translated into a neural network. This conversion achieves significant performance gains on both CPU and GPU while maintaining acceptable accuracy compared with the C++ implementation of NRLMSISE. We describe the process of porting NRLMSISE to a neural network, the resulting accuracy, the ease of running the trained model on CUDA-enabled GPUs, and the performance boost obtained on both CPU and GPU.
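Once trained, such a surrogate is just a small dense network, and the speed-up comes from evaluating it in batch over many orbit points at once. A sketch of a batched forward pass with invented shapes and random weights (the real surrogate's architecture and inputs are determined by the paper):

```python
import numpy as np

def mlp_forward(x, layers):
    """Evaluate a small fully connected network on a batch of input points.
    This is the form a trained NRLMSISE surrogate takes once the weights are
    learned; batching over whole orbits is where the speed-up lies."""
    for W, b, activate in layers:
        x = x @ W + b
        if activate:
            x = np.tanh(x)
    return x

rng = np.random.default_rng(1)
# Illustrative shapes: 7 NRLMSISE-style inputs -> 16 hidden -> 1 output.
layers = [
    (rng.normal(size=(7, 16)), np.zeros(16), True),
    (rng.normal(size=(16, 1)), np.zeros(1), False),
]
points = rng.normal(size=(1000, 7))   # 1000 points along a satellite orbit
out = mlp_forward(points, layers)
print(out.shape)  # (1000, 1)
```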
Vision-based control in robotics offers versatile automation; however, accessible educational platforms for exploring its integration with AI are still limited. This paper addresses this gap by presenting a small, 3D-printed parallel SCARA robot designed specifically for educational purposes. We provide details on its construction and demonstrate its application in laboratory exercises, which cover inverse and forward kinematics, vision-based tip positioning, and object detection. Notably, we investigate both supervised (using convolutional neural networks) and unsupervised (through autoencoder latent space exploration) approaches for classifying faulty parts. The unsupervised method achieved high performance, with a precision of 1.00, recall of 0.96, and an F1-measure of 0.98, which is comparable to the supervised approach that yielded a precision of 0.98, recall of 0.97, and an F1-measure of 0.97. This work contributes to the development of a low-cost platform and demonstrates the effectiveness of unsupervised AI techniques for vision-based robotic fault detection in educational settings, paving the way for more advanced AI-integrated robotics curricula.
With the growing emphasis on data-driven decision making, artificial intelligence (AI) methods have become increasingly important in managerial practice. This study aims to develop and evaluate supervised machine learning models for predicting customer brand loyalty and satisfaction based on selected behavioral, attitudinal, and programmatic attributes. This paper presents a lightweight decision support application that leverages machine learning techniques—specifically, Artificial Neural Networks (ANN) and Support Vector Machines (SVM)—to predict key customer-related indicators: brand loyalty and satisfaction. The models were trained on behavioral and attitudinal inputs and achieved excellent predictive performance, with test accuracies reaching 100%. The novelty of this study lies in the deployment of these models within an intuitive graphical user interface (GUI), enabling real-time predictions by non-technical users. Unlike traditional approaches focused solely on algorithm development, this research demonstrates a practical implementation of computational intelligence for operational and tactical business decision-making. The tool supports managers in profiling customers, optimizing loyalty programs, and enhancing customer engagement strategies through accessible AI-powered insights.
In this research paper, we examine recurrent and linear neural networks to determine the relationship between the amount of data needed to achieve generalization and data dimensionality, as well as the relationship between data dimensionality and the necessary computational complexity. To achieve this, we also explore the optimal topologies for each network, discuss potential problems in their training, and propose solutions. In our experiments, the relationship between the amount of data needed to achieve generalization and data dimensionality was linear for feed-forward neural networks and exponential for recurrent ones. Our findings indicate that computational complexity exhibits an exponential growth pattern as the dimensionality of the data increases. We also compared the networks' accuracy in both distance approximation and classification to the most popular alternative, Siamese networks, which outperformed both linear and recurrent networks in classification despite having lower accuracy in exact distance approximation.
This paper examines methods to secure machine learning inference (ML inference) so that sensitive data remains private and proprietary models are protected during remote processing. We review several approaches, ranging from cryptographic techniques like homomorphic encryption (HE) and secure multi-party computation (MPC) to hardware solutions such as trusted execution environments (TEEs), along with complementary methods including differential privacy and split learning. Each method is analyzed in terms of security, efficiency, communication overhead, and scalability. Use cases in healthcare, finance, and education show how these techniques balance privacy with practical performance. We conclude by outlining open challenges and future directions for building robust, efficient privacy-preserving ML inference systems.
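The flavor of MPC-based inference can be conveyed with additive secret sharing of a linear layer. The sketch below keeps the model weights public and the input private; real protocols (e.g., with Beaver triples) also hide the weights and handle fixed-point encoding, so this is a deliberately minimal illustration:

```python
import random

P = 2**61 - 1   # prime modulus for additive secret sharing

def share(x):
    """Split a value into two additive shares; neither reveals x alone."""
    r = random.randrange(P)
    return r, (x - r) % P

def mpc_dot(xs, ws):
    """Two-party dot product on shares: each party computes only on its own
    shares of the input, and the true result appears only when the two
    partial results are recombined."""
    shares = [share(x) for x in xs]
    party0 = sum(s0 * w for (s0, _), w in zip(shares, ws)) % P
    party1 = sum(s1 * w for (_, s1), w in zip(shares, ws)) % P
    return (party0 + party1) % P

# Private features and public model weights (toy integers).
features, weights = [3, 1, 4], [2, 7, 1]
print(mpc_dot(features, weights))   # 3*2 + 1*7 + 4*1 = 17
```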