Titanium alloys are among the materials in greatest demand across industries. Their high strength and corrosion resistance make them suitable for a wide range of applications, but they are difficult to machine by traditional methods. Electrical discharge machining (EDM) removes material from a workpiece through electrical discharges and can cut hard materials that conventional processes cannot. This paper therefore focuses on the machining of the high-strength titanium alloy Ti-6Al-4V and studies the influence of the cutting process variables on the metal removal rate (MRR), tool wear rate (TWR), and surface roughness (Ra) of the samples. The sample matrix was created using the design of experiments method: pulse-on time, discharge current, and gap, each at three levels, were used to build mathematical models that predict the responses without conducting further practical experiments. Analysis of the results data and the interaction plots confirmed that the machining variables affect the responses. The maximum error between experimental values and those predicted by the mathematical models was 0.022 mg/min for MRR, 1.719 mg/min for TWR, and 0.334 μm for Ra.
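As an illustration of the modelling step described in this abstract, the sketch below builds a 3^3 full-factorial design over the three process variables and fits a quadratic regression model; the factor levels and response values are hypothetical placeholders, not the paper's measured data.

```python
# Sketch: fit a quadratic response-surface model for MRR over a 3^3 design.
# Factor levels and responses below are illustrative placeholders, not the
# paper's measured data.
from itertools import product

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Three process variables, three levels each (hypothetical values/units).
pulse_on = [50, 100, 150]      # pulse-on time, us
current = [10, 20, 30]         # discharge current, A
gap = [0.02, 0.04, 0.06]       # spark gap, mm

X = np.array(list(product(pulse_on, current, gap)))  # 27 experimental runs
rng = np.random.default_rng(0)
mrr = 0.001 * X[:, 0] + 0.01 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 0.01, len(X))

# Quadratic model with interaction terms, as in a typical DOE analysis.
poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), mrr)

# Predict MRR for an untested setting without running the experiment.
setting = poly.transform([[75, 15, 0.03]])
print(f"predicted MRR: {model.predict(setting)[0]:.4f} mg/min")
```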
Today we hear about artificial intelligence everywhere, and the term Industry 4.0 is gaining ever more traction. It is worth asking what Industry 4.0 actually is and how TPM can be aligned with it.
In recent years, social networks have struggled to meet user protection and fraud prevention requirements under unpredictable risks. Anonymity features are widely used to help individuals maintain their privacy, but they can also be exploited for malicious purposes. In this study, we develop a machine learning-driven de-anonymization system for social networks, with a focus on feature selection, hyperparameter tuning, and dimensionality reduction. Using supervised learning techniques, the system achieves high accuracy in identifying user identities from anonymized datasets. In experiments conducted on real and synthetic data, the optimized models outperform baseline methods on average, and even where they do not, significant improvements in precision are observed. Ethical considerations surrounding de-anonymization are thoroughly discussed, including the implementer's responsibility to maintain a balance between privacy and security. By proposing a scalable and effective framework for analyzing anonymized data in social networks, this research contributes to improved fraud detection and strengthened Internet security.
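A minimal sketch of such an optimization pipeline, assuming synthetic data and a generic classifier rather than the study's actual models: feature selection, dimensionality reduction, and hyperparameter tuning are combined in a single tunable estimator.

```python
# Sketch: feature selection + dimensionality reduction + tuned supervised
# classifier, mirroring the three optimization axes named in the abstract.
# Data and parameter grids are illustrative placeholders.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=2000, n_features=60, n_informative=15,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ("select", SelectKBest(f_classif)),        # feature selection
    ("reduce", PCA()),                         # dimensionality reduction
    ("clf", RandomForestClassifier(random_state=0)),
])
grid = {
    "select__k": [20, 40],
    "reduce__n_components": [10, 20],
    "clf__n_estimators": [100, 300],
}
search = GridSearchCV(pipe, grid, cv=5, scoring="precision_macro")
search.fit(X_tr, y_tr)
print("best params:", search.best_params_)
print("held-out macro precision:", search.score(X_te, y_te))
```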
The development of the photovoltaic (PV) sector in Poland, a crucial component of the renewable energy transition strategy, faces a challenge related to limited user awareness of operational risks. Our study addresses this gap by presenting the development and implementation of an interactive dashboard for the National Fire Department's Decision Support System (SWD PSP), aimed at optimizing the safety of PV installations through data analysis and the formulation of preventive strategies. Using data from fire protection units, the tool enables the monitoring of incidents and identification of potential threats, while simultaneously increasing public awareness. The dashboard, built on advanced data visualization techniques, provides easy navigation and dynamic presentation of statistics, facilitating quick responses to changing data and potential hazards. The results of our study highlight the significance of such tools in enhancing the safe use of PV installations, which can contribute to the further development of the renewable energy sector while ensuring its safety and efficiency.
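A toy sketch of the aggregation layer such a dashboard sits on, using synthetic incident records (the SWD PSP data schema is not given in this abstract):

```python
# Sketch: monthly aggregation of PV incident reports behind a dashboard.
# The incident data below are synthetic, for illustration only.
import pandas as pd
import matplotlib.pyplot as plt

incidents = pd.DataFrame({
    "date": pd.date_range("2023-01-01", periods=365, freq="D"),
    "pv_fire": (pd.Series(range(365)) % 11 == 0).astype(int),  # toy signal
})

# Group reports by calendar month -- the statistic a dashboard would chart.
monthly = incidents.groupby(incidents["date"].dt.to_period("M"))["pv_fire"].sum()

monthly.plot(kind="bar", title="PV installation incidents per month")
plt.ylabel("reported incidents")
plt.tight_layout()
plt.show()
```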
This study aimed to determine the effectiveness of problem-based learning (PBL) in improving the performance of maritime students in identified courses. It used a pretest-posttest non-equivalent group design. Respondents were 480 BSMT students selected using a matched-group design. The instrument was a 45-item researcher-made multiple-choice test that had undergone content validity and reliability testing. The statistical tools were the mean and standard deviation for descriptive analysis; the Mann-Whitney test and Wilcoxon signed-rank test for inferential analysis; and Cohen's d effect size to determine the effectiveness of PBL. Results showed that the pretest performance of the experimental and control groups before the intervention was poor and fair, respectively, and excellent and very good thereafter. There was no significant difference in the pretest scores of the experimental and control groups. There were no significant differences in the posttest scores of the two groups in NGEC 9, NAV 5, and SEAM 6, while there were significant differences in NAV 2, NAV 4, and NAV 7. Significant differences were noted between the pretest and posttest scores of both groups in all identified courses. The mean gain score of the experimental group was higher than that of the control group in all identified courses. The mean gains of the two groups did not differ significantly for NGEC 9, NAV 5, and SEAM 6, but differed significantly for NAV 2, NAV 4, and NAV 7. Based on the effect sizes, PBL is highly effective in NAV 2 and NAV 7 compared with the traditional method. These results confirm the effectiveness of the PBL approach as a teaching method in all identified courses, and it is highly recommended for all maritime courses.
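The statistical toolkit named above is available in standard libraries; the sketch below applies it to illustrative score vectors, not the study's data.

```python
# Sketch: the tests and effect size named in the abstract, applied to
# illustrative (synthetic) score vectors.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control_post = rng.normal(28, 5, 240)        # hypothetical posttest scores
experimental_post = rng.normal(33, 5, 240)

# Between-group comparison (independent samples): Mann-Whitney U.
u, p_u = stats.mannwhitneyu(experimental_post, control_post)
print(f"Mann-Whitney U = {u:.1f}, p = {p_u:.4f}")

# Within-group pretest vs posttest (paired): Wilcoxon signed-rank.
pre = rng.normal(18, 5, 240)
w, p_w = stats.wilcoxon(experimental_post - pre)
print(f"Wilcoxon W = {w:.1f}, p = {p_w:.4f}")

# Effect size: Cohen's d with a pooled standard deviation.
def cohens_d(a, b):
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled

print(f"Cohen's d = {cohens_d(experimental_post, control_post):.2f}")
```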
The fishing shipyards in Banda Aceh City are privately owned and family-managed. They carry out maintenance, repair, and construction of new ships when there is demand from customers, and generally build wooden vessels. The problem currently faced is that many ships are abandoned due to shortcomings in finance, natural resources, human resources, and the environment, which hinders the progress and development of the shipyards. The purpose of this study is to determine the inhibiting factors affecting shipyards in Banda Aceh and to find alternative solutions to these problems. The study used a survey method to observe existing conditions and collect data on the factors related to the research variables, which were then analyzed using the fuzzy AHP method. The results indicate that finance is the most influential inhibiting factor, with a value of 0.4635, followed by natural resources at 0.35675 and human resources at 0.2865, while the environment is the least influential, at 0.14325. The alternative solutions to the financial problems are capital loans and investment. For natural resources, the alternative is keeping a minimum stock to anticipate material scarcity and delays in the delivery of materials and tools. For human resources, the alternatives are establishing an office, an organizational structure, and a division of tasks, as well as raising awareness of occupational health and safety. For the environment, the alternatives are erecting buildings or tarpaulins over the areas where ships are built, good land management, and studies of other natural impacts.
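A compact sketch of how fuzzy AHP can produce such criterion weights, here using Buckley's geometric mean method over triangular fuzzy numbers; the pairwise judgments are illustrative, not the survey data from this study.

```python
# Sketch: criterion weights via fuzzy AHP (Buckley's geometric mean method)
# over triangular fuzzy numbers (l, m, u). Pairwise judgments are invented.
import numpy as np

criteria = ["finance", "natural resources", "human resources", "environment"]

# 4x4 pairwise comparison matrix of triangular fuzzy numbers.
TFN = {1: (1, 1, 1), 2: (1, 2, 3), 3: (2, 3, 4), 4: (3, 4, 5)}
inv = lambda t: (1 / t[2], 1 / t[1], 1 / t[0])   # reciprocal of a TFN
M = [
    [TFN[1], TFN[2], TFN[3], TFN[4]],
    [inv(TFN[2]), TFN[1], TFN[2], TFN[3]],
    [inv(TFN[3]), inv(TFN[2]), TFN[1], TFN[2]],
    [inv(TFN[4]), inv(TFN[3]), inv(TFN[2]), TFN[1]],
]

# Fuzzy geometric mean of each row, then normalize and defuzzify.
geo = np.array([[np.prod([row[j][k] for j in range(4)]) ** 0.25
                 for k in range(3)] for row in M])
total = geo.sum(axis=0)
fuzzy_w = geo / total[::-1]           # divide (l, m, u) by the (U, M, L) totals
crisp = fuzzy_w.mean(axis=1)          # centroid defuzzification
crisp /= crisp.sum()

for name, w in zip(criteria, crisp):
    print(f"{name}: {w:.4f}")
```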
The article presents an approach to glass type identification based on rough set theory in the RSES program. The theoretical foundations of the method are outlined, the data analysis process is described, and the glass type identification results are reported.
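A toy illustration of the rough set machinery that RSES applies: lower and upper approximations of a decision class are computed from the indiscernibility relation. The decision table is a hypothetical stand-in for a discretized glass dataset.

```python
# Sketch: lower/upper approximation of a decision class, the core rough set
# construction. The toy decision table stands in for a discretized glass set.
from collections import defaultdict

# (refractive_index, sodium) -> condition attributes; last item -> glass type.
table = [
    (("high", "low"), "float"),
    (("high", "low"), "float"),
    (("high", "high"), "non-float"),
    (("low", "high"), "non-float"),
    (("low", "high"), "float"),     # conflicts with the previous object
]

# Partition objects into indiscernibility classes by condition attributes.
classes = defaultdict(set)
for i, (conditions, _) in enumerate(table):
    classes[conditions].add(i)

target = {i for i, (_, d) in enumerate(table) if d == "float"}
lower = set().union(*(c for c in classes.values() if c <= target))
upper = set().union(*(c for c in classes.values() if c & target))

print("lower approximation:", sorted(lower))  # certainly 'float'
print("upper approximation:", sorted(upper))  # possibly 'float'
```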
This article presents an analysis of a dataset using two cross-validation methods. The RSES program was employed to identify key properties and relationships within the dataset. The results indicate the impact of certain parameters on the potential accuracy of the outcomes.
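A sketch of the comparison idea, assuming a decision tree as a stand-in for RSES's rule-based classifier and synthetic data in place of the analyzed set:

```python
# Sketch: comparing two cross-validation schemes on one dataset, analogous
# to the two validation methods run in RSES. Data and classifier are
# stand-ins, not the article's setup.
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=9, n_classes=3,
                           n_informative=5, random_state=0)
clf = DecisionTreeClassifier(random_state=0)

for name, cv in [("10-fold CV", KFold(n_splits=10, shuffle=True, random_state=0)),
                 ("leave-one-out", LeaveOneOut())]:
    scores = cross_val_score(clf, X, y, cv=cv)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```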
This article presents the results of a study on the application of artificial neural networks (ANNs) to the classification of radioelectronic signals. ANNs with feedforward and recurrent architectures were used in the experiments. The purpose of the article was to determine the performance of these algorithms using the quality measures for predictive models established in the literature. Section two characterizes in detail the types of artificial neural networks investigated and presents the quality measures used for pattern recognition algorithms. The obtained results are then presented, and on their basis the authors analyze the performance of the two types of networks in classifying radioelectronic signals. Finally, conclusions are drawn from the comparative analysis and directions for further research are indicated.
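The quality measures in question can be computed as below; this sketch scores a feedforward network on synthetic features (a recurrent model would require a deep learning framework and is omitted).

```python
# Sketch: scoring a feedforward ANN with the standard predictive-quality
# measures. Synthetic "signal features" replace the real radioelectronic data.
from sklearn.datasets import make_classification
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1500, n_features=32, n_classes=4,
                           n_informative=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

ann = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
ann.fit(X_tr, y_tr)
y_hat = ann.predict(X_te)

# Per-class precision, recall, and F1 -- the usual quality measures.
print(classification_report(y_te, y_hat))
print(confusion_matrix(y_te, y_hat))
```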
A new four-parameter model called the odd generalized exponential power hazard rate (OGE-PHR) distribution is introduced. Some statistical properties of the OGE-PHR distribution are obtained, and its moments, quantiles, mode, reliability, and order statistics are discussed. For parameter estimation, the maximum likelihood technique is employed. Applications to two real data sets are discussed.
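A sketch of the maximum likelihood step; since the OGE-PHR density is not reproduced in this abstract, a Weibull density is used as a stand-in, and only the log-likelihood would change for the actual distribution.

```python
# Sketch: numeric maximum likelihood estimation, the technique named in the
# abstract. A Weibull density stands in for the (unreproduced) OGE-PHR pdf;
# swapping neg_log_lik would adapt this to any four-parameter model.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = rng.weibull(1.5, 300) * 2.0   # synthetic lifetimes

def neg_log_lik(params):
    k, lam = params                  # shape, scale (both must be positive)
    if k <= 0 or lam <= 0:
        return np.inf
    z = data / lam
    return -np.sum(np.log(k / lam) + (k - 1) * np.log(z) - z ** k)

fit = minimize(neg_log_lik, x0=[1.0, 1.0], method="Nelder-Mead")
print("MLE (shape, scale):", fit.x)
```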
Besides clustering and classification, the detection of atypical elements (outliers, rare elements) is one of the most fundamental problems in contemporary data analysis. However, unlike clustering and classification, the atypical element detection task does not possess any natural quality (performance) index. The subject of the research presented here is the creation of one. It enables not only evaluation of the results of an atypical element detection procedure, but also optimization of its parameters or other quantities. The investigated quality index works particularly well with frequency-based procedures of this kind, especially in the presence of substantial noise. The nonparametric approach used in the design of this index practically frees the proposed method from assumptions about the distribution of the dataset under examination. It may also be successfully applied to multimodal and multidimensional cases.
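For concreteness, the sketch below shows the kind of frequency-based, nonparametric atypical-element detection such an index is meant to evaluate; it is an illustration, not the paper's index.

```python
# Sketch: a nonparametric (kernel density) rarity score, an example of the
# frequency-based atypical-element detection the proposed index evaluates.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(0, 1, 480), rng.normal(8, 1, 20)])

kde = gaussian_kde(data)             # distribution-free density estimate
density = kde(data)

# Flag the least-frequent 5% of elements as atypical.
threshold = np.quantile(density, 0.05)
atypical = data[density <= threshold]
print(f"{len(atypical)} atypical elements, e.g. {np.sort(atypical)[-3:]}")
```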
The considered methods make it possible to develop the structure of diagnostic systems based on neural networks and to implement decision support systems for classification-type diagnostic problems. The study uses general and specialized data mining methods and the principles of constructing artificial intelligence systems based on neural networks. The problems that arise when filling knowledge bases and training neural networks are highlighted, and methods for developing neural network models of intelligent data processing for diagnostic purposes are proposed. The authors developed and verified an activation function for intermediate neural layers that allows weighting coefficients to be used as probabilities of diagnostic processes and avoids the problem of local minima when gradient descent methods are used. They also identified specific problems that may arise during the practical implementation of a decision support system and the development of knowledge bases. An original activation function for intermediate layers is proposed, obtained by modernizing the Gaussian error function. Experience with the considered methods and models shows that artificial intelligence diagnostic systems can be implemented for a variety of classification problems.
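The abstract does not give the authors' exact modernized formula, so the sketch below shows one plausible erf-based activation: the Gaussian CDF, whose (0, 1) range allows outputs to be read as probabilities.

```python
# Sketch: an erf-based activation for hidden layers. This rescaled Gaussian
# error function (the Gaussian CDF) is only an illustration of the idea;
# it is not the authors' exact modernized formula.
import numpy as np
from scipy.special import erf

def erf_activation(x):
    """Smooth sigmoid-like activation built from the Gaussian error function."""
    return 0.5 * (1.0 + erf(x / np.sqrt(2.0)))   # Gaussian CDF, range (0, 1)

def erf_activation_grad(x):
    """Derivative (the Gaussian pdf), needed for gradient-based training."""
    return np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi)

x = np.linspace(-3, 3, 7)
print(erf_activation(x))
```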
Rainfall is one of the fundamental components of the hydrological cycle, so engineers must be able to characterize it as accurately as possible in order to design facilities for the collection, transport, and storage of rainwater. The objective of this research is a comprehensive analysis of annual rainfall depth (mm) data from the Babylon station in order to determine the characteristics of the observed frequency distributions. Annual rainfall depth data were compiled by taking the maximum value of each year, as well as the yearly mean, from 1991 to 2021 for one gauging station in Babylon, Iraq. An attempt was made to fit three theoretical distributions: the Normal, Log-Normal, and Gamma distributions. The Chi-square, Kolmogorov-Smirnov, and Anderson-Darling tests were used to compare the theoretical distributions with the observed ones. Gumbel's extreme value distribution, the Normal distribution, and the Log-Normal distribution were used to assess the suitability of the data for return periods of 5, 10, 15, and 50 years. Finally, intensity-duration-frequency (IDF) curves were derived for the rainfall of the Babylon observation station for durations of 15, 30, and 60 minutes.
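A sketch of the frequency analysis workflow on a synthetic annual maxima series (the Babylon record itself is not reproduced here): candidate distributions are fitted and tested, and Gumbel return-period depths are computed.

```python
# Sketch: fitting candidate distributions to annual rainfall maxima and
# estimating return-period depths. The series is synthetic, not the
# 1991-2021 Babylon station record.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
annual_max = rng.gumbel(loc=120, scale=30, size=31)   # mm, 31 years

# Goodness of fit of Normal, Log-Normal and Gamma via Kolmogorov-Smirnov.
for name, dist in [("normal", stats.norm), ("log-normal", stats.lognorm),
                   ("gamma", stats.gamma)]:
    params = dist.fit(annual_max)
    ks = stats.kstest(annual_max, dist.cdf, args=params)
    print(f"{name}: KS statistic {ks.statistic:.3f}, p {ks.pvalue:.3f}")

# Return-period depths from Gumbel's extreme value distribution:
# the depth with return period T is the (1 - 1/T) quantile.
loc, scale = stats.gumbel_r.fit(annual_max)
for T in (5, 10, 15, 50):
    depth = stats.gumbel_r.ppf(1 - 1 / T, loc, scale)
    print(f"T = {T:2d} yr: {depth:.1f} mm")
```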
Modeling the behavior and shape of space objects is widely used in modern astrophysical research, for example to determine the shape and model the physical parameters of variable stars and asteroids. Based on the database of photometric observations of resident space objects (RSOs) available in the Laboratory of Space Research of Uzhhorod National University, we therefore sought a means of modeling light curves (LCs) to confirm the shape of objects and determine their rotation parameters, by analogy with deep space objects. We used Blender software to model synthetic RSO light curves. While Blender has long been popular open-source software among animators and visual effects artists, in recent years it has also become a research tool: for example, it is used for visualizing astrophysical datasets and generating asteroid light curves. In the modeling process, we exploited Blender's strengths, such as Python scripting and GPU rendering. We produced synthetic LCs for two objects, TOPEX/Poseidon and COSMOS-2502. A 3D model of TOPEX/Poseidon was available on the NASA website, but after reviewing the official datasheets we found that it required corrections to the dimensions of the satellite body and solar panel. A 3D model of COSMOS-2502 was built from information available on the internet. Manual modeling was performed according to the RSOs' known self-rotation parameters, and we also show the results of LC modeling using the Markov chain Monte Carlo (MCMC) method. All synthetic LCs obtained in the research correlate well with the real observed LCs.
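As a minimal illustration of the MCMC step (the Blender rendering pipeline is omitted), the sketch below fits the period and amplitude of a toy periodic light curve with a plain Metropolis-Hastings sampler.

```python
# Sketch: fitting a toy periodic light curve with plain Metropolis-Hastings
# MCMC. The data are synthetic; real work would compare rendered synthetic
# LCs against observations.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 30, 150)                       # seconds
flux = 0.8 * np.sin(2 * np.pi * t / 11.0) + rng.normal(0, 0.1, t.size)

def log_likelihood(theta):
    period, amp = theta
    if not (1.0 < period < 30.0 and 0.0 < amp < 5.0):
        return -np.inf                            # flat priors via hard bounds
    model = amp * np.sin(2 * np.pi * t / period)
    return -0.5 * np.sum((flux - model) ** 2) / 0.1 ** 2

theta = np.array([9.0, 0.5])                      # deliberately off the truth
log_p = log_likelihood(theta)
samples = []
for _ in range(30000):
    proposal = theta + rng.normal(0.0, [0.1, 0.02])
    log_p_new = log_likelihood(proposal)
    if np.log(rng.uniform()) < log_p_new - log_p:  # Metropolis acceptance rule
        theta, log_p = proposal, log_p_new
    samples.append(theta.copy())

chain = np.array(samples[10000:])                  # discard burn-in
print("posterior mean period:", chain[:, 0].mean())
print("posterior mean amplitude:", chain[:, 1].mean())
```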
The existence of long-range dependencies in many natural systems was an important discovery that introduced interesting challenges and explanations of system behaviour. Such dependencies can also appear in man-made systems, computer systems being one example. Because studies of long-range statistical dependencies in computer systems, particularly in the context of system performance counters, are not common in the literature, this paper investigates the statistical long-range dependencies present in cache memory data represented as time series. Based on time series collected by internal system tools during computer system processing, it is shown that statistical models with long-range dependencies should be used when modelling cache memory. The following sections show how the data were collected and analysed, and how an appropriate model was built.
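A common way to quantify such long-range dependence is the Hurst exponent; the sketch below estimates it by rescaled-range (R/S) analysis on a stand-in series (white noise, H ≈ 0.5), where a real cache counter series would be substituted.

```python
# Sketch: estimating the Hurst exponent of a performance-counter series with
# rescaled-range (R/S) analysis; H well above 0.5 indicates long-range
# dependence. White noise stands in for real cache-miss counter samples.
import numpy as np

def hurst_rs(x, window_sizes):
    """Slope of log(R/S) vs log(window) over the given window sizes."""
    rs = []
    for w in window_sizes:
        chunks = [x[i:i + w] for i in range(0, len(x) - w + 1, w)]
        ratios = []
        for c in chunks:
            dev = np.cumsum(c - c.mean())
            r = dev.max() - dev.min()            # range of cumulative deviations
            s = c.std(ddof=1)
            if s > 0:
                ratios.append(r / s)
        rs.append(np.mean(ratios))
    h, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
    return h

rng = np.random.default_rng(5)
series = rng.normal(size=4096)                    # stand-in counter samples
print("Hurst exponent:", hurst_rs(series, [16, 32, 64, 128, 256, 512]))
```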
Purpose: The goal of the paper is to analyze the main features, benefits and problems of diagnostic analytics usage. Design/methodology/approach: Critical literature analysis: analysis of international literature from the main databases, and of Polish literature and legal acts connected with the researched topic. Findings: The paper discusses the concept of diagnostic analytics, a powerful tool for organizations to understand the underlying factors and reasons behind specific outcomes or events. By analyzing historical data and applying statistical techniques, organizations can identify root causes, patterns, and correlations that explain past events. This understanding enables informed decision-making, performance improvement, risk mitigation, enhanced customer insights, process optimization, resource allocation, and continuous improvement. Nevertheless, several challenges are associated with diagnostic analytics. Firstly, the analysis process can be time-consuming due to the need for thorough examination and interpretation of data. Additionally, real-time insights may be limited, as diagnostic analytics focuses primarily on historical data. Issues of data quality and availability may also arise, affecting the accuracy and reliability of the analysis. Furthermore, diagnostic analytics lacks predictive capabilities, making it harder to anticipate future outcomes. The complexity of the analysis, data privacy and security concerns, risks of bias and misinterpretation, and difficulties in identifying causal relationships add further challenges. Originality/value: Detailed analysis of the issues connected with diagnostic analytics.
Purpose: The goal of the paper is to analyze the main features, benefits and problems of descriptive analytics usage. Design/methodology/approach: Critical literature analysis: analysis of international literature from the main databases, and of Polish literature and legal acts connected with the researched topic. Findings: The paper discusses the concept of descriptive analytics, which involves collecting, cleaning, and summarizing historical data from various sources to provide a clear and concise summary that can aid decision-making. The paper explains the importance of descriptive analytics as the foundation for other types of data analytics and outlines the steps involved in its implementation: data collection, cleaning and preparation, exploration and visualization, analysis, interpretation, and reporting. It also notes the advantages of descriptive analytics, such as identifying trends and patterns, optimizing processes, improving decision-making, and simplifying communication, while cautioning businesses about its potential pitfalls and challenges, such as limited predictive power, incomplete data, data privacy concerns, biased results, and overreliance on historical data. The paper emphasizes that understanding these issues is essential to ensuring that the insights generated are relevant, accurate, and useful. Originality/value: Detailed analysis of the issues connected with descriptive analytics.
Purpose: The goal of the paper is to analyze the main features, benefits and problems of prescriptive analytics usage. Design/methodology/approach: Critical literature analysis: analysis of international literature from the main databases, and of Polish literature and legal acts connected with the researched topic. Findings: Prescriptive analytics aims to assist businesses in making informed decisions that optimize desired outcomes or minimize undesired ones. It goes beyond predicting future outcomes and provides recommendations on the best actions to achieve desired goals while considering potential risks and uncertainties. Prescriptive analytics finds applications in domains such as supply chain management, financial planning, healthcare, marketing, and operations management. It empowers businesses to make data-driven decisions, optimize resource allocation, enhance efficiency, and gain a competitive advantage. Considered the highest level of analytics, prescriptive analytics combines historical data, real-time information, optimization techniques, and decision models to generate actionable recommendations. Originality/value: Detailed analysis of the issues connected with prescriptive analytics.
Purpose: The goal of the paper is to analyze the main features, benefits and problems of real-time analytics usage. Design/methodology/approach: Critical literature analysis: analysis of international literature from the main databases, and of Polish literature and legal acts connected with the researched topic. Findings: The paper focuses on the advantages and disadvantages of real-time analytics. The ability to process and analyze data in real time allows organizations to quickly identify trends and patterns, optimize their operations, and allocate resources more efficiently. Real-time analytics also helps businesses identify new revenue opportunities, optimize pricing strategies, monitor user behavior, detect security threats, and react without delay. However, it can be expensive to implement, requires technical expertise, and can generate false positives. Proper data quality, security measures, and system scaling are also essential for effective implementation. The vague definition of "real time" and the need to collect detailed requirements from all stakeholders can present further challenges. Originality/value: Detailed analysis of the issues connected with real-time analytics.