Purpose: Managing a pandemic in individual countries is a concern not only of governments but also of the WHO and the entire international community; the pandemic knows no borders. In this context, India is a special country, with a huge population and great diversity in culture, geography, economy, poverty levels, and pandemic management methods. In this work, we try to assess the combined impact of these factors on the state of the epidemic by creating a ranking of Indian states from the least to the most endangered. Design/methodology/approach: To create such a ranking, we take into account two variables that we consider objective: the number of deaths and the number of vaccinations per million inhabitants of the region. To avoid the usually controversial assignment of weights to these factors, we relate them to a selected reference region, here the capital city, Delhi. We apply a simple logical principle: the more vaccinations, the better, and the more deaths, the worse. Findings: The results are rather surprising. Many small regions, such as Andaman, Tripura, or Sikkim, turn out to be safe, while many large or wealthy states, such as Delhi, Maharashtra, Uttar Pradesh, Bihar, and Tamil Nadu, are at the end of this ranking. Originality/value: The method enables an indirect assessment of the quality of pandemic management in a given region of the country and can be used for any country, a group of countries, or a continent. According to this criterion, the best state/region is intuitively the safest for residents. A small number of deaths and a large number of vaccinations may indicate good public health and good management of the fight against the pandemic by local and/or central authorities.
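A minimal sketch of this reference-based ranking, assuming the two relative indicators are combined by simple subtraction (the exact aggregation rule is our assumption, and the per-million figures below are hypothetical):

```python
import pandas as pd

# Hypothetical per-million figures; real values would come from official statistics.
data = pd.DataFrame({
    "region": ["Delhi", "Sikkim", "Maharashtra", "Tripura"],
    "deaths_per_million": [1300.0, 480.0, 1200.0, 510.0],
    "vaccinations_per_million": [900000.0, 950000.0, 700000.0, 880000.0],
}).set_index("region")

# Express both indicators relative to the reference region (Delhi), so that no
# explicit weights are needed: more vaccinations is better, more deaths is worse.
relative = data / data.loc["Delhi"]
score = relative["vaccinations_per_million"] - relative["deaths_per_million"]

# Higher score = safer region under this criterion.
print(score.sort_values(ascending=False))
```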
We use various methods to evaluate the performance of our work and constantly look for better ways to do so. One of the most accessible ways to measure performance is to use statistical data. To do this, we must determine whether our data are sufficient and how much we can trust these data sets for performance measurement. In this study, we test the statistical data sets of the Kocaeli Fire Brigade using WEKA and its classification algorithms.
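As a rough stand-in for the kind of experiment WEKA automates (the study itself uses WEKA, a Java toolkit), the sketch below compares the cross-validated accuracy of two classifiers on synthetic tabular data using scikit-learn:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

# Synthetic data replaces the (unavailable) fire-brigade records.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

for name, clf in [("decision tree (J48-like)", DecisionTreeClassifier(random_state=0)),
                  ("naive Bayes", GaussianNB())]:
    scores = cross_val_score(clf, X, y, cv=10)   # 10-fold CV, as in WEKA's default setup
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```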
The article presents the results of work on the design of an intelligent building equipped with a centralized resource management system, an internal device-free detection and navigation system, and machine intelligence that increases the capabilities of all other subsystems. The main functionalities of the project included presence detection, energy optimization of the facility, user identification and resource management, as well as access control and working time recording.
Barnacles Mating Optimizer (BMO), a recent evolutionary optimization algorithm, is proposed to solve one of the optimal reactive power dispatch (ORPD) problems, namely loss minimization in a power system. BMO adopts the Hardy-Weinberg principle and the sperm-casting behaviour of barnacles to balance exploitation and exploration when solving the optimization problem. ORPD, in turn, is one of the complex optimization problems in power system operation. BMO is used to obtain the optimal combination of control variables, such as generator voltages, transformer tap settings, and injected MVAr from reactive compensation devices, that minimizes losses in the power system. To show the effectiveness of the proposed BMO, it is tested on the IEEE 30-bus system, which involves 25 control variables, and also on a large power network, the IEEE 118-bus system. The results obtained with BMO are compared with those of other well-known optimization algorithms from the literature, and the comparison indicates that the proposed BMO is effective in reaching the minimum loss for the ORPD problem.
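For illustration only, not the authors' implementation, the sketch below applies the mating rules commonly attributed to BMO, Hardy-Weinberg mixing for nearby parents and sperm casting otherwise, to a toy quadratic objective standing in for the power-flow loss; all parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the ORPD objective: a real study would evaluate power-flow
# losses for a candidate setting of generator voltages, tap positions and MVAr.
def loss(x):
    return np.sum((x - 0.5) ** 2)

n_barnacles, dim, pl, iters = 30, 25, 7, 200   # pl: BMO's mating-range parameter
lb, ub = 0.0, 1.0
pop = rng.uniform(lb, ub, (n_barnacles, dim))

for _ in range(iters):
    fitness = np.apply_along_axis(loss, 1, pop)
    pop = pop[np.argsort(fitness)]                 # best barnacles first
    offspring = np.empty_like(pop)
    for i in range(n_barnacles):
        dad, mum = rng.integers(n_barnacles, size=2)
        if abs(dad - mum) <= pl:                   # normal mating: Hardy-Weinberg mix
            p = rng.random()
            offspring[i] = p * pop[dad] + (1 - p) * pop[mum]
        else:                                      # sperm casting: random exploration
            offspring[i] = rng.random() * pop[mum]
    offspring = np.clip(offspring, lb, ub)
    combined = np.vstack([pop, offspring])
    fitness = np.apply_along_axis(loss, 1, combined)
    pop = combined[np.argsort(fitness)[:n_barnacles]]   # keep the fittest

print("best loss found:", loss(pop[0]))
```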
The aim of the presented project was to create a comprehensive building management system equipped with a network of wireless, energy-efficient sensors that collect data about users and, on that basis, control end devices such as lighting, ventilation, air conditioning, and heating. In the presented system, the end devices can be both commercial products available on the market and the authors' own solutions. This is intended to allow the adoption of commercial radio communication protocols that are widely available and offer strong integration capabilities. In addition, the system has been enriched with an innovative indoor tracking, navigation, and access control system supported by a network of radio beacons and radio tomographic imaging (RTI) technology. The whole system is to be supervised by computational intelligence trained from scratch.
This paper presents an overview of the applications of computational intelligence techniques, namely artificial neural networks, fuzzy inference systems, and genetic algorithms, to the design of biomaterials with improved performance. These techniques are mainly used for developing data-driven models and for optimization. The paper introduces the domain of biomaterials and how they can be designed using computational intelligence techniques. A brief description of the tools follows, together with their applications in various domains of biomaterials. The applications span all classes of materials, from alloys to composites, and include the surface treatment of biomaterials, materials for drug delivery systems, materials for scaffolds, and even implant design. It is found that these tools can be effectively used for designing new and improved biomaterials.
In real time, the received speech signal contains background noise and reverberations. These disturbances reduce the quality of speech; therefore, it is important to eliminate the noise and increase the intelligibility and quality of the speech signal. Speech enhancement is the primary task in any real-time application that handles speech signals. In the proposed method, the most effective and challenging noise, i.e., babble noise, is removed, and the clean speech is recovered. The corrupted speech signal is enhanced by applying a deep neural network-based denoising algorithm in which the ideal ratio mask is used to mask the noisy speech and separate the clean speech signal. Evaluation of the enhanced speech signal with performance metrics such as short-time objective intelligibility and the signal-to-noise ratio of the denoised speech shows that speech intelligibility and speech quality are improved by the proposed method.
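A minimal sketch of the ideal ratio mask under one common definition (clean energy divided by clean-plus-noise energy in each time-frequency bin); in the proposed method a deep neural network predicts the mask from noisy features, whereas here it is computed from the known components purely for illustration:

```python
import numpy as np
from scipy.signal import stft, istft

def ideal_ratio_mask(clean, noise, fs=16000, nperseg=512):
    """Oracle IRM (one common definition): per time-frequency bin, the ratio of
    clean speech energy to clean-plus-noise energy."""
    _, _, S = stft(clean, fs=fs, nperseg=nperseg)
    _, _, N = stft(noise, fs=fs, nperseg=nperseg)
    return np.abs(S) ** 2 / (np.abs(S) ** 2 + np.abs(N) ** 2 + 1e-10)

def enhance(noisy, mask, fs=16000, nperseg=512):
    """Apply the mask to the noisy STFT and reconstruct the enhanced waveform."""
    _, _, Y = stft(noisy, fs=fs, nperseg=nperseg)
    _, enhanced = istft(mask * Y, fs=fs, nperseg=nperseg)
    return enhanced

# Toy signals; a real experiment would use speech corrupted by babble noise.
fs = 16000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 300 * t)
noise = 0.3 * np.random.default_rng(0).standard_normal(t.size)
enhanced = enhance(clean + noise, ideal_ratio_mask(clean, noise, fs), fs)
```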
Computational intelligence (CI) can adopt and optimize important principles in the 3D printing workflow. This article examines to what extent the current possibilities of using CI in the development of 3D printing and reverse engineering are being exploited, and where reserves remain in this area. Methodology: A literature review is followed by the authors' own research on CI-based solutions. Results: Two ANNs solving the most common problems are presented. Conclusions: CI can effectively support 3D printing and reverse engineering, especially during the transition to Industry 4.0. Wider implementation of CI solutions can accelerate and integrate the development of innovative technologies based on 3D scanning, 3D printing, and reverse engineering. Analyzing data, gathering experience, and transforming it into knowledge can be done faster and more efficiently, but this requires conscious application and proper targeting.
Diabetes Mellitus (DM) belongs to the group of ten diseases with the highest mortality rate globally, with an estimated 578 million cases by 2030 according to the World Health Organization (WHO). The disease manifests itself through different disorders; vasculopathy shows a chronic relationship with diabetic ulceration events in the distal extremities, with temperature being a biomarker that can quantify the risk scale. Accordingly, an analysis is performed on standing thermography images, finding temperature patterns that do not follow a particular distribution in patients with DM. The modern medical literature has therefore adopted Computer-Aided Diagnosis (CAD) systems as a plausible option to increase medical analysis capabilities. In this sense, we propose to study three state-of-the-art deep learning (DL) architectures, experimenting with convolutional, residual, and attention (Transformer) approaches to classify subjects with DM from diabetic foot thermography images. The models were trained under three data augmentation conditions. A novel method based on modifying the images through changes of the amplitude in the Fourier transform is proposed; this is the first work to apply such a combination to the characterization of ulcer risk from thermographies. The results show that the proposed method reached the highest values, achieving perfect classification with the convolutional neural network ResNet50v2, which is promising for limited data sets in thermal pattern classification problems.
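A minimal sketch of amplitude-domain augmentation in the spirit described above; the exact modification scheme used in the paper may differ, and the array below is a dummy thermogram:

```python
import numpy as np

def fourier_amplitude_jitter(image, strength=0.1, rng=None):
    """Scale the Fourier amplitude spectrum by a random factor per frequency
    while keeping the phase, then invert back to the image domain."""
    rng = rng or np.random.default_rng()
    spectrum = np.fft.fft2(image)
    amplitude, phase = np.abs(spectrum), np.angle(spectrum)
    jitter = 1.0 + strength * rng.uniform(-1.0, 1.0, amplitude.shape)
    perturbed = (amplitude * jitter) * np.exp(1j * phase)
    return np.real(np.fft.ifft2(perturbed))

# Example on a dummy single-channel thermogram-sized array.
thermogram = np.random.default_rng(0).random((224, 224))
augmented = fourier_amplitude_jitter(thermogram, strength=0.2)
```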
Computational intelligence (CI) is one of the main trending and powerful data processing approaches for solving difficult reliability problems, and it occupies an important position in intelligent reliability analysis and data management. Nevertheless, only a few broad reviews have summarized the current efforts of CI in reliability assessment of power systems. Many reliability assessment methods aim to prolong the life cycle of a system, maximize profit, and predict the life cycle of assets or systems within an organization, especially in electric power distribution systems. Sustaining an uninterrupted electrical energy supply is an indicator of prosperity and national growth. This paper discusses the general background of reliability assessment in power distribution systems using computational intelligence, selected CI techniques, reliability engineering, related literature, theoretical and conceptual frameworks, methods of reliability assessment, and conclusions. The proposed technique has the potential to significantly reduce the time needed for reliability analysis in distribution networks, because a distribution network needs an algorithm that can evaluate and update the reliability indices and system performance within a short time. It can also manage outage data on assets and on the entire system for quick decision making and can help prevent catastrophic failures. These needs would be addressed if the proposed method were utilized. This review may be regarded as valuable assistance for anyone doing research in this area.
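For context, a minimal sketch of two standard distribution reliability indices (SAIFI and SAIDI) that such an assessment tool would keep up to date from outage records; all field names and figures are hypothetical:

```python
# Hypothetical outage records for a small distribution feeder.
outages = [
    {"customers_interrupted": 120, "duration_h": 1.5},
    {"customers_interrupted": 40,  "duration_h": 0.5},
]
customers_served = 1000

# SAIFI: average number of interruptions per customer served.
saifi = sum(o["customers_interrupted"] for o in outages) / customers_served
# SAIDI: average interruption duration per customer served.
saidi = sum(o["customers_interrupted"] * o["duration_h"] for o in outages) / customers_served
print(f"SAIFI = {saifi:.3f} interruptions/customer, SAIDI = {saidi:.3f} h/customer")
```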
The current challenge is to explore and optimise computational measures of burnout to objectively determine the best way to calculate job satisfaction, job burnout and predictors of intention to quit across occupational groups. The aim of the research presented in this article was to review studies in the field of computational determination of the relationship between the experience of stress at work and the occurrence of symptoms of professional burnout.
Burnout is caused by prolonged exposure to work-related stress. It manifests itself in emotional exhaustion, depersonalisation and reduced personal achievement. There are many reports in the literature on burnout among healthcare professionals, but there are few studies among physiotherapists or IT professionals, let alone analyses of burnout using artificial intelligence methods. This study aims to fill this gap.
Keywords: computational intelligence, artificial neural networks, professional burnout, commitment to work, motivation to work.
This paper presents a new approach to the training of marine control engineering professionals using artificial intelligence. We use optimisation strategies, neural networks and game theory to support optimal, safe ship control by applying the latest scientific achievements to the current process of educating students as future marine officers. Recent advances in shipbuilding, the equipment of robotised ships, the quality of shipboard planning, the costs of overhaul, the dependability and repair of shipboard equipment, and the demands of safe shipping and environmental protection require that marine officers have up-to-date knowledge of modern equipment and computational intelligence software. We carry out an analysis to determine which artificial intelligence methods can eliminate human subjectivity and uncertainty from real navigational situations involving manoeuvring decisions made by marine officers. Trainees learn by using computer simulation methods to calculate the optimal safe trajectory of the ship in the event of a possible collision with other ships, which are mapped using neural networks that take into account the subjectivity of the navigator. The game-optimal safe trajectory of the ship also considers the uncertainty of the navigational situation, measured in terms of the risk of collision. The use of artificial intelligence methods in the final stage of training on ship automation can improve the practical education of marine officers and allow for safer and more effective ship operation.
The paper analyses the process of post-mining displacements generated by underground mining. Innovative mathematical structures for modeling the emission of the hazard field were developed as strong solutions of partial differential equations in R^{3+1}. Moreover, a stochastic equation in L^2(Ω) (a probabilistic space) was defined and applied as a model that takes into account the randomness of the process. Monitoring of the mining area based on GNSS technology and classical geodesy supports the analysis of topological transformations of a given subspace. The data were archived, stored in digital form, and then analyzed in many ways. The quality of the representation (measurements and modeling) was estimated with the use of incremental statistics. The density function distributions obtained in this way are not classified as normal distributions. The performed analyses make it possible to predict optimal scenarios for post-mining environmental hazards.
The article presents research on the use of Monte-Carlo Tree Search (MCTS) methods to create an artificial player for the popular card game “The Lord of the Rings”. The game is characterized by complicated rules, multi-stage round construction, and a high level of randomness. The described study found that the best probability of a win is received for a strategy combining expert knowledge-based agents with MCTS agents at different decision stages. It is also beneficial to replace random playouts with playouts using expert knowledge. The results of the final experiments indicate that the relative effectiveness of the developed solution grows as the difficulty of the game increases.
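A generic MCTS skeleton along these lines, with playouts driven by a heuristic (expert-knowledge) move instead of a random one; the toy take-away game is only a stand-in so the skeleton runs end to end and is not the game engine used in the article:

```python
import math, random

class TakeAwayGame:
    """Toy two-player game: players alternately remove 1-3 tokens; taking the last token wins."""
    def __init__(self, tokens=15, player=1):
        self.tokens, self.player = tokens, player
    def legal_moves(self):
        return list(range(1, min(3, self.tokens) + 1))
    def play(self, move):
        return TakeAwayGame(self.tokens - move, -self.player)
    def is_over(self):
        return self.tokens == 0
    def result(self):
        # Winner is the player who took the last token, i.e. the one not to move.
        return -self.player
    def heuristic_move(self):
        # "Expert knowledge" playout policy: leave a multiple of 4 whenever possible.
        return next((m for m in self.legal_moves() if (self.tokens - m) % 4 == 0),
                    random.choice(self.legal_moves()))

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children, self.visits, self.wins = [], 0, 0.0
    def ucb(self, c=1.4):
        if self.visits == 0:
            return float("inf")
        return self.wins / self.visits + c * math.sqrt(math.log(self.parent.visits) / self.visits)

def mcts(root_state, n_iter=2000):
    root = Node(root_state)
    for _ in range(n_iter):
        node = root
        while node.children:                        # selection via UCB
            node = max(node.children, key=Node.ucb)
        if not node.state.is_over():                # expansion
            node.children = [Node(node.state.play(m), node, m)
                             for m in node.state.legal_moves()]
            node = random.choice(node.children)
        state = node.state                          # playout with expert knowledge
        while not state.is_over():
            state = state.play(state.heuristic_move())
        winner = state.result()
        while node is not None:                     # backpropagation (mover's perspective)
            node.visits += 1
            if node.move is not None and -node.state.player == winner:
                node.wins += 1.0
            node = node.parent
    return max(root.children, key=lambda n: n.visits).move

print("suggested move:", mcts(TakeAwayGame()))
```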
Computational Intelligence (CI) is a computer science discipline encompassing the theory, design, development and application of biologically and linguistically derived computational paradigms. Traditionally, the main elements of CI are Evolutionary Computation, Swarm Intelligence, Fuzzy Logic, and Neural Networks. CI aims at proposing new algorithms able to solve complex computational problems by taking inspiration from natural phenomena. In an intriguing turn of events, these nature-inspired methods have been widely adopted to investigate a plethora of problems related to nature itself. In this paper, we present a variety of CI methods applied to three problems in life sciences, highlighting their effectiveness: we describe how protein folding can be addressed by exploiting Genetic Programming, the inference of haplotypes can be tackled using Genetic Algorithms, and the estimation of biochemical kinetic parameters can be performed by means of Swarm Intelligence. We show that CI methods can generate very high quality solutions, providing a sound methodology to solve complex optimization problems in life sciences.
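To make the last of these concrete, a minimal particle swarm optimisation sketch that fits the parameters of a toy first-order kinetic model to simulated data; the model, data, and PSO settings are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy kinetic model: first-order decay x(t) = x0 * exp(-k t).
# "Observed" data are simulated with known parameters; PSO then recovers them.
t = np.linspace(0, 5, 20)
observed = 2.0 * np.exp(-0.8 * t)

def cost(params):
    x0, k = params
    return np.sum((x0 * np.exp(-k * t) - observed) ** 2)

# Minimal particle swarm: positions are candidate (x0, k) pairs.
n, dim, w, c1, c2 = 20, 2, 0.7, 1.5, 1.5
pos = rng.uniform(0.1, 3.0, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]

for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([cost(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("estimated (x0, k):", gbest)   # should approach (2.0, 0.8)
```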
The article reviews methods and models for medium-term electric load forecasting. Conditional and autonomous modeling approaches, classical models, computational intelligence and machine learning models, as well as pattern similarity-based models are described.
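As an illustration of the last family, a toy pattern-similarity forecast in which the next value is taken from the historical sequences whose recent load pattern is closest to the current one; all figures are made up:

```python
import numpy as np

# Each historical row: a 3-month load pattern followed by the next month's load.
history = np.array([
    [510., 495., 520., 560.],
    [500., 488., 515., 550.],
    [530., 512., 540., 585.],
])
current_pattern = np.array([505., 492., 518.])

# Forecast = mean continuation of the k most similar historical patterns.
distances = np.linalg.norm(history[:, :3] - current_pattern, axis=1)
k = 2
forecast = history[np.argsort(distances)[:k], 3].mean()
print(f"forecast for next month: {forecast:.1f}")
```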
Recently, the lungs have been extensively examined as a route for delivering drugs (active pharmaceutical ingredients, APIs) into the bloodstream, mainly because of the possibility of noninvasive administration of macromolecules such as proteins and peptides. The absorption mechanisms of chemical compounds in the lungs are still not fully understood, which makes the development of pulmonary formulation compositions challenging. This manuscript presents the development of an empirical model capable of predicting the influence of excipients on the absorption of drugs in the lungs. Due to the complexity of the problem and the not fully understood mechanisms of absorption, computational intelligence tools were applied. As a result, a mathematical formula was established and analyzed. The normalized root-mean-squared error (NRMSE) and R² of the model were 4.57% and 0.83, respectively. The presented approach is beneficial both practically, by providing an in silico predictive model, and theoretically, by gaining knowledge of the influence of API and excipient structure on absorption in the lungs.
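For reference, a minimal sketch of how these two reported metrics are typically computed; normalisation by the observed range is one common NRMSE convention, and the paper may use another. The values below are dummy data:

```python
import numpy as np

def nrmse_percent(y_true, y_pred):
    """RMSE normalised by the observed range, expressed in percent."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100.0 * rmse / (y_true.max() - y_true.min())

def r_squared(y_true, y_pred):
    """Coefficient of determination."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Dummy observed vs. predicted absorption values.
y_true = np.array([0.10, 0.25, 0.40, 0.55])
y_pred = np.array([0.12, 0.22, 0.43, 0.50])
print(nrmse_percent(y_true, y_pred), r_squared(y_true, y_pred))
```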
Diagnosis, being the first step in medical practice, is crucial for clinical decision making. This paper investigates state-of-the-art computational intelligence (CI) techniques applied in the field of medical diagnosis and prognosis. The paper presents the performance of these techniques in diagnosing different diseases, along with a detailed description of the data used. Both basic and hybrid CI techniques used in recent years are included, in order to identify current trends in the medical diagnosis domain. The paper presents the merits and demerits of different techniques in general as well as in application-specific contexts. It discusses some critical issues related to medical diagnosis and prognosis, such as uncertainties in the medical domain, problems in medical data, especially time-stamped (temporal) data, and knowledge acquisition. Moreover, the paper also discusses the features of good CI techniques for medical diagnosis. Overall, this review provides new insight into future research requirements in the medical diagnosis domain.
The goal of this paper is to present a universal machine learning model using orthogonal and biorthogonal transformations based on Hurwitz-Radon matrices. This model was used to synthesize a processor that performs the function of adding and multiplying real numbers. Due to the lossless features and implementation of the superposition principle, the model can be qualified as a quantum signal processing system.