Search results
Searched in keywords: genetic algorithm
Results found: 751
Page 1 of 38
EN
A methodology is proposed for modifying computer ontologies (CO) for electronic courses (EC) in the field of information and communication technologies (ICT), for universities, schools, extracurricular institutions, and the professional retraining of specialists. The methodology modifies a CO by representing its formal ontograph as a graph and applying graph techniques, supported by applied software (SW), to find optimal paths on the graph. A genetic algorithm (GA) is involved in the search for the optimal CO. This divides the ontograph into branches and makes it possible to compute a trajectory through the EC educational material that is optimal in a certain sense, taking the syllabus into account. An example is considered for an ICT course syllabus, on a specific topic covering the design and use of databases. It is concluded that full implementation of this methodology requires a tool that automates this procedure for developing EC and/or electronic textbooks. An algorithm and a prototype of software tools integrating machine methods of working with CO and graphs are also proposed.
EN
Improving production processes includes not only activities concerning manufacturing itself, but also all the activities necessary to achieve the main objectives. One such activity is transport, which, although a source of waste in terms of adding value to the product, is essential to the realization of the production process. Over the years, many methods have been developed to help manage supply and transport in such a way as to reduce them to the necessary minimum. In the paper, the problem of delivering components to a production area using trains with appropriately laid-out carriages was described. It is a milk run stop locations problem (MRSLP), for which the proposed solution is based on heuristic algorithms. Intelligent solutions are becoming more and more popular in industry because of the advantages they offer, especially the possibility of finding a locally optimal solution in a relatively short time and the prevention of human error. In this paper, the applicability of three algorithms – tabu search, genetic algorithm, and simulated annealing – was explored.
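As a small illustration of how one of the metaheuristics named above can attack a stop-location problem, the sketch below uses simulated annealing to place k train stops along a line of workstations so that the total walking distance is minimized. This is a simplified stand-in, not the MRSLP formulation from the paper; the one-dimensional layout, the cost function, and the neighborhood move are all assumptions made for illustration.

```python
import math
import random

def total_walk(stops, stations):
    # Cost: each workstation walks to its nearest train stop.
    return sum(min(abs(s - t) for t in stops) for s in stations)

def anneal(stations, k, iters=5000, t0=10.0, seed=1):
    """Simulated annealing over candidate stop layouts."""
    rng = random.Random(seed)
    positions = sorted(set(stations))
    stops = rng.sample(positions, k)          # random initial layout
    cost = total_walk(stops, stations)
    best, best_cost = list(stops), cost
    for i in range(iters):
        temp = t0 * (1 - i / iters) + 1e-9    # linear cooling schedule
        cand = list(stops)
        cand[rng.randrange(k)] = rng.choice(positions)  # move one stop
        c = total_walk(cand, stations)
        # Accept improvements always; accept worse layouts with
        # probability exp(-delta/temp), which shrinks as temp cools.
        if c < cost or rng.random() < math.exp((cost - c) / temp):
            stops, cost = cand, c
            if c < best_cost:
                best, best_cost = list(cand), c
    return best, best_cost
```

The same neighborhood and cost function could equally be driven by tabu search or a genetic algorithm, which is why the three methods are natural candidates for comparison.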
EN
Fused Deposition Modelling (FDM) has emerged as a promising candidate for the production of functional parts, including those employed by the biomedical industry. The essential properties of biomedical parts manufactured additively, as compared to conventional manufacturing processes, depend on structural and process parameters rather than on material properties alone. The flexural strength of medical-grade polymethylmethacrylate (PMMA) has received relatively little investigation to date. PMMA is a biocompatible filament used in the manufacture of patient-specific implants such as dental prostheses and orthopaedic implants. The proposed work explores the effect of three process parameters, each varied over three levels – layer height (120, 200, 280 µm), infill density (40, 65, 90%) and skewing angle (0°, 45°, 90°) – on the flexural strength of medical-grade PMMA. The maximum and minimum flexural strengths obtained in this work were about 93 and 57 MPa, respectively. The analysis of variance (ANOVA) results show that the most influential factor is the layer height, followed by the infill density. The flexural strength rises significantly as the layer height decreases and when the skewing angle is 0°. The process parameters were then optimized using a genetic algorithm. The optimal results obtained with the genetic algorithm are approximately 276 µm layer height, 46% infill density and 89° skewing angle, which maximize the flexural strength to 97 MPa, with crossover over ten generations.
EN
Purpose: The aim of this paper is to present a combination of advanced algorithms for finding optimal solutions to a permutation flow-shop problem, together with their tests, and the possibilities offered by a simulation environment. Design/methodology/approach: Four time-constrained algorithms are tested and compared for a specific problem. The results produced by the algorithms are transferred to a simulation environment. The entire proposed solution is composed as an environment parallel to the real implementation of the production process. Findings: The genetic algorithm generated the best solution within the same specified short time. By implementing the adopted approach, correct cooperation between the FlexSim simulation environment and the R language engine was achieved. Practical implications: The solution proposed in this paper can be used as an environment to test solutions proposed for production. Simulation methods in the areas of logistics and production have for years attracted the interest of the scientific community and the wider industry. Combining the achievements of science in solving computationally complex problems, including increasingly sophisticated artificial intelligence algorithms, with simulation methods that allow a detailed overview of the consequences of the changes made seems promising. Originality/value: An original concept of cooperation between the R environment and the FlexSim simulation software for a specific problem is presented.
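A compact sketch of a genetic algorithm for the permutation flow-shop problem mentioned above: job permutations serve as chromosomes, the makespan is the fitness, and order crossover plus swap mutation produce offspring. The operator choices and parameter values are common defaults for illustration, not necessarily those used in the paper.

```python
import random

def makespan(perm, proc):
    """Completion time of the last job on the last machine.
    proc[j][k] = processing time of job j on machine k."""
    m = len(proc[0])
    finish = [0.0] * m          # completion times of the previous job
    for j in perm:
        prev = 0.0              # completion of current job on machine k-1
        for k in range(m):
            prev = max(finish[k], prev) + proc[j][k]
            finish[k] = prev
    return finish[-1]

def order_crossover(a, b, rng):
    """Copy a slice from parent a, fill the rest in parent b's order."""
    n = len(a)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j] = a[i:j]
    fill = [g for g in b if g not in child]   # O(n^2), fine for a sketch
    for k in range(n):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def ga_flowshop(proc, pop_size=30, gens=60, seed=0):
    rng = random.Random(seed)
    n = len(proc)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: makespan(p, proc))
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            c = order_crossover(a, b, rng)
            if rng.random() < 0.3:            # swap mutation
                x, y = rng.sample(range(n), 2)
                c[x], c[y] = c[y], c[x]
            children.append(c)
        pop = elite + children
    return min(pop, key=lambda p: makespan(p, proc))
```

In the setting described in the abstract, the best permutation found within the time budget would then be handed to the FlexSim model for simulated execution.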
EN
It is essential to check whether a surgical robot end effector is safe to use with respect to phenomena such as linear buckling and mechanical resonance. The aim of this research is to build a multi-criteria optimization model based on criteria such as the first natural frequency, the buckling factor and the mass, with the basic constraint in the form of a safety factor. The calculations are performed for a serial structure of a surgical robot end effector with six degrees of freedom, terminating in a scalpel. The calculation model is obtained using the finite element method. The multi-criteria optimization problem is solved using the response surface method, Pareto fronts and a genetic algorithm. The results section illustrates the deformations of the surgical robot end effector occurring during resonance and the buckling deformations for successive values of the buckling coefficients. The dependencies of the geometrical dimensions on the criteria are illustrated with continuous response-surface functions, i.e. metamodels. Pareto fronts are presented, based on which the genetic algorithm finds the optimal values of the vector objective function. The conducted analyses provide a basis for selecting surgical robot end effector drive systems from the point of view of the inputs they generate.
EN
In the field of ocean engineering, the task of spatial hull modelling is one of the most complicated problems in ship design. This study presents a procedure applied as a generative approach to the design problems for the hull geometry of small vessels using elements of concurrent design with multi-criteria optimisation processes. Based upon widely available commercial software, an algorithm for the mathematical formulation of the boundary conditions, the data flow during processing and formulae for the optimisation processes are developed. As an example of the application of this novel approach, the results for the hull design of a sailing yacht are presented.
EN
Grinding is commonly responsible for the liberation of valuable minerals from host rocks but can entail high costs in terms of energy and medium consumption; the tower mill, however, is a uniquely power-saving grinding machine compared with traditional mills. In a tower mill, many operating parameters affect the grinding performance, such as the amount of slurry at a known solid concentration, the screw mixer speed, the medium filling rate, the material-ball ratio, and the medium properties. Thus, 25 groups of grinding tests were conducted to establish the relationship between the grinding power consumption and the operating parameters. A prediction model was established based on a backpropagation (BP) neural network, further optimized by a genetic algorithm (GA) to ensure the accuracy of the model, and then verified. The test results show that the relative error between the predicted and actual values, within 3% for the BP neural network prediction model, was reduced to within 2% by the genetic-algorithm-optimized backpropagation (GA-BP) neural network. The optimum grinding power consumption of 41.069 kWh/t was obtained at the predicted operating parameters of 66.49% grinding concentration, 301.86 r/min screw speed, 20.47% medium filling rate, 96.61% medium ratio, and 0.1394 material-ball ratio. A verifying laboratory test at the optimum conditions produced a grinding power consumption of 41.85 kWh/t with a relative error of 1.87%, showing the feasibility of using the genetic algorithm and BP neural network to optimize the grinding power consumption of the tower mill.
EN
A robust dual color image watermarking algorithm is designed based on the quaternion discrete fractional angular transform (QDFrAT) and a genetic algorithm. To guarantee watermark security, the original color watermark image is encrypted with a 4D hyperchaotic system. A pure quaternion matrix is acquired by performing the discrete wavelet transform (DWT), block division and the discrete cosine transform on the original color cover image. The quaternion matrix is operated on by the QDFrAT to improve the robustness and security of the watermarking scheme with the optimal transform angle and fractional order. Then the singular value matrix is obtained by quaternion singular value decomposition (QSVD) to further enhance the scheme's stability. The encrypted watermark is also processed by DWT and QSVD. Afterward, the singular value matrix of the encrypted watermark is embedded into the singular value matrix of the host image with the optimal scaling factor. Moreover, the values that balance imperceptibility and robustness are optimized with a genetic algorithm. It is shown that the proposed color image watermarking scheme performs well in imperceptibility, security, robustness and embedding capacity.
EN
The paper considers the production scheduling problem in a hybrid flow shop environment with sequence-dependent setup times and the objectives of minimizing both the makespan and the total tardiness. A multi-objective genetic algorithm is applied to solve this problem, which belongs to the non-deterministic polynomial-time (NP)-hard class. In the structure of the proposed algorithm, the initial population, neighborhood search structures and dispatching rules are studied to achieve more efficient solutions. The performance of the proposed algorithm, compared to an efficient algorithm available in the literature (known as NSGA-II), is evaluated using the data envelopment analysis method. The computational results confirm that the set of efficient solutions of the proposed algorithm is more efficient than that of the other algorithm.
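At the core of any multi-objective approach like the one above is Pareto dominance over objective vectors such as (makespan, total tardiness). A minimal helper for extracting the non-dominated set, assuming both objectives are minimized, might look like this:

```python
def dominates(a, b):
    """True if objective vector a dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective tuples."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Algorithms such as NSGA-II build on exactly this relation, adding fast non-dominated sorting and crowding-distance ranking on top of it.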
EN
The approach described in this paper uses evolutionary algorithms to create multiple-beam patterns for a concentric circular ring array (CCRA) of isotropic antennas using a common set of array excitation amplitudes. The flat-top, cosec², and pencil beam patterns are examples of multiple-beam patterns. All of these designs have an upward angle of θ = 0°. All the patterns are further created in three azimuth planes (φ = 0°, 5°, and 10°). To create the necessary patterns, non-uniform excitations are used in combination with evenly spaced isotropic elements. For the flat-top and cosecant-squared patterns, the best combination of common element amplitudes and various phases is applied, whereas the pencil beam pattern is produced using the common amplitudes only. The differential evolution algorithm (DE), genetic algorithm (GA), and firefly algorithm (FA) are used to generate the best 4-bit discrete magnitudes and 5-bit discrete phases. These discrete excitations help lower the feed network design complexity and the dynamic range ratio (DRR). A variety of randomly selected azimuth planes are used to verify the excitations as well. With small modifications of the desired parameters, the patterns are formed using the same excitations. The results prove both the efficacy of the suggested strategy and the dominance of DE over both GA and FA.
EN
In this paper, two different architectures based on completely and sectionally clustered arrays are proposed to improve array patterns. In completely clustered arrays, all elements of the ordinary array are divided into multiple unequal ascending clusters. In sectionally clustered arrays, two types of architecture are proposed by dividing part of the array into clusters based on the position of specific elements. In the first sectionally clustered architecture, only the elements located on the sides of the array are grouped into unequal ascending clusters, while the elements located in the center are left as individual, unoptimized items (i.e. with uniform excitation). In the second architecture, only some of the elements close to the center are grouped into unequal ascending clusters, and the side elements are left individual and unoptimized. The research proves that the sectionally clustered architecture has many advantages over the completely clustered structure in terms of the complexity of the solution. Simulation results show that the PSLL in the side-clustered array can be reduced to below −28 dB for an array of 40 elements. The PSLL was −17 dB in the case of a centrally clustered array, whereas the complexity percentage in the completely clustered array method was 12.5%, while the same parameter for the partially clustered array method equaled 10%.
EN
The article presents research on animal detection in thermal images using the YOLOv5 architecture. The goal of the study was to obtain a model with high performance in detecting animals in images of this type, and to see how changes in hyperparameters affect the learning curves and final results. This involved testing different values of the learning rate, momentum and optimizer type in relation to the model's learning performance. Two methods of tuning hyperparameters were used in the study: grid search and evolutionary algorithms. The model was trained and tested on an in-house dataset containing images of deer and wild boars. After the experiments, the trained architecture achieved a highest Mean Average Precision (mAP) score of 83%. These results are promising and indicate that the YOLO model can be used for automatic animal detection in applications such as wildlife monitoring, environmental protection and security systems.
EN
There are two main approaches to the challenge of finding the best filter or embedded feature selection (FS) algorithm: searching for the single best FS algorithm, or creating an ensemble of all available FS algorithms. However, in practice, these two processes usually occur as part of a larger machine learning pipeline and not separately. We posit that, due to the influence of the filter FS on the embedded FS, one should aim to optimize both of them as a single FS pipeline rather than separately. We propose FSPL, a meta-learning approach that automatically finds the best filter and embedded FS pipeline for a given dataset. We demonstrate the performance of FSPL on n = 90 datasets, obtaining 0.496 accuracy for the optimal FS pipeline, an improvement of up to 5.98 percent in model accuracy compared to the second-best meta-learning method.
EN
Face recognition (FR) is one of the most active research areas in the field of computer vision. Convolutional neural networks (CNNs) have been used extensively in this field due to their good efficiency. It is therefore important to find the CNN parameters that yield the best performance. Hyperparameter optimization is one of several techniques for increasing the performance of CNN models. Since manual tuning of hyperparameters is a tedious and time-consuming task, population-based metaheuristic techniques can be used for the automatic hyperparameter optimization of CNNs. Automatic tuning reduces manual effort and improves the efficiency of the CNN model. In the proposed work, genetic algorithm (GA) based hyperparameter optimization of CNNs is applied to face recognition. GAs are used to optimize various hyperparameters such as the filter size, the number of filters and the number of hidden layers. For the analysis, a benchmark FR dataset with ninety subjects is used. The experimental results indicate that the proposed GA-CNN model achieves improved accuracy in comparison with existing CNN models. In each iteration, the GA minimizes the objective function by selecting the best combination of CNN hyperparameters. An improved accuracy of 94.5% is obtained for FR.
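A GA over CNN hyperparameters like the one described can be sketched as evolution over small dictionaries of settings. In the real setting the fitness function trains and validates a CNN; here it is left as a pluggable callable, and the hyperparameter names and value ranges below are illustrative assumptions, not those of the paper.

```python
import random

# Hypothetical search space: each gene picks one value per hyperparameter.
SEARCH_SPACE = {
    "filter_size": [3, 5, 7],
    "n_filters": [16, 32, 64, 128],
    "n_hidden": [1, 2, 3],
}

def random_genome(rng):
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def crossover(a, b, rng):
    # Uniform crossover: each hyperparameter inherited from either parent.
    return {k: (a if rng.random() < 0.5 else b)[k] for k in SEARCH_SPACE}

def mutate(g, rng, rate=0.2):
    # Resample each hyperparameter with a small probability.
    return {k: rng.choice(SEARCH_SPACE[k]) if rng.random() < rate else v
            for k, v in g.items()}

def ga_search(fitness, pop_size=12, gens=10, seed=0):
    """fitness(genome) -> score to maximize; in practice it would
    train a CNN with these hyperparameters and return validation accuracy."""
    rng = random.Random(seed)
    pop = [random_genome(rng) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 3]
        pop = parents + [mutate(crossover(*rng.sample(parents, 2), rng), rng)
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)
```

Because each fitness evaluation is a full training run, population sizes and generation counts are kept small in practice, exactly as in this kind of study.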
EN
Caused by excess levels of nutrients and increased temperatures, freshwater cyanobacterial blooms have become a serious global issue. However, with the development of artificial intelligence and extreme learning machine methods, forecasting cyanobacteria blooms has become more feasible. We explored multiple techniques, both statistical [multiple regression model (MLR) and support vector machine (SVM)] and evolutionary [particle swarm optimization (PSO), genetic algorithm (GA), and bird swarm algorithm (BSA)], to approximate models for the prediction of Microcystis density. The data set was collected from Oubeira Lake, a natural shallow Mediterranean lake in the northeast of Algeria. From the correlation analysis of the ten water variables monitored, six potential factors, including temperature, ammonium, nitrate, and ortho-phosphate, were selected. The performance indices showed that MLR and PSO provided the best results. PSO gave the best fitness, but all techniques performed well. BSA achieved good fitness but progressed very slowly across generations. PSO was faster than the other techniques and passed BSA at generation 20; GA passed BSA a little later, at generation 50. The major contributions of our work focus not only on the modelling process itself, but also on the main factors affecting Microcystis blooms, by incorporating them in all the applied models.
Traveling salesman problem parallelization by solving clustered subproblems
EN
A method of parallelizing the process of solving the traveling salesman problem is suggested, where the solver is a heuristic algorithm. The parallelization is achieved by clustering the nodes into a given number of groups. Every group (cluster) is an open-loop subproblem that can be solved independently of the other subproblems. The solutions of the respective subproblems are then aggregated into a closed-loop route that is an approximate solution to the initial traveling salesman problem. The clusters should be enumerated such that the connection between two "neighboring" subproblems (with successive numbers) is as short as possible. For this, the destination node of each open-loop subproblem is selected farthest from the depot and closest to the starting node of the subsequent subproblem. The initial set of nodes can be clustered manually by covering it with a finite regular-polygon mesh having the required number of cells. The efficiency of the parallelization is increased by solving all the subproblems in parallel, but the problem should have at least about 1000 nodes. Then, with no more than a few hundred nodes in a cluster, the genetic algorithm is especially efficient, as the routine calculations executed during every iteration become shorter.
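A toy sketch of the clustering-then-chaining idea described above: nodes are split into angular sectors around the depot (a simple stand-in for the regular-polygon mesh), each open-loop subproblem is solved here by greedy nearest neighbour rather than the paper's genetic algorithm, and the open paths are chained into one closed route. The sector clustering and greedy sub-solver are illustrative assumptions.

```python
import math

def cluster_by_angle(nodes, depot, k):
    """Split nodes into k angular sectors around the depot, so that
    clusters with successive numbers are geometrically adjacent."""
    def angle(p):
        return math.atan2(p[1] - depot[1], p[0] - depot[0]) % (2 * math.pi)
    ordered = sorted(nodes, key=angle)
    size = -(-len(ordered) // k)              # ceiling division
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]

def open_path(start, nodes):
    """Greedy nearest-neighbour open-loop route through one cluster;
    in the paper each such subproblem would be solved by a GA instead,
    and all clusters could be processed in parallel."""
    path, rest, cur = [], list(nodes), start
    while rest:
        nxt = min(rest, key=lambda p: math.dist(cur, p))
        rest.remove(nxt)
        path.append(nxt)
        cur = nxt
    return path

def clustered_tsp(nodes, depot, k):
    route = [depot]
    for cl in cluster_by_angle(nodes, depot, k):
        route += open_path(route[-1], cl)     # chain subproblem solutions
    return route + [depot]                    # close the loop at the depot
```

Each call to `open_path` is independent given its start node, which is what makes the per-cluster work parallelizable once the cluster order is fixed.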
EN
Rescheduling is the guarantee of maintaining the reliable operation of a production system. In a production system, the original scheduling scheme cannot be carried out when a machine breaks down. It is necessary to transfer the production tasks within the failure period and replan the production path to ensure that the production tasks are completed on time and the stability of the production system is maintained. To address this issue, in this paper we studied the event-driven rescheduling policy in a dynamic environment and established usage rules for right-shift rescheduling and complete rescheduling based on the type of interference event. We then proposed a rescheduling decision method based on a genetic algorithm for solving the flexible job shop scheduling problem with machine fault interference. In addition, we extended the "mk" series of instances by introducing machine fault interference information. The solution data show that the complete rescheduling method can respond effectively to the rescheduling of the flexible job shop scheduling problem with machine failure interference.
EN
Brain tumors are fatal for the majority of patients; the varied nature of tumor cells requires combined medical measures, and categorizing such tumors is a difficult task for radiologists. Computer-based diagnostic structures have been offered as an aid in diagnosing brain tumors using magnetic resonance imaging (MRI). General features are retrieved from the lowest layers of the neural network; these lowest layers are responsible for capturing low-level features and patterns in the raw input data, which can be particularly unique to the raw image. To validate this, the EfficientNetB3 pre-trained model is utilized to classify three types of brain tumors: glioma, meningioma, and pituitary tumor. Initially, the characteristics of several EfficientNet modules are taken from the pre-trained EfficientNetB3 version to locate the brain tumor. Three brain tumor datasets are used to assess each approach. Compared to existing deep learning models, the concatenated features of EfficientNetB3 and genetic algorithms give better accuracy. TensorFlow 2 and Nesterov-accelerated adaptive moment estimation (Nadam) are also employed to make the model training process quicker and better. The proposed CNN-based technique attains an accuracy of 99.56%, a sensitivity of 98.9%, a specificity of 98.6%, an F-score of 98.9%, a precision of 98.9%, and a recall of 99.54%.
EN
The brain-computer interface (BCI) is used to enhance human capabilities. The hybrid BCI (hBCI) is a novel concept for subtly hybridizing multiple monitoring schemes to maximize the advantages of each while minimizing the drawbacks of the individual methods. Recently, researchers have started focusing on hBCIs based on the electroencephalogram (EEG) and functional near-infrared spectroscopy (fNIRS). The main reason is the development of artificial intelligence (AI) algorithms, such as machine learning approaches, to better process brain signals. An original EEG-fNIRS based hBCI system is devised using non-linear feature mining and an ensemble learning (EL) approach. We first remove noise and artifacts from the input EEG-fNIRS signals using digital filtering. After that, we use the signals for non-linear feature mining. These features are fractal dimension (FD), higher-order spectra (HOS), recurrence quantification analysis (RQA) features, and entropy features. Next, a genetic algorithm (GA) is employed for feature selection (FS). Lastly, we employ a novel machine learning (ML) technique using several algorithms, namely naïve Bayes (NB), support vector machine (SVM), random forest (RF), and k-nearest neighbor (KNN). These classifiers are combined as an ensemble for recognizing the intended brain activities. Applicability is tested on a publicly available multi-subject and multiclass EEG-fNIRS dataset. Our method reaches the highest accuracy, F1-score, and sensitivity of 95.48%, 97.67% and 97.83%, respectively.
EN
The rational planning of land around urban rail transit stations can effectively improve the convenience of transportation and the economic development of cities. This paper briefly introduces the transit-oriented development (TOD) mode of urban planning. We constructed a hierarchical structure for evaluating the quality of land planning around urban rail transit stations using the analytic hierarchy process (AHP). The structure started from three broad aspects, i.e., traffic volume, regional environmental quality, and regional economic efficiency, and every broad aspect was divided into three narrower aspects. Then, an optimization model was established for the land planning of rail transit stations. The land planning scheme was optimized by a genetic algorithm (GA). To enhance the optimization performance of the GA, it was improved by coevolution, i.e., plural populations iterated independently, and every population replaced the poor chromosomes in the other populations with its excellent chromosomes from the previous process. Finally, the Jinzhonghe street station in Hebei District, Tianjin, was taken as the subject of analysis. The results suggested that the improved GA obtained a set of non-inferior Pareto solutions when solving the multi-objective optimization problem. The distribution of solutions in the set also indicated that any two objectives among traffic volume, environmental quality, and economic efficiency were improved at the cost of the remaining objective. The land planning schemes optimized by the particle swarm optimization (PSO) algorithm, the traditional GA, and the improved GA were all superior to the initial scheme; the scheme optimized by the improved GA was more in line with the characteristics of the TOD mode than those of the traditional GA and the PSO algorithm, and its fitness value was also higher. In conclusion, the GA can be used to optimize the planning and design of land in rail transit areas under the TOD mode, and the optimization performance of the GA can be improved by means of coevolution.
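The coevolution scheme described above, with independent populations exchanging their best chromosomes for other populations' worst, amounts to an island-model migration step. A minimal sketch, where the fitness function and the list-based chromosome representation are placeholders:

```python
def migrate(populations, fitness, n=2):
    """Island-model exchange: each population's n best chromosomes
    replace the n worst chromosomes of every other population.
    Assumes n * (len(populations) - 1) <= size of each population."""
    # Snapshot the migrants first, so later overwrites don't affect them.
    bests = [sorted(pop, key=fitness, reverse=True)[:n] for pop in populations]
    for i, pop in enumerate(populations):
        incoming = [c for j, b in enumerate(bests) if j != i for c in b]
        pop.sort(key=fitness)            # worst chromosomes first
        pop[:len(incoming)] = incoming   # overwrite the worst in place
    return populations
```

Between migration steps, each population would run its own GA generations independently, which is what lets the islands explore different regions of the search space before sharing progress.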