Results found: 2212

Search results
Searched for:
in keywords: optimization
EN
The article presents a method for calibrating the material parameters of a numerical model using a genetic algorithm, which makes it possible to match the calculation results to measurements from a geotechnical monitoring network. The method can be used in the maintenance of structures managed by the observational method, which requires continuous monitoring and design alterations. The correctness of the calibration method was verified on artificially generated data in order to eliminate inaccuracies related to approximations introduced when the numerical model is generated. Using the example of a tailings dam model, the quality of prediction at the selected measurement points was verified. Moreover, changes in the factor of safety, an important indicator in the design of this type of structure, were analysed. The case of a tailings reservoir dam under continuous construction, i.e. whose height increases constantly, was chosen because this is the situation in which the observational method is most relevant.
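The abstract describes the approach only in general terms; as a rough illustration of the idea (not the authors' implementation), the sketch below calibrates two hypothetical material parameters by a simple genetic algorithm so that a stand-in model matches synthetic "monitoring" data. The `simulate` function, parameter names, and bounds are assumptions made for the example.

```python
# Hypothetical sketch: calibrate two material parameters (E, phi) of a numerical
# model with a simple genetic algorithm so that simulated displacements match
# monitoring measurements. Not the authors' code; simulate() is a placeholder.
import numpy as np

rng = np.random.default_rng(0)

def simulate(params, x):
    """Stand-in for the numerical (e.g. FEM) model: returns displacements
    at the monitoring points for material parameters params = (E, phi)."""
    E, phi = params
    return np.sin(phi * x) / E          # placeholder response

# Artificially generated "measurements" (true parameters assumed known)
x_points = np.linspace(0.1, 1.0, 8)
measured = simulate((2.0, 1.5), x_points)

def fitness(params):
    return -np.mean((simulate(params, x_points) - measured) ** 2)

bounds = np.array([[0.5, 5.0], [0.5, 3.0]])   # search ranges for E, phi
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 2))

for generation in range(100):
    scores = np.array([fitness(p) for p in pop])
    # tournament selection between random pairs
    idx = rng.integers(0, len(pop), size=(len(pop), 2))
    parents = pop[np.where(scores[idx[:, 0]] > scores[idx[:, 1]],
                           idx[:, 0], idx[:, 1])]
    # arithmetic crossover followed by Gaussian mutation
    partners = parents[rng.permutation(len(parents))]
    alpha = rng.random((len(parents), 1))
    children = alpha * parents + (1 - alpha) * partners
    children += rng.normal(0.0, 0.05, children.shape)
    pop = np.clip(children, bounds[:, 0], bounds[:, 1])

best = pop[np.argmax([fitness(p) for p in pop])]
print("calibrated parameters:", best)
```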
PL
The activated sludge oxidation process is widely used to remove pollutants in wastewater treatment plants. The growing amount of processed waste and wasted food leads to an increase in the concentrations of total nitrogen and total phosphorus, which degrades the quality of the treated wastewater. A genetic algorithm was used for process optimization. Two different machine learning models were applied to predict the concentration of nutrients in the activated sludge: an artificial neural network (ANN-MLP) and support vector regression (SVR). The results showed that both models can be effectively used to forecast nutrient concentrations, but the ANN model showed higher accuracy in the training and validation stages than the SVR. Prediction and optimization of the nutrient composition in the aerobic wastewater treatment process has a significant impact on the quality of the resulting treated effluent.
EN
A genetic algorithm was used to optimize the process. The concentration of nutrients in the activated sludge was predicted using two different machine learning models: an artificial neural network (ANN-MLP) and support vector regression (SVR). Both models can be used effectively to predict nutrient concentrations, but the ANN model showed higher training and validation accuracy than the SVR.
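For orientation only, the sketch below compares an MLP neural network and support vector regression on synthetic data, mirroring the type of comparison the abstract reports. The feature names and data are hypothetical, not the paper's dataset or tuned models.

```python
# Illustrative sketch only (synthetic data, hypothetical features): compare an
# MLP neural network and support vector regression for predicting a nutrient
# concentration, as the abstract describes in general terms.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 4))                  # e.g. flow, COD, pH, temperature
y = 3 * X[:, 0] + X[:, 1] ** 2 + 0.1 * rng.normal(size=500)   # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "ANN-MLP": make_pipeline(StandardScaler(),
                             MLPRegressor(hidden_layer_sizes=(32, 16),
                                          max_iter=2000, random_state=0)),
    "SVR": make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "R2 on validation:", round(r2_score(y_te, model.predict(X_te)), 3))
```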
EN
Calcite depression is the most effective physicochemical route to valorize fluorine-bearing minerals. Depression is achieved by the adsorption of tannic acid, the most commonly used reagent, onto calcite, and the study of this adsorption is very important in mineral processing. The present work focuses on the optimization of the physicochemical parameters of tannic acid adsorption onto calcite. The experimental study was carried out with a response surface methodology based on a Box-Behnken design, and the obtained results were used to develop a statistical model. Analysis of variance and of the residuals was performed to check the significance of the tested models. Among these models, the Box-Cox model predicts the experimental data very well. This model shows that the initial tannic acid concentration and solution pH, as well as their interactions, are the most significant parameters. Optimal conditions were determined using the obtained statistical model. The present investigation is an important preliminary step towards a better understanding of calcite flotation behaviour with tannic acid as a depressant.
EN
Product prediction and process parameter optimization in the production of activated carbon are very important: they stabilize product quality and improve the economic efficiency of the enterprise. In this paper, three process parameters of a carbonization furnace, namely the feeding rate, rotation speed, and carbonization temperature, were used to build a quality optimization model for carbonized materials. First, an orthogonal test was designed to obtain the preliminary relationship between the process parameters and the quality indicators of the carbonized material and to prepare data for modelling. Then, an improved SVR model was developed to establish the relationship between product quality indicators and process parameters. Finally, through single-factor experiments and the Monte Carlo method, the process parameters affecting the quality of the carbonized material were determined and optimized. The analysis showed that a high-quality carbonized material can be obtained with a lower feeding rate, higher rotation speed, and higher carbonization furnace temperature. The quality of the activated carbon reached its maximum when the feeding rate was 1.0 t/h, the rotation speed was 90 r/h, and the temperature was 836°C. The proposed approach can effectively improve the quality of carbonized materials.
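As a minimal sketch of the surrogate-plus-sampling idea described above (a plain SVR stands in for the paper's improved SVR, and the data are synthetic), one might fit a quality model to experimental points and then Monte Carlo sample the parameter space to locate promising settings:

```python
# Rough sketch (not the paper's improved SVR): fit an SVR quality model to
# experimental points, then Monte Carlo sample the parameter space
# (feeding rate, rotation speed, temperature) to locate high-quality settings.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)

# Hypothetical orthogonal-test data: [feed t/h, speed r/h, temp degC] -> quality
X = rng.uniform([0.8, 60, 750], [1.4, 100, 900], size=(27, 3))
y = -(X[:, 0] - 1.0) ** 2 + 0.01 * X[:, 1] + 0.002 * X[:, 2]   # synthetic quality

model = make_pipeline(StandardScaler(), SVR(C=100.0, epsilon=0.001))
model.fit(X, y)

# Monte Carlo search within the process-parameter bounds
candidates = rng.uniform([0.8, 60, 750], [1.4, 100, 900], size=(100_000, 3))
pred = model.predict(candidates)
best = candidates[np.argmax(pred)]
print("suggested (feed, speed, temp):", np.round(best, 2))
```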
EN
To increase their competitive advantage in turbulent marketplaces, contemporary manufacturers must show determination in seeking ways to fulfill buyer orders with quality merchandise, meet deadlines, handle unexpected production disruptions, and lower the total relevant expense. To tackle these challenges, this study explores an economic manufacturing quantity (EMQ) model with machine failure, overtime, and rework/disposal of nonconforming items; the goal is to find the fabrication uptime that minimizes total relevant expenses. Specifically, we consider a production unit with overtime capacity as an operational feature that is linked to higher unit and setup costs. Further, its EMQ-based process is subject to random nonconforming-item and failure rates. Extra screening separates the reworkable nonconforming items from scrap, and the rework is executed at the end of each regular fabrication cycle. Failures follow a Poisson distribution, and a machine repair task starts as soon as a failure occurs; fabrication of the interrupted lot resumes after the repair has been carried out. A decision model is built to capture the characteristics of the problem, and mathematical and optimization procedures are used to determine the optimal fabrication uptime. A numerical example not only illustrates the applicability of the research outcomes, but also reveals how deviations in the mean time to failure, overtime factors, and rework/disposal ratios linked to nonconforming rates, individually or jointly, influence the optimal replenishment uptime, total operating expenses, and the various cost contributors; this facilitates better decision making.
EN
Major manufacturers are moving towards sustainability goals. This paper presents the results of a collaboration with a leading company in the packaging and advertising industry in Germany and Poland. It addresses a manufacturing planning problem in terms of minimizing the total cost of production. The challenge was to introduce a new production planning method into cardboard manufacturing and paper processing that minimizes waste, improves the return on expenses, and automates daily processes that depend heavily on the production planners' experience. The authors developed a module that minimizes the total cost and reduces overproduction, and it is used by the company's manufacturing planning team. The proposed approach incorporates planning-allowance rules to balance the manufacturing requirements against production cost minimization.
EN
Background: Truck scheduling at cross-docking terminals has received much academic attention over the last three decades. A vast number of mixed-integer programming models have been proposed to assign trucks to dock-doors and time slots. Surprisingly, only a few models assume fixed outbound truck departures, which are often applied in the less-than-truckload or small parcel and express delivery industry. To the best of our knowledge, none of these papers explores whether a discrete-time or continuous-time model formulation has better computational performance. This paper attempts to close this research gap and tries to shed light on which type of formulation is advantageous. Therefore, a variant of the truck scheduling problem with fixed outbound departures is considered. This problem's objective is to find a feasible truck schedule that minimizes the number of delayed freight units. Methods: We propose two model formulations for the described variant of the truck scheduling problem with fixed outbound departures. Specifically, the problem is formulated as a discrete-time and a continuous-time mixed-integer programming model. Results: A computational experiment is conducted in order to assess the computational performance of the presented model formulations. We compare the discrete-time and continuous-time formulations in terms of both solution quality and computational time. Conclusions: The computational results show that the proposed discrete-time model formulation can solve problem instances of medium size to proven optimality within less than one minute. The continuous-time model formulation, on the other hand, can solve small instances to optimality; however, it requires longer solution times than the discrete-time formulation and is unable to solve medium-sized instances within a 5-minute time limit. Thus, it can be summarized that the proposed discrete-time model formulation is clearly superior to the continuous-time model formulation.
PL
Background: Truck scheduling at cross-docking terminals has attracted researchers' attention for over three decades. In that time, many mathematical programming models for dock-door assignment have been proposed. However, only a few models take fixed outbound departures into account, which are common in less-than-truckload and courier transport. To the best of our knowledge, none of the available models examines whether treating time in a discrete or a continuous manner gives better results. The aim of this work is to fill this research gap. Therefore, a variant of the truck scheduling problem with fixed outbound departures was considered, with the overall goal of finding a schedule that minimizes the number of delayed shipments. Methods: Two models describing truck scheduling with fixed outbound departures were proposed. The problem was formulated as a mathematical programming model with the time variable treated in a discrete and in a continuous manner. Results: A computational experiment was carried out to assess the performance of the developed models. The results were compared in terms of solution quality and the computation time required. Conclusions: Based on the obtained results, it can be stated that the proposed discrete-time model can solve medium-sized problems in less than a minute. The continuous-time model, in turn, reached optimality only for small instances and required longer computation times; medium-sized instances could not be solved within the 5-minute limit. Therefore, it was concluded that the discrete-time model is superior to the continuous-time model.
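To make the discrete-time idea concrete, the toy model below assigns inbound trucks to unloading time slots at a single dock-door and counts freight units that miss their fixed outbound departure. It is deliberately much simpler than the formulations discussed above; the truck data, transfer time, and single-door assumption are illustrative only.

```python
# Toy discrete-time sketch (not the authors' formulation, hypothetical data):
# assign inbound trucks to unloading time slots at one dock-door so that as few
# freight units as possible miss their fixed outbound departure slot.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

slots = range(1, 7)                       # discrete time slots
inbound = {                               # truck -> (freight units, outbound departure slot)
    "I1": (10, 3), "I2": (6, 4), "I3": (8, 4), "I4": (5, 6),
}
transfer = 1                              # slots needed to move freight to the outbound door

prob = LpProblem("truck_scheduling", LpMinimize)
x = {(i, t): LpVariable(f"x_{i}_{t}", cat=LpBinary) for i in inbound for t in slots}

# each inbound truck is unloaded in exactly one slot
for i in inbound:
    prob += lpSum(x[i, t] for t in slots) == 1
# at most one truck per slot at the single dock-door
for t in slots:
    prob += lpSum(x[i, t] for i in inbound) <= 1

# freight of truck i is delayed if its unloading slot t plus the transfer time
# exceeds the departure slot of its outbound truck
delayed_units = lpSum(units * x[i, t]
                      for i, (units, dep) in inbound.items()
                      for t in slots if t + transfer > dep)
prob += delayed_units

prob.solve(PULP_CBC_CMD(msg=False))
for i in inbound:
    chosen = [t for t in slots if x[i, t].value() > 0.5]
    print(i, "unloaded in slot", chosen[0])
print("delayed freight units:", delayed_units.value())
```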
EN
The article presents a theoretical and experimental investigation of the properties of a rubber-based composite material. The approach combines experimental measurements based on spectrum analysis with a theoretical investigation carried out to describe the viscoelastic behaviour of the material. The proposed mathematical model is represented by five rheological parameters of hybrid Maxwell and Kelvin-Voigt elements and includes an optimization task for determining the stiffness and damping coefficients. In the proposed rheological model, not only the displacements but also the forces, described by second-order differential equations, are unknown. Validation between the experimental measurements and the theoretical investigation is performed by means of spectrum analysis.
PL
The results of analyses of frames with spans of 12, 15 and 18 m are presented. Their minimum mass was adopted as the optimization criterion. The finite element method was used in the calculations. The calculation results include data on the mass of the structure, the degree of utilization of the bars, and whether the serviceability limit state conditions are satisfied.
EN
The results of analyses of frames with spans of 12, 15 and 18 m are presented. Their minimum mass was assumed as the optimization criterion. The finite element method was used in the calculations. The results, in the form of the structure mass, the bar resistance coefficient and the check of the SLS condition, are presented in tables.
EN
The paper is devoted to a particular case of the nonlinear and nonautonomous control law design problem based on an optimization approach. Close attention is paid to controlled plants represented by affine-control mathematical models characterized by integral quadratic functionals. The proposed approach to controller design is based on the optimal damping concept first developed by V. I. Zubov in the early 1960s. A modern interpretation of this concept allows us to construct effective numerical procedures for control law synthesis that are oriented towards practical implementation from the outset. The main contribution is a new methodology for selecting the functional to be damped. The central idea is to parameterize the set of admissible terms of this functional. As a particular case, a new method of parameterization has been developed, which can be used to construct an approximate solution to the classical optimization problem. The applicability and effectiveness of the proposed approach are confirmed by a practical numerical example.
EN
Along with the increase in computing power, new possibilities have appeared for the use of parametric coupled analysis of fluid flow machines and metamodeling in many branches of industry and medicine. In this paper, a new methodology for the multi-objective optimization of a butterfly valve with the application of a fluid-structure interaction metamodel is presented. The optimization objectives were to increase the valve's flow coefficient KV while reducing the disc mass. Moreover, the equivalent von Mises stress was adopted as an additional constraint. A central composite design was used to plan the measuring points. Full second-order polynomials, non-parametric regression, and Kriging metamodeling techniques were implemented. The optimization process was carried out using a multi-objective genetic algorithm. For each metamodel, one of the optimization candidates was selected to verify its results. The best effect was obtained using the Kriging method: optimization improved the KV value by 37.6%. The metamodeling process allows the coupled analysis of fluid flow machines to be performed in a shorter time, although its main application is geometry optimization.
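A minimal sketch of the surrogate idea follows (not the paper's FSI workflow): a Gaussian-process (Kriging-type) metamodel is fitted to sampled design points and candidate geometries are screened for the two-objective trade-off. Simple random screening with a Pareto filter replaces the paper's multi-objective genetic algorithm, and all data and variable names are hypothetical.

```python
# Kriging-type surrogate plus Pareto screening for maximize-KV / minimize-mass.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(3)

# design variables: [disc thickness, hub diameter], scaled to [0, 1]
X = rng.uniform(size=(30, 2))
kv = 1.0 - 0.6 * X[:, 0] + 0.2 * X[:, 1]        # synthetic "CFD" flow coefficient
mass = 0.5 + 2.0 * X[:, 0] + 0.8 * X[:, 1]      # synthetic disc mass

kernel = ConstantKernel() * RBF(length_scale=[0.2, 0.2])
gp_kv = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, kv)
gp_mass = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, mass)

# evaluate many candidates on the cheap metamodels and keep the Pareto front
cand = rng.uniform(size=(5000, 2))
f1 = -gp_kv.predict(cand)          # minimize -KV  (i.e. maximize KV)
f2 = gp_mass.predict(cand)         # minimize mass
F = np.column_stack([f1, f2])
pareto = [i for i in range(len(F))
          if not np.any(np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1))]
print("Pareto-optimal candidates found:", len(pareto))
```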
EN
In the present research, the wear behaviour of the magnesium alloy (MA) AZ91D is studied and optimized. MA AZ91D is cast using a die-casting method. The tribology experiments are performed on a pin-on-disc tribometer. The input parameters are the sliding velocity (1‒3 m/s), load (1‒5 kg), and sliding distance (0.5‒1.5 km). The worn surfaces are characterized by scanning electron microscopy (SEM) with energy dispersive spectroscopy (EDS). The response surface method (RSM) is used for modelling and optimizing the wear parameters. The resulting quadratic equation and the RSM-optimized parameters are used in a genetic algorithm (GA), which searches for the optimum values giving the minimum wear rate and the lowest coefficient of friction. The developed equations are compared with the experimental values to determine the accuracy of the prediction.
EN
This work attempts to use nitrogen gas as a shielding gas at the cutting zone, as well as for cooling purposes, while machining stainless steel 304 (SS304) on a Computer Numerical Control (CNC) lathe. The major influencing parameters of speed, feed and depth of cut were selected for experimentation at three levels each. In total, 27 experiments were conducted under dry cutting and N2 gaseous conditions. The major influencing parameters are optimized using the Taguchi method and the Firefly Algorithm (FA). The improvement in surface roughness and Material Removal Rate (MRR) is significant, and the confirmation results revealed that the deviation of the experimental results from the empirical model is within 5%. A significant reduction of the specific cutting energy, by 2.57% on average, was achieved due to the reduction of friction at the cutting zone by nitrogen gas in CNC turning of the SS 304 alloy.
EN
The mechanical and tribological properties of Al/CNT composites can be controlled and improved through the fabrication process. This article deals with the optimization of the mechanical and tribological properties of Al/CNT composites fabricated by mechanical alloying with different weight percentages of multi-walled CNT reinforcement. The phase changes and the presence of CNTs are identified using X-Ray Diffraction (XRD) analysis. The influence of the mechanical alloying process and the multi-walled CNT reinforcement on the mechanical and tribological behaviour of the Al/CNT composites is studied. The optimal mechanical alloying process parameters and weight percentage of multi-walled CNT reinforcement, giving the best hardness, compressive strength, wear rate and coefficient of friction (CoF), are identified using the Response Surface Methodology (RSM). The Al/CNT composite with 1.1 wt.% of CNT achieved the optimal responses at a milling speed of 301 rpm and a milling time of 492 minutes with a ball-to-powder weight ratio of 9.7:1, in 98% agreement with the experimental result. The research also reveals that adhesive wear is the dominant wear mechanism for the Al/CNT composite against EN31 stainless steel, but the optimal Al/CNT composite with 1.1 wt.% of multi-walled CNT experienced mild abrasive wear.
EN
Objectives: This research work aims to develop a novel heart disease prediction framework comprising three major phases, namely the proposed feature extraction, dimensionality reduction, and the proposed ensemble-based classification. Methods: As the novelty, the training of the NN is carried out by a new enhanced optimization algorithm referred to as Sea Lion with Canberra Distance (S-CDF), which tunes the optimal weights. The improved S-CDF algorithm is an extended version of the existing Sea Lion Optimization (SLnO). Initially, statistical and higher-order statistical features are extracted, including central tendency, degree of dispersion, and qualitative variation. In this scenario, however, the "curse of dimensionality" is the greatest issue, so dimensionality reduction of the extracted features is necessary. Hence, a principal component analysis (PCA)-based feature reduction approach is deployed. Finally, the reduced features are fed as input to the proposed ensemble technique combining a Support Vector Machine (SVM), Random Forest (RF), and K-Nearest Neighbor (KNN) with the optimized Neural Network (NN) as the final classifier. Results: An elaborate analysis and discussion is provided with respect to parameters such as evaluation metrics, year of publication, accuracy, implementation tool, and the datasets used by the various techniques. From the experimental outcomes, the accuracy of the proposed work with the proposed feature set is 5%, 42.85%, and 10% higher than the performance with the other feature sets, i.e. central tendency + dispersion, central tendency + qualitative variation, and dispersion + qualitative variation, respectively. Conclusions: The comparative evaluation shows that the presented work is appropriate for heart disease prediction, as it achieves higher accuracy than the traditional works.
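The general pipeline described (features, PCA reduction, ensemble of SVM, RF, KNN and NN) can be sketched as below. This is a hedged illustration on synthetic data: the paper's S-CDF-tuned neural network is replaced by a plain MLP, and the feature matrix is a stand-in generated by `make_classification`.

```python
# Hedged sketch of the described pipeline (synthetic data, no S-CDF tuning):
# statistical features -> PCA dimensionality reduction -> soft-voting ensemble.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=30, n_informative=8,
                           random_state=0)                 # stand-in feature matrix
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = make_pipeline(
    StandardScaler(),
    PCA(n_components=10),                                   # dimensionality reduction
    VotingClassifier(
        estimators=[("svm", SVC(probability=True)),
                    ("rf", RandomForestClassifier(random_state=0)),
                    ("knn", KNeighborsClassifier()),
                    ("nn", MLPClassifier(max_iter=1000, random_state=0))],
        voting="soft"),
)
ensemble.fit(X_tr, y_tr)
print("test accuracy:", round(ensemble.score(X_te, y_te), 3))
```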
EN
Objectives: The main intention of this paper is to propose a new improved K-means clustering algorithm in which the centroids are optimally tuned. Methods: The paper introduces a new melanoma detection model that includes three major phases, viz. segmentation, feature extraction and detection. For segmentation, a new improved K-means clustering algorithm is introduced, in which the initial centroids are optimally tuned by a new algorithm termed the Lion Algorithm with New Mating Process (LANM), an improved version of the standard LA. Moreover, the optimal selection is based on multiple objectives, namely intensity-diverse centroids, the spatial map, and the frequency of occurrence. The subsequent phase is feature extraction, where the proposed Local Vector Pattern (LVP) and Grey-Level Co-Occurrence Matrix (GLCM)-based features are extracted. These features are then fed as input to a Deep Convolutional Neural Network (DCNN) for melanoma detection. Results: Finally, the performance of the proposed model is evaluated against other conventional models by determining both the positive and the negative measures. From the analysis, it is observed that, for the normal skin image, the accuracy of the presented work is 0.86379, which is 47.83% and 0.245% better than traditional works such as conventional K-means and PA-MSA, respectively. Conclusions: From the overall analysis, it can be observed that the proposed model is more robust in melanoma prediction when compared with the state-of-the-art models.
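The "optimally tuned initial centroids" idea can be illustrated with the sketch below, in which a simple random search stands in for the paper's Lion Algorithm and toy blob data replaces image features; the chosen centroids then seed scikit-learn's K-means with a single initialization.

```python
# Illustration only (random search stands in for the Lion Algorithm): pick
# initial centroids that minimize within-cluster scatter, then run K-means
# seeded with them (init=..., n_init=1), mirroring the "tuned centroids" idea.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=400, centers=3, random_state=0)   # stand-in pixel features
rng = np.random.default_rng(0)
k = 3

def inertia(centroids):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return np.sum(np.min(d, axis=1) ** 2)

best_c, best_val = None, np.inf
for _ in range(200):                       # metaheuristic placeholder: random restarts
    cand = X[rng.choice(len(X), size=k, replace=False)]
    val = inertia(cand)
    if val < best_val:
        best_c, best_val = cand, val

km = KMeans(n_clusters=k, init=best_c, n_init=1, random_state=0).fit(X)
print("final inertia:", round(km.inertia_, 2))
```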
EN
Nowadays, the automotive industry mostly prefers innovative solid-state welding technologies that enable the welding of lightweight and high-performance materials. In this work, 3105-H18 aluminium alloy (Al) and pure copper (Cu) specimens with 0.5 mm thickness were ultrasonically welded in a dissimilar (Al-Cu) configuration. Optimization of the ultrasonic welding process parameters was carried out using a full factorial method, with three levels considered for each of the variables, namely weld pressure, amplitude, and time; the interaction of each variable with the weld strength was also studied. Additionally, the micro-hardness and microstructure of the welded joints were investigated. The results show that the weld amplitude strongly influences the weld strength at medium and high levels of weld pressure. The interface micro-hardness of the welded joint is lower than that of the base metal.
EN
This work concerns the study of coatings for the ultrasonic frequency range, treated as a quasi-one-dimensional phononic crystal structure protecting a sea object against high-resolution active sonar in the frequency range most commonly used by this type of equipment. The topology of the examined structure was optimized to obtain a band gap in the 2.2-2.3 MHz frequency band. For this purpose, a genetic algorithm was used, which allows an optimal arrangement of the individual elements of the ultrasonic multilayer composite. By an optimal arrangement is meant a structure that gives minimal reflectance in a given frequency range, without high reflectance peaks of small half-width. The analysis of wave propagation was performed using the Transfer Matrix Method (TMM). As part of the research, 15- and 20-layer structures with reflectance at the level of 0.23% and 0.18%, respectively, were obtained. Increasing the number of layers in the analysed structures made it possible to find arrangements in which a narrow band of low reflectance was obtained; such arrangements could also be used as bandpass filters. The use of a genetic algorithm for design allows modern coatings to be obtained whose characteristics result from their structure.
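For reference, a standard acoustic transfer-matrix reflectance calculation of the kind the TMM analysis relies on is sketched below. The layer densities, sound speeds, thicknesses and the water half-spaces are generic illustrative values, not the paper's materials or optimized stack.

```python
# Standard acoustic transfer-matrix calculation (generic layer data): power
# reflectance of a multilayer coating between water half-spaces, 2.0-2.5 MHz.
import numpy as np

f = np.linspace(2.0e6, 2.5e6, 501)          # frequency grid [Hz]
omega = 2 * np.pi * f

# layers as (density kg/m3, sound speed m/s, thickness m) - illustrative values
layers = [(1200.0, 1600.0, 0.20e-3),
          (7800.0, 5900.0, 0.05e-3),
          (1200.0, 1600.0, 0.20e-3)]
Z0 = 1000.0 * 1480.0                        # water impedance on both sides

R = np.empty_like(f)
for n, w in enumerate(omega):
    M = np.eye(2, dtype=complex)
    for rho, c, d in layers:
        Z = rho * c
        phi = w * d / c
        layer = np.array([[np.cos(phi), 1j * Z * np.sin(phi)],
                          [1j * np.sin(phi) / Z, np.cos(phi)]])
        M = M @ layer
    Zin = (M[0, 0] * Z0 + M[0, 1]) / (M[1, 0] * Z0 + M[1, 1])   # input impedance
    R[n] = abs((Zin - Z0) / (Zin + Z0)) ** 2                    # power reflectance

print("minimum reflectance in band: %.4f at %.2f MHz"
      % (R.min(), f[np.argmin(R)] / 1e6))
```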
EN
Economic Load Dispatch (ELD) is used to find the optimal combination of real power generation that minimizes the total generation cost while satisfying all equality and inequality constraints. It plays a significant role in planning and operating power systems with several generating stations. For simplicity, the cost function of each generating unit is approximated by a single quadratic function. ELD is a subproblem of unit commitment and a nonlinear optimization problem. Many soft computing optimization methods have been developed in the recent past to solve ELD problems. In this paper, the recently developed population-based optimization method called the Salp Swarm Algorithm (SSA) is used to solve the ELD problem. The results are verified by applying the algorithm to a standard 6-generator system, both with and without consideration of transmission losses. The results obtained using the SSA are compared with those of the Particle Swarm Optimization (PSO) algorithm and are found to be quite encouraging.
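A compact sketch of the SSA applied to a small ELD case follows. The cost coefficients, limits and demand are illustrative (not the paper's 6-generator test data), transmission losses are neglected, and the power-balance equality constraint is handled with a quadratic penalty.

```python
# Salp Swarm Algorithm on a toy economic load dispatch problem (3 units).
import numpy as np

a = np.array([0.0070, 0.0095, 0.0090])      # quadratic cost coefficients [$/MW^2 h]
b = np.array([7.0, 10.0, 8.5])              # linear coefficients [$/MWh]
c = np.array([240.0, 200.0, 220.0])         # constant terms [$/h]
pmin = np.array([100.0, 50.0, 80.0])
pmax = np.array([500.0, 200.0, 300.0])
demand = 700.0                               # MW

def cost(P):
    penalty = 1e4 * (np.sum(P) - demand) ** 2      # enforce power balance
    return np.sum(a * P ** 2 + b * P + c) + penalty

rng = np.random.default_rng(0)
n_salps, iters = 40, 300
X = rng.uniform(pmin, pmax, size=(n_salps, len(a)))
food = X[np.argmin([cost(x) for x in X])].copy()

for t in range(iters):
    c1 = 2 * np.exp(-(4 * t / iters) ** 2)          # exploration/exploitation factor
    for i in range(n_salps):
        if i == 0:                                   # leader follows the food source
            c2, c3 = rng.random(len(a)), rng.random(len(a))
            step = c1 * ((pmax - pmin) * c2 + pmin)
            X[i] = np.where(c3 < 0.5, food + step, food - step)
        else:                                        # followers track the salp ahead
            X[i] = (X[i] + X[i - 1]) / 2
        X[i] = np.clip(X[i], pmin, pmax)
        if cost(X[i]) < cost(food):
            food = X[i].copy()

print("dispatch [MW]:", np.round(food, 1), " total cost [$/h]:", round(cost(food), 1))
```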
PL
Since no universal optimization algorithm exists that solves all scientific and technical problems, the development of new and more computationally efficient optimization algorithms remains a popular task. A review of the optimization literature reveals a trend towards creating "fancy" algorithms based on natural processes. The article examines the effectiveness of newly developed meta-heuristic algorithms inspired by the life of insects and animals: the black widow (BWO) and grey wolf (GWO) algorithms. Their effectiveness was compared with the classical quasi-Newton BFGS algorithm and the CMA-ES evolution strategy, both of which have a solid mathematical foundation. Three selected test functions were used for the comparison. The study also examined the influence of the number of decision variables on the time needed to obtain a solution.
EN
Due to the lack of a universal optimization algorithm that solves all scientific and technical problems, developing new and more computationally efficient optimization algorithms is still a popular challenge. Reviewing the optimization literature, there is a trend towards creating "fancy" algorithms based on natural processes. The article examines the effectiveness of newly developed meta-heuristic algorithms inspired by insects and animals: the black widow (BWO) and grey wolf (GWO) algorithms. Their effectiveness was compared with the classical quasi-Newton BFGS algorithm and the CMA-ES evolution strategy, which are characterized by a solid mathematical background. Three selected benchmark functions were used for comparison purposes. The study also included a test of the influence of the number of design variables on the solution time.
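A simple reproduction of this kind of comparison is sketched below: SciPy's quasi-Newton BFGS against a minimal Grey Wolf Optimizer on the Rosenbrock test function. The dimension, bounds, population size, and iteration count are illustrative choices, not the benchmark settings used in the article.

```python
# BFGS (scipy) versus a minimal Grey Wolf Optimizer on the Rosenbrock function.
import numpy as np
from scipy.optimize import minimize, rosen

dim, lb, ub = 5, -5.0, 5.0
rng = np.random.default_rng(0)

# gradient-based reference: BFGS from a random starting point
res = minimize(rosen, rng.uniform(lb, ub, dim), method="BFGS")
print("BFGS:", round(res.fun, 6), "in", res.nfev, "function evaluations")

# minimal Grey Wolf Optimizer (GWO)
wolves = rng.uniform(lb, ub, size=(30, dim))
for t in range(500):
    fitness = np.array([rosen(w) for w in wolves])
    alpha, beta, delta = wolves[np.argsort(fitness)[:3]]   # three best wolves
    a = 2 - 2 * t / 500                                    # linearly decreasing coefficient
    new = np.empty_like(wolves)
    for i, w in enumerate(wolves):
        positions = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            positions.append(leader - A * np.abs(C * leader - w))
        new[i] = np.mean(positions, axis=0)                # average of the three pulls
    wolves = np.clip(new, lb, ub)

best = wolves[np.argmin([rosen(w) for w in wolves])]
print("GWO :", round(float(rosen(best)), 6), "after 500 iterations")
```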