Search results for keyword: "function evaluations" — 2 results found.

1.
Adaptive Particle Swarm Optimization (PSO) variants have become popular in recent years. The main idea of these adaptive PSO variants is that they adaptively change their search behavior during the optimization process based on information gathered during the run. Adaptive PSO variants have been shown to solve a wide range of difficult optimization problems efficiently and effectively. In this paper we propose a Repulsive Self-adaptive Acceleration PSO (RSAPSO) variant that adaptively optimizes the velocity weights of every particle at every iteration. The velocity weights include the acceleration constants as well as the inertia weight, which are responsible for the balance between exploration and exploitation. Our proposed RSAPSO variant optimizes the velocity weights that are then used to search for the optimal solution of the problem (e.g., benchmark function). We compare RSAPSO to four known adaptive PSO variants (decreasing weight PSO, time-varying acceleration coefficients PSO, guaranteed convergence PSO, and attractive and repulsive PSO) on twenty benchmark problems. The results show that RSAPSO achieves better results than the known PSO variants on difficult optimization problems that require large numbers of function evaluations.
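As background, the standard PSO velocity update that these velocity weights parameterize can be sketched as follows (a minimal one-dimensional sketch of the generic update, not the RSAPSO self-adaptation scheme itself; the function name and parameter names are illustrative):

```python
import random

def pso_velocity_update(v, x, pbest, gbest, w, c1, c2):
    """One-dimensional PSO velocity update (illustrative sketch).

    w       -- inertia weight, controlling how much of the previous
               velocity is retained (exploration vs. exploitation)
    c1, c2  -- acceleration constants weighting the pull toward the
               particle's personal best and the swarm's global best
    """
    r1, r2 = random.random(), random.random()  # stochastic scaling factors
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```

In an adaptive variant such as the one described above, `w`, `c1`, and `c2` would themselves be tuned per particle and per iteration rather than held fixed.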
2. A Six-order Variant of Newton's Method for Solving Nonlinear Equations
A new variant of Newton's method based on the contraharmonic mean has been developed and its convergence properties have been discussed. The order of convergence of the proposed method is six. Starting with a suitably chosen x0, the method generates a sequence of iterates converging to the root. The convergence analysis is provided to establish its sixth order of convergence. In terms of computational cost, it requires evaluations of only two functions and two first-order derivatives per iteration. This implies that the efficiency index of our method is 1.5651. The proposed method is comparable with the methods of Parhi and Gupta [15] and that of Kou and Li [8]. It does not require the evaluation of the second-order derivative of the given function, as required in the family of Chebyshev-Halley type methods. The efficiency of the method is tested on a number of numerical examples. It is observed that our method requires fewer iterations than Newton's method and the other third-order variants of Newton's method. In comparison with the sixth-order methods, it behaves either similarly or better for the examples considered.
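The stated efficiency index follows from the standard formula p^(1/d), where p is the order of convergence and d the number of function and derivative evaluations per iteration; with p = 6 and d = 4 (two function values plus two first derivatives), a quick check reproduces the abstract's figure:

```python
# Efficiency index p**(1/d): convergence order p = 6,
# d = 4 evaluations (2 function values + 2 first derivatives) per iteration
p, d = 6, 4
efficiency_index = p ** (1 / d)
print(round(efficiency_index, 4))  # 1.5651
```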