
Results found: 2

Search results
Searched in keywords: opposition-based learning

EN
The Differential Evolution (DE) algorithm is a popular evolutionary algorithm designed to find the global optimum of multi-dimensional continuous problems. In this paper, we propose a new DE variant that combines a self-adaptive DE algorithm, dynNP-DE, with an Elite Opposition-Based Learning (EOBL) scheme. Because dynNP-DE uses a small population size in the later stages of the search, population diversity becomes low and premature convergence may occur. We therefore extend dynNP-DE with an OBL scheme to overcome this shortcoming and improve optimization performance. By combining the EOBL scheme with dynNP-DE, population diversity is supplemented because not only the information of the individuals but also their opposition information can be utilized. We measured the optimization performance of the proposed algorithm on the CEC 2005 benchmark problems and on breast cancer detection, a research field that has recently attracted considerable attention. The results verify that the proposed algorithm finds better solutions than five state-of-the-art DE algorithms.
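The abstract above describes elite opposition-based learning only at a high level, so the following is a minimal sketch of the general EOBL idea in Python. It assumes the common opposite-point construction x_opp = k*(lo + hi) - x taken over dynamic bounds spanned by the elite individuals; the elite fraction, the repair rule, and all names are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def elite_opposition(population, fitness, bounds, elite_frac=0.2, rng=None):
    """Generate opposite candidates from the elite part of a population.

    population : (N, D) array of candidate solutions
    fitness    : (N,) array of objective values (minimization assumed)
    bounds     : (D, 2) array of [lower, upper] search bounds
    elite_frac : fraction of the population treated as elite (assumed value)
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = population.shape
    n_elite = max(1, int(elite_frac * n))

    # Elite individuals: the best-ranked fraction of the population.
    elite = population[np.argsort(fitness)[:n_elite]]

    # Dynamic bounds spanned by the elites in each dimension.
    lo, hi = elite.min(axis=0), elite.max(axis=0)

    # Elite opposite points: x_opp = k * (lo + hi) - x, with random k in [0, 1).
    k = rng.random((n, 1))
    opposite = k * (lo + hi) - population

    # Repair components that leave the original search range by resampling them.
    out_of_range = (opposite < bounds[:, 0]) | (opposite > bounds[:, 1])
    resampled = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n, d))
    return np.where(out_of_range, resampled, opposite)
```

In an opposition-augmented DE loop, such opposite candidates are typically pooled with the current population and the best individuals are kept, which is how opposition information can supplement the diversity that a shrinking population loses.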
EN
Particle swarm optimization (PSO) is a population-based stochastic optimization technique that can be applied to solve optimization problems. However, PSO has some shortcomings, such as easily becoming trapped in local optima and slow convergence. This paper presents the simple butterfly particle swarm optimization algorithm with a fitness-based adaptive inertia weight and an opposition-based learning average elite strategy (SBPSO) to accelerate convergence and escape local optima. SBPSO retains the advantages of the simple butterfly particle swarm optimizer, which increase the probability of finding the global optimum during the search. Moreover, SBPSO benefits from the simple particle swarm (sPSO) to accelerate convergence. Furthermore, SBPSO adopts the opposition-based learning average elite to enhance the diversity of the particles and thus escape local optima. Additionally, SBPSO generates a fitness-based adaptive inertia weight ω that adapts to the evolution process. Finally, SBPSO introduces a random mutation of the particle position to enhance population diversity when a position goes out of range. Experiments were conducted on eleven benchmark optimization functions. The results demonstrate that SBPSO outperforms five other recently proposed PSO variants in finding the global optimum and in convergence speed.
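As with the previous entry, the abstract does not give the SBPSO update rules, so the sketch below only illustrates a generic PSO step with one plausible fitness-based adaptive inertia weight and random re-initialization of out-of-range positions. The linear weight scaling, the constants, and the function names are assumptions for illustration, not the SBPSO definitions.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, fitness, bounds,
             w_min=0.4, w_max=0.9, c1=2.0, c2=2.0, rng=None):
    """One PSO velocity/position update with a fitness-based inertia weight.

    x, v    : (N, D) current positions and velocities
    pbest   : (N, D) personal best positions
    gbest   : (D,)   global best position
    fitness : (N,)   current objective values (minimization assumed)
    bounds  : (D, 2) array of [lower, upper] search bounds
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = x.shape

    # Normalize fitness to [0, 1]: 0 for the best particle, 1 for the worst.
    f = fitness - fitness.min()
    f = f / f.max() if f.max() > 0 else np.zeros_like(f)

    # Worse particles get a larger inertia weight (explore), better ones a
    # smaller weight (exploit). This linear scaling is an illustrative choice.
    w = (w_min + (w_max - w_min) * f)[:, None]

    r1, r2 = rng.random((n, d)), rng.random((n, d))
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x_new = x + v_new

    # Randomly re-initialize components that leave the search range, in the
    # spirit of the abstract's random mutation of out-of-range positions.
    out_of_range = (x_new < bounds[:, 0]) | (x_new > bounds[:, 1])
    x_new = np.where(out_of_range,
                     rng.uniform(bounds[:, 0], bounds[:, 1], size=(n, d)),
                     x_new)
    return x_new, v_new
```

Tying the inertia weight to each particle's relative fitness is one common way such adaptive schemes are motivated: poorly ranked particles keep exploring while well-ranked ones refine their neighborhood.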