Results found: 1

Search results
Searched for:
in keywords: fitness predator optimizer
Abstract (EN):
In our previous work, the Fitness Predator Optimizer (FPO) was proposed to avoid premature convergence on multimodal problems. In FPO, all particles are treated as predators, and only the competitive, powerful predators selected as elites receive the limited opportunity to update. Generating elites by roulette wheel selection increases individual independence and reduces rapid social collaboration. Experimental results show that FPO achieves strong global exploration while avoiding local minima. However, on higher-dimensional multimodal problems, slow convergence becomes FPO's bottleneck, so a dynamic team model, named DFPO, was incorporated into FPO to accelerate the early convergence rate. In this paper, DFPO is described more precisely and its variant, DFPO-r, is proposed to improve DFPO's performance. DFPO-r introduces a team size selection method to increase population diversity, one of the most important factors determining the performance of an optimization algorithm. A higher degree of population diversity helps DFPO-r alleviate premature convergence: the selection strategy chooses the team size that yields the higher degree of population diversity. Ten well-known multimodal benchmark functions are used to evaluate the solution capability of DFPO and DFPO-r. Six of these benchmark functions are extended to 100 dimensions to compare DFPO and DFPO-r with LBest PSO, Dolphin Partner Optimization, and FPO. Experimental results show that both DFPO and DFPO-r deliver the desired performance, and that DFPO-r is more robust than DFPO.
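The abstract names two mechanisms that lend themselves to a short code sketch: roulette wheel selection of elite predators and a team size chosen for the higher degree of population diversity. The Python sketch below is purely illustrative and is not the authors' implementation; the function names, the centroid-distance diversity measure, and the random-partition team grouping are all assumptions made for this example.

import numpy as np

def roulette_wheel_select(fitness, n_elites, rng=None):
    # Elite predators are drawn with probability proportional to fitness.
    # Assumes a maximization problem with positive fitness values; a
    # minimization problem would first invert or shift the scores.
    rng = rng or np.random.default_rng()
    probs = fitness / fitness.sum()
    return rng.choice(len(fitness), size=n_elites, replace=False, p=probs)

def population_diversity(positions):
    # One common diversity measure: mean Euclidean distance of the
    # particles from the swarm centroid (the paper's exact metric may differ).
    centroid = positions.mean(axis=0)
    return np.linalg.norm(positions - centroid, axis=1).mean()

def choose_team_size(positions, candidate_sizes, rng=None):
    # Hypothetical DFPO-r-style heuristic: partition the swarm into random
    # teams of each candidate size and keep the size whose grouping shows
    # the highest average per-team diversity.
    rng = rng or np.random.default_rng()
    best_size, best_div = None, -np.inf
    for size in candidate_sizes:
        idx = rng.permutation(len(positions))
        teams = np.array_split(idx, max(1, len(positions) // size))
        div = np.mean([population_diversity(positions[t]) for t in teams])
        if div > best_div:
            best_size, best_div = size, div
    return best_size

# Example: 30 particles on a 100-dimensional problem.
rng = np.random.default_rng(0)
swarm = rng.uniform(-5.0, 5.0, size=(30, 100))
fitness = rng.uniform(0.1, 1.0, size=30)   # placeholder fitness scores
elites = roulette_wheel_select(fitness, n_elites=5, rng=rng)
team_size = choose_team_size(swarm, candidate_sizes=[3, 5, 6], rng=rng)
print(elites, team_size)

In this sketch, picking the team size that maximizes average per-team diversity mirrors the abstract's stated strategy of selecting team size according to the higher degree of population diversity; the actual DFPO-r selection rule should be taken from the paper itself.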