The Particle Swarm Optimiser (PSO) is a population-based stochastic optimisation algorithm that has empirically been shown to be efficient and robust. This paper proves that the original PSO does not have guaranteed convergence to a local optimum. A flaw in the original PSO is identified which causes stagnation of the swarm. Correcting this flaw results in a PSO algorithm with guaranteed convergence to a local minimum. Further extensions with provable global convergence are also described. Experimental results elucidate the behaviour of the modified PSO, as well as of PSO variations with global convergence.
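As an illustration, the following is a minimal Python sketch in the spirit of the guaranteed-convergence modification: every particle follows the standard PSO update, but the global-best particle is instead repositioned to sample within an adaptive radius rho around the swarm's best position, so the swarm cannot stagnate with zero velocity at an arbitrary point. The parameter values, the radius adaptation thresholds, and the use of the updated velocity in the global-best step are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gcpso(f, dim, n_particles=20, iters=1000, w=0.72, c1=1.49, c2=1.49,
          bounds=(-5.0, 5.0), seed=0):
    """Sketch of a guaranteed-convergence PSO (assumed parameter values)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # particle velocities
    pbest = x.copy()                              # personal best positions
    pbest_f = np.array([f(p) for p in x])         # personal best values
    g = int(np.argmin(pbest_f))                   # index of the global best
    rho, successes, failures = 1.0, 0, 0          # adaptive search radius

    for _ in range(iters):
        best_before = pbest_f[g]
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Standard PSO update for every particle ...
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (pbest[g] - x)
        x = x + v
        # ... except the global-best particle, which samples a random point
        # within radius rho of the swarm best (the convergence fix).
        x[g] = pbest[g] + w * v[g] + rho * (1.0 - 2.0 * rng.random(dim))

        for i in range(n_particles):
            fi = f(x[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i].copy(), fi
        g = int(np.argmin(pbest_f))

        # Expand rho after repeated successes, contract it after failures
        # (thresholds below are illustrative assumptions).
        if pbest_f[g] < best_before:
            successes, failures = successes + 1, 0
        else:
            successes, failures = 0, failures + 1
        if successes > 15:
            rho *= 2.0
        elif failures > 5:
            rho *= 0.5

    return pbest[g], pbest_f[g]

# Example: minimise the sphere function in 5 dimensions.
best_x, best_f = gcpso(lambda p: float(np.sum(p**2)), dim=5)
```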
Particle Swarm Optimisation (PSO) has proved to be a very useful algorithm for optimising unconstrained functions. This paper extends PSO to a Linear PSO (LPSO) to optimise functions subject to a set of equality constraints of the form Ax = b. By initialising particles within the constrained hyperplane, the LPSO is guaranteed to 'fly' only through this hyperplane. A criterion on the initial swarm establishes when the optimum solution can possibly be reached. The Linear PSO is then modified to the Converging Linear PSO (CLPSO), which is proved to always find at least a local minimum. Experimental results are given which compare the LPSO and CLPSO with Genocop II.
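The feasibility-preservation idea can be sketched as follows, under assumptions made for illustration (the null-space basis construction via SVD, the parameter values, and the use of one scalar random coefficient per term). Positions start on the hyperplane Ax = b and velocities in the null space of A; with scalar random coefficients each velocity update is a linear combination of null-space vectors, so A(x + v) = b holds at every step.

```python
import numpy as np

def lpso(f, A, b, n_particles=20, iters=500, w=0.72, c1=1.49, c2=1.49, seed=0):
    """Sketch of a Linear PSO: minimise f subject to A x = b."""
    rng = np.random.default_rng(seed)
    x0 = np.linalg.lstsq(A, b, rcond=None)[0]   # one feasible point, A x0 = b
    _, s, Vt = np.linalg.svd(A)                 # null-space basis via SVD
    rank = int(np.sum(s > 1e-10))
    N = Vt[rank:].T                             # columns of N span null(A)
    k = N.shape[1]

    # Feasible initial positions, velocities confined to null(A).
    x = x0 + rng.standard_normal((n_particles, k)) @ N.T
    v = 0.1 * rng.standard_normal((n_particles, k)) @ N.T
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = int(np.argmin(pbest_f))

    for _ in range(iters):
        for i in range(n_particles):
            # Scalar coefficients: differences of feasible points lie in
            # null(A), so v stays in null(A) and x stays on the hyperplane.
            r1, r2 = rng.random(), rng.random()
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (pbest[g] - x[i])
            x[i] = x[i] + v[i]
            fi = f(x[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i].copy(), fi
        g = int(np.argmin(pbest_f))

    return pbest[g], pbest_f[g]

# Example: minimise ||p - 0.3||^2 subject to sum(p) = 1 in 4 dimensions.
A = np.ones((1, 4))
b = np.array([1.0])
best_x, best_f = lpso(lambda p: float(np.sum((p - 0.3)**2)), A, b)
```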
Research on improving the performance of feedforward neural networks has concentrated mostly on the optimal setting of initial weights and learning parameters, sophisticated optimization techniques, architecture optimization, and adaptive activation functions. An alternative approach is presented in this paper, where the neural network dynamically selects training patterns from a candidate training set during training, using the network's currently attained knowledge about the target concept. Sensitivity analysis of the neural network output with respect to small input perturbations is used to quantify the informativeness of candidate patterns. Only the most informative patterns, which are those patterns closest to decision boundaries, are selected for training. Experimental results show a significant reduction in the training set size without negatively influencing generalization performance or convergence characteristics. This approach to selective learning is then compared to an alternative where informativeness is measured as the magnitude of the prediction error.
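A minimal sketch of one such sensitivity measure, assuming a one-hidden-layer sigmoid network (the architecture, the Jacobian norm, and the selection fraction below are illustrative assumptions, not necessarily the paper's exact measure): the informativeness of a candidate pattern is taken as the norm of the Jacobian of the network output with respect to the input, and the patterns with the largest sensitivity, i.e. those closest to the current decision boundaries, are selected for training.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def output_sensitivity(W1, b1, W2, b2, X):
    """Norm of d(output)/d(input) per pattern for a one-hidden-layer net."""
    H = sigmoid(X @ W1.T + b1)                  # hidden activations, (P, J)
    Y = sigmoid(H @ W2.T + b2)                  # network outputs, (P, K)
    sens = np.empty(len(X))
    for p in range(len(X)):
        # Jacobian dY/dx = diag(Y(1-Y)) W2 diag(H(1-H)) W1, shape (K, I).
        J = ((Y[p] * (1 - Y[p]))[:, None] * W2) @ ((H[p] * (1 - H[p]))[:, None] * W1)
        sens[p] = np.linalg.norm(J)
    return sens

def select_informative(W1, b1, W2, b2, X_candidate, frac=0.2):
    """Select the fraction of candidate patterns with the highest sensitivity."""
    sens = output_sensitivity(W1, b1, W2, b2, X_candidate)
    k = max(1, int(frac * len(X_candidate)))
    return np.argsort(sens)[-k:]                # indices of selected patterns

# Example with hypothetical random weights: 3 inputs, 5 hidden units, 2 outputs.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((5, 3)), np.zeros(5)
W2, b2 = rng.standard_normal((2, 5)), np.zeros(2)
X_candidate = rng.standard_normal((100, 3))
chosen = select_informative(W1, b1, W2, b2, X_candidate, frac=0.2)
```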