In this paper we develop a general convergence theory for nonmonotone line searches in optimization algorithms. The advantage of this theory is that it applies to a variety of step size rules published over the past decades. This gives more insight into the structure of these step size rules and points to several relaxations of their hypotheses. Furthermore, the theory can be used in the framework of discretized infinite-dimensional optimization problems, such as optimal control problems, and ties the discretized problems to the original problem formulation.
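To make the setting concrete, a minimal sketch of one widely used nonmonotone step size rule follows: a backtracking Armijo search that compares the trial value against the maximum of the last few objective values rather than the current one (the max-type rule in the style of Grippo, Lampariello, and Lucidi). This is an illustrative example of the class of rules the theory covers, not the paper's own construction; the function names, the memory length `M = 5`, and the test problem are assumptions made for the sketch.

```python
from collections import deque

def nonmonotone_armijo(f, grad_f, x, d, history, c=1e-4, beta=0.5, t0=1.0):
    """Backtracking step size for a nonmonotone (max-type) Armijo condition:
    accept t once f(x + t*d) <= max(recent f-values) + c * t * grad(x).d.
    `history` holds the recent objective values used as the reference."""
    g_dot_d = sum(gi * di for gi, di in zip(grad_f(x), d))
    f_ref = max(history)  # nonmonotone reference value (max over the memory)
    t = t0
    while f([xi + t * di for xi, di in zip(x, d)]) > f_ref + c * t * g_dot_d:
        t *= beta  # shrink the step until the relaxed Armijo test holds
    return t

# Illustrative usage: steepest descent on f(x) = x1^2 + 10*x2^2
# with a memory of the last M = 5 objective values (both assumptions).
f = lambda x: x[0] ** 2 + 10.0 * x[1] ** 2
grad_f = lambda x: [2.0 * x[0], 20.0 * x[1]]
x = [1.0, 1.0]
history = deque([f(x)], maxlen=5)
for _ in range(50):
    d = [-gi for gi in grad_f(x)]  # steepest-descent direction
    t = nonmonotone_armijo(f, grad_f, x, d, history)
    x = [xi + t * di for xi, di in zip(x, d)]
    history.append(f(x))  # extend the nonmonotone memory
print(f(x))  # report the final objective value
```

With memory length 1 the rule reduces to the classical monotone Armijo condition; larger memories allow occasional increases of the objective, which is exactly the nonmonotonicity the convergence theory has to accommodate.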