A class of minimax problems is considered. We approach it with techniques of quasiconvex optimization, a framework that covers many important nonsmooth and relaxed convex problems and has been intensively developed. Observing that, although there have been many contributions to various themes of minimax problems, surprisingly few concern optimality conditions, the most traditional and developed topic in optimization, we establish both necessary and sufficient conditions for solutions and for unique solutions. A main feature of this work is that the functions involved are relaxed quasiconvex in the sense that their sublevel sets need to be convex only at the point under consideration. We use star subdifferentials, which are slightly larger than other subdifferentials applied in many existing results for minimization problems; the latter may be empty or too small in various situations. Hence, when applied to the special case of minimization problems, our results may be more suitable. Many examples are provided to illustrate applications of the results and to discuss the imposed assumptions.
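To make the quasiconvexity notion concrete: a function is quasiconvex exactly when every sublevel set {x : f(x) <= a} is convex, which in one dimension means each sublevel set is an interval. The following minimal numerical sketch (not from the paper; the function names and grid-based check are illustrative assumptions) tests this property on a sample grid.

```python
import numpy as np

def is_quasiconvex_on_grid(f, xs, levels):
    """Numerically check that each sublevel set {x : f(x) <= a} is an
    interval (i.e. convex in 1-D) on a finite grid of sample points.
    This is only a sanity check on the grid, not a proof."""
    for a in levels:
        inside = f(xs) <= a
        idx = np.flatnonzero(inside)
        # A convex (interval) sublevel set must hit a contiguous run of grid indices.
        if idx.size and not np.array_equal(idx, np.arange(idx[0], idx[-1] + 1)):
            return False
    return True

xs = np.linspace(-2.0, 2.0, 401)

# sqrt(|x|) is quasiconvex (every sublevel set is an interval) but not convex.
print(is_quasiconvex_on_grid(lambda x: np.sqrt(np.abs(x)), xs, [0.5, 1.0, 1.5]))  # True

# -x^2 is not quasiconvex: the sublevel set for a = -1 splits into two intervals.
print(is_quasiconvex_on_grid(lambda x: -x**2, xs, [-1.0]))  # False
```

The first example shows why quasiconvexity is a genuine relaxation of convexity: sqrt(|x|) fails convexity yet all its sublevel sets are convex. The paper's "relaxed quasiconvex" notion weakens this further, requiring convexity of sublevel sets only at the point under consideration.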
This paper studies the convergence of the dual algorithm for solving minimax problems proposed by Zhang and Tang (1997), which is based on a penalty function of Bertsekas (1982). It is proved that the dual algorithm is locally convergent, with a linear convergence rate, under commonly used assumptions. Numerical results are presented to demonstrate the effectiveness of the algorithm.