Article title

An Improved Alternating Direction Method of Multipliers for Matrix Completion

Identifiers
Title variants
Publication languages
EN
Abstracts
EN
Matrix completion is widely used in information-science fields such as machine learning and image processing. The alternating direction method of multipliers (ADMM) has become an extremely popular approach for solving this problem because it exploits the separable structure of the objective function. However, its subproblems can be computationally demanding. To improve computational efficiency on large-scale matrix completion problems, this paper proposes an improved ADMM based on a convex combination technique. Under certain assumptions, the global convergence of the new algorithm is proved. Finally, we demonstrate the performance of the proposed algorithm via numerical experiments.
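For orientation, the sketch below shows a generic, unaccelerated ADMM iteration for nuclear-norm matrix completion, i.e. the classical baseline that work of this kind improves on; it is not the paper's convex-combination variant. The function names, the penalty parameter beta, and the fixed iteration count are illustrative assumptions.

import numpy as np

def svt(Y, tau):
    # Singular value thresholding: proximal operator of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def admm_matrix_completion(M, Omega, beta=1.0, n_iter=200):
    # M     : array with observed values (entries outside Omega are ignored)
    # Omega : boolean array, True where an entry of M is observed
    # beta  : ADMM penalty parameter (illustrative default)
    X = np.zeros_like(M, dtype=float)   # low-rank estimate
    Y = np.zeros_like(M, dtype=float)   # auxiliary variable kept feasible on Omega
    L = np.zeros_like(M, dtype=float)   # scaled Lagrange multiplier
    for _ in range(n_iter):
        X = svt(Y - L, 1.0 / beta)      # X-subproblem: SVT (the costly SVD step)
        Y = X + L
        Y[Omega] = M[Omega]             # Y-subproblem: project onto observed entries
        L = L + (X - Y)                 # multiplier (dual) update
    return X

# Example: complete a random rank-3 matrix from roughly half of its entries.
rng = np.random.default_rng(0)
M_true = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 50))
Omega = rng.random(M_true.shape) < 0.5
X_hat = admm_matrix_completion(M_true, Omega, beta=1.0, n_iter=300)

The SVD inside the SVT step dominates the per-iteration cost of this baseline, which is the kind of subproblem expense the abstract refers to.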
Year
Pages
49--62
Physical description
Bibliography: 24 items, figures, tables
Authors
author
  • College of Mathematics and Statistics, Taiyuan Normal University, Shanxi Key Laboratory for Intelligent Optimization Computing and Blockchain Technology, China
author
  • College of Mathematics and Statistics, Taiyuan Normal University, Shanxi Key Laboratory for Intelligent Optimization Computing and Blockchain Technology, China
author
  • College of Mathematics and Statistics, Taiyuan Normal University, Shanxi Key Laboratory for Intelligent Optimization Computing and Blockchain Technology, China
Bibliography
  • [1] Argyriou A., Evgeniou T., Pontil M., Multi-task feature learning, Advances in Neural Information Processing Systems, 19, 2007, 41-48.
  • [2] Cai J. F., Candès E. J., Shen Z., A singular value thresholding algorithm for matrix completion, SIAM Journal on Optimization, 20, 4, 2010, 1956-1982.
  • [3] Candès E. J., Tao T., The power of convex relaxation: near-optimal matrix completion, IEEE Transactions on Information Theory, 56, 5, 2010, 2053-2080.
  • [4] Chen C., Chan R. H., Ma S., et al, Inertial proximal ADMM for linearly constrained separable convex optimization, SIAM Journal on Imaging Sciences, 8, 4, 2015, 2239-2267.
  • [5] Chen C., He B., Yuan X., Matrix completion via an alternating direction method, IMA Journal of Numerical Analysis, 32, 1, 2012, 227-245.
  • [6] Eckstein J., Some saddle-function splitting methods for convex programming, Optimization Methods and Software, 4, 1, 1994, 75-83.
  • [7] Fazel M., Matrix rank minimization with applications, PhD Thesis, Stanford University, 2002.
  • [8] Fazel M., Hindi H., Boyd S. P., Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices, Proceedings of the 2003 American Control Conference, 2003.
  • [9] Fazel M., Pong T. K., Sun D., et al, Hankel matrix rank minimization with applications to system identification and realization, SIAM Journal on Matrix Analysis and Applications, 34, 3, 2013, 946-977.
  • [10] Gabay D., Mercier B., A dual algorithm for the solution of nonlinear variational problems via finite element approximation, Computers & Mathematics with Applications, 2, 1, 1976, 17-40.
  • [11] Glowinski R., Numerical methods for nonlinear variational problems, Springer, New York, 1984.
  • [12] Glowinski R., Le Tallec P., Augmented Lagrangian and operator-splitting methods in nonlinear mechanics, Society for Industrial and Applied Mathematics, 1989.
  • [13] Hale E. T., Yin W., Zhang Y., Fixed-point continuation for l1-minimization: methodology and convergence, SIAM Journal on Optimization, 19, 3, 2008, 1107-1130.
  • [14] He B., Ma F., Yuan X., Optimally linearizing the alternating direction method of multipliers for convex programming, Computational Optimization and Applications, 75, 2020, 361-388.
  • [15] Lin Z., Chen M., Ma Y., The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices, arXiv:1009.5055, 2010.
  • [16] Ma S., Goldfarb D., Chen L., Fixed point and Bregman iterative methods for matrix rank minimization, Mathematical Programming, 128, 2011, 321-353.
  • [17] Malitsky Y., Golden ratio algorithms for variational inequalities, Mathematical Programming, 184, 2020, 383-410.
  • [18] Ouyang Y., Chen Y., Lan G., et al, An accelerated linearized alternating direction method of multipliers, SIAM Journal on Imaging Sciences, 8, 1, 2015, 644-681.
  • [19] Shefi R., Teboulle M., Rate of convergence analysis of decomposition methods based on the proximal method of multipliers for convex minimization, SIAM Journal on Optimization, 24, 1, 2014, 269-297.
  • [20] Spyromitros-Xioufis E., Tsoumakas G., Groves W., et al, Multi-target regression via input space expansion: treating targets as inputs, Machine Learning, 104, 2016, 55-98.
  • [21] Toh K. C., Yun S., An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems, Pacific Journal of Optimization, 6, 3, 2010, 615-640.
  • [22] Tomasi C., Kanade T., Shape and motion from image streams: a factorization method, International Journal of Computer Vision, 9, 2, 1992, 137-154.
  • [23] Wang X., Yuan X., The linearized alternating direction method of multipliers for Dantzig selector, SIAM Journal on Scientific Computing, 34, 5, 2012, A2792-A2811.
  • [24] Xue H. Y., Zhang S., Cai D., Depth image inpainting: improving low rank matrix completion with low gradient regularization, IEEE Transactions on Image Processing, 26, 9, 2017, 4311-4320.
Document type
YADDA identifier
bwmeta1.element.baztech-fc98b9cc-24c8-48da-b178-2a2d81879526