Parallel matrix factorization for low-rank tensor completion

Yangyang Xu, Ruru Hao, Wotao Yin, and Zhixun Su

Published in Inverse Problems and Imaging

Overview

This paper introduces a new method for recovering the missing entries of a low-rank tensor, a task known as the low-rank tensor completion (LRTC) problem.

A common approach to LRTC is to unfold the tensor into matrices and then apply nuclear-norm minimization to complete these matrices (and thus the tensor) [1]. This approach recovers well but is quite slow and cannot handle large-scale problems. To tackle this issue, instead of minimizing nuclear norms, we recover low-rank factorizations of the unfolding matrices. We call this approach "Tensor completion by parallel matrix factorization" (TMac). We found it to be faster and to achieve a much higher recovery rate. Its theoretical guarantee remains an open question.

Our method extends to more general low-rank tensor recovery problems.

Our formulation and method

We aim to recover a low-rank tensor $\mathcal{M}\in\mathbf{R}^{I_1\times\cdots\times I_N}$ from partial observations $\mathcal{B}=\mathcal{P}_\Omega(\mathcal{M})$, where $\Omega$ is the index set of observed entries and $\mathcal{P}_\Omega$ keeps the entries in $\Omega$ and zeros out the others. We apply low-rank matrix factorization to each mode unfolding of $\mathcal{M}$ by finding matrices $\mathbf{X}_n\in\mathbf{R}^{I_n\times r_n}$ and $\mathbf{Y}_n\in\mathbf{R}^{r_n\times\prod_{j\neq n}I_j}$ such that $\mathbf{M}_{(n)}\approx\mathbf{X}_n\mathbf{Y}_n$ for $n=1,\ldots,N$, where $r_n$ is the estimated rank, either fixed or adaptively updated. Introducing a common variable $\mathcal{Z}$ to relate these matrix factorizations, we solve the following model
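To fix notation, here is a minimal sketch, assuming NumPy, of the two operators the formulation relies on: the mode-$n$ unfolding $\mathbf{M}_{(n)}$ and the sampling operator $\mathcal{P}_\Omega$. The unfolding convention below (axis $n$ moved to the front, remaining axes flattened in row-major order) is one common choice and is not prescribed by the paper; any fixed unfold/fold pair works.

```python
import numpy as np

def unfold(T, n):
    """Mode-n unfolding: move axis n to the front, then flatten the
    remaining axes, giving an I_n x (prod_{j != n} I_j) matrix."""
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def fold(M, n, shape):
    """Inverse of unfold: reshape back and move axis 0 to position n."""
    rest = [s for j, s in enumerate(shape) if j != n]
    return np.moveaxis(M.reshape([shape[n]] + rest), 0, n)

def P_Omega(T, mask):
    """Keep the entries where mask is True; zero out the others."""
    return np.where(mask, T, 0.0)
```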

$$\mathop{\mathrm{minimize}}_{\mathbf{X},\mathbf{Y},\mathcal{Z}}\ \sum_{n=1}^N\frac{\alpha_n}{2}\|\mathbf{X}_n\mathbf{Y}_n-\mathbf{Z}_{(n)}\|_F^2,\quad\mathrm{s.t.}\ \mathcal{P}_\Omega(\mathcal{Z})=\mathcal{B},$$

where $\mathbf{X}=(\mathbf{X}_1,\ldots,\mathbf{X}_N)$ and $\mathbf{Y}=(\mathbf{Y}_1,\ldots,\mathbf{Y}_N)$. In the model, $\alpha_n$, $n=1,\ldots,N$, are weights satisfying $\sum_n\alpha_n=1$. We solve the model by an alternating least squares method, sketched below.
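The following is a minimal sketch of the alternating least squares loop, reusing the unfold/fold/P_Omega helpers above; the names tmac_als and num_iters are illustrative, not from the paper. Each pass updates $\mathbf{X}_n$ and $\mathbf{Y}_n$ by least squares with $\mathcal{Z}$ fixed, then updates $\mathcal{Z}$ as the weighted average of the refolded products $\mathbf{X}_n\mathbf{Y}_n$, with the observed entries reset to the data $\mathcal{B}$.

```python
def tmac_als(B, mask, ranks, alphas, num_iters=500):
    """Sketch of TMac-style alternating least squares (fixed ranks)."""
    shape, N = B.shape, B.ndim
    rng = np.random.default_rng(0)
    Z = B.copy()  # feasible start: observed entries in place, zeros elsewhere
    X = [rng.standard_normal((shape[n], ranks[n])) for n in range(N)]
    Y = [rng.standard_normal((ranks[n], Z.size // shape[n])) for n in range(N)]
    for _ in range(num_iters):
        Z_new = np.zeros(shape)
        for n in range(N):
            Zn = unfold(Z, n)
            # Least-squares factor updates (pseudoinverse solutions via lstsq):
            # X_n minimizes ||X Y_n - Z_(n)||_F, Y_n minimizes ||X_n Y - Z_(n)||_F.
            X[n] = np.linalg.lstsq(Y[n].T, Zn.T, rcond=None)[0].T
            Y[n] = np.linalg.lstsq(X[n], Zn, rcond=None)[0]
            Z_new += alphas[n] * fold(X[n] @ Y[n], n, shape)
        # Z update: weighted combination on the unobserved entries,
        # data values enforced on Omega.
        Z = np.where(mask, B, Z_new)
    return Z
```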

Numerical results

Our model is jointly non-convex with respect to $\mathbf{X}$, $\mathbf{Y}$, and $\mathcal{Z}$. Although a global solution is not guaranteed, our numerical experiments demonstrate that the algorithm reliably recovers a wide variety of low-rank tensors, as illustrated by the phase transition plots below. In the plots, each target tensor is $\mathcal{M}=\mathcal{C}\times_1\mathbf{A}_1\times_2\mathbf{A}_2\times_3\mathbf{A}_3$, where $\mathcal{C}\in\mathbf{R}^{r\times r\times r}$ and $\mathbf{A}_n\in\mathbf{R}^{50\times r}$, $\forall n$, have Gaussian random entries.

(a) FaLRTC: the tensor completion method in [2].
(b) MatComp: first reshape the tensor into a matrix and then apply the matrix completion solver LMaFit in [3].
(c) TMac-fix: our method with $\alpha_n=\frac{1}{3}$ and $r_n$ fixed to $r$, $\forall n$.
(d) TMac-inc: our method with $\alpha_n=\frac{1}{3}$ and a rank-increasing strategy starting from $r_n=\mathrm{round}(0.75r)$, $\forall n$.
(e) TMac-dec: our method with $\alpha_n=\frac{1}{3}$ and a rank-decreasing strategy starting from $r_n=\mathrm{round}(1.25r)$, $\forall n$.

The results show that our method performs significantly better than the other two.
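For concreteness, the sketch below generates a random test tensor matching the setup above and completes it with the solver from the previous snippet. The function name random_tucker_tensor and the 30% sampling ratio are illustrative choices, not taken from the paper.

```python
def random_tucker_tensor(r, I=50, rng=None):
    """Build M = C x_1 A_1 x_2 A_2 x_3 A_3 with i.i.d. Gaussian entries,
    where C is r x r x r and each A_n is I x r."""
    rng = rng or np.random.default_rng()
    C = rng.standard_normal((r, r, r))
    A = [rng.standard_normal((I, r)) for _ in range(3)]
    # Contract each mode of the core with the corresponding factor matrix.
    return np.einsum('abc,ia,jb,kc->ijk', C, A[0], A[1], A[2])

# Example: rank-10 target, 30% of entries observed (illustrative values).
M = random_tucker_tensor(r=10)
mask = np.random.default_rng(1).random(M.shape) < 0.3
B = P_Omega(M, mask)
Z = tmac_als(B, mask, ranks=[10, 10, 10], alphas=[1/3, 1/3, 1/3])
print(np.linalg.norm(Z - M) / np.linalg.norm(M))  # relative recovery error
```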


[1] S. Gandy, B. Recht, and I. Yamada, Tensor completion and low-n-rank tensor recovery via convex optimization, Inverse Problems, 27(2011), p. 025010.

[2] J. Liu, P. Musialski, P. Wonka, and J. Ye, Tensor completion for estimating missing values in visual data, IEEE Transactions on Pattern Analysis and Machine Intelligence, (2013), pp. 208-220.

[3] Z. Wen, W. Yin, and Y. Zhang, Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm, Mathematical Programming Computation, (2012), pp. 1-29.

Citation

Y. Xu, R. Hao, W. Yin, and Z. Su, Parallel matrix factorization for low-rank tensor completion, Inverse Problems and Imaging, 9(2), 601-624, 2015. DOI: 10.3934/ipi.2015.9.601

