Article

A Unified Proximity Algorithm with Adaptive Penalty for Nuclear Norm Minimization

Wenyu Hu, Weidong Zheng and Gaohang Yu
1 College of Mathematics and Computer Science, Gannan Normal University, Ganzhou 341000, China
2 Department of Mathematics, School of Science, Hangzhou Dianzi University, Hangzhou 310018, China
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(10), 1277; https://doi.org/10.3390/sym11101277
Submission received: 9 September 2019 / Revised: 4 October 2019 / Accepted: 8 October 2019 / Published: 11 October 2019
(This article belongs to the Special Issue Fixed Point Theory and Computational Analysis with Applications)

Abstract
The nuclear norm minimization (NNM) problem is to recover a matrix that minimizes the sum of its singular values and satisfies some linear constraints simultaneously. The alternating direction method (ADM) has been used to solve this problem recently. However, the subproblems in ADM are usually not easily solvable when the linear mappings in the constraints are not identities. In this paper, we propose a proximity algorithm with adaptive penalty (PA-AP). First, we formulate the nuclear norm minimization problems into a unified model. To solve this model, we improve the ADM by adding a proximal term to the subproblems that are difficult to solve. An adaptive tactic on the proximity parameters is also put forward for acceleration. By employing subdifferentials and proximity operators, an equivalent fixed-point equation system is constructed, and we use this system to further prove the convergence of the proposed algorithm under certain conditions, e.g., the precondition matrix is symmetric positive definite. Finally, experimental results and comparisons with state-of-the-art methods, e.g., ADM, IADM-CG and IADM-BB, show that the proposed algorithm is effective.


1. Introduction

The rank minimization (RM) problem aims to recover an unknown low-rank matrix from very limited information. It has rapidly gained increasing attention in recent years, since it has a range of applications in many computer vision and machine learning areas, such as collaborative filtering [1], subspace segmentation [2], non-rigid structure from motion [3] and image inpainting [4]. This paper deals with the following rank minimization problem:
\[
\min_{X \in \mathbb{R}^{m \times n}} \operatorname{rank}(X) \quad \text{s.t.} \quad \mathcal{A}X = b,
\]
where A : R^{m×n} → R^p is a linear map and the vector b ∈ R^p is known. The matrix completion (MC) problem is a special case of the RM problem, where A is a sampling operator of the form A X = X_Ω, Ω ⊆ {1, 2, …, m} × {1, 2, …, n} is an index subset, and X_Ω is the vector formed by the entries of X with indices in Ω.
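As a concrete illustration (our own MATLAB sketch, not part of the paper; all variable names are illustrative), the sampling operator and its adjoint for matrix completion can be realized as follows:

% Illustrative sketch of the sampling operator A(X) = X_Omega and its adjoint
m = 5; n = 4; p = 6;
Omega = randsample(m*n, p);   % random index subset of {1,...,m*n}
X = randn(m, n);
b = X(Omega);                 % forward map: keep the observed entries as a vector
Y = zeros(m, n);
Y(Omega) = b;                 % adjoint: place the observations back, zeros elsewhere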
Although problem (1) is simple in form, directly solving it is NP-hard due to the discrete nature of the rank function. One popular way is to replace the rank function with the nuclear norm, the sum of the singular values of a matrix. This technique is based on the fact that nuclear norm minimization (NNM) is the tightest convex relaxation of the rank minimization problem [5]. The resulting problem is given by
\[
\min_{X \in \mathbb{R}^{m \times n}} \|X\|_* \quad \text{s.t.} \quad \mathcal{A}X = b,
\]
where ∥X∥_* := Σ_{i=1}^{r} σ_i(X) denotes the nuclear norm. It has been shown that recovering a low-rank matrix can be achieved by solving (2) [1,6].
In practical applications, the observed data may be corrupted with noise, namely b = A X + e, where e contains measurement errors that follow a certain normal distribution. In order to recover the low-rank matrix robustly, problem (2) should be modified to the following inequality-constrained problem:
\[
\min_{X \in \mathbb{R}^{m \times n}} \|X\|_* \quad \text{s.t.} \quad \|\mathcal{A}X - b\|_2 \le \delta,
\]
where ∥·∥_2 is the ℓ_2 norm of a vector and the constant δ ≥ 0 is the noise level. When δ = 0, problem (3) reduces to the noiseless case (2).
Alternatively, problems (2) and (3) can be rewritten as the nuclear norm regularized least-square (NNRLS) under some conditions:
\[
\min_{X \in \mathbb{R}^{m \times n}} \|X\|_* + \frac{\gamma}{2}\|\mathcal{A}X - b\|_2^2,
\]
where γ > 0 is a given parameter.
The studies on the nuclear norm minimization problem mainly follow two directions. The first is enhancing the precision of the low-rank approximation by replacing the nuclear norm with a non-convex regularizer, for instance, the Schatten p-norm [7,8], the truncated nuclear norm [4,9], the log or fraction function based norm [10,11], and so on. The second is improving the efficiency of solving problems (2), (3) and (4) and their variants. For instance, the authors in [12] treated problem (2) as a standard linear semidefinite programming (SDP) problem and proposed a solution algorithm based on the dual problem. However, since the SDP solver uses second-order information, as the size of the matrix increases, the memory required to compute the descent direction quickly becomes too large. Therefore, algorithms that use only first-order information have been developed, such as the singular value thresholding (SVT) algorithm [13], the fixed point continuation algorithm (FPCA) [14], the accelerated proximal gradient Lagrangian (APGL) method [15], the proximal point algorithm based on an indicator function (PPA-IF) [16], the augmented Lagrange multiplier (ALM) algorithm [17] and the alternating direction methods (ADM) [18,19,20,21].
In particular, Chen et al. [18] applied the ADM to solve the nuclear norm based matrix completion problem. Due to the simplicity of the linear mapping A, i.e., A A* = I, all of the ADM subproblems of the matrix completion problem can be solved exactly by an explicit formula; see [18] for details. Here and hereafter, A* and I represent the adjoint of A and the identity operator, respectively. However, for a generic A with A A* ≠ I, some of the resulting subproblems no longer have closed-form solutions. Thus, the efficiency of the ADM depends heavily on how these harder subproblems are solved.
To overcome this difficulty, a common strategy is to introduce new auxiliary variables; e.g., in [19], one auxiliary variable was introduced for solving (2), while two auxiliary variables were introduced for (3). However, with more variables and more constraints, more memory is required and the convergence of ADM also becomes slower. Moreover, to update the auxiliary variables, whose subproblems are least-squares problems, expensive matrix inversions are often necessary. Even worse, the convergence of ADM with more than two variables is not guaranteed. To mitigate these problems, Yang and Yuan [21] presented a linearized ADM to solve the NNRLS (4) as well as problems (2) and (3), where each subproblem admits an explicit solution. Instead of the linearization technique, Xiao and Jin [19] solved one least-squares subproblem iteratively by the Barzilai–Borwein (BB) gradient method [22]. Unlike [19], Jin et al. [20] used the linear conjugate gradient (CG) algorithm rather than BB to solve the subproblem.
In this paper, we further investigate the efficiency of ADM in solving the nuclear norm minimization problems. We first reformulate the problems (2), (3) and (4) into a unified form as follows:
\[
\min_{X \in \mathbb{R}^{m \times n}} f(X) + g(\mathcal{A}X),
\]
where f : R^{m×n} → R and g : R^p → R. In this paper, we always fix f(·) = ∥·∥_*. When considering problems (2) and (3), we choose g(·) = X_{B_δ}(· − b), where X_{B_δ}(·) denotes the indicator function of B_δ := { u ∈ R^p : ∥u∥_2 ≤ δ }, i.e.,
\[
\mathcal{X}_{B_\delta}(x) :=
\begin{cases}
0, & \text{if } x \in B_\delta,\\
+\infty, & \text{otherwise}.
\end{cases}
\]
When considering problem (4), we choose g(·) = (γ/2)∥· − b∥_2^2. As a result, for a general linear mapping A, we only need to solve a problem whose objective function is a sum of two convex functions, one of which contains an affine transformation.
Motivated by the ADM algorithms above, we then present a unified proximity algorithm with adaptive penalty (PA-AP) to solve (5). In particular, we employ the proximity idea to deal with the difficulty encountered by the present ADM, by adding a proximity term to one of the subproblems. We call the proposed algorithm a proximity algorithm because it can be rewritten as a fixed-point equation system of proximity operators of f and g. By analyzing the fixed-point equations and applying the "Condition-M" [23], the convergence of the algorithm is proved under some assumptions. Furthermore, to improve the efficiency, an adaptive tactic on the proximity parameters is put forward. This paper is closely related to the works [23,24,25,26]; however, it is specifically motivated by improving ADM for the nuclear norm minimization problem with linear affine constraints.
The organization of this paper is as follows. In Section 2, a review of ADM and its application to NNM is provided. In addition, the properties of subdifferentials and proximity operators are introduced. To improve ADM, a proximity algorithm with adaptive penalty is proposed, and the convergence of the proposed algorithm is established in Section 3. Section 4 demonstrates the performance and effectiveness of the algorithm through numerical experiments. Finally, we conclude in Section 5.

2. Preliminaries

In this section, we give a brief review of ADM and its applications to the NNM problem (2) developed in [19,20]. In addition, some preliminaries on subdifferentials and proximity operators are given. Throughout this paper, linear maps are denoted by calligraphic letters (e.g., A), capital letters represent matrices (e.g., A), and boldface lowercase letters represent vectors (e.g., x).
We begin by introducing the ADM. The basic idea of ADM goes back to the work of Gabay and Mercier [27]. ADM is designed to solve the separable convex minimization problem:
\[
\min_{x, y}\ \theta_1(x) + \theta_2(y) \quad \text{s.t.} \quad Ax + By = c,
\]
where θ_1 : R^s → R and θ_2 : R^t → R are convex functions, A ∈ R^{l×s}, B ∈ R^{l×t} and c ∈ R^l. The corresponding augmented Lagrangian function is
\[
\mathcal{L}(x, y, \lambda) = \theta_1(x) + \theta_2(y) - \langle \lambda, Ax + By - c \rangle + \frac{\beta}{2}\|Ax + By - c\|_2^2,
\]
where λ ∈ R^l is the Lagrangian multiplier and β > 0 is a penalty parameter. ADM minimizes (8) first with respect to x, then with respect to y, and finally updates λ, iteratively, i.e.,
\[
\begin{cases}
x^{k+1} = \arg\min_{x} \mathcal{L}(x, y^k, \lambda^k),\\
y^{k+1} = \arg\min_{y} \mathcal{L}(x^{k+1}, y, \lambda^k),\\
\lambda^{k+1} = \lambda^k - \beta(Ax^{k+1} + By^{k+1} - c).
\end{cases}
\]
The main advantage of ADM is that it makes use of the separable structure of the objective function θ_1(x) + θ_2(y).
To solve (2) based on ADM, the authors in [19,20] introduced an auxiliary variable Y, and equivalently transformed the original model into
\[
\min_{X, Y} \|X\|_* \quad \text{s.t.} \quad X - Y = 0,\ \ \mathcal{A}Y = b.
\]
The augmented Lagrangian function of (10) is
\[
\mathcal{L}(X, Y, Z, z) = \|X\|_* - \langle Z, X - Y \rangle + \frac{\mu}{2}\|X - Y\|_F^2 - \langle z, \mathcal{A}Y - b \rangle + \frac{\gamma}{2}\|\mathcal{A}Y - b\|_2^2,
\]
where Z ∈ R^{m×n} and z ∈ R^p are Lagrangian multipliers, μ, γ > 0 are penalty parameters and ⟨·,·⟩ denotes the Frobenius inner product, i.e., the matrices are treated like vectors. Following the idea of ADM, given (X^k, Y^k), the next pair (X^{k+1}, Y^{k+1}) is determined by alternately minimizing (11),
\[
\begin{cases}
X^{k+1} = \arg\min_{X} \mathcal{L}(X, Y^k, Z^k, z^k),\\
Y^{k+1} = \arg\min_{Y} \mathcal{L}(X^{k+1}, Y, Z^k, z^k),\\
Z^{k+1} = Z^k - \mu(X^{k+1} - Y^{k+1}),\\
z^{k+1} = z^k - \gamma(\mathcal{A}Y^{k+1} - b).
\end{cases}
\]
Firstly, X^{k+1} can be updated by
\[
X^{k+1} = \arg\min_{X} \|X\|_* - \langle Z^k, X - Y^k \rangle + \frac{\mu}{2}\|X - Y^k\|_F^2
        = \arg\min_{X} \|X\|_* + \frac{\mu}{2}\Big\|X - \Big(Y^k + \frac{1}{\mu}Z^k\Big)\Big\|_F^2,
\]
which in fact corresponds to evaluating the proximity operator of ∥·∥_*, i.e., prox_{(1/β)∥·∥_*}, which is defined below.
Secondly, given (X^{k+1}, Z^k, z^k), Y^{k+1} can be updated by
\[
\begin{aligned}
Y^{k+1} &= \arg\min_{Y} -\langle Z^k, X^{k+1} - Y \rangle + \frac{\mu}{2}\|X^{k+1} - Y\|_F^2 - \langle z^k, \mathcal{A}Y - b \rangle + \frac{\gamma}{2}\|\mathcal{A}Y - b\|_2^2\\
&= \arg\min_{Y} \frac{\mu}{2}\Big\|Y - \Big(X^{k+1} - \frac{1}{\mu}Z^k\Big)\Big\|_F^2 + \frac{\gamma}{2}\Big\|\mathcal{A}Y - \Big(b + \frac{1}{\gamma}z^k\Big)\Big\|_2^2,
\end{aligned}
\]
which is a least square subproblem. Its solution can be found by solving a linear equation:
\[
(\mu I + \gamma \mathcal{A}^*\mathcal{A})Y = \mu X^{k+1} - Z^k + \mathcal{A}^*(\gamma b + z^k).
\]
However, computing the matrix inverse (μI + γA*A)^{-1} is too costly to implement. Although [19,20] adopted inverse-free methods, i.e., BB and CG, to solve (14) iteratively, they are still inefficient, as will be shown in Section 4.
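For completeness, the following is a hedged MATLAB sketch (our own, in our reading of the CG-based approach of [20]) of how such a normal equation can be solved matrix-free by conjugate gradients. Here Aop and Atop apply A and A*, and Xk1, Zk, zk, mu, gamma are illustrative names for X^{k+1}, Z^k, z^k, μ, γ:

% Solve (mu*I + gamma*A'*A) Y = RHS without forming the matrix, via pcg
applyM = @(v) mu * v + gamma * reshape(Atop(Aop(reshape(v, m, n))), [], 1);
rhs = mu * Xk1 - Zk + Atop(gamma * b + zk);   % right-hand side as an m-by-n matrix
[yvec, ~] = pcg(applyM, rhs(:), 1e-8, 200);   % the operator is symmetric positive definite
Y = reshape(yvec, m, n);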
Next, we review the definitions of the subdifferential and the proximity operator, which play an important role in the algorithm and convergence analysis. The subdifferential of a convex function θ at a point x ∈ R^d is the set defined by
\[
\partial\theta(\cdot)(x) = \{\, y \in \mathbb{R}^d : \theta(z) \ge \theta(x) + \langle y, z - x \rangle,\ \forall z \in \mathbb{R}^d \,\}.
\]
The conjugate function of θ is denoted by θ * , which is defined by
\[
\theta^*(y) := \sup_{x}\,\{\langle x, y \rangle - \theta(x)\}.
\]
For x ∈ dom(θ) and y ∈ dom(θ*), it holds that
\[
y \in \partial\theta(\cdot)(x) \iff x \in \partial\theta^*(\cdot)(y),
\]
where dom ( · ) denotes the domain of a function.
For a given positive definite matrix H, the weighted inner product is defined by ⟨x, y⟩_H = ⟨Hx, y⟩. Furthermore, the proximity operator of θ at x with respect to H [23] is defined by
\[
\operatorname{prox}_{\theta(\cdot), H}(x) = \arg\min\Big\{\theta(u) + \frac{1}{2}\|u - x\|_H^2 : u \in \mathbb{R}^d\Big\}.
\]
If H = βI, then prox_{θ(·),H}(·) reduces to
\[
\operatorname{prox}_{\theta(\cdot), \beta I}(x) = \arg\min\Big\{\theta(u) + \frac{\beta}{2}\|u - x\|_2^2 : u \in \mathbb{R}^d\Big\},
\]
and prox_{(1/β)θ(·)}(·) is short for prox_{θ(·),βI}(·). A relation between subdifferentials and proximity operators is that
\[
y \in \partial\theta(\cdot)(x) \iff x = \operatorname{prox}_{\theta(\cdot)}(x + y),
\]
which is frequently used to construct fixed-point equations and prove convergence of the algorithm.
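A one-dimensional example may help fix ideas (our own illustration, not from the paper): for θ = |·| and β > 0, the relation above yields the familiar soft-thresholding operator,
\[
\operatorname{prox}_{\frac{1}{\beta}|\cdot|}(x) = \operatorname{sign}(x)\,\max\Big\{|x| - \frac{1}{\beta},\ 0\Big\},
\]
and the singular value shrinkage used in Section 3 is its matrix analogue, applied to the singular values of the argument.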

3. Proximity Algorithm with Adaptive Penalty

In Section 2, it is shown that the current ADM results in expensive matrix inverse computation when solving (2). Therefore, it is desirable to improve it. In this section, we propose a proximity algorithm with adaptive penalty (PA-AP) to solve the unified problem (5).

3.1. Proximity Algorithm

We derive our algorithm from ADM. First of all, we introduce an auxiliary variable y ∈ R^p and convert (5) to
\[
\min_{X, y}\ f(X) + g(y) \quad \text{s.t.} \quad \mathcal{A}X - y = 0.
\]
The augmented Lagrangian function of (19) is defined by
\[
\mathcal{L}(X, y) = f(X) + g(y) - \langle \lambda, \mathcal{A}X - y \rangle + \frac{\mu}{2}\|\mathcal{A}X - y\|_2^2,
\]
where λ ∈ R^p is the Lagrangian multiplier and μ > 0 is a penalty parameter. ADM first updates X by minimizing L(X, y) with y fixed and then updates y with X fixed at its latest value, until some convergence criteria are satisfied. After some simplification, we get
\[
\begin{cases}
X^{k+1} = \arg\min_{X} f(X) + \frac{\mu}{2}\big\|\mathcal{A}X - \big(y^k + \frac{1}{\mu}\lambda^k\big)\big\|_2^2,\\[2pt]
y^{k+1} = \arg\min_{y} g(y) + \frac{\mu}{2}\big\|y - \big(\mathcal{A}X^{k+1} - \frac{1}{\mu}\lambda^k\big)\big\|_2^2,\\[2pt]
\lambda^{k+1} = \lambda^k - \mu(\mathcal{A}X^{k+1} - y^{k+1}).
\end{cases}
\]
Note that the subproblem for X^{k+1} usually has no closed-form solution when A is not the identity. In order to design efficient algorithms, we add a proximity term to this subproblem. More precisely, we propose the following algorithm:
\[
\begin{cases}
X^{k+1} = \arg\min_{X} f(X) + \frac{\mu}{2}\big\|\mathcal{A}X - \big(y^k + \frac{1}{\mu}\lambda^k\big)\big\|_2^2 + \frac{1}{2}\|X - X^k\|_Q^2,\\[2pt]
y^{k+1} = \arg\min_{y} g(y) + \frac{\mu}{2}\big\|y - \big(\mathcal{A}X^{k+1} - \frac{1}{\mu}\lambda^k\big)\big\|_2^2,\\[2pt]
\lambda^{k+1} = \lambda^k - \mu(\mathcal{A}X^{k+1} - y^{k+1}),
\end{cases}
\]
where Q ∈ R^{m×m} is a symmetric positive definite matrix.
The next lemma shows that (22) is equivalent to a fixed-point equation of some proximity operators. The proof is similar to [25].
Lemma 1.
The problem (22) is equivalent to solving the following equations:
\[
\begin{cases}
z^{k+1} = \operatorname{prox}_{\mu g^*(\cdot)}(\mu\mathcal{A}X^k + z^k),\\[2pt]
X^{k+1} = \operatorname{prox}_{\tau f(\cdot)}\big(X^{k+1} - \tau(\mu\mathcal{A}^*\mathcal{A} + Q)(X^{k+1} - X^k) - \tau\mathcal{A}^*(2z^{k+1} - z^k)\big),
\end{cases}
\]
where τ > 0 is arbitrary.
Proof. 
By changing the order of iterations, the first-order optimality condition of (22) is
\[
\begin{cases}
0 \in \partial g(\cdot)(y^{k+1}) + \mu\big(y^{k+1} - \mathcal{A}X^k + \frac{1}{\mu}\lambda^k\big),\\[2pt]
\lambda^{k+1} = \lambda^k - \mu(\mathcal{A}X^k - y^{k+1}),\\[2pt]
0 \in \partial f(\cdot)(X^{k+1}) + \mu\mathcal{A}^*\big(\mathcal{A}X^{k+1} - y^{k+1} - \frac{1}{\mu}\lambda^{k+1}\big) + Q(X^{k+1} - X^k).
\end{cases}
\]
From the first line of (24) and (16), we obtain
\[
\mu y^{k+1} \in \mu\,\partial g^*(\cdot)\big(\mu(\mathcal{A}X^k - y^{k+1}) - \lambda^k\big).
\]
Denote z^k := −λ^k. Then,
\[
z^{k+1} = \mu\mathcal{A}X^k + z^k - \mu y^{k+1}.
\]
Using (18), it follows from (25) and (26) that
\[
z^{k+1} = \operatorname{prox}_{\mu g^*(\cdot)}(\mu\mathcal{A}X^k + z^k).
\]
By (26), we have
\[
\mu\mathcal{A}X^k + z^k = \mu y^{k+1} + \mu\Big(\mathcal{A}X^k - y^{k+1} - \frac{1}{\mu}\lambda^k\Big) = \mu y^{k+1} + z^{k+1},
\]
and thus
\[
\mu\mathcal{A}^*\Big(\mathcal{A}X^{k+1} - y^{k+1} - \frac{1}{\mu}\lambda^{k+1}\Big) + Q(X^{k+1} - X^k) = (\mu\mathcal{A}^*\mathcal{A} + Q)(X^{k+1} - X^k) + \mathcal{A}^*(2z^{k+1} - z^k).
\]
From (28) and the third line of (24), given τ > 0 , we have
\[
-\tau(\mu\mathcal{A}^*\mathcal{A} + Q)(X^{k+1} - X^k) - \tau\mathcal{A}^*(2z^{k+1} - z^k) \in \tau\,\partial f(\cdot)(X^{k+1}),
\]
which yields
\[
X^{k+1} = \operatorname{prox}_{\tau f(\cdot)}\big(X^{k+1} - \tau(\mu\mathcal{A}^*\mathcal{A} + Q)(X^{k+1} - X^k) - \tau\mathcal{A}^*(2z^{k+1} - z^k)\big).
\]
Therefore, the results in (23) are achieved by combining (27) and (29). □
We now discuss the choice of Q. To make the first subproblem of (22) have a closed-form solution, in this paper we simply choose Q = μηI − μA*A. To make sure Q is positive definite, η must satisfy η > ∥A∥_2^2, where ∥·∥_2 is the spectral norm. By substituting Q into (22), we obtain
\[
\begin{cases}
X^{k+1} = \arg\min_{X} f(X) + \frac{\mu\eta}{2}\Big\|X - \Big(X^k + \frac{\mathcal{A}^*(\lambda^k - \mu(\mathcal{A}X^k - y^k))}{\mu\eta}\Big)\Big\|_F^2,\\[2pt]
y^{k+1} = \arg\min_{y} g(y) + \frac{\mu}{2}\big\|y - \big(\mathcal{A}X^{k+1} - \frac{1}{\mu}\lambda^k\big)\big\|_2^2,\\[2pt]
\lambda^{k+1} = \lambda^k - \mu(\mathcal{A}X^{k+1} - y^{k+1}).
\end{cases}
\]
The subproblems in (30) can be solved explicitly based on proximity operators. Specifically, we have
\[
X^{k+1} = \operatorname{prox}_{\frac{1}{\mu\eta}\|\cdot\|_*}\Big(X^k + \frac{\mathcal{A}^*(\lambda^k - \mu(\mathcal{A}X^k - y^k))}{\mu\eta}\Big) = U\max\Big\{\Sigma - \frac{1}{\mu\eta},\ 0\Big\}V^T,
\]
where UΣV^T is the singular value decomposition (SVD) of X^k + A*(λ^k − μ(AX^k − y^k))/(μη), U ∈ R^{m×m}, V ∈ R^{n×n}, and Σ ∈ R^{m×n} is a diagonal matrix containing the singular values.
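A minimal MATLAB sketch of this singular value shrinkage step follows (the function name and interface are ours; in the experiments a partial SVD via PROPACK replaces the full svd):

function X = nuclear_prox(W, tau)
% prox of tau*||.||_* at W: shrink the singular values of W by tau
[U, S, V] = svd(W, 'econ');
s = max(diag(S) - tau, 0);
X = U * diag(s) * V';
end

With this helper, the X-update above reads X = nuclear_prox(Xk + Astar_term/(mu*eta), 1/(mu*eta)), where Astar_term stands for A*(λ^k − μ(AX^k − y^k)).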
Moreover, y^{k+1} = prox_{(1/μ)g(·)}(A X^{k+1} − (1/μ)λ^k), which depends on the choice of g(·). If g(·) = X_{B_δ}(· − b), then
\[
\operatorname{prox}_{\frac{1}{\mu}g(\cdot)}(x) =
\begin{cases}
x, & \text{if } x - b \in B_\delta,\\[2pt]
b + \dfrac{\delta}{\|x - b\|_2}(x - b), & \text{otherwise}.
\end{cases}
\]
If g(x) = (γ/2)∥x − b∥_2^2, then
\[
\operatorname{prox}_{\frac{1}{\mu}g(\cdot)}(x) = \frac{\gamma b + \mu x}{\gamma + \mu}.
\]
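The two closed forms above can be coded directly; the following MATLAB sketch is ours (names are illustrative) and is used by the loop sketched in Section 3.2:

function y = prox_g_ball(x, b, delta)
% projection onto {u : ||u - b||_2 <= delta}, i.e. the indicator-function case
r = x - b;
nr = norm(r);
if nr <= delta
    y = x;
else
    y = b + (delta / nr) * r;
end
end

function y = prox_g_ls(x, b, gamma, mu)
% closed form for g(u) = (gamma/2)*||u - b||_2^2
y = (gamma * b + mu * x) / (gamma + mu);
end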

3.2. Adaptive Penalty

In previous proximity algorithms [23,28,29], the penalty parameter μ is usually fixed. In view of the linearized ADM, Lin et al. [26] presented an adaptive updating strategy for the penalty parameter μ. Motivated by it, we update μ by
\[
\mu_{k+1} = \min\{\mu_{\max},\ \rho\mu_k\},
\]
where μ_max is an upper bound of {μ_k}. The value of ρ is defined as
\[
\rho =
\begin{cases}
\rho_0, & \text{if } \mu_k \max\{\eta\|X^{k+1} - X^k\|_F,\ \|y^{k+1} - y^k\|_2\}/\|b\|_2 \le \epsilon_1,\\
1, & \text{otherwise},
\end{cases}
\]
where ρ_0 > 1 and ϵ_1 > 0 are given.
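In MATLAB, the adaptive update above amounts to the following fragment (our own sketch; variable names such as X_new, y_new, eps1 are illustrative):

% adaptive update of the penalty parameter mu
crit = mu * max(eta * norm(X_new - X_old, 'fro'), norm(y_new - y_old)) / norm(b);
if crit <= eps1
    rho = rho0;          % iterates changed little: enlarge the penalty
else
    rho = 1;             % otherwise keep mu unchanged
end
mu = min(mu_max, rho * mu);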
Based on the analysis in Section 3.1 and Section 3.2, the proximity algorithm with adaptive penalty (abbr. PA-AP) for solving (5) is outlined in Algorithm 1; an illustrative MATLAB sketch of the main loop is given after the algorithm.
Algorithm 1 PA-AP for solving (5)
Input: Observation vector b, linear mapping A, and parameters μ_0, ρ_0, ε_1, ε_2 > 0, μ_max ≥ μ_0 > 0.
Initialize: Set X^0 and y^0 to the zero matrix and the zero vector, respectively. Set μ = μ_0 and η > ∥A∥_2^2. Set k = 0.
while not converged, do
  step 1: Update X^{k+1}, y^{k+1} and λ^{k+1} in turn by (30).
  step 2: Update μ_{k+1} by (34), and let μ ← μ_{k+1}.
  step 3: k ← k + 1.
end while
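The following end-to-end MATLAB sketch illustrates Algorithm 1 for the constrained case; it is our own reading, not the authors' code, and reuses the nuclear_prox and prox_g_ball helpers sketched in Section 3.1. Afun and Atfun apply A and A*, and the opts fields mirror the parameters listed above; max(...,eps) only guards the first iteration, where X = 0.

function X = pa_ap_sketch(Afun, Atfun, b, m, n, opts)
mu = opts.mu0;  eta = opts.eta;                 % eta > ||A||_2^2
X = zeros(m, n);  y = zeros(numel(b), 1);  lam = y;
for k = 1:opts.maxit
    % X-update: singular value shrinkage
    G = X + Atfun(lam - mu * (Afun(X) - y)) / (mu * eta);
    Xnew = nuclear_prox(G, 1 / (mu * eta));
    % y-update: projection onto the delta-ball around b
    ynew = prox_g_ball(Afun(Xnew) - lam / mu, b, opts.delta);
    % multiplier update
    lam = lam - mu * (Afun(Xnew) - ynew);
    % adaptive penalty and stopping test
    crit = mu * max(eta * norm(Xnew - X, 'fro'), norm(ynew - y)) / norm(b);
    if crit <= opts.eps1, mu = min(opts.mu_max, opts.rho0 * mu); end
    relchg = norm(Xnew - X, 'fro') / max(norm(X, 'fro'), eps);
    X = Xnew;  y = ynew;
    if relchg <= opts.eps2, break; end
end
end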

3.3. Convergence

In this section, we establish the convergence of (22). For problem (5), Li et al. [23] presented a general formula of fixed-point algorithms. Given two symmetric positive definite matrices S ∈ R^{p×p} and T ∈ R^{m×m}, denote
\[
S_{\mathcal{A}} := \begin{pmatrix} 0 & \mathcal{A} \\ -\mathcal{A}^* & 0 \end{pmatrix}, \qquad
R := \begin{pmatrix} S & 0 \\ 0 & T \end{pmatrix}, \qquad
E := I + R^{-1}S_{\mathcal{A}},
\]
where S_A, R and E can be treated as linear maps which map R^p × R^{m×n} into itself.
Defining Z := (z, X) ∈ R^p × R^{m×n} and T := prox_{(g*+f)(·),R}, solving (5) is then equivalent to solving the fixed-point equation
\[
Z = (T \circ E)(Z).
\]
Furthermore, a multi-step proximity algorithm was proposed, which is
\[
Z^{k+1} = T\Big(E_0 Z^{k+1} + R^{-1}\sum_{i=1}^{l} M_i Z^{k-i+1}\Big),
\]
where E_0 = E − R^{-1}(S_A − M_0) and M_0 = Σ_{i=1}^{l} M_i. Let M := {M_i : i = 0, 1, …, l}. Li et al. [23] proved that the sequence {Z^k} generated by (35) converges to a solution of problem (5) if M satisfies the "Condition-M", which refers to
(i) M_0 = Σ_{i=1}^{l} M_i;
(ii) M_1 = M_2 = ⋯ = M_{l−1};
(iii) H := M_0 + M_l is symmetric positive definite;
(iv) N(H) ⊆ N(M_l) ∩ N(M_l^T);
(v) ∥(H^†)^{1/2} M_l (H^†)^{1/2}∥_2 < 1/2,
where N(H) and H^† denote the null space and the Moore–Penrose pseudo-inverse of H, respectively.
By checking the Condition-M, we prove the convergence of (22).
Theorem 1.
If Q is symmetric positive definite and ∥A(A*A + (1/μ)Q)^{-1/2}∥_2 < 1, then the sequence generated by (22) converges to the solution of (5).
Proof. 
By comparing (23) and (35), it can be found that l = 2 and
\[
E_0 = \begin{pmatrix} 0 & 0 \\ -2\tau\mathcal{A}^* & I - \tau(\mu\mathcal{A}^*\mathcal{A} + Q) \end{pmatrix}, \quad
R = \begin{pmatrix} \frac{1}{\mu}I & 0 \\ 0 & \frac{1}{\tau}I \end{pmatrix}, \quad
M_0 = M_1 = \begin{pmatrix} \frac{1}{\mu}I & \mathcal{A} \\ \mathcal{A}^* & \mu\mathcal{A}^*\mathcal{A} + Q \end{pmatrix}, \quad
M_2 = 0.
\]
We clearly see that Items (i) and (ii) of Condition-M hold. Furthermore, H = M_0 + M_2 = M_0, which is symmetric. By Lemma 6.2 in [23], H is positive definite if and only if ∥(μA*A + Q)^{-1/2} A* (1/μ)^{-1/2}∥_2 < 1, which is equivalent to ∥A(A*A + (1/μ)Q)^{-1/2}∥_2 < 1. Hence, Item (iii) of Condition-M also holds.
Since H is positive definite, it follows that N(H) = {0}, which implies that Item (iv) of Condition-M holds. Finally, M_2 = 0 implies that Item (v) holds. Consequently, the sequence generated by (23) converges to the solution of (5). The equivalence of (22) and (23) completes the proof. □
Corollary 1.
Let Q = μηI − μA*A. If η > ∥A∥_2^2, then the sequence generated by (30) converges to the solution of (5).
Proof. 
Since η > ∥A∥_2^2 holds if and only if ∥A(A*A + (1/μ)Q)^{-1/2}∥_2 < 1, the result follows from Theorem 1. □

4. Numerical Experiments

In this section, we present some numerical experiments to show the effectiveness of the proposed algorithm (PA-AP). To this end, we test the algorithms on the nuclear norm minimization problem, the noiseless matrix completion problem (δ = 0), the noisy matrix completion problem (δ > 0) and the low-rank image recovery problem. We compare PA-AP against ADM [18], IADM-CG [20] and IADM-BB [19]. All experiments are performed under Windows 10 and MATLAB R2016 running on a Lenovo laptop with an Intel Core i7 CPU at 2.7 GHz and 8 GB of memory. In the numerical experiments of the first two parts, we use randomly generated square matrices for simulations. We denote the true solution by X* ∈ R^{m×m}. We generate the rank-r matrix X* as a product X_L X_R^T, where X_L and X_R are independent m × r matrices with i.i.d. Gaussian entries. For each test, the stopping criterion is
\[
\mathrm{RelChg} = \frac{\|X^k - X^{k-1}\|_F}{\|X^{k-1}\|_F} \le \varepsilon_2,
\]
where ε 2 > 0 . The algorithms are also forced to stop when the iteration number exceeds 10 3 .
Let X ^ be the solution obtained by the algorithms. We use the relative error to measure the quality of X ^ compared to the original matrix X * , i.e.,
\[
\mathrm{RelErr} = \frac{\|\hat{X} - X^*\|_F}{\|X^*\|_F}.
\]
In each iteration, computing X^{k+1} in PA-AP requires an SVD that computes all singular values and singular vectors. However, we actually only need the singular values that are bigger than 1/(μη), so the full SVD accounts for the main computational load. Fortunately, this disadvantage can be alleviated by using the software PROPACK [30], which is designed to compute the singular values bigger than a threshold and the corresponding vectors. Although PROPACK can calculate a fixed number of leading singular values, it cannot automatically determine the number of singular values greater than 1/(μη). Therefore, in order to perform a partial SVD, we need to predict the number of singular values and vectors to be calculated in each iteration, denoted by sv_k. We initialize sv_0 = 0.01m, and update it in each iteration as follows:
\[
sv_{k+1} =
\begin{cases}
svp_k + 1, & \text{if } svp_k < sv_k,\\
svp_k + 5, & \text{if } svp_k = sv_k,
\end{cases}
\]
where svp_k is the number of singular values, among the sv_k computed ones, that are bigger than 1/(μη).
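In MATLAB this prediction rule is a short fragment (our own sketch, to be used together with a partial SVD routine such as PROPACK's lansvd; sv and svp are illustrative names for sv_k and svp_k):

% rank prediction for the partial SVD of the next iteration
if svp < sv
    sv = svp + 1;    % estimate has settled: request just one extra value next time
else
    sv = svp + 5;    % all requested values were kept: request a few more next time
end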
We use r and p to represent the rank of an m × n matrix and the cardinality of the index set Ω, i.e., p = |Ω|, and use sr = p/(mn) to represent the sampling rate. The "degree of freedom" of a matrix with rank r is defined by dof = r(m + n − r). For PA-AP, we set ε_1 = 10^{-4}, μ_0 = 1/∥b∥_2, μ_max = max{10^3 μ_0, 10^2}, ρ_0 = 1.7, and η = 1.01∥A∥_2^2. In all the experimental results, boldface numbers indicate the best results.

4.1. Nuclear Norm Minimization Problem

In this subsection, we use PA-AP to solve the three types of problems (2)–(4). The linear map A is chosen as a partial discrete cosine transform (DCT) matrix. Specifically, in the noiseless model (2), A is generated by the following MATLAB script:
indices = randsample(m*n, p); b = dct2(Xstar); b = b(indices);  % Xstar stores the true matrix X*
which shows that A maps R^{m×n} into R^p. In the noisy model (3), we further set
\[
b = \mathcal{A}X^* + \omega,
\]
where ω is additive Gaussian noise of zero mean and standard deviation σ. In (3), the noise level δ is chosen as ∥ω∥_2.
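In our reading of this setup, the partial 2-D DCT map and its adjoint can be written as the following MATLAB helpers (names are ours; dct2/idct2 are MATLAB's 2-D DCT routines, and the adjoint is the inverse DCT of the zero-filled coefficient array because the 2-D DCT is orthonormal):

function b = A_dct(X, indices)
c = dct2(X);                 % orthonormal 2-D DCT of X
b = c(indices);              % keep only the sampled coefficients
end

function X = At_dct(z, indices, m, n)
c = zeros(m, n);
c(indices) = z;              % zero-fill the unsampled coefficients
X = idct2(c);                % adjoint of the sampling-plus-DCT map
end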
The results are listed in Table 1, where the number of iterations (Iter) and the CPU time in seconds (Time), in addition to RelErr, are reported. To further illustrate the efficiency of PA-AP, we test problems with different matrix sizes and sampling rates (sr). In Table 2, we compare PA-AP with IADM-CG and IADM-BB for solving the NNRLS problem (4). The results show that our method is more efficient than the other two methods, and thus it is suitable for solving large-scale problems.

4.2. Matrix Completion

This subsection adopts the PA-AP method to solve the noiseless matrix completion problem (2) and the noisy matrix completion problem (3) to verify its validity. The mapping A is a linear projection operator defined by A X* = X_Ω, where X_Ω is a vector formed by the components of X* with indices in Ω. The indices of all entries are randomly permuted to form a column vector, and the first sr × m × n of them are selected to form the set Ω. For noisy matrix completion problems, we take δ = 10^{-2}.
In Table 3, we report the numerical results of PA-AP for noiseless and noisy matrix completion problems, taking m = n = 1000 and m = n = 2000. The rank of the original matrix is set to r = 10 or r = 20. As can be seen from Table 3, the PA-AP method can effectively solve these problems. Compared with the noiseless problems, the accuracy of the solutions of the noisy problems drops slightly. Moreover, the number of iterations and the running time decrease as sr increases.
To further verify the validity of the PA-AP method, it is compared with ADM, IADM-CG and IADM-BB. The algorithms are set to terminate when RelChg falls below 10^{-5}. The numerical results of the four methods for solving the noiseless and noisy MC problems are recorded in Table 4 and Table 5, from which we can see that the computation time of the PA-AP method is much less than that of IADM-BB and IADM-CG, the number of iterations and computation time of PA-AP and ADM are almost the same, and our method is relatively more accurate. From the limited experimental data, the PA-AP method is shown to be more effective than ADM, IADM-BB and IADM-CG.

4.3. Low-Rank Image Recovery

In this section, we turn to solving problem (2) for low-rank image recovery. The effectiveness of the PA-AP method is verified by testing three 512 × 512 grayscale images. First, the original images are transformed into low-rank images with rank 40. Then, we remove some entries from the low-rank matrices to obtain the damaged images, and restore them by using PA-AP, ADM, IADM-BB and IADM-CG, respectively. The iteration process is stopped when RelChg falls below 10^{-5}. The original images, the corresponding low-rank images, the damaged images, and the images restored by PA-AP are depicted in Figure 1. Observing the figure, we clearly see that our algorithm performs well.
To evaluate the recovery performance, we employ the Peak Signal-to-Noise Ratio (PSNR), which is defined as
\[
\mathrm{PSNR} = 10 \cdot \log_{10}\frac{\|X^*\|_\infty^2}{\frac{1}{mn}\|X^* - \hat{X}\|_F^2},
\]
where ∥X*∥_∞ is the infinity norm of X*, defined as the maximum absolute value of the entries of X*. By definition, a higher PSNR indicates a better recovery result.
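For reference, the PSNR above can be computed with a few lines of MATLAB (our own fragment; Xstar and Xhat are illustrative names for X* and the recovered matrix):

% PSNR of the recovered matrix Xhat against the low-rank reference Xstar
peakval = max(abs(Xstar(:)));                        % ||X*||_inf
mse = norm(Xstar - Xhat, 'fro')^2 / numel(Xstar);    % (1/(mn)) * ||X* - Xhat||_F^2
psnr_db = 10 * log10(peakval^2 / mse);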
Table 6 shows the running time, relative error and PSNR of the images recovered by the different methods. From Table 6, we note that the PA-AP method obtains higher PSNR as sr increases. Moreover, the running time of PA-AP is always much less than that of the other methods under different settings. Figure 2 shows the execution process of the different methods. From Figure 2, it is clear that our method estimates the rank exactly after about 30 iterations, and requires much less time before termination than the other methods.

5. Conclusions and Future Work

In this paper, a unified model and algorithm for the matrix nuclear norm minimization problem are proposed. In each iteration, the proposed algorithm mainly requires computing a matrix singular value decomposition and evaluating the proximity operators of two convex functions. In addition, the convergence of the algorithm is proved. A large number of experimental results and numerical comparisons show that the algorithm is superior to the IADM-BB, IADM-CG and ADM algorithms.
The problem of tensor completion has been widely studied recently [31]. One of our future works is to extend the proposed PA-AP algorithm to tensor completion.

Author Contributions

Methodology, W.H. and W.Z.; Investigation, W.H., W.Z. and G.Y.; Writing–original draft preparation, W.H. and W.Z.; Writing–review and editing, W.H. and G.Y.; Software, W.Z.; Project administration, G.Y.

Funding

This research was funded by the National Natural Science Foundation of China (Nos. 61863001, 11661007, 61702244, 11761010 and 61562003), the National Natural Science Foundation of Jiangxi Province (Nos. 20181BAB202021, JXJG-18-14-11, and 20192BAB205086), the National Science Foundation of Zhejiang Province, China (LD19A010002), Research programme (Nos. 18zb04, YCX18B001) and the ’XieTong ChuangXin’ project of Gannan Normal University.

Acknowledgments

The authors would like to thank the anonymous reviewers, whose comments have greatly improved the final version of this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Candès, E.J.; Recht, B. Exact Matrix Completion via Convex Optimization. Found. Comput. Math. 2009, 9, 717–772. [Google Scholar] [Green Version]
  2. Liu, G.; Lin, Z.C.; Yu, Y. Robust subspace segmentation by low-rank representation. In Proceedings of the 27th International Conference on Machine Learning, Haifa, Israel, 21–24 June 2010; pp. 663–670. [Google Scholar]
  3. Dai, Y.; Li, H.; He, M. A simple prior-free method for non-rigid structure-from-motion factorization. Int. J. Comput. Vis. 2014, 107, 101–122. [Google Scholar] [CrossRef]
  4. Hu, W.; Wang, Z.; Liu, S.; Yang, X.; Yu, G.; Zhang, J.J. Motion capture data completion via truncated nuclear norm regularization. IEEE Signal Proc. Lett. 2018, 25, 258–262. [Google Scholar] [CrossRef]
  5. Lin, X.F.; Wei, G. Accelerated reweighted nuclear norm minimization algorithm for low rank matrix recovery. Signal Process. 2015, 114, 24–33. [Google Scholar] [CrossRef]
  6. Candès, E.J.; Plan, Y. Matrix completion with noise. Proc. IEEE 2010, 98, 925–936. [Google Scholar]
  7. Nie, F.; Huang, H.; Ding, C.H. Low-rank matrix recovery via efficient schatten p-norm minimization. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, Toronto, ON, Canada, 22–26 July 2012; pp. 655–661. [Google Scholar]
  8. Mohan, K.; Fazel, M. Iterative reweighted algorithms for matrix rank minimization. J. Mach. Learn. Res. 2012, 13, 3441–3473. [Google Scholar]
  9. Zhang, D.; Hu, Y.; Ye, J.; Li, X.; He, X. Matrix completion by truncated nuclear norm regularization. In Proceedings of the Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 2192–2199. [Google Scholar]
  10. Nie, F.; Hu, Z.; Li, X. Matrix completion based on non-convex low rank approximation. IEEE Trans. Image Process. 2019, 28, 2378–2388. [Google Scholar] [CrossRef]
  11. Cui, A.; Peng, J.; Li, H.; Zhang, C.; Yu, Y. Affine matrix rank minimization problem via non-convex fraction function penalty. J. Comput. Appl. Math. 2018, 336, 353–374. [Google Scholar] [CrossRef] [Green Version]
  12. Tütüncü, R.H.; Toh, K.C.; Todd, M.J. Solving semidenite-quadratic-linear programs using SDPT3. Math. Program. 2003, 95, 189–217. [Google Scholar] [CrossRef]
  13. Cai, J.F.; Candés, E.J.; Shen, Z. A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 2010, 20, 1956–1982. [Google Scholar] [CrossRef]
  14. Ma, S.; Goldfarb, D.; Chen, L. Fixed point and bregman iterative methods for matrix rank minimization. Math. Progr. 2009, 128, 321–353. [Google Scholar] [CrossRef]
  15. Toh, K.C.; Yun, S. An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems. Pac. J. Optim. 2010, 6, 615–640. [Google Scholar]
  16. Geng, J.; Wang, L.; Wang, X. Nuclear norm and indicator function model for matrix completion. J. Inverse Ill-Posed Probl. 2015, 24, 1–11. [Google Scholar] [CrossRef]
  17. Lin, Z.; Chen, M.; Ma, Y. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv 2010, arXiv:1009.5055. [Google Scholar]
  18. Chen, C.; He, B.; Yuan, X.M. Matrix completion via an alternating direction method. IMA J. Numer. Anal. 2012, 32, 227–245. [Google Scholar] [CrossRef]
  19. Xiao, Y.H.; Jin, Z.F. An alternating direction method for linear-constrained matrix nuclear norm minimization. Numer. Linear Algebra 2012, 19, 541–554. [Google Scholar] [CrossRef]
  20. Jin, Z.F.; Wang, Q.; Wan, Z. Recovering low-rank matrices from corrupted observations via the linear conjugate gradient algorithm. J. Comput. Appl. Math. 2014, 256, 114–120. [Google Scholar] [CrossRef]
  21. Yang, J.; Yuan, X.M. Linearized augmented Lagrangian and alternating direction methods for nuclear minimization. Math. Comput. 2013, 82, 301–329. [Google Scholar] [CrossRef]
  22. Barzilai, J.; Borwein, J.M. Two point step size gradient method. IMA J. Numer. Anal. 1988, 4, 141–148. [Google Scholar] [CrossRef]
  23. Li, Q.; Shen, L.; Xu, Y. Multi-step fixed-point proximity algorithms for solving a class of optimization problems arising from image processing. Adv. Comput. Math. 2015, 41, 387–422. [Google Scholar] [CrossRef]
  24. Zhang, X.; Burger, M.; Osher, S. A unified primal-dual algorithm framework based on Bregman iteration. J. Sci. Comput. 2011, 46, 20–46. [Google Scholar] [CrossRef]
  25. Wang, J.H.; Meng, F.Y.; Pang, L.P.; Hao, X.H. An adaptive fixed-point proximity algorithm for solving total variation denoising models. Inform. Sci. 2017, 402, 69–81. [Google Scholar] [CrossRef]
  26. Lin, Z.C.; Liu, R.; Su, Z. Linearized alternating direction method with adaptive penalty for low-rank representation. Proc. Adv. Neural Inf. Process. Syst. 2011, 104, 612–620. [Google Scholar]
  27. Gabay, D.; Mercier, B. A dual algorithm for the solution of nonlinear variational problems via finite-element approximations. Comput. Math. Appl. 1976, 2, 17–40. [Google Scholar] [CrossRef]
  28. Chen, P.; Huang, J.; Zhang, X. A primal-dual fixed point algorithm for convex separable minimization with applications to image restoration. Inverse Probl. 2013, 29, 025011. [Google Scholar] [CrossRef]
  29. Micchelli, C.A.; Shen, L.; Xu, Y. Proximity algorithms for image models: Denoising. Inverse Probl. 2011, 27, 045009. [Google Scholar] [CrossRef]
  30. Larsen, R.M. PROPACK-Software for Large and Sparse SVD Calculations. Available online: http://sun.stanfor.edu/srmunk/PROPACK/ (accessed on 1 September 2019).
  31. Li, X.T.; Zhao, X.L.; Jiang, T.X.; Zheng, Y.B.; Ji, T.Y.; Huang, T.Z. Low-rank tensor completion via combined non-local self-similarity and low-rank regularization. Neurocomputing 2019, 267, 1–12. [Google Scholar] [CrossRef]
Figure 1. Original 512 × 512 images (Lena, Pirate, Cameraman) with full rank (first column); corresponding low-rank images with r = 40 (second column); randomly masked images from the rank-40 images with sr = 40% (third column); images recovered by PA-AP (last column).
Figure 2. Convergence behavior of the four methods (Lena, sr = 0.4, ε_2 = 10^{-5}). The first subfigure is the estimated rank; the second is the relative error to the original matrix; and the last is the running time.
Table 1. PA-AP for noiseless and noisy DCT matrix (m = n, ε_2 = 10^{-4}).
Columns: (n, r) | p/dof | sr | Prob. (3) (δ = 0): Iter, Time, RelErr | Prob. (3) (δ = 10^{-2}): Iter, Time, RelErr | Prob. (4): Iter, Time, RelErr.
(128, 3)4.320.2862.033.942 × 10−3831.835.706 × 10−3832.148.050 × 10−3
(128, 3)8.640.4310.574.042 × 10−4330.695.535 × 10−3340.706.196 × 10−3
(128, 3)12.950.6200.441.376 × 10−4200.455.652 × 10−3200.485.714 × 10−3
(128, 3)17.270.8130.296.251 × 10−5120.345.952 × 10−3130.285.966 × 10−3
(256, 5)5.170.2532.043.178 × 10−4532.023.846 × 10−3532.084.353 × 10−3
(256, 5)10.340.4311.221.649 × 10−4311.164.424 × 10−3311.284.404 × 10−3
(256, 5)15.510.6200.781.139 × 10−4200.784.414 × 10−3200.764.419 × 10−3
(256, 5)20.680.8130.503.893 × 10−5130.524.458 × 10−3140.524.297 × 10−3
(512, 10)5.170.2557.632.397 × 10−4557.772.858 × 10−3547.862.939 × 10−3
(512, 10)10.340.4344.221.259 × 10−4344.553.068 × 10−3344.463.081 × 10−3
(512, 10)15.510.6222.851.195 × 10−4222.983.106 × 10−3222.963.177 × 10−3
(512, 10)20.680.8131.811.009 × 10−4131.853.129 × 10−3131.863.262 × 10−3
Table 2. Comparisons of PA-AP, IADM-CG and IADM-BB for DCT matrix (m = n, ε_2 = 10^{-4}).
Columns: (n, r) | p/dof | sr | PA-AP: Iter, Time, RelErr | IADM-CG: Iter, Time, RelErr | IADM-BB: Iter, Time, RelErr.
(500, 10)5.050.2548.652.881 × 10−33315.098.312 × 10−33926.659.448 × 10−3
(500, 10)10.100.4334.143.035 × 10−3238.454.433 × 10−32616.586.484 × 10−3
(500, 10)15.150.6222.153.20 × 10−3155.563.968 × 10−3189.814.581 × 10−3
(500, 10)20.200.8131.723.224 × 10−3104.793.292 × 10−3127.503.528 × 10−3
(1000, 20)5.050.27038.202.110 × 10−33369.238.806 × 10−340141.575.047 × 10−3
(1000, 20)10.100.43815.142.379 × 10−32241.343.442 × 10−32378.787.097 × 10−3
(1000, 20)15.150.6218.832.322 × 10−31730.523.362 × 10−31960.354.277 × 10−3
(1000, 20)20.200.8146.602.379 × 10−31325.363.665 × 10−32057.512.724 × 10−3
(2000, 20)5.050.274144.892.270 × 10−339581.641.039 × 10−2461399.318.316 × 10−3
(2000, 20)10.100.43362.682.348 × 10−322328.024.755 × 10−322628.665.090 × 10−3
(2000, 20)15.150.62134.602.368 × 10−314172.703.405 × 10−317438.014.527 × 10−3
(2000, 20)20.200.81323.402.401 × 10−320128.732.752 × 10−315359.593.260 × 10−3
Table 3. PA-AP for noiseless and noisy matrix completion problems (m = n, ε_2 = 10^{-5}).
Columns: (n, r) | p/dof | sr | Prob. (3) (δ = 0): Iter, Time, RelErr | Prob. (3) (δ = 10^{-2}): Iter, Time, RelErr | Prob. (4): Iter, Time, RelErr.
(1000, 10)10.050.27621.942.439 × 10−57621.523.015 × 10−37622.925.245 × 10−4
(1000, 10)20.100.4409.281.159 × 10−5409.683.085 × 10−3399.557.436 × 10−4
(1000, 10)30.150.6225.721.061 × 10−6236.363.105 × 10−3236.108.251 × 10−4
(1000, 10)40.200.8123.882.859 × 10−6143.763.075 × 10−3143.748.668 × 10−4
(1000, 20)5.050.28739.103.037 × 10−58843.581.995 × 10−38743.715.469 × 10−4
(1000, 20)10.100.44613.021.277 × 10−54613.692.115 × 10−34613.477.467 × 10−4
(1000, 20)15.150.6257.561.230 × 10−5257.542.158 × 10−3257.648.232 × 10−4
(1000, 20)20.200.8144.869.617 × 10−6144.872.194 × 10−3144.888.624 × 10−4
(2000, 10)20.050.28046.732.538 × 10−58067.923.086 × 10−38054.297.422 × 10−4
(2000, 10)40.100.43726.021.299 × 10−53731.653.091 × 10−33727.208.641 × 10−4
(2000, 10)60.150.62319.309.672 × 10−62321.643.160 × 10−32320.029.024 × 10−4
(2000, 10)80.200.81311.992.584 × 10−61313.313.153 × 10−31311.719.276 × 10−4
(2000, 20)10.050.291108.162.853 × 10−591132.042.133 × 10−391129.307.461 × 10−4
(2000, 20)20.100.44042.541.146 × 10−54044.622.190 × 10−34042.498.674 × 10−4
(2000, 20)30.150.62427.078.683 × 10−62428.242.226 × 10−32427.439.058 × 10−4
(2000, 20)40.200.81315.813.772 × 10−61317.192.205 × 10−31316.189.268 × 10−4
Table 4. Comparisons of PA-AP, ADM, IADM-CG and IADM-BB for noiseless matrix completion.
Columns: (n, r) | p | p/dof | sr | PA-AP: Iter, Time, RelErr | ADM: Iter, Time, RelErr | IADM-CG: Iter, Time, RelErr | IADM-BB: Iter, Time, RelErr.
(1024, 5)20923510.290.28017.522.218 × 10−58118.746.133 × 10−45258.413.266 × 10−350112.501.801 × 10−3
(1024, 5)31452930.800.35010.251.743 × 10−55611.603.988 × 10−43739.523.639 × 10−35196.333.023 × 10−3
(1024, 5)41954741.060.4377.571.036 × 10−5408.313.023 × 10−43636.772.439 × 10−33767.102.655 × 10−3
(1024, 5)52521351.330.5286.448.484 × 10−6317.082.600 × 10−43133.221.102 × 10−32957.781.414 × 10−3
(1024, 5)62873661.590.6224.896.311 × 10−6266.012.063 × 10−42122.912.330 × 10−33255.611.020 × 10−3
(1024, 5)73342971.860.7174.204.239 × 10−6215.421.738 × 10−44040.231.361 × 10−45349.851.743 × 10−4
(1024, 5)83851382.120.8133.173.840 × 10−6163.951.593 × 10−44129.954.171 × 10−43041.921.031 × 10−4
(1024, 5)94380192.390.9132.582.782 × 10−6122.581.386 × 10−41112.168.341 × 10−41219.256.523 × 10−4
(1024, 10)20946910.290.27715.942.341 × 10−57114.796.273 × 10−45150.902.646 × 10−353109.162.210 × 10−3
(1024, 10)31545715.440.35511.951.848 × 10−55211.634.264 × 10−44042.722.328 × 10−33979.822.258 × 10−3
(1024, 10)41944220.580.44010.661.005 × 10−53910.793.031 × 10−43233.681.268 × 10−33468.768.858 × 10−3
(1024, 10)52414525.730.5308.978.688 × 10−5319.492.525 × 10−43235.261.137 × 10−32955.391.536 × 10−3
(1024, 10)62955530.870.6235.267.085 × 10−6245.561.972 × 10−43937.554.376 × 10−43251.027.468 × 10−3
(1024, 10)73328536.020.7184.986.149 × 10−6195.131.817 × 10−42728.356.503 × 10−43048.983.295 × 10−4
(1024, 10)83865041.160.8144.532.647 × 10−6165.281.557 × 10−43133.235.794 × 10−41730.966.046 × 10−4
(1024, 10)94373846.310.9123.362.130 × 10−6123.621.370 × 10−43434.158.934 × 10−42340.623.884 × 10−4
Table 5. Comparisons of PA-AP, ADM, IADM-CG and IADM-BB for noisy matrix completion (δ = 10^{-2}).
Columns: (n, r) | p | p/dof | sr | PA-AP: Iter, Time, RelErr | ADM: Iter, Time, RelErr | IADM-CG: Iter, Time, RelErr | IADM-BB: Iter, Time, RelErr.
(1024, 5)21015120.530.26116.964.345 × 10−36013.614.354 × 10−34095.669.546 × 10−345237.455.234 × 10−3
(1024, 5)31433230.800.34310.684.413 × 10−34310.884.448 × 10−32764.231.215 × 10−231164.525.319 × 10−3
(1024, 5)41870841.060.4318.954.416 × 10−3339.504.419 × 10−32559.756.410 × 10−324123.385.318 × 10−3
(1024, 5)52442951.330.5247.654.387 × 10−3257.744.402 × 10−32149.004.920 × 10−31997.814.791 × 10−3
(1024, 5)62873661.590.6195.254.452 × 10−3194.934.452 × 10−31940.695.035 × 10−31473.535.177 × 10−3
(1024, 5)73413171.860.7154.474.335 × 10−3174.564.337 × 10−31739.244.791 × 10−32176.874.524 × 10−3
(1024, 5)83847682.120.8123.644.444 × 10−3143.974.446 × 10−31939.521.184 × 10−31554.214.479 × 10−3
(1024, 5)94417092.390.9103.164.486 × 10−3112.944.557 × 10−3>1021.294.591 × 10−31036.214.531 × 10−3
(1024, 10)21011810.290.27720.563.017 × 10−37119.103.081 × 10−35362.983.894 × 10−348123.014.089 × 10−3
(1024, 10)31461415.440.35415.213.089 × 10−35215.273.119 × 10−34044.453.842 × 10−34185.273.816 × 10−3
(1024, 10)42019120.580.44011.533.060 × 10−33911.013.075 × 10−33236.463.306 × 10−33472.743.178 × 10−3
(1024, 10)52340525.730.5307.593.087 × 10−3317.703.097 × 10−35252.593.087 × 10−32854.283.647 × 10−3
(1024, 10)62893530.870.6236.043.061 × 10−3246.763.068 × 10−34445.743.061 × 10−33961.593.113 × 10−3
(1024, 10)73409636.020.7185.173.090 × 10−3194.973.095 × 10−33233.063.134 × 10−34772.073.116 × 10−3
(1024, 10)83850941.160.8144.263.121 × 10−3164.653.125 × 10−33131.853.175 × 10−33860.783.183 × 10−3
(1024, 10)94406846.310.9123.763.093 × 10−3123.543.096 × 10−33434.643.216 × 10−33345.153.095 × 10−3
Table 6. Comparisons of PA-AP, ADM, IADM-CG and IADM-BB for low-rank image recovery.
Columns: Name | sr | PA-AP: Time, RelErr, PSNR | ADM: Time, RelErr, PSNR | IADM-CG: Time, RelErr, PSNR | IADM-BB: Time, RelErr, PSNR.
Lena0.258.661.730 × 10−426.17127.861.751 × 10−426.61148.711.625 × 10−426.78161.591.633 × 10−426.78
0.446.093.984 × 10−584.6794.654.578 × 10−585.5294.861.191 × 10−476.06110.741.244 × 10−475.58
0.67.522.071 × 10−593.4255.922.717 × 10−592.6054.788.376 × 10−581.6565.759.504 × 10−580.40
0.83.411.025 × 10−5101.5937.321.771 × 10−599.0433.464.374 × 10−589.6239.096.070 × 10−586.65
Pirate0.254.231.561 × 10−426.03121.731.745 × 10−426.25140.401.643 × 10−426.41154.241.651 × 10−426.41
0.438.583.961 × 10−585.4292.334.905 × 10−586.2494.571.140 × 10−476.98112.778.735 × 10−579.81
0.68.151.313 × 10−597.9753.132.839 × 10−592.9254.814.376 × 10−586.8966.426.227 × 10−584.26
0.83.458.451 × 10−6104.6533.831.967 × 10−599.0529.416.704 × 10−591.7036.861.001 × 10−588.64
Cameraman0.250.511.717 × 10−424.10124.541.965 × 10−424.32145.182.159 × 10−424.43159.922.167 × 10−424.43
0.446.815.290 × 10−580.18102.315.570 × 10−581.30106.051.182 × 10−573.72115.481.313 × 10−473.00
0.610.032.389 × 10−590.9259.873.090 × 10−590.3658.436.099 × 10−583.0771.955.343 × 10−584.40
0.84.181.613 × 10−596.4637.862.036 × 10−596.0335.313.360 × 10−590.2245.532.106 × 10−593.04
